Oct 10 22:42:14 np0005480824 kernel: Linux version 5.14.0-621.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-11), GNU ld version 2.35.2-67.el9) #1 SMP PREEMPT_DYNAMIC Tue Sep 30 07:37:35 UTC 2025
Oct 10 22:42:14 np0005480824 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Oct 10 22:42:14 np0005480824 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-621.el9.x86_64 root=UUID=9839e2e1-98a2-4594-b609-79d514deb0a3 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 10 22:42:14 np0005480824 kernel: BIOS-provided physical RAM map:
Oct 10 22:42:14 np0005480824 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct 10 22:42:14 np0005480824 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct 10 22:42:14 np0005480824 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct 10 22:42:14 np0005480824 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Oct 10 22:42:14 np0005480824 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Oct 10 22:42:14 np0005480824 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 10 22:42:14 np0005480824 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct 10 22:42:14 np0005480824 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Oct 10 22:42:14 np0005480824 kernel: NX (Execute Disable) protection: active
Oct 10 22:42:14 np0005480824 kernel: APIC: Static calls initialized
Oct 10 22:42:14 np0005480824 kernel: SMBIOS 2.8 present.
Oct 10 22:42:14 np0005480824 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Oct 10 22:42:14 np0005480824 kernel: Hypervisor detected: KVM
Oct 10 22:42:14 np0005480824 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 10 22:42:14 np0005480824 kernel: kvm-clock: using sched offset of 4053613872 cycles
Oct 10 22:42:14 np0005480824 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 10 22:42:14 np0005480824 kernel: tsc: Detected 2799.998 MHz processor
Oct 10 22:42:14 np0005480824 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Oct 10 22:42:14 np0005480824 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct 10 22:42:14 np0005480824 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Oct 10 22:42:14 np0005480824 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Oct 10 22:42:14 np0005480824 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Oct 10 22:42:14 np0005480824 kernel: Using GB pages for direct mapping
Oct 10 22:42:14 np0005480824 kernel: RAMDISK: [mem 0x2d858000-0x32c23fff]
Oct 10 22:42:14 np0005480824 kernel: ACPI: Early table checksum verification disabled
Oct 10 22:42:14 np0005480824 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Oct 10 22:42:14 np0005480824 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 10 22:42:14 np0005480824 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 10 22:42:14 np0005480824 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 10 22:42:14 np0005480824 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Oct 10 22:42:14 np0005480824 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 10 22:42:14 np0005480824 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 10 22:42:14 np0005480824 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Oct 10 22:42:14 np0005480824 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Oct 10 22:42:14 np0005480824 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Oct 10 22:42:14 np0005480824 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Oct 10 22:42:14 np0005480824 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Oct 10 22:42:14 np0005480824 kernel: No NUMA configuration found
Oct 10 22:42:14 np0005480824 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Oct 10 22:42:14 np0005480824 kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Oct 10 22:42:14 np0005480824 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Oct 10 22:42:14 np0005480824 kernel: Zone ranges:
Oct 10 22:42:14 np0005480824 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Oct 10 22:42:14 np0005480824 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Oct 10 22:42:14 np0005480824 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Oct 10 22:42:14 np0005480824 kernel:  Device   empty
Oct 10 22:42:14 np0005480824 kernel: Movable zone start for each node
Oct 10 22:42:14 np0005480824 kernel: Early memory node ranges
Oct 10 22:42:14 np0005480824 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Oct 10 22:42:14 np0005480824 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Oct 10 22:42:14 np0005480824 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Oct 10 22:42:14 np0005480824 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Oct 10 22:42:14 np0005480824 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 10 22:42:14 np0005480824 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct 10 22:42:14 np0005480824 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Oct 10 22:42:14 np0005480824 kernel: ACPI: PM-Timer IO Port: 0x608
Oct 10 22:42:14 np0005480824 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 10 22:42:14 np0005480824 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 10 22:42:14 np0005480824 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 10 22:42:14 np0005480824 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 10 22:42:14 np0005480824 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 10 22:42:14 np0005480824 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 10 22:42:14 np0005480824 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 10 22:42:14 np0005480824 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 10 22:42:14 np0005480824 kernel: TSC deadline timer available
Oct 10 22:42:14 np0005480824 kernel: CPU topo: Max. logical packages:   8
Oct 10 22:42:14 np0005480824 kernel: CPU topo: Max. logical dies:       8
Oct 10 22:42:14 np0005480824 kernel: CPU topo: Max. dies per package:   1
Oct 10 22:42:14 np0005480824 kernel: CPU topo: Max. threads per core:   1
Oct 10 22:42:14 np0005480824 kernel: CPU topo: Num. cores per package:     1
Oct 10 22:42:14 np0005480824 kernel: CPU topo: Num. threads per package:   1
Oct 10 22:42:14 np0005480824 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Oct 10 22:42:14 np0005480824 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 10 22:42:14 np0005480824 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Oct 10 22:42:14 np0005480824 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Oct 10 22:42:14 np0005480824 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Oct 10 22:42:14 np0005480824 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Oct 10 22:42:14 np0005480824 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Oct 10 22:42:14 np0005480824 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Oct 10 22:42:14 np0005480824 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Oct 10 22:42:14 np0005480824 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Oct 10 22:42:14 np0005480824 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Oct 10 22:42:14 np0005480824 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Oct 10 22:42:14 np0005480824 kernel: Booting paravirtualized kernel on KVM
Oct 10 22:42:14 np0005480824 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 10 22:42:14 np0005480824 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Oct 10 22:42:14 np0005480824 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Oct 10 22:42:14 np0005480824 kernel: kvm-guest: PV spinlocks disabled, no host support
Oct 10 22:42:14 np0005480824 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-621.el9.x86_64 root=UUID=9839e2e1-98a2-4594-b609-79d514deb0a3 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 10 22:42:14 np0005480824 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-621.el9.x86_64", will be passed to user space.
Oct 10 22:42:14 np0005480824 kernel: random: crng init done
Oct 10 22:42:14 np0005480824 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Oct 10 22:42:14 np0005480824 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 10 22:42:14 np0005480824 kernel: Fallback order for Node 0: 0 
Oct 10 22:42:14 np0005480824 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Oct 10 22:42:14 np0005480824 kernel: Policy zone: Normal
Oct 10 22:42:14 np0005480824 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 10 22:42:14 np0005480824 kernel: software IO TLB: area num 8.
Oct 10 22:42:14 np0005480824 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Oct 10 22:42:14 np0005480824 kernel: ftrace: allocating 49162 entries in 193 pages
Oct 10 22:42:14 np0005480824 kernel: ftrace: allocated 193 pages with 3 groups
Oct 10 22:42:14 np0005480824 kernel: Dynamic Preempt: voluntary
Oct 10 22:42:14 np0005480824 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 10 22:42:14 np0005480824 kernel: rcu: 	RCU event tracing is enabled.
Oct 10 22:42:14 np0005480824 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Oct 10 22:42:14 np0005480824 kernel: 	Trampoline variant of Tasks RCU enabled.
Oct 10 22:42:14 np0005480824 kernel: 	Rude variant of Tasks RCU enabled.
Oct 10 22:42:14 np0005480824 kernel: 	Tracing variant of Tasks RCU enabled.
Oct 10 22:42:14 np0005480824 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 10 22:42:14 np0005480824 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Oct 10 22:42:14 np0005480824 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 10 22:42:14 np0005480824 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 10 22:42:14 np0005480824 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 10 22:42:14 np0005480824 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Oct 10 22:42:14 np0005480824 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 10 22:42:14 np0005480824 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Oct 10 22:42:14 np0005480824 kernel: Console: colour VGA+ 80x25
Oct 10 22:42:14 np0005480824 kernel: printk: console [ttyS0] enabled
Oct 10 22:42:14 np0005480824 kernel: ACPI: Core revision 20230331
Oct 10 22:42:14 np0005480824 kernel: APIC: Switch to symmetric I/O mode setup
Oct 10 22:42:14 np0005480824 kernel: x2apic enabled
Oct 10 22:42:14 np0005480824 kernel: APIC: Switched APIC routing to: physical x2apic
Oct 10 22:42:14 np0005480824 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Oct 10 22:42:14 np0005480824 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Oct 10 22:42:14 np0005480824 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct 10 22:42:14 np0005480824 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct 10 22:42:14 np0005480824 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct 10 22:42:14 np0005480824 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 10 22:42:14 np0005480824 kernel: Spectre V2 : Mitigation: Retpolines
Oct 10 22:42:14 np0005480824 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct 10 22:42:14 np0005480824 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct 10 22:42:14 np0005480824 kernel: RETBleed: Mitigation: untrained return thunk
Oct 10 22:42:14 np0005480824 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 10 22:42:14 np0005480824 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 10 22:42:14 np0005480824 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct 10 22:42:14 np0005480824 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct 10 22:42:14 np0005480824 kernel: x86/bugs: return thunk changed
Oct 10 22:42:14 np0005480824 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct 10 22:42:14 np0005480824 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 10 22:42:14 np0005480824 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 10 22:42:14 np0005480824 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 10 22:42:14 np0005480824 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Oct 10 22:42:14 np0005480824 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct 10 22:42:14 np0005480824 kernel: Freeing SMP alternatives memory: 40K
Oct 10 22:42:14 np0005480824 kernel: pid_max: default: 32768 minimum: 301
Oct 10 22:42:14 np0005480824 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Oct 10 22:42:14 np0005480824 kernel: landlock: Up and running.
Oct 10 22:42:14 np0005480824 kernel: Yama: becoming mindful.
Oct 10 22:42:14 np0005480824 kernel: SELinux:  Initializing.
Oct 10 22:42:14 np0005480824 kernel: LSM support for eBPF active
Oct 10 22:42:14 np0005480824 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 10 22:42:14 np0005480824 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 10 22:42:14 np0005480824 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct 10 22:42:14 np0005480824 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct 10 22:42:14 np0005480824 kernel: ... version:                0
Oct 10 22:42:14 np0005480824 kernel: ... bit width:              48
Oct 10 22:42:14 np0005480824 kernel: ... generic registers:      6
Oct 10 22:42:14 np0005480824 kernel: ... value mask:             0000ffffffffffff
Oct 10 22:42:14 np0005480824 kernel: ... max period:             00007fffffffffff
Oct 10 22:42:14 np0005480824 kernel: ... fixed-purpose events:   0
Oct 10 22:42:14 np0005480824 kernel: ... event mask:             000000000000003f
Oct 10 22:42:14 np0005480824 kernel: signal: max sigframe size: 1776
Oct 10 22:42:14 np0005480824 kernel: rcu: Hierarchical SRCU implementation.
Oct 10 22:42:14 np0005480824 kernel: rcu: 	Max phase no-delay instances is 400.
Oct 10 22:42:14 np0005480824 kernel: smp: Bringing up secondary CPUs ...
Oct 10 22:42:14 np0005480824 kernel: smpboot: x86: Booting SMP configuration:
Oct 10 22:42:14 np0005480824 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Oct 10 22:42:14 np0005480824 kernel: smp: Brought up 1 node, 8 CPUs
Oct 10 22:42:14 np0005480824 kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Oct 10 22:42:14 np0005480824 kernel: node 0 deferred pages initialised in 13ms
Oct 10 22:42:14 np0005480824 kernel: Memory: 7765956K/8388068K available (16384K kernel code, 5784K rwdata, 13864K rodata, 4188K init, 7196K bss, 616208K reserved, 0K cma-reserved)
Oct 10 22:42:14 np0005480824 kernel: devtmpfs: initialized
Oct 10 22:42:14 np0005480824 kernel: x86/mm: Memory block size: 128MB
Oct 10 22:42:14 np0005480824 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 10 22:42:14 np0005480824 kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Oct 10 22:42:14 np0005480824 kernel: pinctrl core: initialized pinctrl subsystem
Oct 10 22:42:14 np0005480824 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 10 22:42:14 np0005480824 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Oct 10 22:42:14 np0005480824 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 10 22:42:14 np0005480824 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 10 22:42:14 np0005480824 kernel: audit: initializing netlink subsys (disabled)
Oct 10 22:42:14 np0005480824 kernel: audit: type=2000 audit(1760150532.568:1): state=initialized audit_enabled=0 res=1
Oct 10 22:42:14 np0005480824 kernel: thermal_sys: Registered thermal governor 'fair_share'
Oct 10 22:42:14 np0005480824 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 10 22:42:14 np0005480824 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 10 22:42:14 np0005480824 kernel: cpuidle: using governor menu
Oct 10 22:42:14 np0005480824 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 10 22:42:14 np0005480824 kernel: PCI: Using configuration type 1 for base access
Oct 10 22:42:14 np0005480824 kernel: PCI: Using configuration type 1 for extended access
Oct 10 22:42:14 np0005480824 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 10 22:42:14 np0005480824 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 10 22:42:14 np0005480824 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 10 22:42:14 np0005480824 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 10 22:42:14 np0005480824 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 10 22:42:14 np0005480824 kernel: Demotion targets for Node 0: null
Oct 10 22:42:14 np0005480824 kernel: cryptd: max_cpu_qlen set to 1000
Oct 10 22:42:14 np0005480824 kernel: ACPI: Added _OSI(Module Device)
Oct 10 22:42:14 np0005480824 kernel: ACPI: Added _OSI(Processor Device)
Oct 10 22:42:14 np0005480824 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 10 22:42:14 np0005480824 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 10 22:42:14 np0005480824 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 10 22:42:14 np0005480824 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct 10 22:42:14 np0005480824 kernel: ACPI: Interpreter enabled
Oct 10 22:42:14 np0005480824 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Oct 10 22:42:14 np0005480824 kernel: ACPI: Using IOAPIC for interrupt routing
Oct 10 22:42:14 np0005480824 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 10 22:42:14 np0005480824 kernel: PCI: Using E820 reservations for host bridge windows
Oct 10 22:42:14 np0005480824 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Oct 10 22:42:14 np0005480824 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 10 22:42:14 np0005480824 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Oct 10 22:42:14 np0005480824 kernel: acpiphp: Slot [3] registered
Oct 10 22:42:14 np0005480824 kernel: acpiphp: Slot [4] registered
Oct 10 22:42:14 np0005480824 kernel: acpiphp: Slot [5] registered
Oct 10 22:42:14 np0005480824 kernel: acpiphp: Slot [6] registered
Oct 10 22:42:14 np0005480824 kernel: acpiphp: Slot [7] registered
Oct 10 22:42:14 np0005480824 kernel: acpiphp: Slot [8] registered
Oct 10 22:42:14 np0005480824 kernel: acpiphp: Slot [9] registered
Oct 10 22:42:14 np0005480824 kernel: acpiphp: Slot [10] registered
Oct 10 22:42:14 np0005480824 kernel: acpiphp: Slot [11] registered
Oct 10 22:42:14 np0005480824 kernel: acpiphp: Slot [12] registered
Oct 10 22:42:14 np0005480824 kernel: acpiphp: Slot [13] registered
Oct 10 22:42:14 np0005480824 kernel: acpiphp: Slot [14] registered
Oct 10 22:42:14 np0005480824 kernel: acpiphp: Slot [15] registered
Oct 10 22:42:14 np0005480824 kernel: acpiphp: Slot [16] registered
Oct 10 22:42:14 np0005480824 kernel: acpiphp: Slot [17] registered
Oct 10 22:42:14 np0005480824 kernel: acpiphp: Slot [18] registered
Oct 10 22:42:14 np0005480824 kernel: acpiphp: Slot [19] registered
Oct 10 22:42:14 np0005480824 kernel: acpiphp: Slot [20] registered
Oct 10 22:42:14 np0005480824 kernel: acpiphp: Slot [21] registered
Oct 10 22:42:14 np0005480824 kernel: acpiphp: Slot [22] registered
Oct 10 22:42:14 np0005480824 kernel: acpiphp: Slot [23] registered
Oct 10 22:42:14 np0005480824 kernel: acpiphp: Slot [24] registered
Oct 10 22:42:14 np0005480824 kernel: acpiphp: Slot [25] registered
Oct 10 22:42:14 np0005480824 kernel: acpiphp: Slot [26] registered
Oct 10 22:42:14 np0005480824 kernel: acpiphp: Slot [27] registered
Oct 10 22:42:14 np0005480824 kernel: acpiphp: Slot [28] registered
Oct 10 22:42:14 np0005480824 kernel: acpiphp: Slot [29] registered
Oct 10 22:42:14 np0005480824 kernel: acpiphp: Slot [30] registered
Oct 10 22:42:14 np0005480824 kernel: acpiphp: Slot [31] registered
Oct 10 22:42:14 np0005480824 kernel: PCI host bridge to bus 0000:00
Oct 10 22:42:14 np0005480824 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Oct 10 22:42:14 np0005480824 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Oct 10 22:42:14 np0005480824 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 10 22:42:14 np0005480824 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct 10 22:42:14 np0005480824 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Oct 10 22:42:14 np0005480824 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 10 22:42:14 np0005480824 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Oct 10 22:42:14 np0005480824 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Oct 10 22:42:14 np0005480824 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Oct 10 22:42:14 np0005480824 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Oct 10 22:42:14 np0005480824 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Oct 10 22:42:14 np0005480824 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Oct 10 22:42:14 np0005480824 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Oct 10 22:42:14 np0005480824 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Oct 10 22:42:14 np0005480824 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Oct 10 22:42:14 np0005480824 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Oct 10 22:42:14 np0005480824 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Oct 10 22:42:14 np0005480824 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Oct 10 22:42:14 np0005480824 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Oct 10 22:42:14 np0005480824 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Oct 10 22:42:14 np0005480824 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Oct 10 22:42:14 np0005480824 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Oct 10 22:42:14 np0005480824 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Oct 10 22:42:14 np0005480824 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Oct 10 22:42:14 np0005480824 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 10 22:42:14 np0005480824 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct 10 22:42:14 np0005480824 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Oct 10 22:42:14 np0005480824 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Oct 10 22:42:14 np0005480824 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Oct 10 22:42:14 np0005480824 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Oct 10 22:42:14 np0005480824 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Oct 10 22:42:14 np0005480824 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Oct 10 22:42:14 np0005480824 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Oct 10 22:42:14 np0005480824 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Oct 10 22:42:14 np0005480824 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Oct 10 22:42:14 np0005480824 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Oct 10 22:42:14 np0005480824 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Oct 10 22:42:14 np0005480824 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Oct 10 22:42:14 np0005480824 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Oct 10 22:42:14 np0005480824 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Oct 10 22:42:14 np0005480824 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 10 22:42:14 np0005480824 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 10 22:42:14 np0005480824 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 10 22:42:14 np0005480824 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 10 22:42:14 np0005480824 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Oct 10 22:42:14 np0005480824 kernel: iommu: Default domain type: Translated
Oct 10 22:42:14 np0005480824 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 10 22:42:14 np0005480824 kernel: SCSI subsystem initialized
Oct 10 22:42:14 np0005480824 kernel: ACPI: bus type USB registered
Oct 10 22:42:14 np0005480824 kernel: usbcore: registered new interface driver usbfs
Oct 10 22:42:14 np0005480824 kernel: usbcore: registered new interface driver hub
Oct 10 22:42:14 np0005480824 kernel: usbcore: registered new device driver usb
Oct 10 22:42:14 np0005480824 kernel: pps_core: LinuxPPS API ver. 1 registered
Oct 10 22:42:14 np0005480824 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Oct 10 22:42:14 np0005480824 kernel: PTP clock support registered
Oct 10 22:42:14 np0005480824 kernel: EDAC MC: Ver: 3.0.0
Oct 10 22:42:14 np0005480824 kernel: NetLabel: Initializing
Oct 10 22:42:14 np0005480824 kernel: NetLabel:  domain hash size = 128
Oct 10 22:42:14 np0005480824 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Oct 10 22:42:14 np0005480824 kernel: NetLabel:  unlabeled traffic allowed by default
Oct 10 22:42:14 np0005480824 kernel: PCI: Using ACPI for IRQ routing
Oct 10 22:42:14 np0005480824 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Oct 10 22:42:14 np0005480824 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Oct 10 22:42:14 np0005480824 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 10 22:42:14 np0005480824 kernel: vgaarb: loaded
Oct 10 22:42:14 np0005480824 kernel: clocksource: Switched to clocksource kvm-clock
Oct 10 22:42:14 np0005480824 kernel: VFS: Disk quotas dquot_6.6.0
Oct 10 22:42:14 np0005480824 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 10 22:42:14 np0005480824 kernel: pnp: PnP ACPI init
Oct 10 22:42:14 np0005480824 kernel: pnp: PnP ACPI: found 5 devices
Oct 10 22:42:14 np0005480824 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 10 22:42:14 np0005480824 kernel: NET: Registered PF_INET protocol family
Oct 10 22:42:14 np0005480824 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct 10 22:42:14 np0005480824 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Oct 10 22:42:14 np0005480824 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 10 22:42:14 np0005480824 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 10 22:42:14 np0005480824 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Oct 10 22:42:14 np0005480824 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Oct 10 22:42:14 np0005480824 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Oct 10 22:42:14 np0005480824 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct 10 22:42:14 np0005480824 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct 10 22:42:14 np0005480824 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 10 22:42:14 np0005480824 kernel: NET: Registered PF_XDP protocol family
Oct 10 22:42:14 np0005480824 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Oct 10 22:42:14 np0005480824 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Oct 10 22:42:14 np0005480824 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 10 22:42:14 np0005480824 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Oct 10 22:42:14 np0005480824 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Oct 10 22:42:14 np0005480824 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Oct 10 22:42:14 np0005480824 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Oct 10 22:42:14 np0005480824 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Oct 10 22:42:14 np0005480824 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 98794 usecs
Oct 10 22:42:14 np0005480824 kernel: PCI: CLS 0 bytes, default 64
Oct 10 22:42:14 np0005480824 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Oct 10 22:42:14 np0005480824 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Oct 10 22:42:14 np0005480824 kernel: Trying to unpack rootfs image as initramfs...
Oct 10 22:42:14 np0005480824 kernel: ACPI: bus type thunderbolt registered
Oct 10 22:42:14 np0005480824 kernel: Initialise system trusted keyrings
Oct 10 22:42:14 np0005480824 kernel: Key type blacklist registered
Oct 10 22:42:14 np0005480824 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Oct 10 22:42:14 np0005480824 kernel: zbud: loaded
Oct 10 22:42:14 np0005480824 kernel: integrity: Platform Keyring initialized
Oct 10 22:42:14 np0005480824 kernel: integrity: Machine keyring initialized
Oct 10 22:42:14 np0005480824 kernel: Freeing initrd memory: 85808K
Oct 10 22:42:14 np0005480824 kernel: NET: Registered PF_ALG protocol family
Oct 10 22:42:14 np0005480824 kernel: xor: automatically using best checksumming function   avx       
Oct 10 22:42:14 np0005480824 kernel: Key type asymmetric registered
Oct 10 22:42:14 np0005480824 kernel: Asymmetric key parser 'x509' registered
Oct 10 22:42:14 np0005480824 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Oct 10 22:42:14 np0005480824 kernel: io scheduler mq-deadline registered
Oct 10 22:42:14 np0005480824 kernel: io scheduler kyber registered
Oct 10 22:42:14 np0005480824 kernel: io scheduler bfq registered
Oct 10 22:42:14 np0005480824 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Oct 10 22:42:14 np0005480824 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Oct 10 22:42:14 np0005480824 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Oct 10 22:42:14 np0005480824 kernel: ACPI: button: Power Button [PWRF]
Oct 10 22:42:14 np0005480824 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Oct 10 22:42:14 np0005480824 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Oct 10 22:42:14 np0005480824 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Oct 10 22:42:14 np0005480824 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 10 22:42:14 np0005480824 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 10 22:42:14 np0005480824 kernel: Non-volatile memory driver v1.3
Oct 10 22:42:14 np0005480824 kernel: rdac: device handler registered
Oct 10 22:42:14 np0005480824 kernel: hp_sw: device handler registered
Oct 10 22:42:14 np0005480824 kernel: emc: device handler registered
Oct 10 22:42:14 np0005480824 kernel: alua: device handler registered
Oct 10 22:42:14 np0005480824 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Oct 10 22:42:14 np0005480824 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Oct 10 22:42:14 np0005480824 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Oct 10 22:42:14 np0005480824 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Oct 10 22:42:14 np0005480824 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Oct 10 22:42:14 np0005480824 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Oct 10 22:42:14 np0005480824 kernel: usb usb1: Product: UHCI Host Controller
Oct 10 22:42:14 np0005480824 kernel: usb usb1: Manufacturer: Linux 5.14.0-621.el9.x86_64 uhci_hcd
Oct 10 22:42:14 np0005480824 kernel: usb usb1: SerialNumber: 0000:00:01.2
Oct 10 22:42:14 np0005480824 kernel: hub 1-0:1.0: USB hub found
Oct 10 22:42:14 np0005480824 kernel: hub 1-0:1.0: 2 ports detected
Oct 10 22:42:14 np0005480824 kernel: usbcore: registered new interface driver usbserial_generic
Oct 10 22:42:14 np0005480824 kernel: usbserial: USB Serial support registered for generic
Oct 10 22:42:14 np0005480824 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 10 22:42:14 np0005480824 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 10 22:42:14 np0005480824 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 10 22:42:14 np0005480824 kernel: mousedev: PS/2 mouse device common for all mice
Oct 10 22:42:14 np0005480824 kernel: rtc_cmos 00:04: RTC can wake from S4
Oct 10 22:42:14 np0005480824 kernel: rtc_cmos 00:04: registered as rtc0
Oct 10 22:42:14 np0005480824 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Oct 10 22:42:14 np0005480824 kernel: rtc_cmos 00:04: setting system clock to 2025-10-11T02:42:13 UTC (1760150533)
Oct 10 22:42:14 np0005480824 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Oct 10 22:42:14 np0005480824 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Oct 10 22:42:14 np0005480824 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct 10 22:42:14 np0005480824 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Oct 10 22:42:14 np0005480824 kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 10 22:42:14 np0005480824 kernel: usbcore: registered new interface driver usbhid
Oct 10 22:42:14 np0005480824 kernel: usbhid: USB HID core driver
Oct 10 22:42:14 np0005480824 kernel: drop_monitor: Initializing network drop monitor service
Oct 10 22:42:14 np0005480824 kernel: Initializing XFRM netlink socket
Oct 10 22:42:14 np0005480824 kernel: NET: Registered PF_INET6 protocol family
Oct 10 22:42:14 np0005480824 kernel: Segment Routing with IPv6
Oct 10 22:42:14 np0005480824 kernel: NET: Registered PF_PACKET protocol family
Oct 10 22:42:14 np0005480824 kernel: mpls_gso: MPLS GSO support
Oct 10 22:42:14 np0005480824 kernel: IPI shorthand broadcast: enabled
Oct 10 22:42:14 np0005480824 kernel: AVX2 version of gcm_enc/dec engaged.
Oct 10 22:42:14 np0005480824 kernel: AES CTR mode by8 optimization enabled
Oct 10 22:42:14 np0005480824 kernel: sched_clock: Marking stable (1256003360, 146542855)->(1527200751, -124654536)
Oct 10 22:42:14 np0005480824 kernel: registered taskstats version 1
Oct 10 22:42:14 np0005480824 kernel: Loading compiled-in X.509 certificates
Oct 10 22:42:14 np0005480824 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 72f99a463516b0dfb027e50caab189f607ef1bc9'
Oct 10 22:42:14 np0005480824 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Oct 10 22:42:14 np0005480824 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Oct 10 22:42:14 np0005480824 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Oct 10 22:42:14 np0005480824 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Oct 10 22:42:14 np0005480824 kernel: Demotion targets for Node 0: null
Oct 10 22:42:14 np0005480824 kernel: page_owner is disabled
Oct 10 22:42:14 np0005480824 kernel: Key type .fscrypt registered
Oct 10 22:42:14 np0005480824 kernel: Key type fscrypt-provisioning registered
Oct 10 22:42:14 np0005480824 kernel: Key type big_key registered
Oct 10 22:42:14 np0005480824 kernel: Key type encrypted registered
Oct 10 22:42:14 np0005480824 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 10 22:42:14 np0005480824 kernel: Loading compiled-in module X.509 certificates
Oct 10 22:42:14 np0005480824 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 72f99a463516b0dfb027e50caab189f607ef1bc9'
Oct 10 22:42:14 np0005480824 kernel: ima: Allocated hash algorithm: sha256
Oct 10 22:42:14 np0005480824 kernel: ima: No architecture policies found
Oct 10 22:42:14 np0005480824 kernel: evm: Initialising EVM extended attributes:
Oct 10 22:42:14 np0005480824 kernel: evm: security.selinux
Oct 10 22:42:14 np0005480824 kernel: evm: security.SMACK64 (disabled)
Oct 10 22:42:14 np0005480824 kernel: evm: security.SMACK64EXEC (disabled)
Oct 10 22:42:14 np0005480824 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Oct 10 22:42:14 np0005480824 kernel: evm: security.SMACK64MMAP (disabled)
Oct 10 22:42:14 np0005480824 kernel: evm: security.apparmor (disabled)
Oct 10 22:42:14 np0005480824 kernel: evm: security.ima
Oct 10 22:42:14 np0005480824 kernel: evm: security.capability
Oct 10 22:42:14 np0005480824 kernel: evm: HMAC attrs: 0x1
Oct 10 22:42:14 np0005480824 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Oct 10 22:42:14 np0005480824 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Oct 10 22:42:14 np0005480824 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Oct 10 22:42:14 np0005480824 kernel: usb 1-1: Product: QEMU USB Tablet
Oct 10 22:42:14 np0005480824 kernel: usb 1-1: Manufacturer: QEMU
Oct 10 22:42:14 np0005480824 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Oct 10 22:42:14 np0005480824 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Oct 10 22:42:14 np0005480824 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Oct 10 22:42:14 np0005480824 kernel: Running certificate verification RSA selftest
Oct 10 22:42:14 np0005480824 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Oct 10 22:42:14 np0005480824 kernel: Running certificate verification ECDSA selftest
Oct 10 22:42:14 np0005480824 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Oct 10 22:42:14 np0005480824 kernel: clk: Disabling unused clocks
Oct 10 22:42:14 np0005480824 kernel: Freeing unused decrypted memory: 2028K
Oct 10 22:42:14 np0005480824 kernel: Freeing unused kernel image (initmem) memory: 4188K
Oct 10 22:42:14 np0005480824 kernel: Write protecting the kernel read-only data: 30720k
Oct 10 22:42:14 np0005480824 kernel: Freeing unused kernel image (rodata/data gap) memory: 472K
Oct 10 22:42:14 np0005480824 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Oct 10 22:42:14 np0005480824 kernel: Run /init as init process
Oct 10 22:42:14 np0005480824 systemd: systemd 252-57.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 10 22:42:14 np0005480824 systemd: Detected virtualization kvm.
Oct 10 22:42:14 np0005480824 systemd: Detected architecture x86-64.
Oct 10 22:42:14 np0005480824 systemd: Running in initrd.
Oct 10 22:42:14 np0005480824 systemd: No hostname configured, using default hostname.
Oct 10 22:42:14 np0005480824 systemd: Hostname set to <localhost>.
Oct 10 22:42:14 np0005480824 systemd: Initializing machine ID from VM UUID.
Oct 10 22:42:14 np0005480824 systemd: Queued start job for default target Initrd Default Target.
Oct 10 22:42:14 np0005480824 systemd: Started Dispatch Password Requests to Console Directory Watch.
Oct 10 22:42:14 np0005480824 systemd: Reached target Local Encrypted Volumes.
Oct 10 22:42:14 np0005480824 systemd: Reached target Initrd /usr File System.
Oct 10 22:42:14 np0005480824 systemd: Reached target Local File Systems.
Oct 10 22:42:14 np0005480824 systemd: Reached target Path Units.
Oct 10 22:42:14 np0005480824 systemd: Reached target Slice Units.
Oct 10 22:42:14 np0005480824 systemd: Reached target Swaps.
Oct 10 22:42:14 np0005480824 systemd: Reached target Timer Units.
Oct 10 22:42:14 np0005480824 systemd: Listening on D-Bus System Message Bus Socket.
Oct 10 22:42:14 np0005480824 systemd: Listening on Journal Socket (/dev/log).
Oct 10 22:42:14 np0005480824 systemd: Listening on Journal Socket.
Oct 10 22:42:14 np0005480824 systemd: Listening on udev Control Socket.
Oct 10 22:42:14 np0005480824 systemd: Listening on udev Kernel Socket.
Oct 10 22:42:14 np0005480824 systemd: Reached target Socket Units.
Oct 10 22:42:14 np0005480824 systemd: Starting Create List of Static Device Nodes...
Oct 10 22:42:14 np0005480824 systemd: Starting Journal Service...
Oct 10 22:42:14 np0005480824 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct 10 22:42:14 np0005480824 systemd: Starting Apply Kernel Variables...
Oct 10 22:42:14 np0005480824 systemd: Starting Create System Users...
Oct 10 22:42:14 np0005480824 systemd: Starting Setup Virtual Console...
Oct 10 22:42:14 np0005480824 systemd: Finished Create List of Static Device Nodes.
Oct 10 22:42:14 np0005480824 systemd: Finished Apply Kernel Variables.
Oct 10 22:42:14 np0005480824 systemd: Finished Create System Users.
Oct 10 22:42:14 np0005480824 systemd-journald[305]: Journal started
Oct 10 22:42:14 np0005480824 systemd-journald[305]: Runtime Journal (/run/log/journal/fb3a2fb19efa43f0a057bf422ac6b8d7) is 8.0M, max 153.6M, 145.6M free.
Oct 10 22:42:14 np0005480824 systemd-sysusers[310]: Creating group 'users' with GID 100.
Oct 10 22:42:14 np0005480824 systemd-sysusers[310]: Creating group 'dbus' with GID 81.
Oct 10 22:42:14 np0005480824 systemd-sysusers[310]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Oct 10 22:42:14 np0005480824 systemd: Started Journal Service.
Oct 10 22:42:14 np0005480824 systemd[1]: Starting Create Static Device Nodes in /dev...
Oct 10 22:42:14 np0005480824 systemd[1]: Starting Create Volatile Files and Directories...
Oct 10 22:42:14 np0005480824 systemd[1]: Finished Create Static Device Nodes in /dev.
Oct 10 22:42:14 np0005480824 systemd[1]: Finished Create Volatile Files and Directories.
Oct 10 22:42:14 np0005480824 systemd[1]: Finished Setup Virtual Console.
Oct 10 22:42:14 np0005480824 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Oct 10 22:42:14 np0005480824 systemd[1]: Starting dracut cmdline hook...
Oct 10 22:42:14 np0005480824 dracut-cmdline[324]: dracut-9 dracut-057-102.git20250818.el9
Oct 10 22:42:14 np0005480824 dracut-cmdline[324]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-621.el9.x86_64 root=UUID=9839e2e1-98a2-4594-b609-79d514deb0a3 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 10 22:42:14 np0005480824 systemd[1]: Finished dracut cmdline hook.
Oct 10 22:42:14 np0005480824 systemd[1]: Starting dracut pre-udev hook...
Oct 10 22:42:14 np0005480824 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 10 22:42:14 np0005480824 kernel: device-mapper: uevent: version 1.0.3
Oct 10 22:42:14 np0005480824 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Oct 10 22:42:14 np0005480824 kernel: RPC: Registered named UNIX socket transport module.
Oct 10 22:42:14 np0005480824 kernel: RPC: Registered udp transport module.
Oct 10 22:42:14 np0005480824 kernel: RPC: Registered tcp transport module.
Oct 10 22:42:14 np0005480824 kernel: RPC: Registered tcp-with-tls transport module.
Oct 10 22:42:14 np0005480824 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Oct 10 22:42:15 np0005480824 rpc.statd[441]: Version 2.5.4 starting
Oct 10 22:42:15 np0005480824 rpc.statd[441]: Initializing NSM state
Oct 10 22:42:15 np0005480824 rpc.idmapd[446]: Setting log level to 0
Oct 10 22:42:15 np0005480824 systemd[1]: Finished dracut pre-udev hook.
Oct 10 22:42:15 np0005480824 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct 10 22:42:15 np0005480824 systemd-udevd[459]: Using default interface naming scheme 'rhel-9.0'.
Oct 10 22:42:15 np0005480824 systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct 10 22:42:15 np0005480824 systemd[1]: Starting dracut pre-trigger hook...
Oct 10 22:42:15 np0005480824 systemd[1]: Finished dracut pre-trigger hook.
Oct 10 22:42:15 np0005480824 systemd[1]: Starting Coldplug All udev Devices...
Oct 10 22:42:15 np0005480824 systemd[1]: Created slice Slice /system/modprobe.
Oct 10 22:42:15 np0005480824 systemd[1]: Starting Load Kernel Module configfs...
Oct 10 22:42:15 np0005480824 systemd[1]: Finished Coldplug All udev Devices.
Oct 10 22:42:15 np0005480824 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 10 22:42:15 np0005480824 systemd[1]: Finished Load Kernel Module configfs.
Oct 10 22:42:15 np0005480824 systemd[1]: Mounting Kernel Configuration File System...
Oct 10 22:42:15 np0005480824 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct 10 22:42:15 np0005480824 systemd[1]: Reached target Network.
Oct 10 22:42:15 np0005480824 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct 10 22:42:15 np0005480824 systemd[1]: Starting dracut initqueue hook...
Oct 10 22:42:15 np0005480824 systemd[1]: Mounted Kernel Configuration File System.
Oct 10 22:42:15 np0005480824 systemd[1]: Reached target System Initialization.
Oct 10 22:42:15 np0005480824 systemd[1]: Reached target Basic System.
Oct 10 22:42:15 np0005480824 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Oct 10 22:42:15 np0005480824 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Oct 10 22:42:15 np0005480824 kernel: vda: vda1
Oct 10 22:42:15 np0005480824 kernel: scsi host0: ata_piix
Oct 10 22:42:15 np0005480824 kernel: scsi host1: ata_piix
Oct 10 22:42:15 np0005480824 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Oct 10 22:42:15 np0005480824 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Oct 10 22:42:15 np0005480824 systemd-udevd[483]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 22:42:15 np0005480824 systemd[1]: Found device /dev/disk/by-uuid/9839e2e1-98a2-4594-b609-79d514deb0a3.
Oct 10 22:42:15 np0005480824 systemd[1]: Reached target Initrd Root Device.
Oct 10 22:42:15 np0005480824 kernel: ata1: found unknown device (class 0)
Oct 10 22:42:15 np0005480824 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct 10 22:42:15 np0005480824 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Oct 10 22:42:15 np0005480824 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Oct 10 22:42:15 np0005480824 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct 10 22:42:15 np0005480824 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct 10 22:42:15 np0005480824 systemd[1]: Finished dracut initqueue hook.
Oct 10 22:42:15 np0005480824 systemd[1]: Reached target Preparation for Remote File Systems.
Oct 10 22:42:15 np0005480824 systemd[1]: Reached target Remote Encrypted Volumes.
Oct 10 22:42:15 np0005480824 systemd[1]: Reached target Remote File Systems.
Oct 10 22:42:15 np0005480824 systemd[1]: Starting dracut pre-mount hook...
Oct 10 22:42:15 np0005480824 systemd[1]: Finished dracut pre-mount hook.
Oct 10 22:42:15 np0005480824 systemd[1]: Starting File System Check on /dev/disk/by-uuid/9839e2e1-98a2-4594-b609-79d514deb0a3...
Oct 10 22:42:15 np0005480824 systemd-fsck[550]: /usr/sbin/fsck.xfs: XFS file system.
Oct 10 22:42:15 np0005480824 systemd[1]: Finished File System Check on /dev/disk/by-uuid/9839e2e1-98a2-4594-b609-79d514deb0a3.
Oct 10 22:42:15 np0005480824 systemd[1]: Mounting /sysroot...
Oct 10 22:42:16 np0005480824 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Oct 10 22:42:16 np0005480824 kernel: XFS (vda1): Mounting V5 Filesystem 9839e2e1-98a2-4594-b609-79d514deb0a3
Oct 10 22:42:16 np0005480824 kernel: XFS (vda1): Ending clean mount
Oct 10 22:42:16 np0005480824 systemd[1]: Mounted /sysroot.
Oct 10 22:42:16 np0005480824 systemd[1]: Reached target Initrd Root File System.
Oct 10 22:42:16 np0005480824 systemd[1]: Starting Mountpoints Configured in the Real Root...
Oct 10 22:42:16 np0005480824 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 10 22:42:16 np0005480824 systemd[1]: Finished Mountpoints Configured in the Real Root.
Oct 10 22:42:16 np0005480824 systemd[1]: Reached target Initrd File Systems.
Oct 10 22:42:16 np0005480824 systemd[1]: Reached target Initrd Default Target.
Oct 10 22:42:16 np0005480824 systemd[1]: Starting dracut mount hook...
Oct 10 22:42:16 np0005480824 systemd[1]: Finished dracut mount hook.
Oct 10 22:42:16 np0005480824 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Oct 10 22:42:16 np0005480824 rpc.idmapd[446]: exiting on signal 15
Oct 10 22:42:16 np0005480824 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Oct 10 22:42:16 np0005480824 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Oct 10 22:42:16 np0005480824 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Oct 10 22:42:16 np0005480824 systemd[1]: Stopped target Network.
Oct 10 22:42:16 np0005480824 systemd[1]: Stopped target Remote Encrypted Volumes.
Oct 10 22:42:16 np0005480824 systemd[1]: Stopped target Timer Units.
Oct 10 22:42:16 np0005480824 systemd[1]: dbus.socket: Deactivated successfully.
Oct 10 22:42:16 np0005480824 systemd[1]: Closed D-Bus System Message Bus Socket.
Oct 10 22:42:16 np0005480824 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 10 22:42:16 np0005480824 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Oct 10 22:42:16 np0005480824 systemd[1]: Stopped target Initrd Default Target.
Oct 10 22:42:16 np0005480824 systemd[1]: Stopped target Basic System.
Oct 10 22:42:16 np0005480824 systemd[1]: Stopped target Initrd Root Device.
Oct 10 22:42:16 np0005480824 systemd[1]: Stopped target Initrd /usr File System.
Oct 10 22:42:16 np0005480824 systemd[1]: Stopped target Path Units.
Oct 10 22:42:16 np0005480824 systemd[1]: Stopped target Remote File Systems.
Oct 10 22:42:16 np0005480824 systemd[1]: Stopped target Preparation for Remote File Systems.
Oct 10 22:42:16 np0005480824 systemd[1]: Stopped target Slice Units.
Oct 10 22:42:16 np0005480824 systemd[1]: Stopped target Socket Units.
Oct 10 22:42:16 np0005480824 systemd[1]: Stopped target System Initialization.
Oct 10 22:42:16 np0005480824 systemd[1]: Stopped target Local File Systems.
Oct 10 22:42:16 np0005480824 systemd[1]: Stopped target Swaps.
Oct 10 22:42:16 np0005480824 systemd[1]: dracut-mount.service: Deactivated successfully.
Oct 10 22:42:16 np0005480824 systemd[1]: Stopped dracut mount hook.
Oct 10 22:42:16 np0005480824 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 10 22:42:16 np0005480824 systemd[1]: Stopped dracut pre-mount hook.
Oct 10 22:42:16 np0005480824 systemd[1]: Stopped target Local Encrypted Volumes.
Oct 10 22:42:16 np0005480824 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 10 22:42:16 np0005480824 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Oct 10 22:42:16 np0005480824 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 10 22:42:16 np0005480824 systemd[1]: Stopped dracut initqueue hook.
Oct 10 22:42:16 np0005480824 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 10 22:42:16 np0005480824 systemd[1]: Stopped Apply Kernel Variables.
Oct 10 22:42:16 np0005480824 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 10 22:42:16 np0005480824 systemd[1]: Stopped Create Volatile Files and Directories.
Oct 10 22:42:16 np0005480824 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 10 22:42:16 np0005480824 systemd[1]: Stopped Coldplug All udev Devices.
Oct 10 22:42:16 np0005480824 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 10 22:42:16 np0005480824 systemd[1]: Stopped dracut pre-trigger hook.
Oct 10 22:42:16 np0005480824 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Oct 10 22:42:16 np0005480824 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 10 22:42:16 np0005480824 systemd[1]: Stopped Setup Virtual Console.
Oct 10 22:42:16 np0005480824 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Oct 10 22:42:16 np0005480824 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct 10 22:42:16 np0005480824 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 10 22:42:16 np0005480824 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Oct 10 22:42:16 np0005480824 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 10 22:42:16 np0005480824 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Oct 10 22:42:16 np0005480824 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 10 22:42:16 np0005480824 systemd[1]: Closed udev Control Socket.
Oct 10 22:42:16 np0005480824 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 10 22:42:16 np0005480824 systemd[1]: Closed udev Kernel Socket.
Oct 10 22:42:16 np0005480824 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 10 22:42:16 np0005480824 systemd[1]: Stopped dracut pre-udev hook.
Oct 10 22:42:16 np0005480824 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 10 22:42:16 np0005480824 systemd[1]: Stopped dracut cmdline hook.
Oct 10 22:42:16 np0005480824 systemd[1]: Starting Cleanup udev Database...
Oct 10 22:42:16 np0005480824 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 10 22:42:16 np0005480824 systemd[1]: Stopped Create Static Device Nodes in /dev.
Oct 10 22:42:16 np0005480824 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 10 22:42:16 np0005480824 systemd[1]: Stopped Create List of Static Device Nodes.
Oct 10 22:42:16 np0005480824 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Oct 10 22:42:16 np0005480824 systemd[1]: Stopped Create System Users.
Oct 10 22:42:16 np0005480824 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Oct 10 22:42:16 np0005480824 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Oct 10 22:42:16 np0005480824 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 10 22:42:16 np0005480824 systemd[1]: Finished Cleanup udev Database.
Oct 10 22:42:16 np0005480824 systemd[1]: Reached target Switch Root.
Oct 10 22:42:16 np0005480824 systemd[1]: Starting Switch Root...
Oct 10 22:42:16 np0005480824 systemd[1]: Switching root.
Oct 10 22:42:16 np0005480824 systemd-journald[305]: Journal stopped
Oct 10 22:42:17 np0005480824 systemd-journald: Received SIGTERM from PID 1 (systemd).
Oct 10 22:42:17 np0005480824 kernel: audit: type=1404 audit(1760150537.018:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Oct 10 22:42:17 np0005480824 kernel: SELinux:  policy capability network_peer_controls=1
Oct 10 22:42:17 np0005480824 kernel: SELinux:  policy capability open_perms=1
Oct 10 22:42:17 np0005480824 kernel: SELinux:  policy capability extended_socket_class=1
Oct 10 22:42:17 np0005480824 kernel: SELinux:  policy capability always_check_network=0
Oct 10 22:42:17 np0005480824 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 10 22:42:17 np0005480824 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 10 22:42:17 np0005480824 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 10 22:42:17 np0005480824 kernel: audit: type=1403 audit(1760150537.160:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 10 22:42:17 np0005480824 systemd: Successfully loaded SELinux policy in 146.088ms.
Oct 10 22:42:17 np0005480824 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 30.367ms.
Oct 10 22:42:17 np0005480824 systemd: systemd 252-57.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 10 22:42:17 np0005480824 systemd: Detected virtualization kvm.
Oct 10 22:42:17 np0005480824 systemd: Detected architecture x86-64.
Oct 10 22:42:17 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 22:42:17 np0005480824 systemd: initrd-switch-root.service: Deactivated successfully.
Oct 10 22:42:17 np0005480824 systemd: Stopped Switch Root.
Oct 10 22:42:17 np0005480824 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 10 22:42:17 np0005480824 systemd: Created slice Slice /system/getty.
Oct 10 22:42:17 np0005480824 systemd: Created slice Slice /system/serial-getty.
Oct 10 22:42:17 np0005480824 systemd: Created slice Slice /system/sshd-keygen.
Oct 10 22:42:17 np0005480824 systemd: Created slice User and Session Slice.
Oct 10 22:42:17 np0005480824 systemd: Started Dispatch Password Requests to Console Directory Watch.
Oct 10 22:42:17 np0005480824 systemd: Started Forward Password Requests to Wall Directory Watch.
Oct 10 22:42:17 np0005480824 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Oct 10 22:42:17 np0005480824 systemd: Reached target Local Encrypted Volumes.
Oct 10 22:42:17 np0005480824 systemd: Stopped target Switch Root.
Oct 10 22:42:17 np0005480824 systemd: Stopped target Initrd File Systems.
Oct 10 22:42:17 np0005480824 systemd: Stopped target Initrd Root File System.
Oct 10 22:42:17 np0005480824 systemd: Reached target Local Integrity Protected Volumes.
Oct 10 22:42:17 np0005480824 systemd: Reached target Path Units.
Oct 10 22:42:17 np0005480824 systemd: Reached target rpc_pipefs.target.
Oct 10 22:42:17 np0005480824 systemd: Reached target Slice Units.
Oct 10 22:42:17 np0005480824 systemd: Reached target Swaps.
Oct 10 22:42:17 np0005480824 systemd: Reached target Local Verity Protected Volumes.
Oct 10 22:42:17 np0005480824 systemd: Listening on RPCbind Server Activation Socket.
Oct 10 22:42:17 np0005480824 systemd: Reached target RPC Port Mapper.
Oct 10 22:42:17 np0005480824 systemd: Listening on Process Core Dump Socket.
Oct 10 22:42:17 np0005480824 systemd: Listening on initctl Compatibility Named Pipe.
Oct 10 22:42:17 np0005480824 systemd: Listening on udev Control Socket.
Oct 10 22:42:17 np0005480824 systemd: Listening on udev Kernel Socket.
Oct 10 22:42:17 np0005480824 systemd: Mounting Huge Pages File System...
Oct 10 22:42:17 np0005480824 systemd: Mounting POSIX Message Queue File System...
Oct 10 22:42:17 np0005480824 systemd: Mounting Kernel Debug File System...
Oct 10 22:42:17 np0005480824 systemd: Mounting Kernel Trace File System...
Oct 10 22:42:17 np0005480824 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct 10 22:42:17 np0005480824 systemd: Starting Create List of Static Device Nodes...
Oct 10 22:42:17 np0005480824 systemd: Starting Load Kernel Module configfs...
Oct 10 22:42:17 np0005480824 systemd: Starting Load Kernel Module drm...
Oct 10 22:42:17 np0005480824 systemd: Starting Load Kernel Module efi_pstore...
Oct 10 22:42:17 np0005480824 systemd: Starting Load Kernel Module fuse...
Oct 10 22:42:17 np0005480824 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Oct 10 22:42:17 np0005480824 systemd: systemd-fsck-root.service: Deactivated successfully.
Oct 10 22:42:17 np0005480824 systemd: Stopped File System Check on Root Device.
Oct 10 22:42:17 np0005480824 systemd: Stopped Journal Service.
Oct 10 22:42:17 np0005480824 systemd: Starting Journal Service...
Oct 10 22:42:17 np0005480824 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct 10 22:42:17 np0005480824 systemd: Starting Generate network units from Kernel command line...
Oct 10 22:42:17 np0005480824 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 10 22:42:17 np0005480824 systemd: Starting Remount Root and Kernel File Systems...
Oct 10 22:42:17 np0005480824 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 10 22:42:17 np0005480824 systemd: Starting Apply Kernel Variables...
Oct 10 22:42:17 np0005480824 kernel: fuse: init (API version 7.37)
Oct 10 22:42:17 np0005480824 systemd: Starting Coldplug All udev Devices...
Oct 10 22:42:17 np0005480824 systemd: Mounted Huge Pages File System.
Oct 10 22:42:17 np0005480824 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Oct 10 22:42:17 np0005480824 systemd: Mounted POSIX Message Queue File System.
Oct 10 22:42:17 np0005480824 systemd: Mounted Kernel Debug File System.
Oct 10 22:42:17 np0005480824 systemd: Mounted Kernel Trace File System.
Oct 10 22:42:17 np0005480824 systemd: Finished Create List of Static Device Nodes.
Oct 10 22:42:17 np0005480824 systemd: modprobe@configfs.service: Deactivated successfully.
Oct 10 22:42:17 np0005480824 systemd: Finished Load Kernel Module configfs.
Oct 10 22:42:17 np0005480824 systemd-journald[674]: Journal started
Oct 10 22:42:17 np0005480824 systemd-journald[674]: Runtime Journal (/run/log/journal/a1727ec20198bc6caf436a6e13c4ff5e) is 8.0M, max 153.6M, 145.6M free.
Oct 10 22:42:17 np0005480824 systemd[1]: Queued start job for default target Multi-User System.
Oct 10 22:42:17 np0005480824 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 10 22:42:17 np0005480824 systemd: Started Journal Service.
Oct 10 22:42:17 np0005480824 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 10 22:42:17 np0005480824 systemd[1]: Finished Load Kernel Module efi_pstore.
Oct 10 22:42:17 np0005480824 kernel: ACPI: bus type drm_connector registered
Oct 10 22:42:17 np0005480824 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 10 22:42:17 np0005480824 systemd[1]: Finished Load Kernel Module fuse.
Oct 10 22:42:17 np0005480824 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Oct 10 22:42:17 np0005480824 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 10 22:42:17 np0005480824 systemd[1]: Finished Load Kernel Module drm.
Oct 10 22:42:17 np0005480824 systemd[1]: Finished Generate network units from Kernel command line.
Oct 10 22:42:17 np0005480824 systemd[1]: Finished Remount Root and Kernel File Systems.
Oct 10 22:42:17 np0005480824 systemd[1]: Finished Apply Kernel Variables.
Oct 10 22:42:17 np0005480824 systemd[1]: Mounting FUSE Control File System...
Oct 10 22:42:17 np0005480824 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct 10 22:42:17 np0005480824 systemd[1]: Starting Rebuild Hardware Database...
Oct 10 22:42:17 np0005480824 systemd[1]: Starting Flush Journal to Persistent Storage...
Oct 10 22:42:17 np0005480824 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 10 22:42:17 np0005480824 systemd[1]: Starting Load/Save OS Random Seed...
Oct 10 22:42:17 np0005480824 systemd-journald[674]: Runtime Journal (/run/log/journal/a1727ec20198bc6caf436a6e13c4ff5e) is 8.0M, max 153.6M, 145.6M free.
Oct 10 22:42:17 np0005480824 systemd-journald[674]: Received client request to flush runtime journal.
Oct 10 22:42:17 np0005480824 systemd[1]: Starting Create System Users...
Oct 10 22:42:17 np0005480824 systemd[1]: Mounted FUSE Control File System.
Oct 10 22:42:17 np0005480824 systemd[1]: Finished Flush Journal to Persistent Storage.
Oct 10 22:42:17 np0005480824 systemd[1]: Finished Coldplug All udev Devices.
Oct 10 22:42:17 np0005480824 systemd[1]: Finished Load/Save OS Random Seed.
Oct 10 22:42:17 np0005480824 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct 10 22:42:17 np0005480824 systemd[1]: Finished Create System Users.
Oct 10 22:42:17 np0005480824 systemd[1]: Starting Create Static Device Nodes in /dev...
Oct 10 22:42:18 np0005480824 systemd[1]: Finished Create Static Device Nodes in /dev.
Oct 10 22:42:18 np0005480824 systemd[1]: Reached target Preparation for Local File Systems.
Oct 10 22:42:18 np0005480824 systemd[1]: Reached target Local File Systems.
Oct 10 22:42:18 np0005480824 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Oct 10 22:42:18 np0005480824 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Oct 10 22:42:18 np0005480824 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 10 22:42:18 np0005480824 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Oct 10 22:42:18 np0005480824 systemd[1]: Starting Automatic Boot Loader Update...
Oct 10 22:42:18 np0005480824 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Oct 10 22:42:18 np0005480824 systemd[1]: Starting Create Volatile Files and Directories...
Oct 10 22:42:18 np0005480824 bootctl[694]: Couldn't find EFI system partition, skipping.
Oct 10 22:42:18 np0005480824 systemd[1]: Finished Automatic Boot Loader Update.
Oct 10 22:42:18 np0005480824 systemd[1]: Finished Create Volatile Files and Directories.
Oct 10 22:42:18 np0005480824 systemd[1]: Starting Security Auditing Service...
Oct 10 22:42:18 np0005480824 systemd[1]: Starting RPC Bind...
Oct 10 22:42:18 np0005480824 systemd[1]: Starting Rebuild Journal Catalog...
Oct 10 22:42:18 np0005480824 auditd[700]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Oct 10 22:42:18 np0005480824 auditd[700]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Oct 10 22:42:18 np0005480824 systemd[1]: Started RPC Bind.
Oct 10 22:42:18 np0005480824 systemd[1]: Finished Rebuild Journal Catalog.
Oct 10 22:42:18 np0005480824 augenrules[706]: /sbin/augenrules: No change
Oct 10 22:42:18 np0005480824 augenrules[721]: No rules
Oct 10 22:42:18 np0005480824 augenrules[721]: enabled 1
Oct 10 22:42:18 np0005480824 augenrules[721]: failure 1
Oct 10 22:42:18 np0005480824 augenrules[721]: pid 700
Oct 10 22:42:18 np0005480824 augenrules[721]: rate_limit 0
Oct 10 22:42:18 np0005480824 augenrules[721]: backlog_limit 8192
Oct 10 22:42:18 np0005480824 augenrules[721]: lost 0
Oct 10 22:42:18 np0005480824 augenrules[721]: backlog 2
Oct 10 22:42:18 np0005480824 augenrules[721]: backlog_wait_time 60000
Oct 10 22:42:18 np0005480824 augenrules[721]: backlog_wait_time_actual 0
Oct 10 22:42:18 np0005480824 augenrules[721]: enabled 1
Oct 10 22:42:18 np0005480824 augenrules[721]: failure 1
Oct 10 22:42:18 np0005480824 augenrules[721]: pid 700
Oct 10 22:42:18 np0005480824 augenrules[721]: rate_limit 0
Oct 10 22:42:18 np0005480824 augenrules[721]: backlog_limit 8192
Oct 10 22:42:18 np0005480824 augenrules[721]: lost 0
Oct 10 22:42:18 np0005480824 augenrules[721]: backlog 0
Oct 10 22:42:18 np0005480824 augenrules[721]: backlog_wait_time 60000
Oct 10 22:42:18 np0005480824 augenrules[721]: backlog_wait_time_actual 0
Oct 10 22:42:18 np0005480824 augenrules[721]: enabled 1
Oct 10 22:42:18 np0005480824 augenrules[721]: failure 1
Oct 10 22:42:18 np0005480824 augenrules[721]: pid 700
Oct 10 22:42:18 np0005480824 augenrules[721]: rate_limit 0
Oct 10 22:42:18 np0005480824 augenrules[721]: backlog_limit 8192
Oct 10 22:42:18 np0005480824 augenrules[721]: lost 0
Oct 10 22:42:18 np0005480824 augenrules[721]: backlog 0
Oct 10 22:42:18 np0005480824 augenrules[721]: backlog_wait_time 60000
Oct 10 22:42:18 np0005480824 augenrules[721]: backlog_wait_time_actual 0
Oct 10 22:42:18 np0005480824 systemd[1]: Started Security Auditing Service.
Oct 10 22:42:18 np0005480824 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Oct 10 22:42:18 np0005480824 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Oct 10 22:42:18 np0005480824 systemd[1]: Finished Rebuild Hardware Database.
Oct 10 22:42:18 np0005480824 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct 10 22:42:18 np0005480824 systemd-udevd[729]: Using default interface naming scheme 'rhel-9.0'.
Oct 10 22:42:18 np0005480824 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Oct 10 22:42:18 np0005480824 systemd[1]: Starting Update is Completed...
Oct 10 22:42:18 np0005480824 systemd[1]: Finished Update is Completed.
Oct 10 22:42:18 np0005480824 systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct 10 22:42:18 np0005480824 systemd[1]: Reached target System Initialization.
Oct 10 22:42:18 np0005480824 systemd[1]: Started dnf makecache --timer.
Oct 10 22:42:18 np0005480824 systemd[1]: Started Daily rotation of log files.
Oct 10 22:42:18 np0005480824 systemd[1]: Started Daily Cleanup of Temporary Directories.
Oct 10 22:42:18 np0005480824 systemd[1]: Reached target Timer Units.
Oct 10 22:42:18 np0005480824 systemd[1]: Listening on D-Bus System Message Bus Socket.
Oct 10 22:42:18 np0005480824 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Oct 10 22:42:18 np0005480824 systemd[1]: Reached target Socket Units.
Oct 10 22:42:18 np0005480824 systemd[1]: Starting D-Bus System Message Bus...
Oct 10 22:42:18 np0005480824 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 10 22:42:18 np0005480824 systemd[1]: Starting Load Kernel Module configfs...
Oct 10 22:42:18 np0005480824 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 10 22:42:18 np0005480824 systemd[1]: Finished Load Kernel Module configfs.
Oct 10 22:42:18 np0005480824 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Oct 10 22:42:18 np0005480824 systemd[1]: Started D-Bus System Message Bus.
Oct 10 22:42:18 np0005480824 systemd[1]: Reached target Basic System.
Oct 10 22:42:18 np0005480824 dbus-broker-lau[738]: Ready
Oct 10 22:42:18 np0005480824 systemd-udevd[736]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 22:42:18 np0005480824 systemd[1]: Starting NTP client/server...
Oct 10 22:42:18 np0005480824 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Oct 10 22:42:18 np0005480824 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Oct 10 22:42:18 np0005480824 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Oct 10 22:42:18 np0005480824 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct 10 22:42:18 np0005480824 systemd[1]: Starting Restore /run/initramfs on shutdown...
Oct 10 22:42:18 np0005480824 systemd[1]: Starting IPv4 firewall with iptables...
Oct 10 22:42:18 np0005480824 systemd[1]: Started irqbalance daemon.
Oct 10 22:42:18 np0005480824 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Oct 10 22:42:18 np0005480824 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 10 22:42:18 np0005480824 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 10 22:42:18 np0005480824 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 10 22:42:18 np0005480824 systemd[1]: Reached target sshd-keygen.target.
Oct 10 22:42:18 np0005480824 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Oct 10 22:42:18 np0005480824 systemd[1]: Reached target User and Group Name Lookups.
Oct 10 22:42:18 np0005480824 systemd[1]: Starting User Login Management...
Oct 10 22:42:18 np0005480824 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Oct 10 22:42:18 np0005480824 systemd[1]: Finished Restore /run/initramfs on shutdown.
Oct 10 22:42:18 np0005480824 chronyd[794]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct 10 22:42:18 np0005480824 chronyd[794]: Loaded 0 symmetric keys
Oct 10 22:42:18 np0005480824 chronyd[794]: Using right/UTC timezone to obtain leap second data
Oct 10 22:42:18 np0005480824 chronyd[794]: Loaded seccomp filter (level 2)
Oct 10 22:42:18 np0005480824 systemd[1]: Started NTP client/server.
Oct 10 22:42:18 np0005480824 systemd-logind[782]: Watching system buttons on /dev/input/event0 (Power Button)
Oct 10 22:42:18 np0005480824 systemd-logind[782]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct 10 22:42:18 np0005480824 systemd-logind[782]: New seat seat0.
Oct 10 22:42:18 np0005480824 systemd[1]: Started User Login Management.
Oct 10 22:42:18 np0005480824 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Oct 10 22:42:18 np0005480824 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Oct 10 22:42:18 np0005480824 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Oct 10 22:42:18 np0005480824 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Oct 10 22:42:18 np0005480824 kernel: Console: switching to colour dummy device 80x25
Oct 10 22:42:18 np0005480824 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Oct 10 22:42:18 np0005480824 kernel: [drm] features: -context_init
Oct 10 22:42:18 np0005480824 kernel: [drm] number of scanouts: 1
Oct 10 22:42:18 np0005480824 kernel: [drm] number of cap sets: 0
Oct 10 22:42:18 np0005480824 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Oct 10 22:42:18 np0005480824 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Oct 10 22:42:18 np0005480824 kernel: Console: switching to colour frame buffer device 128x48
Oct 10 22:42:18 np0005480824 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Oct 10 22:42:18 np0005480824 kernel: kvm_amd: TSC scaling supported
Oct 10 22:42:18 np0005480824 kernel: kvm_amd: Nested Virtualization enabled
Oct 10 22:42:18 np0005480824 kernel: kvm_amd: Nested Paging enabled
Oct 10 22:42:18 np0005480824 kernel: kvm_amd: LBR virtualization supported
Oct 10 22:42:18 np0005480824 iptables.init[776]: iptables: Applying firewall rules: [  OK  ]
Oct 10 22:42:18 np0005480824 systemd[1]: Finished IPv4 firewall with iptables.
Oct 10 22:42:19 np0005480824 cloud-init[837]: Cloud-init v. 24.4-7.el9 running 'init-local' at Sat, 11 Oct 2025 02:42:19 +0000. Up 7.02 seconds.
Oct 10 22:42:19 np0005480824 systemd[1]: run-cloud\x2dinit-tmp-tmp2o75_771.mount: Deactivated successfully.
Oct 10 22:42:19 np0005480824 systemd[1]: Starting Hostname Service...
Oct 10 22:42:19 np0005480824 systemd[1]: Started Hostname Service.
Oct 10 22:42:19 np0005480824 systemd-hostnamed[852]: Hostname set to <np0005480824.novalocal> (static)
Oct 10 22:42:19 np0005480824 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Oct 10 22:42:19 np0005480824 systemd[1]: Reached target Preparation for Network.
Oct 10 22:42:19 np0005480824 systemd[1]: Starting Network Manager...
Oct 10 22:42:19 np0005480824 NetworkManager[856]: <info>  [1760150539.9518] NetworkManager (version 1.54.1-1.el9) is starting... (boot:37201e2b-068e-4723-93b2-e25c9bbc9f0f)
Oct 10 22:42:19 np0005480824 NetworkManager[856]: <info>  [1760150539.9526] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct 10 22:42:19 np0005480824 NetworkManager[856]: <info>  [1760150539.9715] manager[0x55fd406dc080]: monitoring kernel firmware directory '/lib/firmware'.
Oct 10 22:42:19 np0005480824 NetworkManager[856]: <info>  [1760150539.9763] hostname: hostname: using hostnamed
Oct 10 22:42:19 np0005480824 NetworkManager[856]: <info>  [1760150539.9765] hostname: static hostname changed from (none) to "np0005480824.novalocal"
Oct 10 22:42:19 np0005480824 NetworkManager[856]: <info>  [1760150539.9773] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct 10 22:42:19 np0005480824 NetworkManager[856]: <info>  [1760150539.9913] manager[0x55fd406dc080]: rfkill: Wi-Fi hardware radio set enabled
Oct 10 22:42:19 np0005480824 NetworkManager[856]: <info>  [1760150539.9914] manager[0x55fd406dc080]: rfkill: WWAN hardware radio set enabled
Oct 10 22:42:19 np0005480824 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Oct 10 22:42:20 np0005480824 NetworkManager[856]: <info>  [1760150540.0001] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct 10 22:42:20 np0005480824 NetworkManager[856]: <info>  [1760150540.0002] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct 10 22:42:20 np0005480824 NetworkManager[856]: <info>  [1760150540.0003] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct 10 22:42:20 np0005480824 NetworkManager[856]: <info>  [1760150540.0003] manager: Networking is enabled by state file
Oct 10 22:42:20 np0005480824 NetworkManager[856]: <info>  [1760150540.0005] settings: Loaded settings plugin: keyfile (internal)
Oct 10 22:42:20 np0005480824 NetworkManager[856]: <info>  [1760150540.0039] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct 10 22:42:20 np0005480824 NetworkManager[856]: <info>  [1760150540.0069] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct 10 22:42:20 np0005480824 NetworkManager[856]: <info>  [1760150540.0096] dhcp: init: Using DHCP client 'internal'
Oct 10 22:42:20 np0005480824 NetworkManager[856]: <info>  [1760150540.0099] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct 10 22:42:20 np0005480824 NetworkManager[856]: <info>  [1760150540.0114] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 10 22:42:20 np0005480824 NetworkManager[856]: <info>  [1760150540.0129] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 10 22:42:20 np0005480824 NetworkManager[856]: <info>  [1760150540.0138] device (lo): Activation: starting connection 'lo' (0021d04c-82f9-4da3-814c-50b07db9d2ee)
Oct 10 22:42:20 np0005480824 NetworkManager[856]: <info>  [1760150540.0149] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct 10 22:42:20 np0005480824 NetworkManager[856]: <info>  [1760150540.0154] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 10 22:42:20 np0005480824 NetworkManager[856]: <info>  [1760150540.0182] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct 10 22:42:20 np0005480824 NetworkManager[856]: <info>  [1760150540.0187] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 10 22:42:20 np0005480824 NetworkManager[856]: <info>  [1760150540.0189] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 10 22:42:20 np0005480824 NetworkManager[856]: <info>  [1760150540.0191] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 10 22:42:20 np0005480824 NetworkManager[856]: <info>  [1760150540.0193] device (eth0): carrier: link connected
Oct 10 22:42:20 np0005480824 NetworkManager[856]: <info>  [1760150540.0198] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 10 22:42:20 np0005480824 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 10 22:42:20 np0005480824 NetworkManager[856]: <info>  [1760150540.0206] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct 10 22:42:20 np0005480824 NetworkManager[856]: <info>  [1760150540.0216] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 10 22:42:20 np0005480824 NetworkManager[856]: <info>  [1760150540.0222] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 10 22:42:20 np0005480824 NetworkManager[856]: <info>  [1760150540.0223] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 10 22:42:20 np0005480824 NetworkManager[856]: <info>  [1760150540.0227] manager: NetworkManager state is now CONNECTING
Oct 10 22:42:20 np0005480824 NetworkManager[856]: <info>  [1760150540.0228] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 10 22:42:20 np0005480824 NetworkManager[856]: <info>  [1760150540.0239] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 10 22:42:20 np0005480824 NetworkManager[856]: <info>  [1760150540.0243] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 10 22:42:20 np0005480824 systemd[1]: Started Network Manager.
Oct 10 22:42:20 np0005480824 systemd[1]: Reached target Network.
Oct 10 22:42:20 np0005480824 NetworkManager[856]: <info>  [1760150540.0289] dhcp4 (eth0): state changed new lease, address=38.102.83.68
Oct 10 22:42:20 np0005480824 NetworkManager[856]: <info>  [1760150540.0298] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct 10 22:42:20 np0005480824 NetworkManager[856]: <info>  [1760150540.0320] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 10 22:42:20 np0005480824 systemd[1]: Starting Network Manager Wait Online...
Oct 10 22:42:20 np0005480824 systemd[1]: Starting GSSAPI Proxy Daemon...
Oct 10 22:42:20 np0005480824 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 10 22:42:20 np0005480824 NetworkManager[856]: <info>  [1760150540.0541] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 10 22:42:20 np0005480824 NetworkManager[856]: <info>  [1760150540.0542] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 10 22:42:20 np0005480824 NetworkManager[856]: <info>  [1760150540.0548] device (lo): Activation: successful, device activated.
Oct 10 22:42:20 np0005480824 NetworkManager[856]: <info>  [1760150540.0557] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 10 22:42:20 np0005480824 NetworkManager[856]: <info>  [1760150540.0558] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 10 22:42:20 np0005480824 NetworkManager[856]: <info>  [1760150540.0562] manager: NetworkManager state is now CONNECTED_SITE
Oct 10 22:42:20 np0005480824 NetworkManager[856]: <info>  [1760150540.0565] device (eth0): Activation: successful, device activated.
Oct 10 22:42:20 np0005480824 NetworkManager[856]: <info>  [1760150540.0573] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct 10 22:42:20 np0005480824 NetworkManager[856]: <info>  [1760150540.0577] manager: startup complete
Oct 10 22:42:20 np0005480824 systemd[1]: Finished Network Manager Wait Online.
Oct 10 22:42:20 np0005480824 systemd[1]: Started GSSAPI Proxy Daemon.
Oct 10 22:42:20 np0005480824 systemd[1]: Starting Cloud-init: Network Stage...
Oct 10 22:42:20 np0005480824 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct 10 22:42:20 np0005480824 systemd[1]: Reached target NFS client services.
Oct 10 22:42:20 np0005480824 systemd[1]: Reached target Preparation for Remote File Systems.
Oct 10 22:42:20 np0005480824 systemd[1]: Reached target Remote File Systems.
Oct 10 22:42:20 np0005480824 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 10 22:42:20 np0005480824 cloud-init[920]: Cloud-init v. 24.4-7.el9 running 'init' at Sat, 11 Oct 2025 02:42:20 +0000. Up 8.09 seconds.
Oct 10 22:42:20 np0005480824 cloud-init[920]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Oct 10 22:42:20 np0005480824 cloud-init[920]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct 10 22:42:20 np0005480824 cloud-init[920]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Oct 10 22:42:20 np0005480824 cloud-init[920]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct 10 22:42:20 np0005480824 cloud-init[920]: ci-info: |  eth0  | True |         38.102.83.68         | 255.255.255.0 | global | fa:16:3e:00:79:66 |
Oct 10 22:42:20 np0005480824 cloud-init[920]: ci-info: |  eth0  | True | fe80::f816:3eff:fe00:7966/64 |       .       |  link  | fa:16:3e:00:79:66 |
Oct 10 22:42:20 np0005480824 cloud-init[920]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Oct 10 22:42:20 np0005480824 cloud-init[920]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Oct 10 22:42:20 np0005480824 cloud-init[920]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct 10 22:42:20 np0005480824 cloud-init[920]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Oct 10 22:42:20 np0005480824 cloud-init[920]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct 10 22:42:20 np0005480824 cloud-init[920]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Oct 10 22:42:20 np0005480824 cloud-init[920]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct 10 22:42:20 np0005480824 cloud-init[920]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Oct 10 22:42:20 np0005480824 cloud-init[920]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Oct 10 22:42:20 np0005480824 cloud-init[920]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Oct 10 22:42:20 np0005480824 cloud-init[920]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct 10 22:42:20 np0005480824 cloud-init[920]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Oct 10 22:42:20 np0005480824 cloud-init[920]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 10 22:42:20 np0005480824 cloud-init[920]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Oct 10 22:42:20 np0005480824 cloud-init[920]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 10 22:42:20 np0005480824 cloud-init[920]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Oct 10 22:42:20 np0005480824 cloud-init[920]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Oct 10 22:42:20 np0005480824 cloud-init[920]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 10 22:42:21 np0005480824 cloud-init[920]: Generating public/private rsa key pair.
Oct 10 22:42:21 np0005480824 cloud-init[920]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Oct 10 22:42:21 np0005480824 cloud-init[920]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Oct 10 22:42:21 np0005480824 cloud-init[920]: The key fingerprint is:
Oct 10 22:42:21 np0005480824 cloud-init[920]: SHA256:u9rn5EBNw47tekeG5PnFUh5Z6er4SlbuUmCn5xMIW3I root@np0005480824.novalocal
Oct 10 22:42:21 np0005480824 cloud-init[920]: The key's randomart image is:
Oct 10 22:42:21 np0005480824 cloud-init[920]: +---[RSA 3072]----+
Oct 10 22:42:21 np0005480824 cloud-init[920]: |                .|
Oct 10 22:42:21 np0005480824 cloud-init[920]: |         .     ..|
Oct 10 22:42:21 np0005480824 cloud-init[920]: |          +   .o |
Oct 10 22:42:21 np0005480824 cloud-init[920]: |         O.E .+. |
Oct 10 22:42:21 np0005480824 cloud-init[920]: |        So@o==.. |
Oct 10 22:42:21 np0005480824 cloud-init[920]: |       . ++o*=+  |
Oct 10 22:42:21 np0005480824 cloud-init[920]: |        o o**+.  |
Oct 10 22:42:21 np0005480824 cloud-init[920]: |       . *=o++   |
Oct 10 22:42:21 np0005480824 cloud-init[920]: |      ..++oo+o.  |
Oct 10 22:42:21 np0005480824 cloud-init[920]: +----[SHA256]-----+
Oct 10 22:42:21 np0005480824 cloud-init[920]: Generating public/private ecdsa key pair.
Oct 10 22:42:21 np0005480824 cloud-init[920]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Oct 10 22:42:21 np0005480824 cloud-init[920]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Oct 10 22:42:21 np0005480824 cloud-init[920]: The key fingerprint is:
Oct 10 22:42:21 np0005480824 cloud-init[920]: SHA256:lRO68rLHgazxvmSBg/d32yEfbOwjw1t/UoM185O5eNY root@np0005480824.novalocal
Oct 10 22:42:21 np0005480824 cloud-init[920]: The key's randomart image is:
Oct 10 22:42:21 np0005480824 cloud-init[920]: +---[ECDSA 256]---+
Oct 10 22:42:21 np0005480824 cloud-init[920]: |          .      |
Oct 10 22:42:21 np0005480824 cloud-init[920]: |         . o     |
Oct 10 22:42:21 np0005480824 cloud-init[920]: |        . +      |
Oct 10 22:42:21 np0005480824 cloud-init[920]: |   . .   o .   + |
Oct 10 22:42:21 np0005480824 cloud-init[920]: |  . +.o.S     o *|
Oct 10 22:42:21 np0005480824 cloud-init[920]: |   ..oo+.  o . =o|
Oct 10 22:42:21 np0005480824 cloud-init[920]: |     +=.ooo B ..=|
Oct 10 22:42:21 np0005480824 cloud-init[920]: |    .o.+o.+O.=.+E|
Oct 10 22:42:21 np0005480824 cloud-init[920]: |     .+o  o++.+o |
Oct 10 22:42:21 np0005480824 cloud-init[920]: +----[SHA256]-----+
Oct 10 22:42:21 np0005480824 cloud-init[920]: Generating public/private ed25519 key pair.
Oct 10 22:42:21 np0005480824 cloud-init[920]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Oct 10 22:42:21 np0005480824 cloud-init[920]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Oct 10 22:42:21 np0005480824 cloud-init[920]: The key fingerprint is:
Oct 10 22:42:21 np0005480824 cloud-init[920]: SHA256:6ldOyiwpPVjGT1DeqXhNrz6iHqETStj+8TQKcNO/BW4 root@np0005480824.novalocal
Oct 10 22:42:21 np0005480824 cloud-init[920]: The key's randomart image is:
Oct 10 22:42:21 np0005480824 cloud-init[920]: +--[ED25519 256]--+
Oct 10 22:42:21 np0005480824 cloud-init[920]: |                 |
Oct 10 22:42:21 np0005480824 cloud-init[920]: |        .        |
Oct 10 22:42:21 np0005480824 cloud-init[920]: |       o . .     |
Oct 10 22:42:21 np0005480824 cloud-init[920]: | o .  . . +      |
Oct 10 22:42:21 np0005480824 cloud-init[920]: |o = o.ooS+ .     |
Oct 10 22:42:21 np0005480824 cloud-init[920]: | = o ==++ + .    |
Oct 10 22:42:21 np0005480824 cloud-init[920]: |  + +=EB.= .     |
Oct 10 22:42:21 np0005480824 cloud-init[920]: |   ooO==B +      |
Oct 10 22:42:21 np0005480824 cloud-init[920]: |    oo*= o..     |
Oct 10 22:42:21 np0005480824 cloud-init[920]: +----[SHA256]-----+
Oct 10 22:42:21 np0005480824 sm-notify[1003]: Version 2.5.4 starting
Oct 10 22:42:21 np0005480824 systemd[1]: Finished Cloud-init: Network Stage.
Oct 10 22:42:21 np0005480824 systemd[1]: Reached target Cloud-config availability.
Oct 10 22:42:21 np0005480824 systemd[1]: Reached target Network is Online.
Oct 10 22:42:21 np0005480824 systemd[1]: Starting Cloud-init: Config Stage...
Oct 10 22:42:21 np0005480824 systemd[1]: Starting Notify NFS peers of a restart...
Oct 10 22:42:21 np0005480824 systemd[1]: Starting System Logging Service...
Oct 10 22:42:21 np0005480824 systemd[1]: Starting OpenSSH server daemon...
Oct 10 22:42:21 np0005480824 systemd[1]: Starting Permit User Sessions...
Oct 10 22:42:21 np0005480824 systemd[1]: Started Notify NFS peers of a restart.
Oct 10 22:42:21 np0005480824 systemd[1]: Finished Permit User Sessions.
Oct 10 22:42:21 np0005480824 systemd[1]: Started Command Scheduler.
Oct 10 22:42:21 np0005480824 systemd[1]: Started Getty on tty1.
Oct 10 22:42:21 np0005480824 systemd[1]: Started Serial Getty on ttyS0.
Oct 10 22:42:21 np0005480824 systemd[1]: Reached target Login Prompts.
Oct 10 22:42:21 np0005480824 systemd[1]: Started OpenSSH server daemon.
Oct 10 22:42:22 np0005480824 rsyslogd[1004]: [origin software="rsyslogd" swVersion="8.2506.0-2.el9" x-pid="1004" x-info="https://www.rsyslog.com"] start
Oct 10 22:42:22 np0005480824 rsyslogd[1004]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Oct 10 22:42:22 np0005480824 systemd[1]: Started System Logging Service.
Oct 10 22:42:22 np0005480824 systemd[1]: Reached target Multi-User System.
Oct 10 22:42:22 np0005480824 systemd[1]: Starting Record Runlevel Change in UTMP...
Oct 10 22:42:22 np0005480824 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Oct 10 22:42:22 np0005480824 systemd[1]: Finished Record Runlevel Change in UTMP.
Oct 10 22:42:22 np0005480824 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 10 22:42:22 np0005480824 cloud-init[1030]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Sat, 11 Oct 2025 02:42:22 +0000. Up 9.93 seconds.
Oct 10 22:42:22 np0005480824 systemd[1]: Finished Cloud-init: Config Stage.
Oct 10 22:42:22 np0005480824 systemd[1]: Starting Cloud-init: Final Stage...
Oct 10 22:42:22 np0005480824 cloud-init[1038]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Sat, 11 Oct 2025 02:42:22 +0000. Up 10.36 seconds.
Oct 10 22:42:22 np0005480824 cloud-init[1040]: #############################################################
Oct 10 22:42:22 np0005480824 cloud-init[1041]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Oct 10 22:42:22 np0005480824 cloud-init[1043]: 256 SHA256:lRO68rLHgazxvmSBg/d32yEfbOwjw1t/UoM185O5eNY root@np0005480824.novalocal (ECDSA)
Oct 10 22:42:22 np0005480824 cloud-init[1045]: 256 SHA256:6ldOyiwpPVjGT1DeqXhNrz6iHqETStj+8TQKcNO/BW4 root@np0005480824.novalocal (ED25519)
Oct 10 22:42:22 np0005480824 cloud-init[1047]: 3072 SHA256:u9rn5EBNw47tekeG5PnFUh5Z6er4SlbuUmCn5xMIW3I root@np0005480824.novalocal (RSA)
Oct 10 22:42:22 np0005480824 cloud-init[1048]: -----END SSH HOST KEY FINGERPRINTS-----
Oct 10 22:42:22 np0005480824 cloud-init[1049]: #############################################################
Oct 10 22:42:22 np0005480824 cloud-init[1038]: Cloud-init v. 24.4-7.el9 finished at Sat, 11 Oct 2025 02:42:22 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 10.60 seconds
Oct 10 22:42:22 np0005480824 systemd[1]: Finished Cloud-init: Final Stage.
Oct 10 22:42:22 np0005480824 systemd[1]: Reached target Cloud-init target.
Oct 10 22:42:22 np0005480824 systemd[1]: Startup finished in 1.787s (kernel) + 2.932s (initrd) + 5.961s (userspace) = 10.681s.
Oct 10 22:42:24 np0005480824 chronyd[794]: Selected source 206.108.0.133 (2.centos.pool.ntp.org)
Oct 10 22:42:24 np0005480824 chronyd[794]: System clock TAI offset set to 37 seconds
Oct 10 22:42:29 np0005480824 irqbalance[777]: Cannot change IRQ 25 affinity: Operation not permitted
Oct 10 22:42:29 np0005480824 irqbalance[777]: IRQ 25 affinity is now unmanaged
Oct 10 22:42:29 np0005480824 irqbalance[777]: Cannot change IRQ 31 affinity: Operation not permitted
Oct 10 22:42:29 np0005480824 irqbalance[777]: IRQ 31 affinity is now unmanaged
Oct 10 22:42:29 np0005480824 irqbalance[777]: Cannot change IRQ 28 affinity: Operation not permitted
Oct 10 22:42:29 np0005480824 irqbalance[777]: IRQ 28 affinity is now unmanaged
Oct 10 22:42:29 np0005480824 irqbalance[777]: Cannot change IRQ 32 affinity: Operation not permitted
Oct 10 22:42:29 np0005480824 irqbalance[777]: IRQ 32 affinity is now unmanaged
Oct 10 22:42:29 np0005480824 irqbalance[777]: Cannot change IRQ 30 affinity: Operation not permitted
Oct 10 22:42:29 np0005480824 irqbalance[777]: IRQ 30 affinity is now unmanaged
Oct 10 22:42:29 np0005480824 irqbalance[777]: Cannot change IRQ 29 affinity: Operation not permitted
Oct 10 22:42:29 np0005480824 irqbalance[777]: IRQ 29 affinity is now unmanaged
Oct 10 22:42:30 np0005480824 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 10 22:42:35 np0005480824 systemd[1]: Created slice User Slice of UID 1000.
Oct 10 22:42:35 np0005480824 systemd[1]: Starting User Runtime Directory /run/user/1000...
Oct 10 22:42:35 np0005480824 systemd-logind[782]: New session 1 of user zuul.
Oct 10 22:42:35 np0005480824 systemd[1]: Finished User Runtime Directory /run/user/1000.
Oct 10 22:42:35 np0005480824 systemd[1]: Starting User Manager for UID 1000...
Oct 10 22:42:36 np0005480824 systemd[1057]: Queued start job for default target Main User Target.
Oct 10 22:42:36 np0005480824 systemd[1057]: Created slice User Application Slice.
Oct 10 22:42:36 np0005480824 systemd[1057]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 10 22:42:36 np0005480824 systemd[1057]: Started Daily Cleanup of User's Temporary Directories.
Oct 10 22:42:36 np0005480824 systemd[1057]: Reached target Paths.
Oct 10 22:42:36 np0005480824 systemd[1057]: Reached target Timers.
Oct 10 22:42:36 np0005480824 systemd[1057]: Starting D-Bus User Message Bus Socket...
Oct 10 22:42:36 np0005480824 systemd[1057]: Starting Create User's Volatile Files and Directories...
Oct 10 22:42:36 np0005480824 systemd[1057]: Listening on D-Bus User Message Bus Socket.
Oct 10 22:42:36 np0005480824 systemd[1057]: Reached target Sockets.
Oct 10 22:42:36 np0005480824 systemd[1057]: Finished Create User's Volatile Files and Directories.
Oct 10 22:42:36 np0005480824 systemd[1057]: Reached target Basic System.
Oct 10 22:42:36 np0005480824 systemd[1057]: Reached target Main User Target.
Oct 10 22:42:36 np0005480824 systemd[1057]: Startup finished in 152ms.
Oct 10 22:42:36 np0005480824 systemd[1]: Started User Manager for UID 1000.
Oct 10 22:42:36 np0005480824 systemd[1]: Started Session 1 of User zuul.
Oct 10 22:42:36 np0005480824 python3[1139]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 22:42:39 np0005480824 python3[1167]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 22:42:45 np0005480824 python3[1225]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 22:42:46 np0005480824 python3[1265]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Oct 10 22:42:48 np0005480824 python3[1291]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCnVCWl0H6Lqbx262iv3wXho3BN6Z5IoZwZlZxjwe+t48UE9qSdplC/08Wwo1IL+p1DP1VBeEtr/CaTibW6ZdJkJSA2qdCUCei4jkFxPu7Bt+YyrQR3CL6P0LEDi+XnCCYMipEEX0VQa+efZ1qDL02L2yvzDG1r+nPU9roxToomtdS+RZwOMJH7i3m7eIaDp6eNTz8DIVNikawwCMM7ocEmrgLRtIhHtn1+OyjuMz2fHLCBe/rfmZqNTu1NNIxPWO+05X065wvzP7pNYqkxTnrz3vNMGb1TWQSnJhD1uUHotKabN8nXTgJ9K60JCJuMWQpAlVu3XAPeFvZ9znzKiDye+c/nvH1nL508j20B3s/qX3/EPM+qiCAqseHPVJNAd4q5JvE8mXXBy3s1bwt50usAXnGVLWuMNnJtWb6RvBV83+jRn5/7dgmYvBD6Esmh6usZ91T6/1VjOvFFIIk2ULVPAcMsFtBBDeA7pb1YgV/mX9E39iagONGFu21I8rH7+c0= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 22:42:48 np0005480824 python3[1315]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 22:42:49 np0005480824 python3[1414]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 22:42:49 np0005480824 python3[1485]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760150568.7078645-207-163193105558955/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=c92ba3a92f374d2c8614455dee1c3c8f_id_rsa follow=False checksum=aed2ab122f05d751234b4f25ca9859dc0f60460e backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 22:42:50 np0005480824 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 10 22:42:50 np0005480824 python3[1608]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 22:42:50 np0005480824 python3[1681]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760150569.754404-240-115085984542014/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=c92ba3a92f374d2c8614455dee1c3c8f_id_rsa.pub follow=False checksum=8245b5ac4f7c20a96bfbd218c4f50f7090e2ff3d backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 22:42:51 np0005480824 python3[1729]: ansible-ping Invoked with data=pong
Oct 10 22:42:52 np0005480824 python3[1753]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 22:42:55 np0005480824 python3[1811]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Oct 10 22:42:56 np0005480824 python3[1843]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 22:42:56 np0005480824 python3[1867]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 22:42:56 np0005480824 python3[1891]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 22:42:57 np0005480824 python3[1915]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 22:42:57 np0005480824 python3[1939]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 22:42:57 np0005480824 python3[1963]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 22:42:59 np0005480824 python3[1989]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 22:43:00 np0005480824 python3[2067]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 22:43:00 np0005480824 python3[2140]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1760150579.5805163-21-187150855249049/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 22:43:01 np0005480824 python3[2188]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 22:43:01 np0005480824 python3[2212]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 22:43:01 np0005480824 python3[2236]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 22:43:02 np0005480824 python3[2260]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 22:43:02 np0005480824 python3[2284]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 22:43:02 np0005480824 python3[2308]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 22:43:02 np0005480824 python3[2332]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 22:43:03 np0005480824 python3[2356]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 22:43:03 np0005480824 python3[2380]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 22:43:03 np0005480824 python3[2404]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 22:43:03 np0005480824 python3[2428]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 22:43:04 np0005480824 python3[2452]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 22:43:04 np0005480824 python3[2476]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 22:43:04 np0005480824 python3[2500]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 22:43:05 np0005480824 python3[2524]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 22:43:05 np0005480824 python3[2548]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 22:43:05 np0005480824 python3[2572]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 22:43:06 np0005480824 python3[2596]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 22:43:06 np0005480824 python3[2620]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 22:43:06 np0005480824 python3[2644]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 22:43:07 np0005480824 python3[2668]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 22:43:07 np0005480824 python3[2692]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 22:43:07 np0005480824 python3[2716]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 22:43:07 np0005480824 python3[2740]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 22:43:08 np0005480824 python3[2764]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 22:43:08 np0005480824 python3[2788]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 22:43:11 np0005480824 python3[2814]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct 10 22:43:11 np0005480824 systemd[1]: Starting Time & Date Service...
Oct 10 22:43:11 np0005480824 systemd[1]: Started Time & Date Service.
Oct 10 22:43:11 np0005480824 systemd-timedated[2816]: Changed time zone to 'UTC' (UTC).
Oct 10 22:43:11 np0005480824 python3[2845]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 22:43:12 np0005480824 python3[2921]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 22:43:12 np0005480824 python3[2992]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1760150592.0368805-153-123831272235727/source _original_basename=tmpgz3mpgv0 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 22:43:13 np0005480824 python3[3092]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 22:43:13 np0005480824 python3[3163]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1760150593.0949028-183-260851151964145/source _original_basename=tmpfgawd8da follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 22:43:14 np0005480824 python3[3265]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 22:43:15 np0005480824 python3[3338]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1760150594.2512677-231-120036990184639/source _original_basename=tmpowyt2hgx follow=False checksum=557d5f43a34c3451c3da50c84aa3111109587fd7 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 22:43:15 np0005480824 python3[3386]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 22:43:15 np0005480824 python3[3412]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 22:43:16 np0005480824 python3[3492]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 22:43:16 np0005480824 python3[3565]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1760150596.1385067-273-202854228736920/source _original_basename=tmpsqy2n6zd follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 22:43:17 np0005480824 python3[3616]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ec2-ffbe-1888-a192-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 22:43:18 np0005480824 python3[3644]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-1888-a192-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Oct 10 22:43:19 np0005480824 python3[3673]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 22:43:39 np0005480824 python3[3699]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 22:43:41 np0005480824 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 10 22:44:15 np0005480824 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct 10 22:44:15 np0005480824 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Oct 10 22:44:15 np0005480824 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Oct 10 22:44:15 np0005480824 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Oct 10 22:44:15 np0005480824 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Oct 10 22:44:15 np0005480824 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Oct 10 22:44:15 np0005480824 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Oct 10 22:44:15 np0005480824 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Oct 10 22:44:15 np0005480824 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Oct 10 22:44:15 np0005480824 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Oct 10 22:44:15 np0005480824 NetworkManager[856]: <info>  [1760150655.3977] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct 10 22:44:15 np0005480824 systemd-udevd[3702]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 22:44:15 np0005480824 NetworkManager[856]: <info>  [1760150655.4145] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 10 22:44:15 np0005480824 NetworkManager[856]: <info>  [1760150655.4174] settings: (eth1): created default wired connection 'Wired connection 1'
Oct 10 22:44:15 np0005480824 NetworkManager[856]: <info>  [1760150655.4178] device (eth1): carrier: link connected
Oct 10 22:44:15 np0005480824 NetworkManager[856]: <info>  [1760150655.4181] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct 10 22:44:15 np0005480824 NetworkManager[856]: <info>  [1760150655.4187] policy: auto-activating connection 'Wired connection 1' (a34c4c6a-453d-333f-83bf-fb28f9879097)
Oct 10 22:44:15 np0005480824 NetworkManager[856]: <info>  [1760150655.4192] device (eth1): Activation: starting connection 'Wired connection 1' (a34c4c6a-453d-333f-83bf-fb28f9879097)
Oct 10 22:44:15 np0005480824 NetworkManager[856]: <info>  [1760150655.4193] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 10 22:44:15 np0005480824 NetworkManager[856]: <info>  [1760150655.4197] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 10 22:44:15 np0005480824 NetworkManager[856]: <info>  [1760150655.4203] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 10 22:44:15 np0005480824 NetworkManager[856]: <info>  [1760150655.4209] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct 10 22:44:16 np0005480824 python3[3729]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ec2-ffbe-77c4-0d58-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 22:44:26 np0005480824 python3[3809]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 22:44:26 np0005480824 python3[3882]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760150666.0392659-102-21671268311642/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=0587e90f7bddcacd526af6bbac81b02ec4ed4c46 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 22:44:27 np0005480824 python3[3932]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 10 22:44:27 np0005480824 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Oct 10 22:44:27 np0005480824 systemd[1]: Stopped Network Manager Wait Online.
Oct 10 22:44:27 np0005480824 systemd[1]: Stopping Network Manager Wait Online...
Oct 10 22:44:27 np0005480824 NetworkManager[856]: <info>  [1760150667.7190] caught SIGTERM, shutting down normally.
Oct 10 22:44:27 np0005480824 systemd[1]: Stopping Network Manager...
Oct 10 22:44:27 np0005480824 NetworkManager[856]: <info>  [1760150667.7195] dhcp4 (eth0): canceled DHCP transaction
Oct 10 22:44:27 np0005480824 NetworkManager[856]: <info>  [1760150667.7196] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 10 22:44:27 np0005480824 NetworkManager[856]: <info>  [1760150667.7196] dhcp4 (eth0): state changed no lease
Oct 10 22:44:27 np0005480824 NetworkManager[856]: <info>  [1760150667.7197] manager: NetworkManager state is now CONNECTING
Oct 10 22:44:27 np0005480824 NetworkManager[856]: <info>  [1760150667.7256] dhcp4 (eth1): canceled DHCP transaction
Oct 10 22:44:27 np0005480824 NetworkManager[856]: <info>  [1760150667.7256] dhcp4 (eth1): state changed no lease
Oct 10 22:44:27 np0005480824 NetworkManager[856]: <info>  [1760150667.7299] exiting (success)
Oct 10 22:44:27 np0005480824 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 10 22:44:27 np0005480824 systemd[1]: NetworkManager.service: Deactivated successfully.
Oct 10 22:44:27 np0005480824 systemd[1]: Stopped Network Manager.
Oct 10 22:44:27 np0005480824 systemd[1]: Starting Network Manager...
Oct 10 22:44:27 np0005480824 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 10 22:44:27 np0005480824 NetworkManager[3939]: <info>  [1760150667.7991] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:37201e2b-068e-4723-93b2-e25c9bbc9f0f)
Oct 10 22:44:27 np0005480824 NetworkManager[3939]: <info>  [1760150667.7993] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct 10 22:44:27 np0005480824 NetworkManager[3939]: <info>  [1760150667.8084] manager[0x5568c1e23070]: monitoring kernel firmware directory '/lib/firmware'.
Oct 10 22:44:27 np0005480824 systemd[1]: Starting Hostname Service...
Oct 10 22:44:27 np0005480824 systemd[1]: Started Hostname Service.
Oct 10 22:44:27 np0005480824 NetworkManager[3939]: <info>  [1760150667.9314] hostname: hostname: using hostnamed
Oct 10 22:44:27 np0005480824 NetworkManager[3939]: <info>  [1760150667.9315] hostname: static hostname changed from (none) to "np0005480824.novalocal"
Oct 10 22:44:27 np0005480824 NetworkManager[3939]: <info>  [1760150667.9323] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct 10 22:44:27 np0005480824 NetworkManager[3939]: <info>  [1760150667.9331] manager[0x5568c1e23070]: rfkill: Wi-Fi hardware radio set enabled
Oct 10 22:44:27 np0005480824 NetworkManager[3939]: <info>  [1760150667.9331] manager[0x5568c1e23070]: rfkill: WWAN hardware radio set enabled
Oct 10 22:44:27 np0005480824 NetworkManager[3939]: <info>  [1760150667.9379] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct 10 22:44:27 np0005480824 NetworkManager[3939]: <info>  [1760150667.9380] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct 10 22:44:27 np0005480824 NetworkManager[3939]: <info>  [1760150667.9381] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct 10 22:44:27 np0005480824 NetworkManager[3939]: <info>  [1760150667.9381] manager: Networking is enabled by state file
Oct 10 22:44:27 np0005480824 NetworkManager[3939]: <info>  [1760150667.9385] settings: Loaded settings plugin: keyfile (internal)
Oct 10 22:44:27 np0005480824 NetworkManager[3939]: <info>  [1760150667.9391] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct 10 22:44:27 np0005480824 NetworkManager[3939]: <info>  [1760150667.9436] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct 10 22:44:27 np0005480824 NetworkManager[3939]: <info>  [1760150667.9452] dhcp: init: Using DHCP client 'internal'
Oct 10 22:44:27 np0005480824 NetworkManager[3939]: <info>  [1760150667.9457] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct 10 22:44:27 np0005480824 NetworkManager[3939]: <info>  [1760150667.9465] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 10 22:44:27 np0005480824 NetworkManager[3939]: <info>  [1760150667.9475] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 10 22:44:27 np0005480824 NetworkManager[3939]: <info>  [1760150667.9489] device (lo): Activation: starting connection 'lo' (0021d04c-82f9-4da3-814c-50b07db9d2ee)
Oct 10 22:44:27 np0005480824 NetworkManager[3939]: <info>  [1760150667.9499] device (eth0): carrier: link connected
Oct 10 22:44:27 np0005480824 NetworkManager[3939]: <info>  [1760150667.9507] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct 10 22:44:27 np0005480824 NetworkManager[3939]: <info>  [1760150667.9514] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Oct 10 22:44:27 np0005480824 NetworkManager[3939]: <info>  [1760150667.9514] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct 10 22:44:27 np0005480824 NetworkManager[3939]: <info>  [1760150667.9526] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct 10 22:44:27 np0005480824 NetworkManager[3939]: <info>  [1760150667.9537] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 10 22:44:27 np0005480824 NetworkManager[3939]: <info>  [1760150667.9548] device (eth1): carrier: link connected
Oct 10 22:44:27 np0005480824 NetworkManager[3939]: <info>  [1760150667.9557] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct 10 22:44:27 np0005480824 NetworkManager[3939]: <info>  [1760150667.9564] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (a34c4c6a-453d-333f-83bf-fb28f9879097) (indicated)
Oct 10 22:44:27 np0005480824 NetworkManager[3939]: <info>  [1760150667.9565] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct 10 22:44:27 np0005480824 NetworkManager[3939]: <info>  [1760150667.9574] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct 10 22:44:27 np0005480824 NetworkManager[3939]: <info>  [1760150667.9584] device (eth1): Activation: starting connection 'Wired connection 1' (a34c4c6a-453d-333f-83bf-fb28f9879097)
Oct 10 22:44:27 np0005480824 systemd[1]: Started Network Manager.
Oct 10 22:44:27 np0005480824 NetworkManager[3939]: <info>  [1760150667.9592] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct 10 22:44:27 np0005480824 NetworkManager[3939]: <info>  [1760150667.9604] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 10 22:44:27 np0005480824 NetworkManager[3939]: <info>  [1760150667.9608] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 10 22:44:27 np0005480824 NetworkManager[3939]: <info>  [1760150667.9611] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 10 22:44:27 np0005480824 NetworkManager[3939]: <info>  [1760150667.9614] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct 10 22:44:27 np0005480824 NetworkManager[3939]: <info>  [1760150667.9618] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct 10 22:44:27 np0005480824 NetworkManager[3939]: <info>  [1760150667.9621] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct 10 22:44:27 np0005480824 NetworkManager[3939]: <info>  [1760150667.9625] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct 10 22:44:27 np0005480824 NetworkManager[3939]: <info>  [1760150667.9629] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 10 22:44:27 np0005480824 NetworkManager[3939]: <info>  [1760150667.9641] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct 10 22:44:27 np0005480824 NetworkManager[3939]: <info>  [1760150667.9645] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 10 22:44:27 np0005480824 NetworkManager[3939]: <info>  [1760150667.9658] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct 10 22:44:27 np0005480824 NetworkManager[3939]: <info>  [1760150667.9662] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct 10 22:44:27 np0005480824 NetworkManager[3939]: <info>  [1760150667.9684] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 10 22:44:27 np0005480824 NetworkManager[3939]: <info>  [1760150667.9692] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 10 22:44:27 np0005480824 NetworkManager[3939]: <info>  [1760150667.9700] device (lo): Activation: successful, device activated.
Oct 10 22:44:27 np0005480824 systemd[1]: Starting Network Manager Wait Online...
Oct 10 22:44:27 np0005480824 NetworkManager[3939]: <info>  [1760150667.9709] dhcp4 (eth0): state changed new lease, address=38.102.83.68
Oct 10 22:44:27 np0005480824 NetworkManager[3939]: <info>  [1760150667.9718] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct 10 22:44:27 np0005480824 NetworkManager[3939]: <info>  [1760150667.9799] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct 10 22:44:27 np0005480824 NetworkManager[3939]: <info>  [1760150667.9828] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct 10 22:44:27 np0005480824 NetworkManager[3939]: <info>  [1760150667.9831] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct 10 22:44:27 np0005480824 NetworkManager[3939]: <info>  [1760150667.9835] manager: NetworkManager state is now CONNECTED_SITE
Oct 10 22:44:27 np0005480824 NetworkManager[3939]: <info>  [1760150667.9843] device (eth0): Activation: successful, device activated.
Oct 10 22:44:27 np0005480824 NetworkManager[3939]: <info>  [1760150667.9852] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct 10 22:44:28 np0005480824 python3[4016]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ec2-ffbe-77c4-0d58-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 22:44:38 np0005480824 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 10 22:44:54 np0005480824 systemd[1057]: Starting Mark boot as successful...
Oct 10 22:44:54 np0005480824 systemd[1057]: Finished Mark boot as successful.
Oct 10 22:44:57 np0005480824 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 10 22:45:13 np0005480824 NetworkManager[3939]: <info>  [1760150713.2876] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct 10 22:45:13 np0005480824 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 10 22:45:13 np0005480824 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 10 22:45:13 np0005480824 NetworkManager[3939]: <info>  [1760150713.3177] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct 10 22:45:13 np0005480824 NetworkManager[3939]: <info>  [1760150713.3181] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct 10 22:45:13 np0005480824 NetworkManager[3939]: <info>  [1760150713.3193] device (eth1): Activation: successful, device activated.
Oct 10 22:45:13 np0005480824 NetworkManager[3939]: <info>  [1760150713.3205] manager: startup complete
Oct 10 22:45:13 np0005480824 NetworkManager[3939]: <info>  [1760150713.3208] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Oct 10 22:45:13 np0005480824 NetworkManager[3939]: <warn>  [1760150713.3216] device (eth1): Activation: failed for connection 'Wired connection 1'
Oct 10 22:45:13 np0005480824 NetworkManager[3939]: <info>  [1760150713.3228] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Oct 10 22:45:13 np0005480824 systemd[1]: Finished Network Manager Wait Online.
Oct 10 22:45:13 np0005480824 NetworkManager[3939]: <info>  [1760150713.3357] dhcp4 (eth1): canceled DHCP transaction
Oct 10 22:45:13 np0005480824 NetworkManager[3939]: <info>  [1760150713.3357] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct 10 22:45:13 np0005480824 NetworkManager[3939]: <info>  [1760150713.3358] dhcp4 (eth1): state changed no lease
Oct 10 22:45:13 np0005480824 NetworkManager[3939]: <info>  [1760150713.3380] policy: auto-activating connection 'ci-private-network' (0e05ba61-7743-5a06-87ca-88ddda9b0d31)
Oct 10 22:45:13 np0005480824 NetworkManager[3939]: <info>  [1760150713.3386] device (eth1): Activation: starting connection 'ci-private-network' (0e05ba61-7743-5a06-87ca-88ddda9b0d31)
Oct 10 22:45:13 np0005480824 NetworkManager[3939]: <info>  [1760150713.3388] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 10 22:45:13 np0005480824 NetworkManager[3939]: <info>  [1760150713.3391] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 10 22:45:13 np0005480824 NetworkManager[3939]: <info>  [1760150713.3402] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 10 22:45:13 np0005480824 NetworkManager[3939]: <info>  [1760150713.3414] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 10 22:45:13 np0005480824 NetworkManager[3939]: <info>  [1760150713.3457] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 10 22:45:13 np0005480824 NetworkManager[3939]: <info>  [1760150713.3459] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 10 22:45:13 np0005480824 NetworkManager[3939]: <info>  [1760150713.3468] device (eth1): Activation: successful, device activated.
Oct 10 22:45:23 np0005480824 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 10 22:45:27 np0005480824 python3[4122]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 22:45:28 np0005480824 python3[4195]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760150727.676546-267-154807508792448/source _original_basename=tmp304vlwku follow=False checksum=2553e50585cc5f9697aa07838c8832962da24a3c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 22:46:28 np0005480824 systemd-logind[782]: Session 1 logged out. Waiting for processes to exit.
Oct 10 22:47:54 np0005480824 systemd[1057]: Created slice User Background Tasks Slice.
Oct 10 22:47:54 np0005480824 systemd[1057]: Starting Cleanup of User's Temporary Files and Directories...
Oct 10 22:47:54 np0005480824 systemd[1057]: Finished Cleanup of User's Temporary Files and Directories.
Oct 10 22:51:03 np0005480824 systemd-logind[782]: New session 3 of user zuul.
Oct 10 22:51:03 np0005480824 systemd[1]: Started Session 3 of User zuul.
Oct 10 22:51:03 np0005480824 python3[4255]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-7cb7-e266-000000001ce8-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 22:51:04 np0005480824 python3[4283]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 22:51:04 np0005480824 python3[4309]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 22:51:04 np0005480824 python3[4336]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 22:51:04 np0005480824 python3[4362]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 22:51:05 np0005480824 python3[4388]: ansible-ansible.builtin.lineinfile Invoked with path=/etc/systemd/system.conf regexp=^#DefaultIOAccounting=no line=DefaultIOAccounting=yes state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 22:51:05 np0005480824 python3[4388]: ansible-ansible.builtin.lineinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Oct 10 22:51:06 np0005480824 python3[4414]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 10 22:51:06 np0005480824 systemd[1]: Reloading.
Oct 10 22:51:06 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 22:51:07 np0005480824 python3[4470]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Oct 10 22:51:08 np0005480824 python3[4496]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 22:51:08 np0005480824 python3[4524]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 22:51:08 np0005480824 python3[4552]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 22:51:08 np0005480824 python3[4580]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 22:51:09 np0005480824 python3[4607]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-7cb7-e266-000000001cee-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 22:51:10 np0005480824 python3[4637]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 22:51:12 np0005480824 systemd[1]: session-3.scope: Deactivated successfully.
Oct 10 22:51:12 np0005480824 systemd[1]: session-3.scope: Consumed 3.565s CPU time.
Oct 10 22:51:12 np0005480824 systemd-logind[782]: Session 3 logged out. Waiting for processes to exit.
Oct 10 22:51:12 np0005480824 systemd-logind[782]: Removed session 3.
Oct 10 22:51:13 np0005480824 systemd-logind[782]: New session 4 of user zuul.
Oct 10 22:51:13 np0005480824 systemd[1]: Started Session 4 of User zuul.
Oct 10 22:51:14 np0005480824 python3[4671]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 10 22:51:28 np0005480824 kernel: SELinux:  Converting 363 SID table entries...
Oct 10 22:51:28 np0005480824 kernel: SELinux:  policy capability network_peer_controls=1
Oct 10 22:51:28 np0005480824 kernel: SELinux:  policy capability open_perms=1
Oct 10 22:51:28 np0005480824 kernel: SELinux:  policy capability extended_socket_class=1
Oct 10 22:51:28 np0005480824 kernel: SELinux:  policy capability always_check_network=0
Oct 10 22:51:28 np0005480824 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 10 22:51:28 np0005480824 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 10 22:51:28 np0005480824 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 10 22:51:37 np0005480824 kernel: SELinux:  Converting 363 SID table entries...
Oct 10 22:51:37 np0005480824 kernel: SELinux:  policy capability network_peer_controls=1
Oct 10 22:51:37 np0005480824 kernel: SELinux:  policy capability open_perms=1
Oct 10 22:51:37 np0005480824 kernel: SELinux:  policy capability extended_socket_class=1
Oct 10 22:51:37 np0005480824 kernel: SELinux:  policy capability always_check_network=0
Oct 10 22:51:37 np0005480824 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 10 22:51:37 np0005480824 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 10 22:51:37 np0005480824 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 10 22:51:45 np0005480824 kernel: SELinux:  Converting 363 SID table entries...
Oct 10 22:51:45 np0005480824 kernel: SELinux:  policy capability network_peer_controls=1
Oct 10 22:51:45 np0005480824 kernel: SELinux:  policy capability open_perms=1
Oct 10 22:51:45 np0005480824 kernel: SELinux:  policy capability extended_socket_class=1
Oct 10 22:51:45 np0005480824 kernel: SELinux:  policy capability always_check_network=0
Oct 10 22:51:45 np0005480824 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 10 22:51:45 np0005480824 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 10 22:51:45 np0005480824 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 10 22:51:47 np0005480824 setsebool[4738]: The virt_use_nfs policy boolean was changed to 1 by root
Oct 10 22:51:47 np0005480824 setsebool[4738]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Oct 10 22:51:57 np0005480824 kernel: SELinux:  Converting 366 SID table entries...
Oct 10 22:51:57 np0005480824 kernel: SELinux:  policy capability network_peer_controls=1
Oct 10 22:51:57 np0005480824 kernel: SELinux:  policy capability open_perms=1
Oct 10 22:51:57 np0005480824 kernel: SELinux:  policy capability extended_socket_class=1
Oct 10 22:51:57 np0005480824 kernel: SELinux:  policy capability always_check_network=0
Oct 10 22:51:57 np0005480824 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 10 22:51:57 np0005480824 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 10 22:51:57 np0005480824 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 10 22:52:16 np0005480824 dbus-broker-launch[770]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Oct 10 22:52:16 np0005480824 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 10 22:52:16 np0005480824 systemd[1]: Starting man-db-cache-update.service...
Oct 10 22:52:16 np0005480824 systemd[1]: Reloading.
Oct 10 22:52:16 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 22:52:16 np0005480824 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 10 22:52:17 np0005480824 systemd[1]: Starting PackageKit Daemon...
Oct 10 22:52:17 np0005480824 systemd[1]: Starting Authorization Manager...
Oct 10 22:52:17 np0005480824 polkitd[6205]: Started polkitd version 0.117
Oct 10 22:52:17 np0005480824 systemd[1]: Started Authorization Manager.
Oct 10 22:52:17 np0005480824 systemd[1]: Started PackageKit Daemon.
Oct 10 22:52:19 np0005480824 python3[8264]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-7563-fc3b-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 22:52:20 np0005480824 kernel: evm: overlay not supported
Oct 10 22:52:20 np0005480824 systemd[1057]: Starting D-Bus User Message Bus...
Oct 10 22:52:20 np0005480824 dbus-broker-launch[9030]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Oct 10 22:52:20 np0005480824 dbus-broker-launch[9030]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Oct 10 22:52:20 np0005480824 systemd[1057]: Started D-Bus User Message Bus.
Oct 10 22:52:20 np0005480824 dbus-broker-lau[9030]: Ready
Oct 10 22:52:20 np0005480824 systemd[1057]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Oct 10 22:52:20 np0005480824 systemd[1057]: Created slice Slice /user.
Oct 10 22:52:20 np0005480824 systemd[1057]: podman-8910.scope: unit configures an IP firewall, but not running as root.
Oct 10 22:52:20 np0005480824 systemd[1057]: (This warning is only shown for the first unit using IP firewalling.)
Oct 10 22:52:20 np0005480824 systemd[1057]: Started podman-8910.scope.
Oct 10 22:52:20 np0005480824 systemd[1057]: Started podman-pause-31202ca3.scope.
Oct 10 22:52:21 np0005480824 python3[9475]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.102.83.13:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.102.83.13:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 22:52:21 np0005480824 systemd[1]: session-4.scope: Deactivated successfully.
Oct 10 22:52:21 np0005480824 systemd[1]: session-4.scope: Consumed 59.302s CPU time.
Oct 10 22:52:21 np0005480824 systemd-logind[782]: Session 4 logged out. Waiting for processes to exit.
Oct 10 22:52:21 np0005480824 systemd-logind[782]: Removed session 4.
Oct 10 22:52:45 np0005480824 systemd-logind[782]: New session 5 of user zuul.
Oct 10 22:52:45 np0005480824 systemd[1]: Started Session 5 of User zuul.
Oct 10 22:52:45 np0005480824 python3[18484]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOGELbCGdrJwaAHzlQbwjRTFJQiZz6ZvniVkdsRNCBt0G/G41zB/ARu/iUkBixuCwUue5JQRsmeeN4+2bMsvmp4= zuul@np0005480823.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 22:52:46 np0005480824 python3[18642]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOGELbCGdrJwaAHzlQbwjRTFJQiZz6ZvniVkdsRNCBt0G/G41zB/ARu/iUkBixuCwUue5JQRsmeeN4+2bMsvmp4= zuul@np0005480823.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 22:52:47 np0005480824 python3[18941]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005480824.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Oct 10 22:52:47 np0005480824 python3[19126]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOGELbCGdrJwaAHzlQbwjRTFJQiZz6ZvniVkdsRNCBt0G/G41zB/ARu/iUkBixuCwUue5JQRsmeeN4+2bMsvmp4= zuul@np0005480823.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 22:52:48 np0005480824 python3[19365]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 22:52:48 np0005480824 python3[19590]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1760151167.8137426-135-125794476837543/source _original_basename=tmpg0kvvagq follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 22:52:49 np0005480824 python3[19862]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Oct 10 22:52:49 np0005480824 systemd[1]: Starting Hostname Service...
Oct 10 22:52:49 np0005480824 systemd[1]: Started Hostname Service.
Oct 10 22:52:49 np0005480824 systemd-hostnamed[19940]: Changed pretty hostname to 'compute-0'
Oct 10 22:52:49 np0005480824 systemd-hostnamed[19940]: Hostname set to <compute-0> (static)
Oct 10 22:52:49 np0005480824 NetworkManager[3939]: <info>  [1760151169.6386] hostname: static hostname changed from "np0005480824.novalocal" to "compute-0"
Oct 10 22:52:49 np0005480824 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 10 22:52:49 np0005480824 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 10 22:52:50 np0005480824 systemd[1]: session-5.scope: Deactivated successfully.
Oct 10 22:52:50 np0005480824 systemd[1]: session-5.scope: Consumed 2.702s CPU time.
Oct 10 22:52:50 np0005480824 systemd-logind[782]: Session 5 logged out. Waiting for processes to exit.
Oct 10 22:52:50 np0005480824 systemd-logind[782]: Removed session 5.
Oct 10 22:52:59 np0005480824 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 10 22:53:09 np0005480824 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 10 22:53:09 np0005480824 systemd[1]: Finished man-db-cache-update.service.
Oct 10 22:53:09 np0005480824 systemd[1]: man-db-cache-update.service: Consumed 1min 4.794s CPU time.
Oct 10 22:53:09 np0005480824 systemd[1]: run-rfe0e809553b44f8180b5be7cebcf480e.service: Deactivated successfully.
Oct 10 22:53:09 np0005480824 irqbalance[777]: Cannot change IRQ 27 affinity: Operation not permitted
Oct 10 22:53:09 np0005480824 irqbalance[777]: IRQ 27 affinity is now unmanaged
Oct 10 22:53:19 np0005480824 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 10 22:56:23 np0005480824 systemd-logind[782]: New session 6 of user zuul.
Oct 10 22:56:23 np0005480824 systemd[1]: Started Session 6 of User zuul.
Oct 10 22:56:24 np0005480824 python3[26615]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 22:56:25 np0005480824 python3[26731]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 22:56:26 np0005480824 python3[26804]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760151385.4572206-30177-161314899116281/source mode=0755 _original_basename=delorean.repo follow=False checksum=f3fabc627b4c59ab3d10213193ffdeeed080e354 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 22:56:26 np0005480824 python3[26830]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 22:56:26 np0005480824 python3[26903]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760151385.4572206-30177-161314899116281/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 22:56:27 np0005480824 python3[26929]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 22:56:27 np0005480824 python3[27002]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760151385.4572206-30177-161314899116281/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 22:56:27 np0005480824 python3[27028]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 22:56:28 np0005480824 python3[27101]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760151385.4572206-30177-161314899116281/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 22:56:28 np0005480824 python3[27127]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 22:56:28 np0005480824 python3[27200]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760151385.4572206-30177-161314899116281/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 22:56:29 np0005480824 python3[27226]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 22:56:29 np0005480824 python3[27299]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760151385.4572206-30177-161314899116281/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 22:56:29 np0005480824 python3[27325]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 22:56:30 np0005480824 python3[27398]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760151385.4572206-30177-161314899116281/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=5e44558a2b46929660a6b5bfc8824fb4521580a4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 22:56:41 np0005480824 python3[27456]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 22:57:22 np0005480824 systemd[1]: Starting Cleanup of Temporary Directories...
Oct 10 22:57:22 np0005480824 systemd[1]: packagekit.service: Deactivated successfully.
Oct 10 22:57:22 np0005480824 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Oct 10 22:57:22 np0005480824 systemd[1]: Finished Cleanup of Temporary Directories.
Oct 10 22:57:22 np0005480824 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Oct 10 23:01:40 np0005480824 systemd[1]: session-6.scope: Deactivated successfully.
Oct 10 23:01:40 np0005480824 systemd[1]: session-6.scope: Consumed 5.420s CPU time.
Oct 10 23:01:40 np0005480824 systemd-logind[782]: Session 6 logged out. Waiting for processes to exit.
Oct 10 23:01:40 np0005480824 systemd-logind[782]: Removed session 6.
Oct 10 23:05:54 np0005480824 systemd[1]: Starting dnf makecache...
Oct 10 23:05:55 np0005480824 dnf[27481]: Failed determining last makecache time.
Oct 10 23:05:55 np0005480824 dnf[27481]: delorean-openstack-barbican-42b4c41831408a8e323 423 kB/s |  13 kB     00:00
Oct 10 23:05:55 np0005480824 dnf[27481]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 2.9 MB/s |  65 kB     00:00
Oct 10 23:05:55 np0005480824 dnf[27481]: delorean-openstack-cinder-1c00d6490d88e436f26ef 1.8 MB/s |  32 kB     00:00
Oct 10 23:05:55 np0005480824 dnf[27481]: delorean-python-stevedore-c4acc5639fd2329372142 6.4 MB/s | 131 kB     00:00
Oct 10 23:05:55 np0005480824 dnf[27481]: delorean-python-observabilityclient-2f31846d73c 1.3 MB/s |  25 kB     00:00
Oct 10 23:05:55 np0005480824 dnf[27481]: delorean-diskimage-builder-7d793e664cf892461c55  12 MB/s | 356 kB     00:00
Oct 10 23:05:55 np0005480824 dnf[27481]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 1.2 MB/s |  42 kB     00:00
Oct 10 23:05:55 np0005480824 dnf[27481]: delorean-python-designate-tests-tempest-347fdbc 914 kB/s |  18 kB     00:00
Oct 10 23:05:55 np0005480824 dnf[27481]: delorean-openstack-glance-1fd12c29b339f30fe823e 940 kB/s |  18 kB     00:00
Oct 10 23:05:55 np0005480824 dnf[27481]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 1.4 MB/s |  29 kB     00:00
Oct 10 23:05:55 np0005480824 dnf[27481]: delorean-openstack-manila-3c01b7181572c95dac462 1.2 MB/s |  25 kB     00:00
Oct 10 23:05:55 np0005480824 dnf[27481]: delorean-python-vmware-nsxlib-458234972d1428ac9 6.9 MB/s | 154 kB     00:00
Oct 10 23:05:55 np0005480824 dnf[27481]: delorean-openstack-octavia-ba397f07a7331190208c 1.3 MB/s |  26 kB     00:00
Oct 10 23:05:55 np0005480824 dnf[27481]: delorean-openstack-watcher-c014f81a8647287f6dcc 779 kB/s |  16 kB     00:00
Oct 10 23:05:55 np0005480824 dnf[27481]: delorean-python-tcib-ff70d03bf5bc0bb6f3540a02d3 415 kB/s | 7.4 kB     00:00
Oct 10 23:05:55 np0005480824 dnf[27481]: delorean-puppet-ceph-91ba84bc002c318a7f961d084e 7.0 MB/s | 144 kB     00:00
Oct 10 23:05:55 np0005480824 dnf[27481]: delorean-openstack-swift-dc98a8463506ac520c469a 141 kB/s |  14 kB     00:00
Oct 10 23:05:56 np0005480824 dnf[27481]: delorean-python-tempestconf-8515371b7cceebd4282 947 kB/s |  53 kB     00:00
Oct 10 23:05:56 np0005480824 dnf[27481]: delorean-openstack-heat-ui-013accbfd179753bc3f0 4.5 MB/s |  96 kB     00:00
Oct 10 23:05:56 np0005480824 dnf[27481]: CentOS Stream 9 - BaseOS                         22 kB/s | 6.7 kB     00:00
Oct 10 23:05:56 np0005480824 dnf[27481]: CentOS Stream 9 - AppStream                      28 kB/s | 6.8 kB     00:00
Oct 10 23:05:57 np0005480824 dnf[27481]: CentOS Stream 9 - CRB                            70 kB/s | 6.6 kB     00:00
Oct 10 23:05:57 np0005480824 dnf[27481]: CentOS Stream 9 - Extras packages                71 kB/s | 8.0 kB     00:00
Oct 10 23:05:57 np0005480824 dnf[27481]: dlrn-antelope-testing                            29 MB/s | 1.1 MB     00:00
Oct 10 23:05:57 np0005480824 dnf[27481]: dlrn-antelope-build-deps                         16 MB/s | 461 kB     00:00
Oct 10 23:05:57 np0005480824 dnf[27481]: centos9-rabbitmq                                 11 MB/s | 123 kB     00:00
Oct 10 23:05:57 np0005480824 dnf[27481]: centos9-storage                                  21 MB/s | 415 kB     00:00
Oct 10 23:05:58 np0005480824 dnf[27481]: centos9-opstools                                4.7 MB/s |  51 kB     00:00
Oct 10 23:05:58 np0005480824 dnf[27481]: NFV SIG OpenvSwitch                              25 MB/s | 449 kB     00:00
Oct 10 23:05:58 np0005480824 dnf[27481]: repo-setup-centos-appstream                     130 MB/s |  25 MB     00:00
Oct 10 23:06:04 np0005480824 dnf[27481]: repo-setup-centos-baseos                         77 MB/s | 8.8 MB     00:00
Oct 10 23:06:05 np0005480824 dnf[27481]: repo-setup-centos-highavailability               31 MB/s | 744 kB     00:00
Oct 10 23:06:05 np0005480824 dnf[27481]: repo-setup-centos-powertools                     91 MB/s | 7.2 MB     00:00
Oct 10 23:06:08 np0005480824 dnf[27481]: Extra Packages for Enterprise Linux 9 - x86_64   18 MB/s |  20 MB     00:01
Oct 10 23:06:20 np0005480824 dnf[27481]: Metadata cache created.
Oct 10 23:06:20 np0005480824 systemd[1]: dnf-makecache.service: Deactivated successfully.
Oct 10 23:06:20 np0005480824 systemd[1]: Finished dnf makecache.
Oct 10 23:06:20 np0005480824 systemd[1]: dnf-makecache.service: Consumed 23.417s CPU time.
Oct 10 23:07:28 np0005480824 systemd-logind[782]: New session 7 of user zuul.
Oct 10 23:07:28 np0005480824 systemd[1]: Started Session 7 of User zuul.
Oct 10 23:07:29 np0005480824 python3.9[27739]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 23:07:30 np0005480824 python3.9[27920]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:07:39 np0005480824 systemd[1]: session-7.scope: Deactivated successfully.
Oct 10 23:07:39 np0005480824 systemd[1]: session-7.scope: Consumed 8.430s CPU time.
Oct 10 23:07:39 np0005480824 systemd-logind[782]: Session 7 logged out. Waiting for processes to exit.
Oct 10 23:07:39 np0005480824 systemd-logind[782]: Removed session 7.
Oct 10 23:07:54 np0005480824 systemd-logind[782]: New session 8 of user zuul.
Oct 10 23:07:54 np0005480824 systemd[1]: Started Session 8 of User zuul.
Oct 10 23:07:55 np0005480824 python3.9[28132]: ansible-ansible.legacy.ping Invoked with data=pong
Oct 10 23:07:56 np0005480824 python3.9[28306]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 23:07:57 np0005480824 python3.9[28458]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:07:58 np0005480824 python3.9[28611]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 23:07:59 np0005480824 python3.9[28763]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:08:00 np0005480824 python3.9[28915]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:08:01 np0005480824 python3.9[29038]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1760152080.1258519-73-169133135348671/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:08:02 np0005480824 python3.9[29190]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 23:08:03 np0005480824 python3.9[29346]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:08:04 np0005480824 python3.9[29496]: ansible-ansible.builtin.service_facts Invoked
Oct 10 23:08:08 np0005480824 python3.9[29751]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:08:09 np0005480824 python3.9[29901]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 23:08:09 np0005480824 irqbalance[777]: Cannot change IRQ 26 affinity: Operation not permitted
Oct 10 23:08:09 np0005480824 irqbalance[777]: IRQ 26 affinity is now unmanaged
Oct 10 23:08:10 np0005480824 python3.9[30055]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 23:08:11 np0005480824 python3.9[30213]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 23:08:12 np0005480824 python3.9[30297]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 23:08:56 np0005480824 systemd[1]: Reloading.
Oct 10 23:08:56 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:08:56 np0005480824 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Oct 10 23:08:57 np0005480824 systemd[1]: Reloading.
Oct 10 23:08:57 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:08:57 np0005480824 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Oct 10 23:08:57 np0005480824 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Oct 10 23:08:57 np0005480824 systemd[1]: Reloading.
Oct 10 23:08:57 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:08:58 np0005480824 systemd[1]: Listening on LVM2 poll daemon socket.
Oct 10 23:08:58 np0005480824 dbus-broker-launch[738]: Noticed file-system modification, trigger reload.
Oct 10 23:08:58 np0005480824 dbus-broker-launch[738]: Noticed file-system modification, trigger reload.
Oct 10 23:08:58 np0005480824 dbus-broker-launch[738]: Noticed file-system modification, trigger reload.
Oct 10 23:10:04 np0005480824 kernel: SELinux:  Converting 2715 SID table entries...
Oct 10 23:10:04 np0005480824 kernel: SELinux:  policy capability network_peer_controls=1
Oct 10 23:10:04 np0005480824 kernel: SELinux:  policy capability open_perms=1
Oct 10 23:10:04 np0005480824 kernel: SELinux:  policy capability extended_socket_class=1
Oct 10 23:10:04 np0005480824 kernel: SELinux:  policy capability always_check_network=0
Oct 10 23:10:04 np0005480824 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 10 23:10:04 np0005480824 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 10 23:10:04 np0005480824 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 10 23:10:04 np0005480824 dbus-broker-launch[770]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Oct 10 23:10:04 np0005480824 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 10 23:10:04 np0005480824 systemd[1]: Starting man-db-cache-update.service...
Oct 10 23:10:04 np0005480824 systemd[1]: Reloading.
Oct 10 23:10:04 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:10:04 np0005480824 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 10 23:10:05 np0005480824 systemd[1]: Starting PackageKit Daemon...
Oct 10 23:10:05 np0005480824 systemd[1]: Started PackageKit Daemon.
Oct 10 23:10:06 np0005480824 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 10 23:10:06 np0005480824 systemd[1]: Finished man-db-cache-update.service.
Oct 10 23:10:06 np0005480824 systemd[1]: man-db-cache-update.service: Consumed 1.822s CPU time.
Oct 10 23:10:06 np0005480824 systemd[1]: run-rb717dd871caa4276979745be50b1bcf9.service: Deactivated successfully.
Oct 10 23:10:06 np0005480824 python3.9[31799]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:10:08 np0005480824 python3.9[32081]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Oct 10 23:10:10 np0005480824 python3.9[32233]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Oct 10 23:10:12 np0005480824 python3.9[32387]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:10:13 np0005480824 python3.9[32539]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Oct 10 23:10:15 np0005480824 python3.9[32691]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:10:16 np0005480824 python3.9[32843]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:10:19 np0005480824 python3.9[32966]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760152215.5012538-227-276454687795560/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=014b1ef6a5f22a009f711144013b78e6d26cdf65 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:10:20 np0005480824 python3.9[33119]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Oct 10 23:10:21 np0005480824 python3.9[33272]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 10 23:10:21 np0005480824 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 10 23:10:23 np0005480824 python3.9[33431]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 10 23:10:23 np0005480824 python3.9[33591]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Oct 10 23:10:24 np0005480824 python3.9[33744]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 10 23:10:25 np0005480824 python3.9[33902]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Oct 10 23:10:26 np0005480824 python3.9[34054]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 23:10:28 np0005480824 python3.9[34207]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:10:29 np0005480824 python3.9[34359]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:10:30 np0005480824 python3.9[34482]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760152228.915811-322-57725189356539/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:10:31 np0005480824 python3.9[34634]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 10 23:10:31 np0005480824 systemd[1]: Starting Load Kernel Modules...
Oct 10 23:10:31 np0005480824 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 10 23:10:31 np0005480824 kernel: Bridge firewalling registered
Oct 10 23:10:31 np0005480824 systemd-modules-load[34638]: Inserted module 'br_netfilter'
Oct 10 23:10:31 np0005480824 systemd[1]: Finished Load Kernel Modules.
Oct 10 23:10:32 np0005480824 python3.9[34794]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:10:32 np0005480824 python3.9[34917]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760152231.5896776-345-237271191978718/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:10:33 np0005480824 python3.9[35069]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 23:10:37 np0005480824 dbus-broker-launch[738]: Noticed file-system modification, trigger reload.
Oct 10 23:10:37 np0005480824 dbus-broker-launch[738]: Noticed file-system modification, trigger reload.
Oct 10 23:10:37 np0005480824 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 10 23:10:37 np0005480824 systemd[1]: Starting man-db-cache-update.service...
Oct 10 23:10:37 np0005480824 systemd[1]: Reloading.
Oct 10 23:10:37 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:10:38 np0005480824 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 10 23:10:39 np0005480824 python3.9[36359]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 23:10:40 np0005480824 python3.9[37490]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Oct 10 23:10:40 np0005480824 python3.9[38313]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 23:10:41 np0005480824 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 10 23:10:41 np0005480824 systemd[1]: Finished man-db-cache-update.service.
Oct 10 23:10:41 np0005480824 systemd[1]: man-db-cache-update.service: Consumed 4.496s CPU time.
Oct 10 23:10:41 np0005480824 systemd[1]: run-rc2b90477afbb488985eb5313e38d6fb4.service: Deactivated successfully.
Oct 10 23:10:41 np0005480824 python3.9[39232]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:10:41 np0005480824 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct 10 23:10:42 np0005480824 systemd[1]: Started Dynamic System Tuning Daemon.
Oct 10 23:10:43 np0005480824 python3.9[39614]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 23:10:44 np0005480824 systemd[1]: Stopping Dynamic System Tuning Daemon...
Oct 10 23:10:44 np0005480824 systemd[1]: tuned.service: Deactivated successfully.
Oct 10 23:10:44 np0005480824 systemd[1]: Stopped Dynamic System Tuning Daemon.
Oct 10 23:10:44 np0005480824 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct 10 23:10:44 np0005480824 systemd[1]: Started Dynamic System Tuning Daemon.
Oct 10 23:10:45 np0005480824 python3.9[39776]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Oct 10 23:10:47 np0005480824 python3.9[39928]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 23:10:48 np0005480824 systemd[1]: Reloading.
Oct 10 23:10:48 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:10:49 np0005480824 python3.9[40117]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 23:10:49 np0005480824 systemd[1]: Reloading.
Oct 10 23:10:49 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:10:50 np0005480824 python3.9[40306]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:10:51 np0005480824 python3.9[40459]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:10:51 np0005480824 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Oct 10 23:10:52 np0005480824 python3.9[40612]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:10:54 np0005480824 python3.9[40774]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:10:55 np0005480824 python3.9[40927]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 10 23:10:55 np0005480824 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 10 23:10:55 np0005480824 systemd[1]: Stopped Apply Kernel Variables.
Oct 10 23:10:55 np0005480824 systemd[1]: Stopping Apply Kernel Variables...
Oct 10 23:10:55 np0005480824 systemd[1]: Starting Apply Kernel Variables...
Oct 10 23:10:55 np0005480824 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct 10 23:10:55 np0005480824 systemd[1]: Finished Apply Kernel Variables.
Oct 10 23:10:56 np0005480824 systemd[1]: session-8.scope: Deactivated successfully.
Oct 10 23:10:56 np0005480824 systemd[1]: session-8.scope: Consumed 2min 18.459s CPU time.
Oct 10 23:10:56 np0005480824 systemd-logind[782]: Session 8 logged out. Waiting for processes to exit.
Oct 10 23:10:56 np0005480824 systemd-logind[782]: Removed session 8.
Oct 10 23:11:01 np0005480824 systemd-logind[782]: New session 9 of user zuul.
Oct 10 23:11:01 np0005480824 systemd[1]: Started Session 9 of User zuul.
Oct 10 23:11:02 np0005480824 python3.9[41110]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 23:11:03 np0005480824 python3.9[41266]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Oct 10 23:11:04 np0005480824 python3.9[41419]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 10 23:11:05 np0005480824 python3.9[41577]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 10 23:11:06 np0005480824 python3.9[41737]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 23:11:07 np0005480824 python3.9[41821]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 10 23:11:11 np0005480824 python3.9[41984]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 23:11:22 np0005480824 kernel: SELinux:  Converting 2725 SID table entries...
Oct 10 23:11:22 np0005480824 kernel: SELinux:  policy capability network_peer_controls=1
Oct 10 23:11:22 np0005480824 kernel: SELinux:  policy capability open_perms=1
Oct 10 23:11:22 np0005480824 kernel: SELinux:  policy capability extended_socket_class=1
Oct 10 23:11:22 np0005480824 kernel: SELinux:  policy capability always_check_network=0
Oct 10 23:11:22 np0005480824 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 10 23:11:22 np0005480824 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 10 23:11:22 np0005480824 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 10 23:11:22 np0005480824 dbus-broker-launch[770]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Oct 10 23:11:22 np0005480824 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Oct 10 23:11:23 np0005480824 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 10 23:11:24 np0005480824 systemd[1]: Starting man-db-cache-update.service...
Oct 10 23:11:24 np0005480824 systemd[1]: Reloading.
Oct 10 23:11:24 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:11:24 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:11:24 np0005480824 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 10 23:11:24 np0005480824 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 10 23:11:24 np0005480824 systemd[1]: Finished man-db-cache-update.service.
Oct 10 23:11:24 np0005480824 systemd[1]: run-rad4309a0b7c64f9b9e15ad441d5aa8ef.service: Deactivated successfully.
Oct 10 23:11:26 np0005480824 python3.9[43085]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 10 23:11:26 np0005480824 systemd[1]: Reloading.
Oct 10 23:11:26 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:11:26 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:11:26 np0005480824 systemd[1]: Starting Open vSwitch Database Unit...
Oct 10 23:11:26 np0005480824 chown[43127]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Oct 10 23:11:26 np0005480824 ovs-ctl[43132]: /etc/openvswitch/conf.db does not exist ... (warning).
Oct 10 23:11:26 np0005480824 ovs-ctl[43132]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Oct 10 23:11:26 np0005480824 ovs-ctl[43132]: Starting ovsdb-server [  OK  ]
Oct 10 23:11:26 np0005480824 ovs-vsctl[43181]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Oct 10 23:11:26 np0005480824 ovs-vsctl[43200]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"14b06507-d00b-4e27-a47d-46a5c2644635\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Oct 10 23:11:26 np0005480824 ovs-ctl[43132]: Configuring Open vSwitch system IDs [  OK  ]
Oct 10 23:11:26 np0005480824 ovs-ctl[43132]: Enabling remote OVSDB managers [  OK  ]
Oct 10 23:11:26 np0005480824 systemd[1]: Started Open vSwitch Database Unit.
Oct 10 23:11:26 np0005480824 ovs-vsctl[43206]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Oct 10 23:11:26 np0005480824 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Oct 10 23:11:26 np0005480824 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Oct 10 23:11:26 np0005480824 systemd[1]: Starting Open vSwitch Forwarding Unit...
Oct 10 23:11:27 np0005480824 kernel: openvswitch: Open vSwitch switching datapath
Oct 10 23:11:27 np0005480824 ovs-ctl[43251]: Inserting openvswitch module [  OK  ]
Oct 10 23:11:27 np0005480824 ovs-ctl[43220]: Starting ovs-vswitchd [  OK  ]
Oct 10 23:11:27 np0005480824 ovs-vsctl[43272]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Oct 10 23:11:27 np0005480824 ovs-ctl[43220]: Enabling remote OVSDB managers [  OK  ]
Oct 10 23:11:27 np0005480824 systemd[1]: Started Open vSwitch Forwarding Unit.
Oct 10 23:11:27 np0005480824 systemd[1]: Starting Open vSwitch...
Oct 10 23:11:27 np0005480824 systemd[1]: Finished Open vSwitch.
Oct 10 23:11:28 np0005480824 python3.9[43424]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 23:11:29 np0005480824 python3.9[43576]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Oct 10 23:11:30 np0005480824 kernel: SELinux:  Converting 2739 SID table entries...
Oct 10 23:11:30 np0005480824 kernel: SELinux:  policy capability network_peer_controls=1
Oct 10 23:11:30 np0005480824 kernel: SELinux:  policy capability open_perms=1
Oct 10 23:11:30 np0005480824 kernel: SELinux:  policy capability extended_socket_class=1
Oct 10 23:11:30 np0005480824 kernel: SELinux:  policy capability always_check_network=0
Oct 10 23:11:30 np0005480824 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 10 23:11:30 np0005480824 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 10 23:11:30 np0005480824 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 10 23:11:32 np0005480824 python3.9[43731]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 23:11:32 np0005480824 dbus-broker-launch[770]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Oct 10 23:11:33 np0005480824 python3.9[43889]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 23:11:35 np0005480824 python3.9[44042]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:11:36 np0005480824 python3.9[44329]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 10 23:11:37 np0005480824 python3.9[44479]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 23:11:38 np0005480824 python3.9[44633]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 23:11:40 np0005480824 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 10 23:11:40 np0005480824 systemd[1]: Starting man-db-cache-update.service...
Oct 10 23:11:40 np0005480824 systemd[1]: Reloading.
Oct 10 23:11:40 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:11:40 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:11:40 np0005480824 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 10 23:11:42 np0005480824 python3.9[44950]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 10 23:11:42 np0005480824 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Oct 10 23:11:42 np0005480824 systemd[1]: Stopped Network Manager Wait Online.
Oct 10 23:11:42 np0005480824 systemd[1]: Stopping Network Manager Wait Online...
Oct 10 23:11:42 np0005480824 systemd[1]: Stopping Network Manager...
Oct 10 23:11:42 np0005480824 NetworkManager[3939]: <info>  [1760152302.1832] caught SIGTERM, shutting down normally.
Oct 10 23:11:42 np0005480824 NetworkManager[3939]: <info>  [1760152302.1856] dhcp4 (eth0): canceled DHCP transaction
Oct 10 23:11:42 np0005480824 NetworkManager[3939]: <info>  [1760152302.1857] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 10 23:11:42 np0005480824 NetworkManager[3939]: <info>  [1760152302.1857] dhcp4 (eth0): state changed no lease
Oct 10 23:11:42 np0005480824 NetworkManager[3939]: <info>  [1760152302.1862] manager: NetworkManager state is now CONNECTED_SITE
Oct 10 23:11:42 np0005480824 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 10 23:11:42 np0005480824 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 10 23:11:42 np0005480824 NetworkManager[3939]: <info>  [1760152302.3060] exiting (success)
Oct 10 23:11:42 np0005480824 systemd[1]: NetworkManager.service: Deactivated successfully.
Oct 10 23:11:42 np0005480824 systemd[1]: Stopped Network Manager.
Oct 10 23:11:42 np0005480824 systemd[1]: NetworkManager.service: Consumed 9.214s CPU time, 4.3M memory peak, read 0B from disk, written 30.0K to disk.
Oct 10 23:11:42 np0005480824 systemd[1]: Starting Network Manager...
Oct 10 23:11:42 np0005480824 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 10 23:11:42 np0005480824 systemd[1]: Finished man-db-cache-update.service.
Oct 10 23:11:42 np0005480824 systemd[1]: run-r3f0799f35a014d2ea4c84d66b112d5a3.service: Deactivated successfully.
Oct 10 23:11:42 np0005480824 NetworkManager[44969]: <info>  [1760152302.3952] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:37201e2b-068e-4723-93b2-e25c9bbc9f0f)
Oct 10 23:11:42 np0005480824 NetworkManager[44969]: <info>  [1760152302.3956] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct 10 23:11:42 np0005480824 NetworkManager[44969]: <info>  [1760152302.4040] manager[0x555a35311090]: monitoring kernel firmware directory '/lib/firmware'.
Oct 10 23:11:42 np0005480824 systemd[1]: Starting Hostname Service...
Oct 10 23:11:42 np0005480824 systemd[1]: Started Hostname Service.
Oct 10 23:11:42 np0005480824 NetworkManager[44969]: <info>  [1760152302.5227] hostname: hostname: using hostnamed
Oct 10 23:11:42 np0005480824 NetworkManager[44969]: <info>  [1760152302.5229] hostname: static hostname changed from (none) to "compute-0"
Oct 10 23:11:42 np0005480824 NetworkManager[44969]: <info>  [1760152302.5233] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct 10 23:11:42 np0005480824 NetworkManager[44969]: <info>  [1760152302.5237] manager[0x555a35311090]: rfkill: Wi-Fi hardware radio set enabled
Oct 10 23:11:42 np0005480824 NetworkManager[44969]: <info>  [1760152302.5238] manager[0x555a35311090]: rfkill: WWAN hardware radio set enabled
Oct 10 23:11:42 np0005480824 NetworkManager[44969]: <info>  [1760152302.5258] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Oct 10 23:11:42 np0005480824 NetworkManager[44969]: <info>  [1760152302.5268] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct 10 23:11:42 np0005480824 NetworkManager[44969]: <info>  [1760152302.5268] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct 10 23:11:42 np0005480824 NetworkManager[44969]: <info>  [1760152302.5269] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct 10 23:11:42 np0005480824 NetworkManager[44969]: <info>  [1760152302.5269] manager: Networking is enabled by state file
Oct 10 23:11:42 np0005480824 NetworkManager[44969]: <info>  [1760152302.5271] settings: Loaded settings plugin: keyfile (internal)
Oct 10 23:11:42 np0005480824 NetworkManager[44969]: <info>  [1760152302.5274] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct 10 23:11:42 np0005480824 NetworkManager[44969]: <info>  [1760152302.5295] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct 10 23:11:42 np0005480824 NetworkManager[44969]: <info>  [1760152302.5303] dhcp: init: Using DHCP client 'internal'
Oct 10 23:11:42 np0005480824 NetworkManager[44969]: <info>  [1760152302.5306] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct 10 23:11:42 np0005480824 NetworkManager[44969]: <info>  [1760152302.5311] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 10 23:11:42 np0005480824 NetworkManager[44969]: <info>  [1760152302.5315] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 10 23:11:42 np0005480824 NetworkManager[44969]: <info>  [1760152302.5321] device (lo): Activation: starting connection 'lo' (0021d04c-82f9-4da3-814c-50b07db9d2ee)
Oct 10 23:11:42 np0005480824 NetworkManager[44969]: <info>  [1760152302.5327] device (eth0): carrier: link connected
Oct 10 23:11:42 np0005480824 NetworkManager[44969]: <info>  [1760152302.5330] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct 10 23:11:42 np0005480824 NetworkManager[44969]: <info>  [1760152302.5333] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Oct 10 23:11:42 np0005480824 NetworkManager[44969]: <info>  [1760152302.5334] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct 10 23:11:42 np0005480824 NetworkManager[44969]: <info>  [1760152302.5338] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct 10 23:11:42 np0005480824 NetworkManager[44969]: <info>  [1760152302.5342] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 10 23:11:42 np0005480824 NetworkManager[44969]: <info>  [1760152302.5348] device (eth1): carrier: link connected
Oct 10 23:11:42 np0005480824 NetworkManager[44969]: <info>  [1760152302.5351] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct 10 23:11:42 np0005480824 NetworkManager[44969]: <info>  [1760152302.5354] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (0e05ba61-7743-5a06-87ca-88ddda9b0d31) (indicated)
Oct 10 23:11:42 np0005480824 NetworkManager[44969]: <info>  [1760152302.5354] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct 10 23:11:42 np0005480824 NetworkManager[44969]: <info>  [1760152302.5358] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct 10 23:11:42 np0005480824 NetworkManager[44969]: <info>  [1760152302.5363] device (eth1): Activation: starting connection 'ci-private-network' (0e05ba61-7743-5a06-87ca-88ddda9b0d31)
Oct 10 23:11:42 np0005480824 NetworkManager[44969]: <info>  [1760152302.5368] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct 10 23:11:42 np0005480824 systemd[1]: Started Network Manager.
Oct 10 23:11:42 np0005480824 NetworkManager[44969]: <info>  [1760152302.5374] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 10 23:11:42 np0005480824 NetworkManager[44969]: <info>  [1760152302.5376] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 10 23:11:42 np0005480824 NetworkManager[44969]: <info>  [1760152302.5377] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 10 23:11:42 np0005480824 NetworkManager[44969]: <info>  [1760152302.5379] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct 10 23:11:42 np0005480824 NetworkManager[44969]: <info>  [1760152302.5382] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct 10 23:11:42 np0005480824 NetworkManager[44969]: <info>  [1760152302.5385] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct 10 23:11:42 np0005480824 NetworkManager[44969]: <info>  [1760152302.5388] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct 10 23:11:42 np0005480824 NetworkManager[44969]: <info>  [1760152302.5393] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 10 23:11:42 np0005480824 NetworkManager[44969]: <info>  [1760152302.5400] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct 10 23:11:42 np0005480824 NetworkManager[44969]: <info>  [1760152302.5402] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 10 23:11:42 np0005480824 NetworkManager[44969]: <info>  [1760152302.5411] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct 10 23:11:42 np0005480824 NetworkManager[44969]: <info>  [1760152302.5426] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct 10 23:11:42 np0005480824 NetworkManager[44969]: <info>  [1760152302.5437] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 10 23:11:42 np0005480824 NetworkManager[44969]: <info>  [1760152302.5439] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 10 23:11:42 np0005480824 NetworkManager[44969]: <info>  [1760152302.5443] device (lo): Activation: successful, device activated.
Oct 10 23:11:42 np0005480824 NetworkManager[44969]: <info>  [1760152302.5461] dhcp4 (eth0): state changed new lease, address=38.102.83.68
Oct 10 23:11:42 np0005480824 NetworkManager[44969]: <info>  [1760152302.5470] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct 10 23:11:42 np0005480824 systemd[1]: Starting Network Manager Wait Online...
Oct 10 23:11:42 np0005480824 NetworkManager[44969]: <info>  [1760152302.5901] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct 10 23:11:42 np0005480824 NetworkManager[44969]: <info>  [1760152302.5918] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct 10 23:11:42 np0005480824 NetworkManager[44969]: <info>  [1760152302.5925] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct 10 23:11:42 np0005480824 NetworkManager[44969]: <info>  [1760152302.5927] manager: NetworkManager state is now CONNECTED_LOCAL
Oct 10 23:11:42 np0005480824 NetworkManager[44969]: <info>  [1760152302.5931] device (eth1): Activation: successful, device activated.
Oct 10 23:11:42 np0005480824 NetworkManager[44969]: <info>  [1760152302.5949] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct 10 23:11:42 np0005480824 NetworkManager[44969]: <info>  [1760152302.5950] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct 10 23:11:42 np0005480824 NetworkManager[44969]: <info>  [1760152302.5952] manager: NetworkManager state is now CONNECTED_SITE
Oct 10 23:11:42 np0005480824 NetworkManager[44969]: <info>  [1760152302.5955] device (eth0): Activation: successful, device activated.
Oct 10 23:11:42 np0005480824 NetworkManager[44969]: <info>  [1760152302.5960] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct 10 23:11:42 np0005480824 NetworkManager[44969]: <info>  [1760152302.5963] manager: startup complete
Oct 10 23:11:42 np0005480824 systemd[1]: Finished Network Manager Wait Online.
Oct 10 23:11:43 np0005480824 python3.9[45177]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 23:11:49 np0005480824 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 10 23:11:49 np0005480824 systemd[1]: Starting man-db-cache-update.service...
Oct 10 23:11:49 np0005480824 systemd[1]: Reloading.
Oct 10 23:11:49 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:11:49 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:11:49 np0005480824 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 10 23:11:50 np0005480824 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 10 23:11:50 np0005480824 systemd[1]: Finished man-db-cache-update.service.
Oct 10 23:11:50 np0005480824 systemd[1]: run-r92c3efdce5e8425aa385962b63a642ec.service: Deactivated successfully.
Oct 10 23:11:51 np0005480824 python3.9[45641]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 23:11:52 np0005480824 python3.9[45793]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:11:52 np0005480824 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 10 23:11:53 np0005480824 python3.9[45947]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:11:54 np0005480824 python3.9[46099]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:11:55 np0005480824 python3.9[46251]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:11:55 np0005480824 python3.9[46403]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:11:56 np0005480824 python3.9[46555]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:11:57 np0005480824 python3.9[46678]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1760152316.0517204-229-276796380373528/.source _original_basename=.yvlsnjkj follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:11:58 np0005480824 python3.9[46830]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:11:59 np0005480824 python3.9[46982]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Oct 10 23:11:59 np0005480824 python3.9[47134]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:12:02 np0005480824 python3.9[47561]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Oct 10 23:12:03 np0005480824 ansible-async_wrapper.py[47736]: Invoked with j550444950440 300 /home/zuul/.ansible/tmp/ansible-tmp-1760152322.3871496-295-165328452113138/AnsiballZ_edpm_os_net_config.py _
Oct 10 23:12:03 np0005480824 ansible-async_wrapper.py[47739]: Starting module and watcher
Oct 10 23:12:03 np0005480824 ansible-async_wrapper.py[47739]: Start watching 47740 (300)
Oct 10 23:12:03 np0005480824 ansible-async_wrapper.py[47740]: Start module (47740)
Oct 10 23:12:03 np0005480824 ansible-async_wrapper.py[47736]: Return async_wrapper task started.
Oct 10 23:12:03 np0005480824 python3.9[47741]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Oct 10 23:12:04 np0005480824 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Oct 10 23:12:04 np0005480824 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Oct 10 23:12:04 np0005480824 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Oct 10 23:12:04 np0005480824 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Oct 10 23:12:04 np0005480824 kernel: cfg80211: failed to load regulatory.db
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.2683] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47742 uid=0 result="success"
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.2704] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47742 uid=0 result="success"
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3112] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3114] audit: op="connection-add" uuid="c9676367-783b-44d7-a537-c65138fa9ac5" name="br-ex-br" pid=47742 uid=0 result="success"
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3128] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3130] audit: op="connection-add" uuid="58c5a81c-150e-4e23-9d5c-67504a240989" name="br-ex-port" pid=47742 uid=0 result="success"
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3141] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3142] audit: op="connection-add" uuid="0b00b484-1097-4ffc-84bf-7c66598ed5e3" name="eth1-port" pid=47742 uid=0 result="success"
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3152] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3154] audit: op="connection-add" uuid="be718ae5-00e2-4d60-9753-397a2ac1a0d4" name="vlan20-port" pid=47742 uid=0 result="success"
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3164] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3165] audit: op="connection-add" uuid="5aa5f8b1-7024-488c-beb2-02a0fcd52a31" name="vlan21-port" pid=47742 uid=0 result="success"
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3175] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3177] audit: op="connection-add" uuid="cfdf38aa-6811-4674-a825-50866ad861b2" name="vlan22-port" pid=47742 uid=0 result="success"
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3187] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3189] audit: op="connection-add" uuid="c801041f-a206-49ba-b956-3683dbe7a013" name="vlan23-port" pid=47742 uid=0 result="success"
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3205] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv6.dhcp-timeout,ipv6.addr-gen-mode,ipv6.method,802-3-ethernet.mtu,connection.autoconnect-priority,connection.timestamp,ipv4.dhcp-timeout,ipv4.dhcp-client-id" pid=47742 uid=0 result="success"
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3220] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3222] audit: op="connection-add" uuid="4b61bccc-cb19-4fe6-907f-97d6812f4d6d" name="br-ex-if" pid=47742 uid=0 result="success"
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3264] audit: op="connection-update" uuid="0e05ba61-7743-5a06-87ca-88ddda9b0d31" name="ci-private-network" args="ipv6.routing-rules,ipv6.method,ipv6.addr-gen-mode,ipv6.addresses,ipv6.dns,ipv6.routes,ovs-interface.type,ovs-external-ids.data,connection.controller,connection.port-type,connection.master,connection.slave-type,connection.timestamp,ipv4.routing-rules,ipv4.method,ipv4.addresses,ipv4.dns,ipv4.routes,ipv4.never-default" pid=47742 uid=0 result="success"
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3279] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3280] audit: op="connection-add" uuid="4c51eec0-6fcc-4639-b435-12aec12cffbc" name="vlan20-if" pid=47742 uid=0 result="success"
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3293] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3295] audit: op="connection-add" uuid="be013eb9-19a3-4649-b29d-d26caa575cd8" name="vlan21-if" pid=47742 uid=0 result="success"
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3310] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3311] audit: op="connection-add" uuid="1d6c4271-f782-4efb-87f1-2cdaa7a73867" name="vlan22-if" pid=47742 uid=0 result="success"
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3325] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3327] audit: op="connection-add" uuid="06f69b7f-292d-4b84-8517-44bbbc4be180" name="vlan23-if" pid=47742 uid=0 result="success"
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3337] audit: op="connection-delete" uuid="a34c4c6a-453d-333f-83bf-fb28f9879097" name="Wired connection 1" pid=47742 uid=0 result="success"
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3348] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3356] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3359] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (c9676367-783b-44d7-a537-c65138fa9ac5)
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3360] audit: op="connection-activate" uuid="c9676367-783b-44d7-a537-c65138fa9ac5" name="br-ex-br" pid=47742 uid=0 result="success"
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3362] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3367] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3370] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (58c5a81c-150e-4e23-9d5c-67504a240989)
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3371] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3376] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3379] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (0b00b484-1097-4ffc-84bf-7c66598ed5e3)
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3381] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3386] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3389] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (be718ae5-00e2-4d60-9753-397a2ac1a0d4)
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3390] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3396] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3401] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (5aa5f8b1-7024-488c-beb2-02a0fcd52a31)
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3404] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3414] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3421] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (cfdf38aa-6811-4674-a825-50866ad861b2)
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3424] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3433] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3438] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (c801041f-a206-49ba-b956-3683dbe7a013)
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3440] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3443] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3446] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3454] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3460] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3465] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (4b61bccc-cb19-4fe6-907f-97d6812f4d6d)
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3466] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3470] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3472] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3474] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3475] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3485] device (eth1): disconnecting for new activation request.
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3486] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3489] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3492] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3493] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3497] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3502] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3506] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (4c51eec0-6fcc-4639-b435-12aec12cffbc)
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3507] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3511] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3513] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3515] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3518] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3522] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3527] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (be013eb9-19a3-4649-b29d-d26caa575cd8)
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3528] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3532] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3535] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3536] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3540] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3544] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3549] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (1d6c4271-f782-4efb-87f1-2cdaa7a73867)
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3550] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3553] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3556] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3558] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3561] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3566] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3570] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (06f69b7f-292d-4b84-8517-44bbbc4be180)
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3571] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3575] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3577] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3579] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3581] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3592] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv6.method,ipv6.addr-gen-mode,802-3-ethernet.mtu,connection.autoconnect-priority,ipv4.dhcp-timeout,ipv4.dhcp-client-id" pid=47742 uid=0 result="success"
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3594] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3598] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3600] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3607] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3610] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3615] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3619] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3621] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 kernel: ovs-system: entered promiscuous mode
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3626] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3630] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3635] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3637] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3641] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3645] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 kernel: Timeout policy base is empty
Oct 10 23:12:05 np0005480824 systemd-udevd[47748]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3648] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3649] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3654] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3657] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3660] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3662] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3667] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3671] dhcp4 (eth0): canceled DHCP transaction
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3671] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3671] dhcp4 (eth0): state changed no lease
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3673] dhcp4 (eth0): activation: beginning transaction (no timeout)
Oct 10 23:12:05 np0005480824 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3688] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3692] audit: op="device-reapply" interface="eth1" ifindex=3 pid=47742 uid=0 result="fail" reason="Device is not activated"
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3698] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3740] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3744] dhcp4 (eth0): state changed new lease, address=38.102.83.68
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3750] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3793] device (eth1): disconnecting for new activation request.
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3793] audit: op="connection-activate" uuid="0e05ba61-7743-5a06-87ca-88ddda9b0d31" name="ci-private-network" pid=47742 uid=0 result="success"
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3798] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3826] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47742 uid=0 result="success"
Oct 10 23:12:05 np0005480824 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3898] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3912] device (eth1): Activation: starting connection 'ci-private-network' (0e05ba61-7743-5a06-87ca-88ddda9b0d31)
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3920] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3923] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3927] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3930] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3936] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3941] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3945] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3947] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3952] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3955] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3959] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3962] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3963] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3964] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3965] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3966] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3967] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3972] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3977] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3981] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3986] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3991] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.3995] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.4000] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.4005] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.4009] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.4013] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.4015] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.4022] device (eth1): Activation: successful, device activated.
Oct 10 23:12:05 np0005480824 kernel: br-ex: entered promiscuous mode
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.4161] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.4174] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 kernel: vlan22: entered promiscuous mode
Oct 10 23:12:05 np0005480824 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.4211] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.4213] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.4218] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 10 23:12:05 np0005480824 kernel: vlan23: entered promiscuous mode
Oct 10 23:12:05 np0005480824 systemd-udevd[47746]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.4321] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.4335] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.4359] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.4360] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.4368] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 10 23:12:05 np0005480824 kernel: vlan20: entered promiscuous mode
Oct 10 23:12:05 np0005480824 kernel: vlan21: entered promiscuous mode
Oct 10 23:12:05 np0005480824 systemd-udevd[47747]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.4544] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.4548] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.4588] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.4593] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.4601] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.4611] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.4618] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.4621] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.4627] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.4631] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.4632] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.4637] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.4641] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.4643] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 10 23:12:05 np0005480824 NetworkManager[44969]: <info>  [1760152325.4649] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 10 23:12:06 np0005480824 NetworkManager[44969]: <info>  [1760152326.5856] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47742 uid=0 result="success"
Oct 10 23:12:06 np0005480824 NetworkManager[44969]: <info>  [1760152326.8061] checkpoint[0x555a352e7950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Oct 10 23:12:06 np0005480824 NetworkManager[44969]: <info>  [1760152326.8063] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47742 uid=0 result="success"
Oct 10 23:12:07 np0005480824 NetworkManager[44969]: <info>  [1760152327.0616] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47742 uid=0 result="success"
Oct 10 23:12:07 np0005480824 NetworkManager[44969]: <info>  [1760152327.0627] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47742 uid=0 result="success"
Oct 10 23:12:07 np0005480824 python3.9[48101]: ansible-ansible.legacy.async_status Invoked with jid=j550444950440.47736 mode=status _async_dir=/root/.ansible_async
Oct 10 23:12:07 np0005480824 NetworkManager[44969]: <info>  [1760152327.2245] audit: op="networking-control" arg="global-dns-configuration" pid=47742 uid=0 result="success"
Oct 10 23:12:07 np0005480824 NetworkManager[44969]: <info>  [1760152327.2266] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Oct 10 23:12:07 np0005480824 NetworkManager[44969]: <info>  [1760152327.2290] audit: op="networking-control" arg="global-dns-configuration" pid=47742 uid=0 result="success"
Oct 10 23:12:07 np0005480824 NetworkManager[44969]: <info>  [1760152327.2312] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47742 uid=0 result="success"
Oct 10 23:12:07 np0005480824 NetworkManager[44969]: <info>  [1760152327.3619] checkpoint[0x555a352e7a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Oct 10 23:12:07 np0005480824 NetworkManager[44969]: <info>  [1760152327.3628] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47742 uid=0 result="success"
Oct 10 23:12:07 np0005480824 ansible-async_wrapper.py[47740]: Module complete (47740)
Oct 10 23:12:08 np0005480824 ansible-async_wrapper.py[47739]: Done in kid B.
Oct 10 23:12:10 np0005480824 python3.9[48206]: ansible-ansible.legacy.async_status Invoked with jid=j550444950440.47736 mode=status _async_dir=/root/.ansible_async
Oct 10 23:12:11 np0005480824 python3.9[48305]: ansible-ansible.legacy.async_status Invoked with jid=j550444950440.47736 mode=cleanup _async_dir=/root/.ansible_async
Oct 10 23:12:12 np0005480824 python3.9[48457]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:12:12 np0005480824 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 10 23:12:12 np0005480824 python3.9[48583]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760152331.6777892-322-65926414835236/.source.returncode _original_basename=.e3ka1nvk follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:12:13 np0005480824 python3.9[48735]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:12:14 np0005480824 python3.9[48859]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760152333.1071556-338-249982474165763/.source.cfg _original_basename=.e_50x22x follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:12:15 np0005480824 python3.9[49011]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 10 23:12:15 np0005480824 systemd[1]: Reloading Network Manager...
Oct 10 23:12:15 np0005480824 NetworkManager[44969]: <info>  [1760152335.3975] audit: op="reload" arg="0" pid=49015 uid=0 result="success"
Oct 10 23:12:15 np0005480824 NetworkManager[44969]: <info>  [1760152335.3989] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Oct 10 23:12:15 np0005480824 systemd[1]: Reloaded Network Manager.
Oct 10 23:12:15 np0005480824 systemd[1]: session-9.scope: Deactivated successfully.
Oct 10 23:12:15 np0005480824 systemd[1]: session-9.scope: Consumed 53.334s CPU time.
Oct 10 23:12:15 np0005480824 systemd-logind[782]: Session 9 logged out. Waiting for processes to exit.
Oct 10 23:12:15 np0005480824 systemd-logind[782]: Removed session 9.
Oct 10 23:12:21 np0005480824 systemd-logind[782]: New session 10 of user zuul.
Oct 10 23:12:21 np0005480824 systemd[1]: Started Session 10 of User zuul.
Oct 10 23:12:22 np0005480824 python3.9[49199]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 23:12:23 np0005480824 python3.9[49353]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 23:12:24 np0005480824 python3.9[49547]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:12:25 np0005480824 systemd[1]: session-10.scope: Deactivated successfully.
Oct 10 23:12:25 np0005480824 systemd[1]: session-10.scope: Consumed 2.431s CPU time.
Oct 10 23:12:25 np0005480824 systemd-logind[782]: Session 10 logged out. Waiting for processes to exit.
Oct 10 23:12:25 np0005480824 systemd-logind[782]: Removed session 10.
Oct 10 23:12:25 np0005480824 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 10 23:12:30 np0005480824 systemd-logind[782]: New session 11 of user zuul.
Oct 10 23:12:30 np0005480824 systemd[1]: Started Session 11 of User zuul.
Oct 10 23:12:31 np0005480824 python3.9[49729]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 23:12:32 np0005480824 python3.9[49883]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 23:12:33 np0005480824 python3.9[50040]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 23:12:34 np0005480824 python3.9[50124]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 23:12:36 np0005480824 python3.9[50278]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 23:12:38 np0005480824 python3.9[50473]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:12:39 np0005480824 python3.9[50625]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:12:39 np0005480824 podman[50626]: 2025-10-11 03:12:39.300172882 +0000 UTC m=+0.238256519 system refresh
Oct 10 23:12:40 np0005480824 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 10 23:12:40 np0005480824 python3.9[50787]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:12:41 np0005480824 python3.9[50910]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760152359.5426476-79-180130081673109/.source.json follow=False _original_basename=podman_network_config.j2 checksum=f13dfe3ecced50192fab42627a6beb4f83f76e8f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:12:41 np0005480824 python3.9[51062]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:12:42 np0005480824 python3.9[51185]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760152361.4195058-94-5035353097109/.source.conf follow=False _original_basename=registries.conf.j2 checksum=c1b134203bdd3a95b4783183288f3a6d1b057b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:12:43 np0005480824 python3.9[51337]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:12:44 np0005480824 python3.9[51489]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:12:45 np0005480824 python3.9[51641]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:12:45 np0005480824 python3.9[51793]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:12:46 np0005480824 python3.9[51945]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 23:12:49 np0005480824 python3.9[52098]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 23:12:49 np0005480824 python3.9[52252]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 23:12:50 np0005480824 python3.9[52404]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 23:12:51 np0005480824 python3.9[52556]: ansible-service_facts Invoked
Oct 10 23:12:51 np0005480824 network[52573]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 10 23:12:51 np0005480824 network[52574]: 'network-scripts' will be removed from distribution in near future.
Oct 10 23:12:51 np0005480824 network[52575]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 10 23:12:59 np0005480824 python3.9[53029]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 23:13:02 np0005480824 python3.9[53182]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Oct 10 23:13:03 np0005480824 python3.9[53334]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:13:04 np0005480824 python3.9[53459]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760152383.199687-226-141344601037594/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:13:05 np0005480824 python3.9[53613]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:13:06 np0005480824 python3.9[53738]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760152384.954018-241-253896013491272/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:13:07 np0005480824 python3.9[53892]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:13:09 np0005480824 python3.9[54046]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 23:13:10 np0005480824 python3.9[54130]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 23:13:11 np0005480824 python3.9[54284]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 23:13:12 np0005480824 python3.9[54368]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 10 23:13:12 np0005480824 chronyd[794]: chronyd exiting
Oct 10 23:13:12 np0005480824 systemd[1]: Stopping NTP client/server...
Oct 10 23:13:12 np0005480824 systemd[1]: chronyd.service: Deactivated successfully.
Oct 10 23:13:12 np0005480824 systemd[1]: Stopped NTP client/server.
Oct 10 23:13:12 np0005480824 systemd[1]: Starting NTP client/server...
Oct 10 23:13:12 np0005480824 chronyd[54376]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct 10 23:13:12 np0005480824 chronyd[54376]: Frequency -22.670 +/- 0.155 ppm read from /var/lib/chrony/drift
Oct 10 23:13:12 np0005480824 chronyd[54376]: Loaded seccomp filter (level 2)
Oct 10 23:13:12 np0005480824 systemd[1]: Started NTP client/server.
Oct 10 23:13:13 np0005480824 systemd[1]: session-11.scope: Deactivated successfully.
Oct 10 23:13:13 np0005480824 systemd[1]: session-11.scope: Consumed 30.149s CPU time.
Oct 10 23:13:13 np0005480824 systemd-logind[782]: Session 11 logged out. Waiting for processes to exit.
Oct 10 23:13:13 np0005480824 systemd-logind[782]: Removed session 11.
Oct 10 23:13:19 np0005480824 systemd-logind[782]: New session 12 of user zuul.
Oct 10 23:13:19 np0005480824 systemd[1]: Started Session 12 of User zuul.
Oct 10 23:13:20 np0005480824 python3.9[54557]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:13:21 np0005480824 python3.9[54709]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:13:22 np0005480824 python3.9[54832]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760152400.7127447-34-49285476014103/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:13:22 np0005480824 systemd[1]: session-12.scope: Deactivated successfully.
Oct 10 23:13:22 np0005480824 systemd[1]: session-12.scope: Consumed 1.944s CPU time.
Oct 10 23:13:22 np0005480824 systemd-logind[782]: Session 12 logged out. Waiting for processes to exit.
Oct 10 23:13:22 np0005480824 systemd-logind[782]: Removed session 12.
Oct 10 23:13:28 np0005480824 systemd-logind[782]: New session 13 of user zuul.
Oct 10 23:13:28 np0005480824 systemd[1]: Started Session 13 of User zuul.
Oct 10 23:13:29 np0005480824 python3.9[55010]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 23:13:31 np0005480824 python3.9[55166]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:13:32 np0005480824 python3.9[55341]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:13:33 np0005480824 python3.9[55464]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1760152412.0869946-41-49281058828560/.source.json _original_basename=.obdyw7or follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:13:34 np0005480824 python3.9[55616]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:13:35 np0005480824 python3.9[55739]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760152414.2990382-64-4838883132882/.source _original_basename=.mjfa8id0 follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:13:36 np0005480824 python3.9[55891]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:13:37 np0005480824 python3.9[56043]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:13:38 np0005480824 python3.9[56166]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760152416.7548664-88-62448036462074/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:13:38 np0005480824 python3.9[56318]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:13:39 np0005480824 python3.9[56441]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760152418.3506558-88-63934559850810/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:13:40 np0005480824 python3.9[56593]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:13:41 np0005480824 python3.9[56745]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:13:42 np0005480824 python3.9[56868]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760152420.646345-125-109625114253429/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:13:43 np0005480824 python3.9[57020]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:13:44 np0005480824 python3.9[57143]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760152422.5450668-140-17083326794691/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:13:45 np0005480824 python3.9[57295]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 23:13:45 np0005480824 systemd[1]: Reloading.
Oct 10 23:13:45 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:13:45 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:13:46 np0005480824 systemd[1]: Reloading.
Oct 10 23:13:46 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:13:46 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:14:52 np0005480824 systemd[1]: Starting EDPM Container Shutdown...
Oct 10 23:14:52 np0005480824 systemd[1]: Finished EDPM Container Shutdown.
Oct 10 23:14:53 np0005480824 python3.9[57525]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:14:54 np0005480824 python3.9[57648]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760152492.992357-163-69286572492492/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:14:54 np0005480824 python3.9[57800]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:14:55 np0005480824 python3.9[57923]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760152494.2854323-178-36631409003857/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:14:56 np0005480824 python3.9[58075]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 23:14:56 np0005480824 systemd[1]: Reloading.
Oct 10 23:14:56 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:14:56 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:14:56 np0005480824 systemd[1]: Reloading.
Oct 10 23:14:56 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:14:56 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:14:56 np0005480824 systemd[1]: Starting Create netns directory...
Oct 10 23:14:56 np0005480824 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 10 23:14:56 np0005480824 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 10 23:14:56 np0005480824 systemd[1]: Finished Create netns directory.
Oct 10 23:14:57 np0005480824 python3.9[58300]: ansible-ansible.builtin.service_facts Invoked
Oct 10 23:14:57 np0005480824 network[58317]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 10 23:14:57 np0005480824 network[58318]: 'network-scripts' will be removed from distribution in near future.
Oct 10 23:14:57 np0005480824 network[58319]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 10 23:15:11 np0005480824 python3.9[58584]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 23:15:11 np0005480824 systemd[1]: Reloading.
Oct 10 23:15:11 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:15:11 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:15:11 np0005480824 systemd[1]: Stopping IPv4 firewall with iptables...
Oct 10 23:15:12 np0005480824 iptables.init[58624]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Oct 10 23:15:12 np0005480824 iptables.init[58624]: iptables: Flushing firewall rules: [  OK  ]
Oct 10 23:15:12 np0005480824 systemd[1]: iptables.service: Deactivated successfully.
Oct 10 23:15:12 np0005480824 systemd[1]: Stopped IPv4 firewall with iptables.
Oct 10 23:15:13 np0005480824 python3.9[58821]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 23:15:14 np0005480824 python3.9[58975]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 23:15:14 np0005480824 systemd[1]: Reloading.
Oct 10 23:15:14 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:15:14 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:15:14 np0005480824 systemd[1]: Starting Netfilter Tables...
Oct 10 23:15:14 np0005480824 systemd[1]: Finished Netfilter Tables.
Oct 10 23:15:16 np0005480824 python3.9[59166]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:15:17 np0005480824 python3.9[59319]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:15:18 np0005480824 python3.9[59444]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760152516.5893812-247-5950217802864/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=4729b6ffc5b555fa142bf0b6e6dc15609cb89a22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:15:19 np0005480824 python3.9[59595]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 10 23:15:22 np0005480824 chronyd[54376]: Selected source 149.56.19.163 (pool.ntp.org)
Oct 10 23:15:45 np0005480824 systemd[1]: session-13.scope: Deactivated successfully.
Oct 10 23:15:45 np0005480824 systemd[1]: session-13.scope: Consumed 22.649s CPU time.
Oct 10 23:15:45 np0005480824 systemd-logind[782]: Session 13 logged out. Waiting for processes to exit.
Oct 10 23:15:45 np0005480824 systemd-logind[782]: Removed session 13.
Oct 10 23:15:57 np0005480824 systemd-logind[782]: New session 14 of user zuul.
Oct 10 23:15:57 np0005480824 systemd[1]: Started Session 14 of User zuul.
Oct 10 23:15:58 np0005480824 python3.9[59789]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 23:16:00 np0005480824 python3.9[59945]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:16:01 np0005480824 python3.9[60120]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:16:01 np0005480824 python3.9[60198]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.sbw6el3p recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:16:02 np0005480824 python3.9[60350]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:16:03 np0005480824 python3.9[60428]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.vvssp0m8 recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:16:03 np0005480824 python3.9[60580]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:16:04 np0005480824 python3.9[60732]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:16:05 np0005480824 python3.9[60810]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:16:05 np0005480824 python3.9[60962]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:16:06 np0005480824 python3.9[61040]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:16:07 np0005480824 python3.9[61192]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:16:08 np0005480824 python3.9[61344]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:16:08 np0005480824 python3.9[61422]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:16:09 np0005480824 python3.9[61574]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:16:09 np0005480824 python3.9[61652]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:16:11 np0005480824 python3.9[61804]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 23:16:11 np0005480824 systemd[1]: Reloading.
Oct 10 23:16:11 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:16:11 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:16:12 np0005480824 python3.9[61993]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:16:12 np0005480824 python3.9[62071]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:16:13 np0005480824 python3.9[62223]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:16:14 np0005480824 python3.9[62301]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:16:15 np0005480824 python3.9[62453]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 23:16:15 np0005480824 systemd[1]: Reloading.
Oct 10 23:16:15 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:16:15 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:16:15 np0005480824 systemd[1]: Starting Create netns directory...
Oct 10 23:16:15 np0005480824 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 10 23:16:15 np0005480824 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 10 23:16:15 np0005480824 systemd[1]: Finished Create netns directory.
Oct 10 23:16:16 np0005480824 python3.9[62643]: ansible-ansible.builtin.service_facts Invoked
Oct 10 23:16:16 np0005480824 network[62660]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 10 23:16:16 np0005480824 network[62661]: 'network-scripts' will be removed from distribution in near future.
Oct 10 23:16:16 np0005480824 network[62662]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 10 23:16:21 np0005480824 python3.9[62925]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:16:21 np0005480824 python3.9[63003]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:16:22 np0005480824 python3.9[63155]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:16:23 np0005480824 python3.9[63307]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:16:24 np0005480824 python3.9[63430]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760152583.1713245-216-159858904742401/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:16:25 np0005480824 python3.9[63582]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct 10 23:16:25 np0005480824 systemd[1]: Starting Time & Date Service...
Oct 10 23:16:25 np0005480824 systemd[1]: Started Time & Date Service.
Oct 10 23:16:26 np0005480824 python3.9[63738]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:16:27 np0005480824 python3.9[63890]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:16:28 np0005480824 python3.9[64013]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760152587.1183088-251-153169635552322/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:16:29 np0005480824 python3.9[64165]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:16:29 np0005480824 python3.9[64288]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760152588.6912305-266-186523442327744/.source.yaml _original_basename=.kg9r4j8v follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:16:30 np0005480824 python3.9[64440]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:16:31 np0005480824 python3.9[64563]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760152590.173436-281-162509690963165/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:16:32 np0005480824 python3.9[64715]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:16:33 np0005480824 python3.9[64868]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:16:34 np0005480824 python3[65021]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 10 23:16:35 np0005480824 python3.9[65173]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:16:36 np0005480824 python3.9[65296]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760152594.7073767-320-101715301791885/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:16:37 np0005480824 python3.9[65448]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:16:37 np0005480824 python3.9[65571]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760152596.4431899-335-177210573118559/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:16:38 np0005480824 python3.9[65723]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:16:39 np0005480824 python3.9[65846]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760152598.1166596-350-104077960476965/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:16:40 np0005480824 python3.9[65998]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:16:40 np0005480824 python3.9[66121]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760152599.693349-365-79412841151184/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:16:41 np0005480824 python3.9[66273]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:16:42 np0005480824 python3.9[66396]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760152601.1896865-380-23554565303808/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:16:43 np0005480824 python3.9[66548]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:16:44 np0005480824 python3.9[66700]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:16:45 np0005480824 python3.9[66859]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:16:46 np0005480824 python3.9[67012]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:16:47 np0005480824 python3.9[67164]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:16:48 np0005480824 python3.9[67316]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct 10 23:16:49 np0005480824 python3.9[67469]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct 10 23:16:50 np0005480824 systemd[1]: session-14.scope: Deactivated successfully.
Oct 10 23:16:50 np0005480824 systemd[1]: session-14.scope: Consumed 38.570s CPU time.
Oct 10 23:16:50 np0005480824 systemd-logind[782]: Session 14 logged out. Waiting for processes to exit.
Oct 10 23:16:50 np0005480824 systemd-logind[782]: Removed session 14.
Oct 10 23:16:55 np0005480824 systemd[1]: packagekit.service: Deactivated successfully.
Oct 10 23:16:55 np0005480824 systemd-logind[782]: New session 15 of user zuul.
Oct 10 23:16:55 np0005480824 systemd[1]: Started Session 15 of User zuul.
Oct 10 23:16:56 np0005480824 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 10 23:16:56 np0005480824 python3.9[67655]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Oct 10 23:16:57 np0005480824 python3.9[67807]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 23:16:58 np0005480824 python3.9[67959]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 23:16:59 np0005480824 python3.9[68111]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCuaghxuFn5N7g3goz6jbrCuMuntUZ/KPqqCfNc3GmoqpkCGnl9cL4t+DrEpTfDHAfkLeeRF9uL85ptfxRqGgNSiyvd6ROXYbkubfKL7ihbFefj28MUgBmxXyN6dLZJe5ctDokqTrz5xUs68UD7AX98wjV0CjvdN053AKQKgnIaXFC9GnKf7JFFGofUOHHFAyplUr5NLa7vMmueq5s8/BJji3itNm/SZhxGRrmnrIO8c7OyNz7mtHSx4jw67bT1IGMRXaB3lT36FavxSG9pVIIf5Z9C8ejT/CDdOqLyCPx4DilkmI9vESmmtizkmNkIJH4vli9DPR17VJQlsoiSX+1KhuYZFoNDapfW2LRZ3NZp+OBFrMhurnRRU4RW7/mU4jDioVC36a6Pd6lfacE1Ry0QxWpdnf4lA9VIQy4NFp/Lx8OLZHy4i+LVHUWYE68hPIvWV2Gi5FscGqT0LSnv3jo/1ZIAhcd2bAGcooManLGpZU3BYHmJQrn0Yu5iRIJr5I0=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFaeUBbzAX9xKqQNRO4zBxAap0/KOun2IfzZdcCA8z0M#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMTXIERBPc8ILYMg5XePo7yQXX+O1LhwShKOskfgLVi04dlPv7WSDSt52XOdokKAKFBaRrtFt4Sftp0eim5u/R0=#012 create=True mode=0644 path=/tmp/ansible.al9o4lgz state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:17:00 np0005480824 python3.9[68263]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.al9o4lgz' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:17:01 np0005480824 python3.9[68418]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.al9o4lgz state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:17:02 np0005480824 systemd[1]: session-15.scope: Deactivated successfully.
Oct 10 23:17:02 np0005480824 systemd[1]: session-15.scope: Consumed 4.141s CPU time.
Oct 10 23:17:02 np0005480824 systemd-logind[782]: Session 15 logged out. Waiting for processes to exit.
Oct 10 23:17:02 np0005480824 systemd-logind[782]: Removed session 15.
Oct 10 23:17:07 np0005480824 systemd-logind[782]: New session 16 of user zuul.
Oct 10 23:17:07 np0005480824 systemd[1]: Started Session 16 of User zuul.
Oct 10 23:17:08 np0005480824 python3.9[68599]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 23:17:10 np0005480824 python3.9[68755]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct 10 23:17:11 np0005480824 python3.9[68910]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 10 23:17:12 np0005480824 python3.9[69064]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:17:13 np0005480824 python3.9[69217]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 23:17:14 np0005480824 python3.9[69373]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:17:15 np0005480824 python3.9[69528]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:17:15 np0005480824 systemd[1]: session-16.scope: Deactivated successfully.
Oct 10 23:17:15 np0005480824 systemd[1]: session-16.scope: Consumed 5.477s CPU time.
Oct 10 23:17:15 np0005480824 systemd-logind[782]: Session 16 logged out. Waiting for processes to exit.
Oct 10 23:17:15 np0005480824 systemd-logind[782]: Removed session 16.
Oct 10 23:17:20 np0005480824 systemd-logind[782]: New session 17 of user zuul.
Oct 10 23:17:20 np0005480824 systemd[1]: Started Session 17 of User zuul.
Oct 10 23:17:21 np0005480824 python3.9[69706]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 23:17:23 np0005480824 python3.9[69862]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 23:17:24 np0005480824 python3.9[69946]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 10 23:17:26 np0005480824 python3.9[70097]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:17:27 np0005480824 python3.9[70248]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 10 23:17:28 np0005480824 python3.9[70398]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 23:17:28 np0005480824 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 10 23:17:29 np0005480824 python3.9[70549]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 23:17:30 np0005480824 systemd[1]: session-17.scope: Deactivated successfully.
Oct 10 23:17:30 np0005480824 systemd[1]: session-17.scope: Consumed 6.603s CPU time.
Oct 10 23:17:30 np0005480824 systemd-logind[782]: Session 17 logged out. Waiting for processes to exit.
Oct 10 23:17:30 np0005480824 systemd-logind[782]: Removed session 17.
Oct 10 23:17:37 np0005480824 systemd-logind[782]: New session 18 of user zuul.
Oct 10 23:17:37 np0005480824 systemd[1]: Started Session 18 of User zuul.
Oct 10 23:17:44 np0005480824 python3[71315]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 23:17:45 np0005480824 python3[71410]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 10 23:17:47 np0005480824 python3[71437]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 23:17:47 np0005480824 python3[71463]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:17:47 np0005480824 kernel: loop: module loaded
Oct 10 23:17:47 np0005480824 kernel: loop3: detected capacity change from 0 to 41943040
Oct 10 23:17:48 np0005480824 python3[71498]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:17:48 np0005480824 lvm[71501]: PV /dev/loop3 not used.
Oct 10 23:17:48 np0005480824 lvm[71503]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 23:17:48 np0005480824 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Oct 10 23:17:48 np0005480824 lvm[71509]:  1 logical volume(s) in volume group "ceph_vg0" now active
Oct 10 23:17:48 np0005480824 lvm[71513]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 23:17:48 np0005480824 lvm[71513]: VG ceph_vg0 finished
Oct 10 23:17:48 np0005480824 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Oct 10 23:17:48 np0005480824 python3[71591]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 23:17:49 np0005480824 python3[71664]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760152668.5594764-32772-272764500996432/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:17:50 np0005480824 python3[71714]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 23:17:50 np0005480824 systemd[1]: Reloading.
Oct 10 23:17:50 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:17:50 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:17:50 np0005480824 systemd[1]: Starting Ceph OSD losetup...
Oct 10 23:17:50 np0005480824 bash[71755]: /dev/loop3: [64513]:4427970 (/var/lib/ceph-osd-0.img)
Oct 10 23:17:50 np0005480824 systemd[1]: Finished Ceph OSD losetup.
Oct 10 23:17:50 np0005480824 lvm[71756]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 23:17:50 np0005480824 lvm[71756]: VG ceph_vg0 finished
Oct 10 23:17:51 np0005480824 python3[71782]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 10 23:17:52 np0005480824 python3[71809]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 23:17:53 np0005480824 python3[71835]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=20G#012losetup /dev/loop4 /var/lib/ceph-osd-1.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:17:53 np0005480824 kernel: loop4: detected capacity change from 0 to 41943040
Oct 10 23:17:53 np0005480824 python3[71867]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4#012vgcreate ceph_vg1 /dev/loop4#012lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:17:53 np0005480824 lvm[71870]: PV /dev/loop4 not used.
Oct 10 23:17:53 np0005480824 lvm[71881]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct 10 23:17:53 np0005480824 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Oct 10 23:17:53 np0005480824 lvm[71883]:  1 logical volume(s) in volume group "ceph_vg1" now active
Oct 10 23:17:53 np0005480824 systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Oct 10 23:17:54 np0005480824 python3[71961]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 23:17:54 np0005480824 python3[72034]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760152673.9789689-32799-7297954995915/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:17:55 np0005480824 python3[72084]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 23:17:55 np0005480824 systemd[1]: Reloading.
Oct 10 23:17:55 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:17:55 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:17:55 np0005480824 systemd[1]: Starting Ceph OSD losetup...
Oct 10 23:17:55 np0005480824 bash[72124]: /dev/loop4: [64513]:4427971 (/var/lib/ceph-osd-1.img)
Oct 10 23:17:55 np0005480824 systemd[1]: Finished Ceph OSD losetup.
Oct 10 23:17:55 np0005480824 lvm[72125]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct 10 23:17:55 np0005480824 lvm[72125]: VG ceph_vg1 finished
Oct 10 23:17:55 np0005480824 python3[72151]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 10 23:17:57 np0005480824 python3[72178]: ansible-ansible.builtin.stat Invoked with path=/dev/loop5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 23:17:57 np0005480824 python3[72204]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-2.img bs=1 count=0 seek=20G#012losetup /dev/loop5 /var/lib/ceph-osd-2.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:17:57 np0005480824 kernel: loop5: detected capacity change from 0 to 41943040
Oct 10 23:17:58 np0005480824 python3[72236]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop5#012vgcreate ceph_vg2 /dev/loop5#012lvcreate -n ceph_lv2 -l +100%FREE ceph_vg2#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:17:58 np0005480824 lvm[72239]: PV /dev/loop5 not used.
Oct 10 23:17:58 np0005480824 lvm[72249]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct 10 23:17:58 np0005480824 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg2.
Oct 10 23:17:58 np0005480824 lvm[72251]:  1 logical volume(s) in volume group "ceph_vg2" now active
Oct 10 23:17:58 np0005480824 systemd[1]: lvm-activate-ceph_vg2.service: Deactivated successfully.
Oct 10 23:17:59 np0005480824 python3[72329]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-2.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 23:17:59 np0005480824 python3[72402]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760152678.8107262-32826-220988281075147/source dest=/etc/systemd/system/ceph-osd-losetup-2.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=4c5b1bc5693c499ffe2edaa97d63f5df7075d845 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:18:00 np0005480824 python3[72452]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-2.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 23:18:00 np0005480824 systemd[1]: Reloading.
Oct 10 23:18:00 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:18:00 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:18:00 np0005480824 systemd[1]: Starting Ceph OSD losetup...
Oct 10 23:18:00 np0005480824 bash[72494]: /dev/loop5: [64513]:4427972 (/var/lib/ceph-osd-2.img)
Oct 10 23:18:00 np0005480824 systemd[1]: Finished Ceph OSD losetup.
Oct 10 23:18:00 np0005480824 lvm[72495]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct 10 23:18:00 np0005480824 lvm[72495]: VG ceph_vg2 finished
Oct 10 23:18:02 np0005480824 python3[72519]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 23:18:04 np0005480824 python3[72612]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 10 23:18:06 np0005480824 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 10 23:18:06 np0005480824 systemd[1]: Starting man-db-cache-update.service...
Oct 10 23:18:06 np0005480824 systemd[1]: Starting PackageKit Daemon...
Oct 10 23:18:06 np0005480824 systemd[1]: Started PackageKit Daemon.
Oct 10 23:18:07 np0005480824 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 10 23:18:07 np0005480824 systemd[1]: Finished man-db-cache-update.service.
Oct 10 23:18:07 np0005480824 systemd[1]: run-r6fca7a5f3219424f8941635e7a7256e0.service: Deactivated successfully.
Oct 10 23:18:07 np0005480824 python3[72731]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 23:18:07 np0005480824 python3[72759]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:18:08 np0005480824 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 10 23:18:08 np0005480824 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 10 23:18:08 np0005480824 python3[72822]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:18:09 np0005480824 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 10 23:18:09 np0005480824 python3[72848]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:18:09 np0005480824 python3[72926]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 23:18:10 np0005480824 python3[72999]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760152689.648969-32973-33362623887532/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=bb83c53af4ffd926a3f1eafe26a8be437df6401f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:18:11 np0005480824 python3[73101]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 23:18:11 np0005480824 python3[73174]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760152690.9973855-32991-127672585870307/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:18:12 np0005480824 python3[73224]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 23:18:12 np0005480824 python3[73252]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 23:18:12 np0005480824 python3[73280]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 23:18:13 np0005480824 python3[73308]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --skip-prepare-host --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 92cfe4d4-4917-5be1-9d00-73758793a62b --config /home/ceph-admin/assimilate_ceph.conf \--single-host-defaults \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:18:13 np0005480824 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 10 23:18:13 np0005480824 systemd-logind[782]: New session 19 of user ceph-admin.
Oct 10 23:18:13 np0005480824 systemd[1]: Created slice User Slice of UID 42477.
Oct 10 23:18:13 np0005480824 systemd[1]: Starting User Runtime Directory /run/user/42477...
Oct 10 23:18:13 np0005480824 systemd[1]: Finished User Runtime Directory /run/user/42477.
Oct 10 23:18:13 np0005480824 systemd[1]: Starting User Manager for UID 42477...
Oct 10 23:18:13 np0005480824 systemd[73329]: Queued start job for default target Main User Target.
Oct 10 23:18:13 np0005480824 systemd[73329]: Created slice User Application Slice.
Oct 10 23:18:13 np0005480824 systemd[73329]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 10 23:18:13 np0005480824 systemd[73329]: Started Daily Cleanup of User's Temporary Directories.
Oct 10 23:18:13 np0005480824 systemd[73329]: Reached target Paths.
Oct 10 23:18:13 np0005480824 systemd[73329]: Reached target Timers.
Oct 10 23:18:13 np0005480824 systemd[73329]: Starting D-Bus User Message Bus Socket...
Oct 10 23:18:13 np0005480824 systemd[73329]: Starting Create User's Volatile Files and Directories...
Oct 10 23:18:13 np0005480824 systemd[73329]: Listening on D-Bus User Message Bus Socket.
Oct 10 23:18:13 np0005480824 systemd[73329]: Reached target Sockets.
Oct 10 23:18:13 np0005480824 systemd[73329]: Finished Create User's Volatile Files and Directories.
Oct 10 23:18:13 np0005480824 systemd[73329]: Reached target Basic System.
Oct 10 23:18:13 np0005480824 systemd[73329]: Reached target Main User Target.
Oct 10 23:18:13 np0005480824 systemd[73329]: Startup finished in 150ms.
Oct 10 23:18:13 np0005480824 systemd[1]: Started User Manager for UID 42477.
Oct 10 23:18:13 np0005480824 systemd[1]: Started Session 19 of User ceph-admin.
Oct 10 23:18:14 np0005480824 systemd[1]: session-19.scope: Deactivated successfully.
Oct 10 23:18:14 np0005480824 systemd-logind[782]: Session 19 logged out. Waiting for processes to exit.
Oct 10 23:18:14 np0005480824 systemd-logind[782]: Removed session 19.
Oct 10 23:18:16 np0005480824 systemd[1]: var-lib-containers-storage-overlay-compat1740039165-lower\x2dmapped.mount: Deactivated successfully.
Oct 10 23:18:24 np0005480824 systemd[1]: Stopping User Manager for UID 42477...
Oct 10 23:18:24 np0005480824 systemd[73329]: Activating special unit Exit the Session...
Oct 10 23:18:24 np0005480824 systemd[73329]: Stopped target Main User Target.
Oct 10 23:18:24 np0005480824 systemd[73329]: Stopped target Basic System.
Oct 10 23:18:24 np0005480824 systemd[73329]: Stopped target Paths.
Oct 10 23:18:24 np0005480824 systemd[73329]: Stopped target Sockets.
Oct 10 23:18:24 np0005480824 systemd[73329]: Stopped target Timers.
Oct 10 23:18:24 np0005480824 systemd[73329]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct 10 23:18:24 np0005480824 systemd[73329]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 10 23:18:24 np0005480824 systemd[73329]: Closed D-Bus User Message Bus Socket.
Oct 10 23:18:24 np0005480824 systemd[73329]: Stopped Create User's Volatile Files and Directories.
Oct 10 23:18:24 np0005480824 systemd[73329]: Removed slice User Application Slice.
Oct 10 23:18:24 np0005480824 systemd[73329]: Reached target Shutdown.
Oct 10 23:18:24 np0005480824 systemd[73329]: Finished Exit the Session.
Oct 10 23:18:24 np0005480824 systemd[73329]: Reached target Exit the Session.
Oct 10 23:18:24 np0005480824 systemd[1]: user@42477.service: Deactivated successfully.
Oct 10 23:18:24 np0005480824 systemd[1]: Stopped User Manager for UID 42477.
Oct 10 23:18:24 np0005480824 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Oct 10 23:18:24 np0005480824 systemd[1]: run-user-42477.mount: Deactivated successfully.
Oct 10 23:18:24 np0005480824 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Oct 10 23:18:24 np0005480824 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Oct 10 23:18:24 np0005480824 systemd[1]: Removed slice User Slice of UID 42477.
Oct 10 23:18:29 np0005480824 podman[73383]: 2025-10-11 03:18:29.426541572 +0000 UTC m=+15.229881715 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:18:29 np0005480824 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 10 23:18:29 np0005480824 podman[73444]: 2025-10-11 03:18:29.50186187 +0000 UTC m=+0.046807015 container create e0ecf930905f1d9b454faf0303d018d07018427673e7f0a9b015fc7227b0766e (image=quay.io/ceph/ceph:v18, name=affectionate_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 10 23:18:29 np0005480824 systemd[1]: var-lib-containers-storage-overlay-volatile\x2dcheck761535738-merged.mount: Deactivated successfully.
Oct 10 23:18:29 np0005480824 systemd[1]: Created slice Virtual Machine and Container Slice.
Oct 10 23:18:29 np0005480824 systemd[1]: Started libpod-conmon-e0ecf930905f1d9b454faf0303d018d07018427673e7f0a9b015fc7227b0766e.scope.
Oct 10 23:18:29 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:18:29 np0005480824 podman[73444]: 2025-10-11 03:18:29.483024598 +0000 UTC m=+0.027969763 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:18:29 np0005480824 podman[73444]: 2025-10-11 03:18:29.586938191 +0000 UTC m=+0.131883386 container init e0ecf930905f1d9b454faf0303d018d07018427673e7f0a9b015fc7227b0766e (image=quay.io/ceph/ceph:v18, name=affectionate_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 10 23:18:29 np0005480824 podman[73444]: 2025-10-11 03:18:29.594873644 +0000 UTC m=+0.139818799 container start e0ecf930905f1d9b454faf0303d018d07018427673e7f0a9b015fc7227b0766e (image=quay.io/ceph/ceph:v18, name=affectionate_ride, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:18:29 np0005480824 podman[73444]: 2025-10-11 03:18:29.599142542 +0000 UTC m=+0.144087727 container attach e0ecf930905f1d9b454faf0303d018d07018427673e7f0a9b015fc7227b0766e (image=quay.io/ceph/ceph:v18, name=affectionate_ride, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:18:29 np0005480824 affectionate_ride[73460]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Oct 10 23:18:29 np0005480824 systemd[1]: libpod-e0ecf930905f1d9b454faf0303d018d07018427673e7f0a9b015fc7227b0766e.scope: Deactivated successfully.
Oct 10 23:18:29 np0005480824 podman[73444]: 2025-10-11 03:18:29.907769672 +0000 UTC m=+0.452714827 container died e0ecf930905f1d9b454faf0303d018d07018427673e7f0a9b015fc7227b0766e (image=quay.io/ceph/ceph:v18, name=affectionate_ride, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True)
Oct 10 23:18:29 np0005480824 podman[73444]: 2025-10-11 03:18:29.96609845 +0000 UTC m=+0.511043595 container remove e0ecf930905f1d9b454faf0303d018d07018427673e7f0a9b015fc7227b0766e (image=quay.io/ceph/ceph:v18, name=affectionate_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:18:29 np0005480824 systemd[1]: libpod-conmon-e0ecf930905f1d9b454faf0303d018d07018427673e7f0a9b015fc7227b0766e.scope: Deactivated successfully.
Oct 10 23:18:30 np0005480824 podman[73478]: 2025-10-11 03:18:30.041702245 +0000 UTC m=+0.047501751 container create 364af981746fe75d1a4bc37ea9b4128a9111b446ed70727c6c6c848bc939e19d (image=quay.io/ceph/ceph:v18, name=happy_wozniak, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:18:30 np0005480824 systemd[1]: Started libpod-conmon-364af981746fe75d1a4bc37ea9b4128a9111b446ed70727c6c6c848bc939e19d.scope.
Oct 10 23:18:30 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:18:30 np0005480824 podman[73478]: 2025-10-11 03:18:30.021677876 +0000 UTC m=+0.027477372 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:18:30 np0005480824 podman[73478]: 2025-10-11 03:18:30.140629675 +0000 UTC m=+0.146429231 container init 364af981746fe75d1a4bc37ea9b4128a9111b446ed70727c6c6c848bc939e19d (image=quay.io/ceph/ceph:v18, name=happy_wozniak, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:18:30 np0005480824 podman[73478]: 2025-10-11 03:18:30.151319869 +0000 UTC m=+0.157119335 container start 364af981746fe75d1a4bc37ea9b4128a9111b446ed70727c6c6c848bc939e19d (image=quay.io/ceph/ceph:v18, name=happy_wozniak, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:18:30 np0005480824 happy_wozniak[73495]: 167 167
Oct 10 23:18:30 np0005480824 systemd[1]: libpod-364af981746fe75d1a4bc37ea9b4128a9111b446ed70727c6c6c848bc939e19d.scope: Deactivated successfully.
Oct 10 23:18:30 np0005480824 podman[73478]: 2025-10-11 03:18:30.183190011 +0000 UTC m=+0.188989507 container attach 364af981746fe75d1a4bc37ea9b4128a9111b446ed70727c6c6c848bc939e19d (image=quay.io/ceph/ceph:v18, name=happy_wozniak, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:18:30 np0005480824 podman[73478]: 2025-10-11 03:18:30.184044301 +0000 UTC m=+0.189843807 container died 364af981746fe75d1a4bc37ea9b4128a9111b446ed70727c6c6c848bc939e19d (image=quay.io/ceph/ceph:v18, name=happy_wozniak, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 10 23:18:30 np0005480824 podman[73478]: 2025-10-11 03:18:30.232931632 +0000 UTC m=+0.238731098 container remove 364af981746fe75d1a4bc37ea9b4128a9111b446ed70727c6c6c848bc939e19d (image=quay.io/ceph/ceph:v18, name=happy_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 10 23:18:30 np0005480824 systemd[1]: libpod-conmon-364af981746fe75d1a4bc37ea9b4128a9111b446ed70727c6c6c848bc939e19d.scope: Deactivated successfully.
Oct 10 23:18:30 np0005480824 podman[73511]: 2025-10-11 03:18:30.327015791 +0000 UTC m=+0.070896678 container create ff69471208316b4488881a3a12e5057ceb8784c88139bbdc4f6b4a730c7ee82f (image=quay.io/ceph/ceph:v18, name=wonderful_poitras, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:18:30 np0005480824 podman[73511]: 2025-10-11 03:18:30.281954657 +0000 UTC m=+0.025835584 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:18:30 np0005480824 systemd[1]: Started libpod-conmon-ff69471208316b4488881a3a12e5057ceb8784c88139bbdc4f6b4a730c7ee82f.scope.
Oct 10 23:18:30 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:18:30 np0005480824 podman[73511]: 2025-10-11 03:18:30.422709176 +0000 UTC m=+0.166590153 container init ff69471208316b4488881a3a12e5057ceb8784c88139bbdc4f6b4a730c7ee82f (image=quay.io/ceph/ceph:v18, name=wonderful_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 10 23:18:30 np0005480824 systemd[1]: var-lib-containers-storage-overlay-bd6f571bfc9195fb74a76d73920aa58f20c8daa6c89b96e9fa28e81e1d6e4bcf-merged.mount: Deactivated successfully.
Oct 10 23:18:30 np0005480824 podman[73511]: 2025-10-11 03:18:30.432149052 +0000 UTC m=+0.176029929 container start ff69471208316b4488881a3a12e5057ceb8784c88139bbdc4f6b4a730c7ee82f (image=quay.io/ceph/ceph:v18, name=wonderful_poitras, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:18:30 np0005480824 podman[73511]: 2025-10-11 03:18:30.444712871 +0000 UTC m=+0.188593768 container attach ff69471208316b4488881a3a12e5057ceb8784c88139bbdc4f6b4a730c7ee82f (image=quay.io/ceph/ceph:v18, name=wonderful_poitras, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 10 23:18:30 np0005480824 wonderful_poitras[73528]: AQCGzOlo40TSGxAAhA+cuG/gbji+eEsqIY8mVA==
Oct 10 23:18:30 np0005480824 systemd[1]: libpod-ff69471208316b4488881a3a12e5057ceb8784c88139bbdc4f6b4a730c7ee82f.scope: Deactivated successfully.
Oct 10 23:18:30 np0005480824 podman[73511]: 2025-10-11 03:18:30.470849261 +0000 UTC m=+0.214730198 container died ff69471208316b4488881a3a12e5057ceb8784c88139bbdc4f6b4a730c7ee82f (image=quay.io/ceph/ceph:v18, name=wonderful_poitras, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:18:30 np0005480824 systemd[1]: var-lib-containers-storage-overlay-40a803018096d02a5daafa78095a64d6766ae96b586d7ab1467528e04afa9a03-merged.mount: Deactivated successfully.
Oct 10 23:18:30 np0005480824 podman[73511]: 2025-10-11 03:18:30.60378456 +0000 UTC m=+0.347665478 container remove ff69471208316b4488881a3a12e5057ceb8784c88139bbdc4f6b4a730c7ee82f (image=quay.io/ceph/ceph:v18, name=wonderful_poitras, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:18:30 np0005480824 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 10 23:18:30 np0005480824 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 10 23:18:30 np0005480824 systemd[1]: libpod-conmon-ff69471208316b4488881a3a12e5057ceb8784c88139bbdc4f6b4a730c7ee82f.scope: Deactivated successfully.
Oct 10 23:18:30 np0005480824 podman[73547]: 2025-10-11 03:18:30.662868236 +0000 UTC m=+0.031542265 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:18:30 np0005480824 podman[73547]: 2025-10-11 03:18:30.764558358 +0000 UTC m=+0.133232287 container create 5c400d09411ad87eab52c6d239792632ab97ed81ec6ae7f69926186736e9d28c (image=quay.io/ceph/ceph:v18, name=confident_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 10 23:18:30 np0005480824 systemd[1]: Started libpod-conmon-5c400d09411ad87eab52c6d239792632ab97ed81ec6ae7f69926186736e9d28c.scope.
Oct 10 23:18:30 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:18:31 np0005480824 podman[73547]: 2025-10-11 03:18:31.551990464 +0000 UTC m=+0.920664403 container init 5c400d09411ad87eab52c6d239792632ab97ed81ec6ae7f69926186736e9d28c (image=quay.io/ceph/ceph:v18, name=confident_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 10 23:18:31 np0005480824 podman[73547]: 2025-10-11 03:18:31.562583066 +0000 UTC m=+0.931256985 container start 5c400d09411ad87eab52c6d239792632ab97ed81ec6ae7f69926186736e9d28c (image=quay.io/ceph/ceph:v18, name=confident_keldysh, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:18:31 np0005480824 confident_keldysh[73563]: AQCHzOlojArsIxAAxeitce8QtDbrNwypZ8NbvA==
Oct 10 23:18:31 np0005480824 systemd[1]: libpod-5c400d09411ad87eab52c6d239792632ab97ed81ec6ae7f69926186736e9d28c.scope: Deactivated successfully.
Oct 10 23:18:31 np0005480824 podman[73547]: 2025-10-11 03:18:31.62113776 +0000 UTC m=+0.989811679 container attach 5c400d09411ad87eab52c6d239792632ab97ed81ec6ae7f69926186736e9d28c (image=quay.io/ceph/ceph:v18, name=confident_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:18:31 np0005480824 podman[73547]: 2025-10-11 03:18:31.62158038 +0000 UTC m=+0.990254309 container died 5c400d09411ad87eab52c6d239792632ab97ed81ec6ae7f69926186736e9d28c (image=quay.io/ceph/ceph:v18, name=confident_keldysh, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:18:31 np0005480824 systemd[1]: var-lib-containers-storage-overlay-f451cb982913384acc8305e77f45efc32f3c964de9acb9d0ee219f7f3292b337-merged.mount: Deactivated successfully.
Oct 10 23:18:31 np0005480824 podman[73547]: 2025-10-11 03:18:31.836787197 +0000 UTC m=+1.205461156 container remove 5c400d09411ad87eab52c6d239792632ab97ed81ec6ae7f69926186736e9d28c (image=quay.io/ceph/ceph:v18, name=confident_keldysh, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:18:31 np0005480824 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 10 23:18:31 np0005480824 systemd[1]: libpod-conmon-5c400d09411ad87eab52c6d239792632ab97ed81ec6ae7f69926186736e9d28c.scope: Deactivated successfully.
Oct 10 23:18:31 np0005480824 podman[73584]: 2025-10-11 03:18:31.94761873 +0000 UTC m=+0.084153992 container create 3d197d9fe85e17f231180f3b4624192b96b863bbe04c885b0db16caeff538278 (image=quay.io/ceph/ceph:v18, name=distracted_edison, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:18:31 np0005480824 podman[73584]: 2025-10-11 03:18:31.893622741 +0000 UTC m=+0.030158043 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:18:32 np0005480824 systemd[1]: Started libpod-conmon-3d197d9fe85e17f231180f3b4624192b96b863bbe04c885b0db16caeff538278.scope.
Oct 10 23:18:32 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:18:32 np0005480824 podman[73584]: 2025-10-11 03:18:32.13152091 +0000 UTC m=+0.268056242 container init 3d197d9fe85e17f231180f3b4624192b96b863bbe04c885b0db16caeff538278 (image=quay.io/ceph/ceph:v18, name=distracted_edison, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:18:32 np0005480824 podman[73584]: 2025-10-11 03:18:32.141376096 +0000 UTC m=+0.277911378 container start 3d197d9fe85e17f231180f3b4624192b96b863bbe04c885b0db16caeff538278 (image=quay.io/ceph/ceph:v18, name=distracted_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:18:32 np0005480824 podman[73584]: 2025-10-11 03:18:32.158288663 +0000 UTC m=+0.294824015 container attach 3d197d9fe85e17f231180f3b4624192b96b863bbe04c885b0db16caeff538278 (image=quay.io/ceph/ceph:v18, name=distracted_edison, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:18:32 np0005480824 distracted_edison[73601]: AQCIzOlob8KvChAApT5Y282jtsx7RbqnpynjYg==
Oct 10 23:18:32 np0005480824 systemd[1]: libpod-3d197d9fe85e17f231180f3b4624192b96b863bbe04c885b0db16caeff538278.scope: Deactivated successfully.
Oct 10 23:18:32 np0005480824 podman[73584]: 2025-10-11 03:18:32.185245742 +0000 UTC m=+0.321781034 container died 3d197d9fe85e17f231180f3b4624192b96b863bbe04c885b0db16caeff538278 (image=quay.io/ceph/ceph:v18, name=distracted_edison, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:18:32 np0005480824 podman[73584]: 2025-10-11 03:18:32.414739487 +0000 UTC m=+0.551274749 container remove 3d197d9fe85e17f231180f3b4624192b96b863bbe04c885b0db16caeff538278 (image=quay.io/ceph/ceph:v18, name=distracted_edison, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:18:32 np0005480824 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 10 23:18:32 np0005480824 systemd[1]: libpod-conmon-3d197d9fe85e17f231180f3b4624192b96b863bbe04c885b0db16caeff538278.scope: Deactivated successfully.
Oct 10 23:18:32 np0005480824 podman[73622]: 2025-10-11 03:18:32.549251083 +0000 UTC m=+0.097903568 container create 26238bb2809ac663afbaf392f5c89829d9d9e2ec8274cef4a56993051548604d (image=quay.io/ceph/ceph:v18, name=flamboyant_cori, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:18:32 np0005480824 podman[73622]: 2025-10-11 03:18:32.49071593 +0000 UTC m=+0.039368455 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:18:32 np0005480824 systemd[1]: Started libpod-conmon-26238bb2809ac663afbaf392f5c89829d9d9e2ec8274cef4a56993051548604d.scope.
Oct 10 23:18:32 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:18:32 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56457e30f1323aa1e70a42dcd6a6d778507765eb2f7404a335c0c3ccf7dc8868/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Oct 10 23:18:32 np0005480824 podman[73622]: 2025-10-11 03:18:32.700033102 +0000 UTC m=+0.248685567 container init 26238bb2809ac663afbaf392f5c89829d9d9e2ec8274cef4a56993051548604d (image=quay.io/ceph/ceph:v18, name=flamboyant_cori, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:18:32 np0005480824 podman[73622]: 2025-10-11 03:18:32.706151362 +0000 UTC m=+0.254803807 container start 26238bb2809ac663afbaf392f5c89829d9d9e2ec8274cef4a56993051548604d (image=quay.io/ceph/ceph:v18, name=flamboyant_cori, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:18:32 np0005480824 podman[73622]: 2025-10-11 03:18:32.723719456 +0000 UTC m=+0.272371911 container attach 26238bb2809ac663afbaf392f5c89829d9d9e2ec8274cef4a56993051548604d (image=quay.io/ceph/ceph:v18, name=flamboyant_cori, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:18:32 np0005480824 flamboyant_cori[73639]: /usr/bin/monmaptool: monmap file /tmp/monmap
Oct 10 23:18:32 np0005480824 flamboyant_cori[73639]: setting min_mon_release = pacific
Oct 10 23:18:32 np0005480824 flamboyant_cori[73639]: /usr/bin/monmaptool: set fsid to 92cfe4d4-4917-5be1-9d00-73758793a62b
Oct 10 23:18:32 np0005480824 flamboyant_cori[73639]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Oct 10 23:18:32 np0005480824 systemd[1]: libpod-26238bb2809ac663afbaf392f5c89829d9d9e2ec8274cef4a56993051548604d.scope: Deactivated successfully.
Oct 10 23:18:32 np0005480824 podman[73622]: 2025-10-11 03:18:32.74742191 +0000 UTC m=+0.296074365 container died 26238bb2809ac663afbaf392f5c89829d9d9e2ec8274cef4a56993051548604d (image=quay.io/ceph/ceph:v18, name=flamboyant_cori, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:18:32 np0005480824 systemd[1]: var-lib-containers-storage-overlay-56457e30f1323aa1e70a42dcd6a6d778507765eb2f7404a335c0c3ccf7dc8868-merged.mount: Deactivated successfully.
Oct 10 23:18:32 np0005480824 podman[73622]: 2025-10-11 03:18:32.97063399 +0000 UTC m=+0.519286455 container remove 26238bb2809ac663afbaf392f5c89829d9d9e2ec8274cef4a56993051548604d (image=quay.io/ceph/ceph:v18, name=flamboyant_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 10 23:18:32 np0005480824 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 10 23:18:32 np0005480824 systemd[1]: libpod-conmon-26238bb2809ac663afbaf392f5c89829d9d9e2ec8274cef4a56993051548604d.scope: Deactivated successfully.
Oct 10 23:18:33 np0005480824 podman[73658]: 2025-10-11 03:18:33.091950914 +0000 UTC m=+0.029507818 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:18:33 np0005480824 podman[73658]: 2025-10-11 03:18:33.487370146 +0000 UTC m=+0.424927040 container create 7213818e758a3077ab5da40b61676968046309f0990128a11a2682ba1ce447a4 (image=quay.io/ceph/ceph:v18, name=recursing_brahmagupta, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:18:33 np0005480824 systemd[1]: Started libpod-conmon-7213818e758a3077ab5da40b61676968046309f0990128a11a2682ba1ce447a4.scope.
Oct 10 23:18:33 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:18:33 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dec9e87d99311be5d921522818e6e4735d4f57c9c716a6534eb693181ba11254/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 23:18:33 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dec9e87d99311be5d921522818e6e4735d4f57c9c716a6534eb693181ba11254/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Oct 10 23:18:33 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dec9e87d99311be5d921522818e6e4735d4f57c9c716a6534eb693181ba11254/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:18:33 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dec9e87d99311be5d921522818e6e4735d4f57c9c716a6534eb693181ba11254/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 10 23:18:33 np0005480824 podman[73658]: 2025-10-11 03:18:33.574676548 +0000 UTC m=+0.512233492 container init 7213818e758a3077ab5da40b61676968046309f0990128a11a2682ba1ce447a4 (image=quay.io/ceph/ceph:v18, name=recursing_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 10 23:18:33 np0005480824 podman[73658]: 2025-10-11 03:18:33.589360506 +0000 UTC m=+0.526917360 container start 7213818e758a3077ab5da40b61676968046309f0990128a11a2682ba1ce447a4 (image=quay.io/ceph/ceph:v18, name=recursing_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:18:33 np0005480824 podman[73658]: 2025-10-11 03:18:33.593519561 +0000 UTC m=+0.531076525 container attach 7213818e758a3077ab5da40b61676968046309f0990128a11a2682ba1ce447a4 (image=quay.io/ceph/ceph:v18, name=recursing_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 10 23:18:33 np0005480824 systemd[1]: libpod-7213818e758a3077ab5da40b61676968046309f0990128a11a2682ba1ce447a4.scope: Deactivated successfully.
Oct 10 23:18:33 np0005480824 podman[73658]: 2025-10-11 03:18:33.703375051 +0000 UTC m=+0.640931905 container died 7213818e758a3077ab5da40b61676968046309f0990128a11a2682ba1ce447a4 (image=quay.io/ceph/ceph:v18, name=recursing_brahmagupta, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 10 23:18:33 np0005480824 systemd[1]: var-lib-containers-storage-overlay-dec9e87d99311be5d921522818e6e4735d4f57c9c716a6534eb693181ba11254-merged.mount: Deactivated successfully.
Oct 10 23:18:33 np0005480824 podman[73658]: 2025-10-11 03:18:33.744808772 +0000 UTC m=+0.682365626 container remove 7213818e758a3077ab5da40b61676968046309f0990128a11a2682ba1ce447a4 (image=quay.io/ceph/ceph:v18, name=recursing_brahmagupta, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:18:33 np0005480824 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 10 23:18:33 np0005480824 systemd[1]: libpod-conmon-7213818e758a3077ab5da40b61676968046309f0990128a11a2682ba1ce447a4.scope: Deactivated successfully.
Oct 10 23:18:33 np0005480824 systemd[1]: Reloading.
Oct 10 23:18:33 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:18:33 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:18:34 np0005480824 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 10 23:18:34 np0005480824 systemd[1]: Reloading.
Oct 10 23:18:34 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:18:34 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:18:34 np0005480824 systemd[1]: Reached target All Ceph clusters and services.
Oct 10 23:18:34 np0005480824 systemd[1]: Reloading.
Oct 10 23:18:34 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:18:34 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:18:34 np0005480824 systemd[1]: Reached target Ceph cluster 92cfe4d4-4917-5be1-9d00-73758793a62b.
Oct 10 23:18:34 np0005480824 systemd[1]: Reloading.
Oct 10 23:18:34 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:18:34 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:18:34 np0005480824 systemd[1]: Reloading.
Oct 10 23:18:34 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:18:34 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:18:35 np0005480824 systemd[1]: Created slice Slice /system/ceph-92cfe4d4-4917-5be1-9d00-73758793a62b.
Oct 10 23:18:35 np0005480824 systemd[1]: Reached target System Time Set.
Oct 10 23:18:35 np0005480824 systemd[1]: Reached target System Time Synchronized.
Oct 10 23:18:35 np0005480824 systemd[1]: Starting Ceph mon.compute-0 for 92cfe4d4-4917-5be1-9d00-73758793a62b...
Oct 10 23:18:35 np0005480824 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 10 23:18:35 np0005480824 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 10 23:18:35 np0005480824 podman[73954]: 2025-10-11 03:18:35.425738935 +0000 UTC m=+0.055502053 container create cc1ffd14f782506e30e77fbfca0de309b4d51a0d052ef06225accfb38eb0c899 (image=quay.io/ceph/ceph:v18, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:18:35 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c93d18c6bba41c27f7a1f1b13536ba015894fc5a07317f1c94828da7330896f1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:18:35 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c93d18c6bba41c27f7a1f1b13536ba015894fc5a07317f1c94828da7330896f1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:18:35 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c93d18c6bba41c27f7a1f1b13536ba015894fc5a07317f1c94828da7330896f1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:18:35 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c93d18c6bba41c27f7a1f1b13536ba015894fc5a07317f1c94828da7330896f1/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 10 23:18:35 np0005480824 podman[73954]: 2025-10-11 03:18:35.396381283 +0000 UTC m=+0.026144491 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:18:35 np0005480824 podman[73954]: 2025-10-11 03:18:35.5056822 +0000 UTC m=+0.135445348 container init cc1ffd14f782506e30e77fbfca0de309b4d51a0d052ef06225accfb38eb0c899 (image=quay.io/ceph/ceph:v18, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 10 23:18:35 np0005480824 podman[73954]: 2025-10-11 03:18:35.516971279 +0000 UTC m=+0.146734397 container start cc1ffd14f782506e30e77fbfca0de309b4d51a0d052ef06225accfb38eb0c899 (image=quay.io/ceph/ceph:v18, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mon-compute-0, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:18:35 np0005480824 bash[73954]: cc1ffd14f782506e30e77fbfca0de309b4d51a0d052ef06225accfb38eb0c899
Oct 10 23:18:35 np0005480824 systemd[1]: Started Ceph mon.compute-0 for 92cfe4d4-4917-5be1-9d00-73758793a62b.
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: set uid:gid to 167:167 (ceph:ceph)
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: pidfile_write: ignore empty --pid-file
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: load: jerasure load: lrc 
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb: RocksDB version: 7.9.2
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb: Git sha 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb: DB SUMMARY
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb: DB Session ID:  PWC1BZZG7VW2DQYEVV1U
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb: CURRENT file:  CURRENT
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb: IDENTITY file:  IDENTITY
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                         Options.error_if_exists: 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                       Options.create_if_missing: 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                         Options.paranoid_checks: 1
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                                     Options.env: 0x562792fdac40
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                                      Options.fs: PosixFileSystem
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                                Options.info_log: 0x562793fb8e80
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                Options.max_file_opening_threads: 16
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                              Options.statistics: (nil)
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                               Options.use_fsync: 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                       Options.max_log_file_size: 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                         Options.allow_fallocate: 1
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                        Options.use_direct_reads: 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:          Options.create_missing_column_families: 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                              Options.db_log_dir: 
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                                 Options.wal_dir: 
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                   Options.advise_random_on_open: 1
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                    Options.write_buffer_manager: 0x562793fc8b40
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                            Options.rate_limiter: (nil)
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                  Options.unordered_write: 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                               Options.row_cache: None
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                              Options.wal_filter: None
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:             Options.allow_ingest_behind: 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:             Options.two_write_queues: 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:             Options.manual_wal_flush: 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:             Options.wal_compression: 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:             Options.atomic_flush: 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                 Options.log_readahead_size: 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:             Options.allow_data_in_errors: 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:             Options.db_host_id: __hostname__
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:             Options.max_background_jobs: 2
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:             Options.max_background_compactions: -1
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:             Options.max_subcompactions: 1
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:             Options.max_total_wal_size: 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                          Options.max_open_files: -1
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                          Options.bytes_per_sync: 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:       Options.compaction_readahead_size: 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                  Options.max_background_flushes: -1
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb: Compression algorithms supported:
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb: 	kZSTD supported: 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb: 	kXpressCompression supported: 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb: 	kBZip2Compression supported: 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb: 	kLZ4Compression supported: 1
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb: 	kZlibCompression supported: 1
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb: 	kLZ4HCCompression supported: 1
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb: 	kSnappyCompression supported: 1
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:           Options.merge_operator: 
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:        Options.compaction_filter: None
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562793fb8a80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x562793fb11f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:        Options.write_buffer_size: 33554432
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:  Options.max_write_buffer_number: 2
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:          Options.compression: NoCompression
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:             Options.num_levels: 7
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                           Options.bloom_locality: 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                               Options.ttl: 2592000
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                       Options.enable_blob_files: false
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                           Options.min_blob_size: 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: bc2c00b6-74ab-4bd1-957a-6c6a75fb61ca
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760152715588498, "job": 1, "event": "recovery_started", "wal_files": [4]}
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760152715590757, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760152715, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc2c00b6-74ab-4bd1-957a-6c6a75fb61ca", "db_session_id": "PWC1BZZG7VW2DQYEVV1U", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760152715590956, "job": 1, "event": "recovery_finished"}
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x562793fdae00
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb: DB pointer 0x562794064000
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: rocksdb: [db/db_impl/db_impl.cc:1111] 
** DB Stats **
Uptime(secs): 0.0 total, 0.0 interval
Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
 Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.0 total, 0.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.12 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.12 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x562793fb11f0#2 capacity: 512.00 MB usage: 1.17 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 2e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(1,0.95 KB,0.000181794%) FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 92cfe4d4-4917-5be1-9d00-73758793a62b
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: mon.compute-0@-1(???) e0 preinit fsid 92cfe4d4-4917-5be1-9d00-73758793a62b
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: mon.compute-0@0(probing) e0 win_standalone_election
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Oct 10 23:18:35 np0005480824 podman[73974]: 2025-10-11 03:18:35.623129004 +0000 UTC m=+0.061629345 container create 8b7ecd7cefca512dc89bce59d44c00a48c3730b0deaccc8a58037a0e26fa6428 (image=quay.io/ceph/ceph:v18, name=boring_aryabhata, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: mon.compute-0@0(probing) e1 win_standalone_election
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: paxos.0).electionLogic(2) init, last seen epoch 2
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v18,cpu=AMD EPYC-Rome Processor,created_at=2025-10-11T03:18:33.647451Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Tue Sep 30 07:37:35 UTC 2025,kernel_version=5.14.0-621.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864356,os=Linux}
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: mon.compute-0@0(leader).mds e1 new map
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: mon.compute-0@0(leader).mds e1 print_map#012e1#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: -1#012 #012No filesystems configured
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: log_channel(cluster) log [DBG] : fsmap 
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: mkfs 92cfe4d4-4917-5be1-9d00-73758793a62b
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Oct 10 23:18:35 np0005480824 systemd[1]: Started libpod-conmon-8b7ecd7cefca512dc89bce59d44c00a48c3730b0deaccc8a58037a0e26fa6428.scope.
Oct 10 23:18:35 np0005480824 ceph-mon[73973]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 10 23:18:35 np0005480824 podman[73974]: 2025-10-11 03:18:35.593990166 +0000 UTC m=+0.032490577 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:18:35 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:18:35 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9a49274c3a5ddb7d0fde0389804672e7ad9bdf89814be75ac91f18ad9519570/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:18:35 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9a49274c3a5ddb7d0fde0389804672e7ad9bdf89814be75ac91f18ad9519570/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 23:18:35 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9a49274c3a5ddb7d0fde0389804672e7ad9bdf89814be75ac91f18ad9519570/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 10 23:18:35 np0005480824 podman[73974]: 2025-10-11 03:18:35.727959059 +0000 UTC m=+0.166459390 container init 8b7ecd7cefca512dc89bce59d44c00a48c3730b0deaccc8a58037a0e26fa6428 (image=quay.io/ceph/ceph:v18, name=boring_aryabhata, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 10 23:18:35 np0005480824 podman[73974]: 2025-10-11 03:18:35.738721976 +0000 UTC m=+0.177222297 container start 8b7ecd7cefca512dc89bce59d44c00a48c3730b0deaccc8a58037a0e26fa6428 (image=quay.io/ceph/ceph:v18, name=boring_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:18:35 np0005480824 podman[73974]: 2025-10-11 03:18:35.7419712 +0000 UTC m=+0.180471541 container attach 8b7ecd7cefca512dc89bce59d44c00a48c3730b0deaccc8a58037a0e26fa6428 (image=quay.io/ceph/ceph:v18, name=boring_aryabhata, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:18:36 np0005480824 ceph-mon[73973]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Oct 10 23:18:36 np0005480824 ceph-mon[73973]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3390344887' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 10 23:18:36 np0005480824 boring_aryabhata[74028]:  cluster:
Oct 10 23:18:36 np0005480824 boring_aryabhata[74028]:    id:     92cfe4d4-4917-5be1-9d00-73758793a62b
Oct 10 23:18:36 np0005480824 boring_aryabhata[74028]:    health: HEALTH_OK
Oct 10 23:18:36 np0005480824 boring_aryabhata[74028]: 
Oct 10 23:18:36 np0005480824 boring_aryabhata[74028]:  services:
Oct 10 23:18:36 np0005480824 boring_aryabhata[74028]:    mon: 1 daemons, quorum compute-0 (age 0.528665s)
Oct 10 23:18:36 np0005480824 boring_aryabhata[74028]:    mgr: no daemons active
Oct 10 23:18:36 np0005480824 boring_aryabhata[74028]:    osd: 0 osds: 0 up, 0 in
Oct 10 23:18:36 np0005480824 boring_aryabhata[74028]: 
Oct 10 23:18:36 np0005480824 boring_aryabhata[74028]:  data:
Oct 10 23:18:36 np0005480824 boring_aryabhata[74028]:    pools:   0 pools, 0 pgs
Oct 10 23:18:36 np0005480824 boring_aryabhata[74028]:    objects: 0 objects, 0 B
Oct 10 23:18:36 np0005480824 boring_aryabhata[74028]:    usage:   0 B used, 0 B / 0 B avail
Oct 10 23:18:36 np0005480824 boring_aryabhata[74028]:    pgs:     
Oct 10 23:18:36 np0005480824 boring_aryabhata[74028]: 
Oct 10 23:18:36 np0005480824 systemd[1]: libpod-8b7ecd7cefca512dc89bce59d44c00a48c3730b0deaccc8a58037a0e26fa6428.scope: Deactivated successfully.
Oct 10 23:18:36 np0005480824 podman[73974]: 2025-10-11 03:18:36.17739896 +0000 UTC m=+0.615899281 container died 8b7ecd7cefca512dc89bce59d44c00a48c3730b0deaccc8a58037a0e26fa6428 (image=quay.io/ceph/ceph:v18, name=boring_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:18:36 np0005480824 systemd[1]: var-lib-containers-storage-overlay-d9a49274c3a5ddb7d0fde0389804672e7ad9bdf89814be75ac91f18ad9519570-merged.mount: Deactivated successfully.
Oct 10 23:18:36 np0005480824 podman[73974]: 2025-10-11 03:18:36.218989474 +0000 UTC m=+0.657489795 container remove 8b7ecd7cefca512dc89bce59d44c00a48c3730b0deaccc8a58037a0e26fa6428 (image=quay.io/ceph/ceph:v18, name=boring_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 10 23:18:36 np0005480824 systemd[1]: libpod-conmon-8b7ecd7cefca512dc89bce59d44c00a48c3730b0deaccc8a58037a0e26fa6428.scope: Deactivated successfully.
Oct 10 23:18:36 np0005480824 podman[74067]: 2025-10-11 03:18:36.300232828 +0000 UTC m=+0.061331488 container create 0dc88e42af3458bac4b1c15cac5f999893547eddd67518eff26f20cdc8e6f543 (image=quay.io/ceph/ceph:v18, name=loving_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:18:36 np0005480824 systemd[1]: Started libpod-conmon-0dc88e42af3458bac4b1c15cac5f999893547eddd67518eff26f20cdc8e6f543.scope.
Oct 10 23:18:36 np0005480824 podman[74067]: 2025-10-11 03:18:36.26848308 +0000 UTC m=+0.029581820 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:18:36 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:18:36 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9535205360cf63cb0ef700d0cd0b3d4a44fe1257e139d03d44d2dddcee25bdcd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:18:36 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9535205360cf63cb0ef700d0cd0b3d4a44fe1257e139d03d44d2dddcee25bdcd/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 23:18:36 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9535205360cf63cb0ef700d0cd0b3d4a44fe1257e139d03d44d2dddcee25bdcd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:18:36 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9535205360cf63cb0ef700d0cd0b3d4a44fe1257e139d03d44d2dddcee25bdcd/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 10 23:18:36 np0005480824 podman[74067]: 2025-10-11 03:18:36.39139626 +0000 UTC m=+0.152494930 container init 0dc88e42af3458bac4b1c15cac5f999893547eddd67518eff26f20cdc8e6f543 (image=quay.io/ceph/ceph:v18, name=loving_zhukovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 10 23:18:36 np0005480824 podman[74067]: 2025-10-11 03:18:36.404109851 +0000 UTC m=+0.165208521 container start 0dc88e42af3458bac4b1c15cac5f999893547eddd67518eff26f20cdc8e6f543 (image=quay.io/ceph/ceph:v18, name=loving_zhukovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:18:36 np0005480824 podman[74067]: 2025-10-11 03:18:36.408462461 +0000 UTC m=+0.169561131 container attach 0dc88e42af3458bac4b1c15cac5f999893547eddd67518eff26f20cdc8e6f543 (image=quay.io/ceph/ceph:v18, name=loving_zhukovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 10 23:18:36 np0005480824 ceph-mon[73973]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 10 23:18:36 np0005480824 ceph-mon[73973]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Oct 10 23:18:36 np0005480824 ceph-mon[73973]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/403811765' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 10 23:18:36 np0005480824 ceph-mon[73973]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/403811765' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct 10 23:18:36 np0005480824 loving_zhukovsky[74084]: 
Oct 10 23:18:36 np0005480824 loving_zhukovsky[74084]: [global]
Oct 10 23:18:36 np0005480824 loving_zhukovsky[74084]: #011fsid = 92cfe4d4-4917-5be1-9d00-73758793a62b
Oct 10 23:18:36 np0005480824 loving_zhukovsky[74084]: #011mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Oct 10 23:18:36 np0005480824 loving_zhukovsky[74084]: #011osd_crush_chooseleaf_type = 0
Oct 10 23:18:36 np0005480824 systemd[1]: libpod-0dc88e42af3458bac4b1c15cac5f999893547eddd67518eff26f20cdc8e6f543.scope: Deactivated successfully.
Oct 10 23:18:36 np0005480824 podman[74067]: 2025-10-11 03:18:36.795810888 +0000 UTC m=+0.556909528 container died 0dc88e42af3458bac4b1c15cac5f999893547eddd67518eff26f20cdc8e6f543 (image=quay.io/ceph/ceph:v18, name=loving_zhukovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 10 23:18:36 np0005480824 systemd[1]: var-lib-containers-storage-overlay-9535205360cf63cb0ef700d0cd0b3d4a44fe1257e139d03d44d2dddcee25bdcd-merged.mount: Deactivated successfully.
Oct 10 23:18:36 np0005480824 podman[74067]: 2025-10-11 03:18:36.844540306 +0000 UTC m=+0.605638946 container remove 0dc88e42af3458bac4b1c15cac5f999893547eddd67518eff26f20cdc8e6f543 (image=quay.io/ceph/ceph:v18, name=loving_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 10 23:18:36 np0005480824 systemd[1]: libpod-conmon-0dc88e42af3458bac4b1c15cac5f999893547eddd67518eff26f20cdc8e6f543.scope: Deactivated successfully.
Oct 10 23:18:36 np0005480824 podman[74122]: 2025-10-11 03:18:36.941411878 +0000 UTC m=+0.067403357 container create c44ed1dffc01d70a3df783816031702a8c72f511d9f07c0e9340e73e19d154f2 (image=quay.io/ceph/ceph:v18, name=trusting_albattani, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Oct 10 23:18:36 np0005480824 systemd[1]: Started libpod-conmon-c44ed1dffc01d70a3df783816031702a8c72f511d9f07c0e9340e73e19d154f2.scope.
Oct 10 23:18:37 np0005480824 podman[74122]: 2025-10-11 03:18:36.910579401 +0000 UTC m=+0.036570930 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:18:37 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:18:37 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/085fac1b562209893d6f57eaa0cc1864d5fc7ec69c3ca1b471a21e0060bc0c7d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 23:18:37 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/085fac1b562209893d6f57eaa0cc1864d5fc7ec69c3ca1b471a21e0060bc0c7d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:18:37 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/085fac1b562209893d6f57eaa0cc1864d5fc7ec69c3ca1b471a21e0060bc0c7d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:18:37 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/085fac1b562209893d6f57eaa0cc1864d5fc7ec69c3ca1b471a21e0060bc0c7d/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 10 23:18:37 np0005480824 podman[74122]: 2025-10-11 03:18:37.060412908 +0000 UTC m=+0.186404397 container init c44ed1dffc01d70a3df783816031702a8c72f511d9f07c0e9340e73e19d154f2 (image=quay.io/ceph/ceph:v18, name=trusting_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:18:37 np0005480824 podman[74122]: 2025-10-11 03:18:37.073763005 +0000 UTC m=+0.199754494 container start c44ed1dffc01d70a3df783816031702a8c72f511d9f07c0e9340e73e19d154f2 (image=quay.io/ceph/ceph:v18, name=trusting_albattani, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:18:37 np0005480824 podman[74122]: 2025-10-11 03:18:37.078718768 +0000 UTC m=+0.204710287 container attach c44ed1dffc01d70a3df783816031702a8c72f511d9f07c0e9340e73e19d154f2 (image=quay.io/ceph/ceph:v18, name=trusting_albattani, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:18:37 np0005480824 ceph-mon[73973]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:18:37 np0005480824 ceph-mon[73973]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2493874489' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:18:37 np0005480824 systemd[1]: libpod-c44ed1dffc01d70a3df783816031702a8c72f511d9f07c0e9340e73e19d154f2.scope: Deactivated successfully.
Oct 10 23:18:37 np0005480824 podman[74122]: 2025-10-11 03:18:37.48338453 +0000 UTC m=+0.609376019 container died c44ed1dffc01d70a3df783816031702a8c72f511d9f07c0e9340e73e19d154f2 (image=quay.io/ceph/ceph:v18, name=trusting_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 10 23:18:37 np0005480824 systemd[1]: var-lib-containers-storage-overlay-085fac1b562209893d6f57eaa0cc1864d5fc7ec69c3ca1b471a21e0060bc0c7d-merged.mount: Deactivated successfully.
Oct 10 23:18:37 np0005480824 podman[74122]: 2025-10-11 03:18:37.536323223 +0000 UTC m=+0.662314672 container remove c44ed1dffc01d70a3df783816031702a8c72f511d9f07c0e9340e73e19d154f2 (image=quay.io/ceph/ceph:v18, name=trusting_albattani, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:18:37 np0005480824 systemd[1]: libpod-conmon-c44ed1dffc01d70a3df783816031702a8c72f511d9f07c0e9340e73e19d154f2.scope: Deactivated successfully.
Oct 10 23:18:37 np0005480824 systemd[1]: Stopping Ceph mon.compute-0 for 92cfe4d4-4917-5be1-9d00-73758793a62b...
Oct 10 23:18:37 np0005480824 ceph-mon[73973]: from='client.? 192.168.122.100:0/403811765' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 10 23:18:37 np0005480824 ceph-mon[73973]: from='client.? 192.168.122.100:0/403811765' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct 10 23:18:37 np0005480824 ceph-mon[73973]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Oct 10 23:18:37 np0005480824 ceph-mon[73973]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Oct 10 23:18:37 np0005480824 ceph-mon[73973]: mon.compute-0@0(leader) e1 shutdown
Oct 10 23:18:37 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mon-compute-0[73969]: 2025-10-11T03:18:37.800+0000 7fb24579e640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Oct 10 23:18:37 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mon-compute-0[73969]: 2025-10-11T03:18:37.800+0000 7fb24579e640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Oct 10 23:18:37 np0005480824 ceph-mon[73973]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct 10 23:18:37 np0005480824 ceph-mon[73973]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct 10 23:18:37 np0005480824 podman[74207]: 2025-10-11 03:18:37.841295038 +0000 UTC m=+0.076059708 container died cc1ffd14f782506e30e77fbfca0de309b4d51a0d052ef06225accfb38eb0c899 (image=quay.io/ceph/ceph:v18, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:18:37 np0005480824 systemd[1]: var-lib-containers-storage-overlay-c93d18c6bba41c27f7a1f1b13536ba015894fc5a07317f1c94828da7330896f1-merged.mount: Deactivated successfully.
Oct 10 23:18:37 np0005480824 podman[74207]: 2025-10-11 03:18:37.886877055 +0000 UTC m=+0.121641685 container remove cc1ffd14f782506e30e77fbfca0de309b4d51a0d052ef06225accfb38eb0c899 (image=quay.io/ceph/ceph:v18, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 10 23:18:37 np0005480824 bash[74207]: ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mon-compute-0
Oct 10 23:18:37 np0005480824 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 10 23:18:37 np0005480824 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 10 23:18:38 np0005480824 systemd[1]: ceph-92cfe4d4-4917-5be1-9d00-73758793a62b@mon.compute-0.service: Deactivated successfully.
Oct 10 23:18:38 np0005480824 systemd[1]: Stopped Ceph mon.compute-0 for 92cfe4d4-4917-5be1-9d00-73758793a62b.
Oct 10 23:18:38 np0005480824 systemd[1]: ceph-92cfe4d4-4917-5be1-9d00-73758793a62b@mon.compute-0.service: Consumed 1.202s CPU time.
Oct 10 23:18:38 np0005480824 systemd[1]: Starting Ceph mon.compute-0 for 92cfe4d4-4917-5be1-9d00-73758793a62b...
Oct 10 23:18:38 np0005480824 podman[74307]: 2025-10-11 03:18:38.295736554 +0000 UTC m=+0.046542275 container create a848fe58749db588a5a4b8471e0c9916b9e4a1ccc899f04343e6491a43c45c05 (image=quay.io/ceph/ceph:v18, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 10 23:18:38 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ab7257e1e85cd1f6c4036bb37443e36f37b063e8b3d62e0aa6943aa2aa39de8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:18:38 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ab7257e1e85cd1f6c4036bb37443e36f37b063e8b3d62e0aa6943aa2aa39de8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:18:38 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ab7257e1e85cd1f6c4036bb37443e36f37b063e8b3d62e0aa6943aa2aa39de8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:18:38 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ab7257e1e85cd1f6c4036bb37443e36f37b063e8b3d62e0aa6943aa2aa39de8/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 10 23:18:38 np0005480824 podman[74307]: 2025-10-11 03:18:38.276692029 +0000 UTC m=+0.027497810 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:18:38 np0005480824 podman[74307]: 2025-10-11 03:18:38.373228298 +0000 UTC m=+0.124034059 container init a848fe58749db588a5a4b8471e0c9916b9e4a1ccc899f04343e6491a43c45c05 (image=quay.io/ceph/ceph:v18, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mon-compute-0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:18:38 np0005480824 podman[74307]: 2025-10-11 03:18:38.393660249 +0000 UTC m=+0.144465980 container start a848fe58749db588a5a4b8471e0c9916b9e4a1ccc899f04343e6491a43c45c05 (image=quay.io/ceph/ceph:v18, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:18:38 np0005480824 bash[74307]: a848fe58749db588a5a4b8471e0c9916b9e4a1ccc899f04343e6491a43c45c05
Oct 10 23:18:38 np0005480824 systemd[1]: Started Ceph mon.compute-0 for 92cfe4d4-4917-5be1-9d00-73758793a62b.
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: set uid:gid to 167:167 (ceph:ceph)
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: pidfile_write: ignore empty --pid-file
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: load: jerasure load: lrc 
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb: RocksDB version: 7.9.2
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb: Git sha 0
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb: DB SUMMARY
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb: DB Session ID:  RJ9TM4FJNNQ2AWDFT4YB
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb: CURRENT file:  CURRENT
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb: IDENTITY file:  IDENTITY
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 55672 ; 
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                         Options.error_if_exists: 0
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                       Options.create_if_missing: 0
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                         Options.paranoid_checks: 1
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                                     Options.env: 0x5617da55bc40
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                                      Options.fs: PosixFileSystem
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                                Options.info_log: 0x5617dbc8d040
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                Options.max_file_opening_threads: 16
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                              Options.statistics: (nil)
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                               Options.use_fsync: 0
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                       Options.max_log_file_size: 0
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                         Options.allow_fallocate: 1
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                        Options.use_direct_reads: 0
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:          Options.create_missing_column_families: 0
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                              Options.db_log_dir: 
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                                 Options.wal_dir: 
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                   Options.advise_random_on_open: 1
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                    Options.write_buffer_manager: 0x5617dbc9cb40
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                            Options.rate_limiter: (nil)
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                  Options.unordered_write: 0
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                               Options.row_cache: None
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                              Options.wal_filter: None
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:             Options.allow_ingest_behind: 0
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:             Options.two_write_queues: 0
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:             Options.manual_wal_flush: 0
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:             Options.wal_compression: 0
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:             Options.atomic_flush: 0
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                 Options.log_readahead_size: 0
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:             Options.allow_data_in_errors: 0
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:             Options.db_host_id: __hostname__
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:             Options.max_background_jobs: 2
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:             Options.max_background_compactions: -1
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:             Options.max_subcompactions: 1
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:             Options.max_total_wal_size: 0
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                          Options.max_open_files: -1
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                          Options.bytes_per_sync: 0
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:       Options.compaction_readahead_size: 0
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                  Options.max_background_flushes: -1
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb: Compression algorithms supported:
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb: #011kZSTD supported: 0
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb: #011kXpressCompression supported: 0
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb: #011kBZip2Compression supported: 0
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb: #011kLZ4Compression supported: 1
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb: #011kZlibCompression supported: 1
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb: #011kLZ4HCCompression supported: 1
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb: #011kSnappyCompression supported: 1
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:           Options.merge_operator: 
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:        Options.compaction_filter: None
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5617dbc8cc40)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5617dbc851f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:        Options.write_buffer_size: 33554432
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:  Options.max_write_buffer_number: 2
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:          Options.compression: NoCompression
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:             Options.num_levels: 7
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                           Options.bloom_locality: 0
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                               Options.ttl: 2592000
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                       Options.enable_blob_files: false
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                           Options.min_blob_size: 0
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: bc2c00b6-74ab-4bd1-957a-6c6a75fb61ca
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760152718431667, "job": 1, "event": "recovery_started", "wal_files": [9]}
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760152718436312, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 55253, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 138, "table_properties": {"data_size": 53793, "index_size": 166, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 261, "raw_key_size": 3050, "raw_average_key_size": 30, "raw_value_size": 51382, "raw_average_value_size": 508, "num_data_blocks": 9, "num_entries": 101, "num_filter_entries": 101, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760152718, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc2c00b6-74ab-4bd1-957a-6c6a75fb61ca", "db_session_id": "RJ9TM4FJNNQ2AWDFT4YB", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760152718436415, "job": 1, "event": "recovery_finished"}
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5617dbcaee00
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb: DB pointer 0x5617dbd38000
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0   55.86 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     12.2      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0   55.86 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     12.2      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     12.2      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.2      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 2.59 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 2.59 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5617dbc851f0#2 capacity: 512.00 MB usage: 25.89 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 6.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,25.11 KB,0.00478923%) FilterBlock(2,0.42 KB,8.04663e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 92cfe4d4-4917-5be1-9d00-73758793a62b
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: mon.compute-0@-1(???) e1 preinit fsid 92cfe4d4-4917-5be1-9d00-73758793a62b
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: mon.compute-0@-1(???).mds e1 new map
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: mon.compute-0@-1(???).mds e1 print_map
e1
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
legacy client fscid: -1

No filesystems configured
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: mon.compute-0@0(probing) e1 win_standalone_election
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : fsmap 
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Oct 10 23:18:38 np0005480824 podman[74327]: 2025-10-11 03:18:38.49290657 +0000 UTC m=+0.053073768 container create d25b96dfacf5f33770d201e5653d9d638dafb03f69c40936648278fabbf32030 (image=quay.io/ceph/ceph:v18, name=gifted_benz, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 10 23:18:38 np0005480824 systemd[1]: Started libpod-conmon-d25b96dfacf5f33770d201e5653d9d638dafb03f69c40936648278fabbf32030.scope.
Oct 10 23:18:38 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:18:38 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/240f5eb5995ae03edd23593eac94bbdc22706034f0c9f4c93749757e2c7e06ab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:18:38 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/240f5eb5995ae03edd23593eac94bbdc22706034f0c9f4c93749757e2c7e06ab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:18:38 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/240f5eb5995ae03edd23593eac94bbdc22706034f0c9f4c93749757e2c7e06ab/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 23:18:38 np0005480824 podman[74327]: 2025-10-11 03:18:38.46835311 +0000 UTC m=+0.028520358 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:18:38 np0005480824 podman[74327]: 2025-10-11 03:18:38.570714883 +0000 UTC m=+0.130882131 container init d25b96dfacf5f33770d201e5653d9d638dafb03f69c40936648278fabbf32030 (image=quay.io/ceph/ceph:v18, name=gifted_benz, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:18:38 np0005480824 podman[74327]: 2025-10-11 03:18:38.585130365 +0000 UTC m=+0.145297563 container start d25b96dfacf5f33770d201e5653d9d638dafb03f69c40936648278fabbf32030 (image=quay.io/ceph/ceph:v18, name=gifted_benz, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:18:38 np0005480824 podman[74327]: 2025-10-11 03:18:38.588114904 +0000 UTC m=+0.148282122 container attach d25b96dfacf5f33770d201e5653d9d638dafb03f69c40936648278fabbf32030 (image=quay.io/ceph/ceph:v18, name=gifted_benz, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Oct 10 23:18:38 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0) v1
Oct 10 23:18:38 np0005480824 systemd[1]: libpod-d25b96dfacf5f33770d201e5653d9d638dafb03f69c40936648278fabbf32030.scope: Deactivated successfully.
Oct 10 23:18:38 np0005480824 podman[74327]: 2025-10-11 03:18:38.969232977 +0000 UTC m=+0.529400175 container died d25b96dfacf5f33770d201e5653d9d638dafb03f69c40936648278fabbf32030 (image=quay.io/ceph/ceph:v18, name=gifted_benz, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:18:38 np0005480824 systemd[1]: var-lib-containers-storage-overlay-240f5eb5995ae03edd23593eac94bbdc22706034f0c9f4c93749757e2c7e06ab-merged.mount: Deactivated successfully.
Oct 10 23:18:39 np0005480824 podman[74327]: 2025-10-11 03:18:39.016709825 +0000 UTC m=+0.576877023 container remove d25b96dfacf5f33770d201e5653d9d638dafb03f69c40936648278fabbf32030 (image=quay.io/ceph/ceph:v18, name=gifted_benz, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:18:39 np0005480824 systemd[1]: libpod-conmon-d25b96dfacf5f33770d201e5653d9d638dafb03f69c40936648278fabbf32030.scope: Deactivated successfully.
Oct 10 23:18:39 np0005480824 podman[74418]: 2025-10-11 03:18:39.09272137 +0000 UTC m=+0.044167301 container create 7f82cd84ffaedcb24270967bf293bb04aa608c6eb0f26b96209cb1403dd1c8b9 (image=quay.io/ceph/ceph:v18, name=cranky_clarke, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 10 23:18:39 np0005480824 systemd[1]: Started libpod-conmon-7f82cd84ffaedcb24270967bf293bb04aa608c6eb0f26b96209cb1403dd1c8b9.scope.
Oct 10 23:18:39 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:18:39 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/982e9290743c39448fcf9b3758a32a9c3a59804f568062b509bbc84b399300bd/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 23:18:39 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/982e9290743c39448fcf9b3758a32a9c3a59804f568062b509bbc84b399300bd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:18:39 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/982e9290743c39448fcf9b3758a32a9c3a59804f568062b509bbc84b399300bd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:18:39 np0005480824 podman[74418]: 2025-10-11 03:18:39.074697583 +0000 UTC m=+0.026143544 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:18:39 np0005480824 podman[74418]: 2025-10-11 03:18:39.184725739 +0000 UTC m=+0.136171740 container init 7f82cd84ffaedcb24270967bf293bb04aa608c6eb0f26b96209cb1403dd1c8b9 (image=quay.io/ceph/ceph:v18, name=cranky_clarke, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 10 23:18:39 np0005480824 podman[74418]: 2025-10-11 03:18:39.190702747 +0000 UTC m=+0.142148688 container start 7f82cd84ffaedcb24270967bf293bb04aa608c6eb0f26b96209cb1403dd1c8b9 (image=quay.io/ceph/ceph:v18, name=cranky_clarke, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:18:39 np0005480824 podman[74418]: 2025-10-11 03:18:39.196046349 +0000 UTC m=+0.147492270 container attach 7f82cd84ffaedcb24270967bf293bb04aa608c6eb0f26b96209cb1403dd1c8b9 (image=quay.io/ceph/ceph:v18, name=cranky_clarke, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:18:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0) v1
Oct 10 23:18:39 np0005480824 systemd[1]: libpod-7f82cd84ffaedcb24270967bf293bb04aa608c6eb0f26b96209cb1403dd1c8b9.scope: Deactivated successfully.
Oct 10 23:18:39 np0005480824 podman[74418]: 2025-10-11 03:18:39.602651648 +0000 UTC m=+0.554097569 container died 7f82cd84ffaedcb24270967bf293bb04aa608c6eb0f26b96209cb1403dd1c8b9 (image=quay.io/ceph/ceph:v18, name=cranky_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 10 23:18:39 np0005480824 systemd[1]: var-lib-containers-storage-overlay-982e9290743c39448fcf9b3758a32a9c3a59804f568062b509bbc84b399300bd-merged.mount: Deactivated successfully.
Oct 10 23:18:39 np0005480824 podman[74418]: 2025-10-11 03:18:39.65893368 +0000 UTC m=+0.610379641 container remove 7f82cd84ffaedcb24270967bf293bb04aa608c6eb0f26b96209cb1403dd1c8b9 (image=quay.io/ceph/ceph:v18, name=cranky_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:18:39 np0005480824 systemd[1]: libpod-conmon-7f82cd84ffaedcb24270967bf293bb04aa608c6eb0f26b96209cb1403dd1c8b9.scope: Deactivated successfully.
Oct 10 23:18:39 np0005480824 systemd[1]: Reloading.
Oct 10 23:18:39 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:18:39 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:18:40 np0005480824 systemd[1]: Reloading.
Oct 10 23:18:40 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:18:40 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:18:40 np0005480824 systemd[1]: Starting Ceph mgr.compute-0.pdyrua for 92cfe4d4-4917-5be1-9d00-73758793a62b...
Oct 10 23:18:40 np0005480824 podman[74598]: 2025-10-11 03:18:40.622130632 +0000 UTC m=+0.055166364 container create 5396d33f03d79fbf0e6626513c29ad41ed11bd6b7439ed7e048b771ff7bb44ba (image=quay.io/ceph/ceph:v18, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-pdyrua, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:18:40 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a08d0c353c41da7206b42681633aec861b88746aeb6dcadae20d769749654ae1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:18:40 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a08d0c353c41da7206b42681633aec861b88746aeb6dcadae20d769749654ae1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:18:40 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a08d0c353c41da7206b42681633aec861b88746aeb6dcadae20d769749654ae1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:18:40 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a08d0c353c41da7206b42681633aec861b88746aeb6dcadae20d769749654ae1/merged/var/lib/ceph/mgr/ceph-compute-0.pdyrua supports timestamps until 2038 (0x7fffffff)
Oct 10 23:18:40 np0005480824 podman[74598]: 2025-10-11 03:18:40.589558899 +0000 UTC m=+0.022594691 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:18:40 np0005480824 podman[74598]: 2025-10-11 03:18:40.694579953 +0000 UTC m=+0.127615725 container init 5396d33f03d79fbf0e6626513c29ad41ed11bd6b7439ed7e048b771ff7bb44ba (image=quay.io/ceph/ceph:v18, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-pdyrua, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 10 23:18:40 np0005480824 podman[74598]: 2025-10-11 03:18:40.706001726 +0000 UTC m=+0.139037448 container start 5396d33f03d79fbf0e6626513c29ad41ed11bd6b7439ed7e048b771ff7bb44ba (image=quay.io/ceph/ceph:v18, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-pdyrua, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 10 23:18:40 np0005480824 bash[74598]: 5396d33f03d79fbf0e6626513c29ad41ed11bd6b7439ed7e048b771ff7bb44ba
Oct 10 23:18:40 np0005480824 systemd[1]: Started Ceph mgr.compute-0.pdyrua for 92cfe4d4-4917-5be1-9d00-73758793a62b.
Oct 10 23:18:40 np0005480824 ceph-mgr[74617]: set uid:gid to 167:167 (ceph:ceph)
Oct 10 23:18:40 np0005480824 ceph-mgr[74617]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Oct 10 23:18:40 np0005480824 ceph-mgr[74617]: pidfile_write: ignore empty --pid-file
Oct 10 23:18:40 np0005480824 podman[74618]: 2025-10-11 03:18:40.828342138 +0000 UTC m=+0.068353303 container create ead08ef4189b641f7b16f88ab029472ff2947b269fe45282675b066a3b0d9424 (image=quay.io/ceph/ceph:v18, name=condescending_visvesvaraya, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 10 23:18:40 np0005480824 ceph-mgr[74617]: mgr[py] Loading python module 'alerts'
Oct 10 23:18:40 np0005480824 systemd[1]: Started libpod-conmon-ead08ef4189b641f7b16f88ab029472ff2947b269fe45282675b066a3b0d9424.scope.
Oct 10 23:18:40 np0005480824 podman[74618]: 2025-10-11 03:18:40.796439962 +0000 UTC m=+0.036451157 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:18:40 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:18:40 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/754aa6e15bcef8eb101fab76a8abe8a9bc31d0d25283ce797531c0d22632a6fc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:18:40 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/754aa6e15bcef8eb101fab76a8abe8a9bc31d0d25283ce797531c0d22632a6fc/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 23:18:40 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/754aa6e15bcef8eb101fab76a8abe8a9bc31d0d25283ce797531c0d22632a6fc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:18:40 np0005480824 podman[74618]: 2025-10-11 03:18:40.958778656 +0000 UTC m=+0.198789881 container init ead08ef4189b641f7b16f88ab029472ff2947b269fe45282675b066a3b0d9424 (image=quay.io/ceph/ceph:v18, name=condescending_visvesvaraya, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 10 23:18:40 np0005480824 podman[74618]: 2025-10-11 03:18:40.973052224 +0000 UTC m=+0.213063389 container start ead08ef4189b641f7b16f88ab029472ff2947b269fe45282675b066a3b0d9424 (image=quay.io/ceph/ceph:v18, name=condescending_visvesvaraya, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:18:40 np0005480824 podman[74618]: 2025-10-11 03:18:40.978695524 +0000 UTC m=+0.218706749 container attach ead08ef4189b641f7b16f88ab029472ff2947b269fe45282675b066a3b0d9424 (image=quay.io/ceph/ceph:v18, name=condescending_visvesvaraya, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:18:41 np0005480824 ceph-mgr[74617]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 10 23:18:41 np0005480824 ceph-mgr[74617]: mgr[py] Loading python module 'balancer'
Oct 10 23:18:41 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-pdyrua[74613]: 2025-10-11T03:18:41.169+0000 7ff2c3f8e140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 10 23:18:41 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 10 23:18:41 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/201081999' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 10 23:18:41 np0005480824 condescending_visvesvaraya[74658]: 
Oct 10 23:18:41 np0005480824 condescending_visvesvaraya[74658]: {
Oct 10 23:18:41 np0005480824 condescending_visvesvaraya[74658]:    "fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:18:41 np0005480824 condescending_visvesvaraya[74658]:    "health": {
Oct 10 23:18:41 np0005480824 condescending_visvesvaraya[74658]:        "status": "HEALTH_OK",
Oct 10 23:18:41 np0005480824 condescending_visvesvaraya[74658]:        "checks": {},
Oct 10 23:18:41 np0005480824 condescending_visvesvaraya[74658]:        "mutes": []
Oct 10 23:18:41 np0005480824 condescending_visvesvaraya[74658]:    },
Oct 10 23:18:41 np0005480824 condescending_visvesvaraya[74658]:    "election_epoch": 5,
Oct 10 23:18:41 np0005480824 condescending_visvesvaraya[74658]:    "quorum": [
Oct 10 23:18:41 np0005480824 condescending_visvesvaraya[74658]:        0
Oct 10 23:18:41 np0005480824 condescending_visvesvaraya[74658]:    ],
Oct 10 23:18:41 np0005480824 condescending_visvesvaraya[74658]:    "quorum_names": [
Oct 10 23:18:41 np0005480824 condescending_visvesvaraya[74658]:        "compute-0"
Oct 10 23:18:41 np0005480824 condescending_visvesvaraya[74658]:    ],
Oct 10 23:18:41 np0005480824 condescending_visvesvaraya[74658]:    "quorum_age": 2,
Oct 10 23:18:41 np0005480824 condescending_visvesvaraya[74658]:    "monmap": {
Oct 10 23:18:41 np0005480824 condescending_visvesvaraya[74658]:        "epoch": 1,
Oct 10 23:18:41 np0005480824 condescending_visvesvaraya[74658]:        "min_mon_release_name": "reef",
Oct 10 23:18:41 np0005480824 condescending_visvesvaraya[74658]:        "num_mons": 1
Oct 10 23:18:41 np0005480824 condescending_visvesvaraya[74658]:    },
Oct 10 23:18:41 np0005480824 condescending_visvesvaraya[74658]:    "osdmap": {
Oct 10 23:18:41 np0005480824 condescending_visvesvaraya[74658]:        "epoch": 1,
Oct 10 23:18:41 np0005480824 condescending_visvesvaraya[74658]:        "num_osds": 0,
Oct 10 23:18:41 np0005480824 condescending_visvesvaraya[74658]:        "num_up_osds": 0,
Oct 10 23:18:41 np0005480824 condescending_visvesvaraya[74658]:        "osd_up_since": 0,
Oct 10 23:18:41 np0005480824 condescending_visvesvaraya[74658]:        "num_in_osds": 0,
Oct 10 23:18:41 np0005480824 condescending_visvesvaraya[74658]:        "osd_in_since": 0,
Oct 10 23:18:41 np0005480824 condescending_visvesvaraya[74658]:        "num_remapped_pgs": 0
Oct 10 23:18:41 np0005480824 condescending_visvesvaraya[74658]:    },
Oct 10 23:18:41 np0005480824 condescending_visvesvaraya[74658]:    "pgmap": {
Oct 10 23:18:41 np0005480824 condescending_visvesvaraya[74658]:        "pgs_by_state": [],
Oct 10 23:18:41 np0005480824 condescending_visvesvaraya[74658]:        "num_pgs": 0,
Oct 10 23:18:41 np0005480824 condescending_visvesvaraya[74658]:        "num_pools": 0,
Oct 10 23:18:41 np0005480824 condescending_visvesvaraya[74658]:        "num_objects": 0,
Oct 10 23:18:41 np0005480824 condescending_visvesvaraya[74658]:        "data_bytes": 0,
Oct 10 23:18:41 np0005480824 condescending_visvesvaraya[74658]:        "bytes_used": 0,
Oct 10 23:18:41 np0005480824 condescending_visvesvaraya[74658]:        "bytes_avail": 0,
Oct 10 23:18:41 np0005480824 condescending_visvesvaraya[74658]:        "bytes_total": 0
Oct 10 23:18:41 np0005480824 condescending_visvesvaraya[74658]:    },
Oct 10 23:18:41 np0005480824 condescending_visvesvaraya[74658]:    "fsmap": {
Oct 10 23:18:41 np0005480824 condescending_visvesvaraya[74658]:        "epoch": 1,
Oct 10 23:18:41 np0005480824 condescending_visvesvaraya[74658]:        "by_rank": [],
Oct 10 23:18:41 np0005480824 condescending_visvesvaraya[74658]:        "up:standby": 0
Oct 10 23:18:41 np0005480824 condescending_visvesvaraya[74658]:    },
Oct 10 23:18:41 np0005480824 condescending_visvesvaraya[74658]:    "mgrmap": {
Oct 10 23:18:41 np0005480824 condescending_visvesvaraya[74658]:        "available": false,
Oct 10 23:18:41 np0005480824 condescending_visvesvaraya[74658]:        "num_standbys": 0,
Oct 10 23:18:41 np0005480824 condescending_visvesvaraya[74658]:        "modules": [
Oct 10 23:18:41 np0005480824 condescending_visvesvaraya[74658]:            "iostat",
Oct 10 23:18:41 np0005480824 condescending_visvesvaraya[74658]:            "nfs",
Oct 10 23:18:41 np0005480824 condescending_visvesvaraya[74658]:            "restful"
Oct 10 23:18:41 np0005480824 condescending_visvesvaraya[74658]:        ],
Oct 10 23:18:41 np0005480824 condescending_visvesvaraya[74658]:        "services": {}
Oct 10 23:18:41 np0005480824 condescending_visvesvaraya[74658]:    },
Oct 10 23:18:41 np0005480824 condescending_visvesvaraya[74658]:    "servicemap": {
Oct 10 23:18:41 np0005480824 condescending_visvesvaraya[74658]:        "epoch": 1,
Oct 10 23:18:41 np0005480824 condescending_visvesvaraya[74658]:        "modified": "2025-10-11T03:18:35.640728+0000",
Oct 10 23:18:41 np0005480824 condescending_visvesvaraya[74658]:        "services": {}
Oct 10 23:18:41 np0005480824 condescending_visvesvaraya[74658]:    },
Oct 10 23:18:41 np0005480824 condescending_visvesvaraya[74658]:    "progress_events": {}
Oct 10 23:18:41 np0005480824 condescending_visvesvaraya[74658]: }
Oct 10 23:18:41 np0005480824 systemd[1]: libpod-ead08ef4189b641f7b16f88ab029472ff2947b269fe45282675b066a3b0d9424.scope: Deactivated successfully.
Oct 10 23:18:41 np0005480824 podman[74618]: 2025-10-11 03:18:41.394397163 +0000 UTC m=+0.634408388 container died ead08ef4189b641f7b16f88ab029472ff2947b269fe45282675b066a3b0d9424 (image=quay.io/ceph/ceph:v18, name=condescending_visvesvaraya, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:18:41 np0005480824 systemd[1]: var-lib-containers-storage-overlay-754aa6e15bcef8eb101fab76a8abe8a9bc31d0d25283ce797531c0d22632a6fc-merged.mount: Deactivated successfully.
Oct 10 23:18:41 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-pdyrua[74613]: 2025-10-11T03:18:41.460+0000 7ff2c3f8e140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 10 23:18:41 np0005480824 ceph-mgr[74617]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 10 23:18:41 np0005480824 ceph-mgr[74617]: mgr[py] Loading python module 'cephadm'
Oct 10 23:18:41 np0005480824 podman[74618]: 2025-10-11 03:18:41.488096297 +0000 UTC m=+0.728107462 container remove ead08ef4189b641f7b16f88ab029472ff2947b269fe45282675b066a3b0d9424 (image=quay.io/ceph/ceph:v18, name=condescending_visvesvaraya, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 10 23:18:41 np0005480824 systemd[1]: libpod-conmon-ead08ef4189b641f7b16f88ab029472ff2947b269fe45282675b066a3b0d9424.scope: Deactivated successfully.
Oct 10 23:18:43 np0005480824 ceph-mgr[74617]: mgr[py] Loading python module 'crash'
Oct 10 23:18:43 np0005480824 podman[74707]: 2025-10-11 03:18:43.593340973 +0000 UTC m=+0.071026134 container create 729a21f4e24f31ab5c07109e392f3e9463beee58befc25ab8179c4cb51597c42 (image=quay.io/ceph/ceph:v18, name=adoring_cori, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 10 23:18:43 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-pdyrua[74613]: 2025-10-11T03:18:43.615+0000 7ff2c3f8e140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 10 23:18:43 np0005480824 ceph-mgr[74617]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 10 23:18:43 np0005480824 ceph-mgr[74617]: mgr[py] Loading python module 'dashboard'
Oct 10 23:18:43 np0005480824 systemd[1]: Started libpod-conmon-729a21f4e24f31ab5c07109e392f3e9463beee58befc25ab8179c4cb51597c42.scope.
Oct 10 23:18:43 np0005480824 podman[74707]: 2025-10-11 03:18:43.565199278 +0000 UTC m=+0.042884499 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:18:43 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:18:43 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d98bc256ed5c862e776d5458910e407ec4ab0ca656dd59e52c0cdf44023ac51e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:18:43 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d98bc256ed5c862e776d5458910e407ec4ab0ca656dd59e52c0cdf44023ac51e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 23:18:43 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d98bc256ed5c862e776d5458910e407ec4ab0ca656dd59e52c0cdf44023ac51e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:18:43 np0005480824 podman[74707]: 2025-10-11 03:18:43.700133984 +0000 UTC m=+0.177819215 container init 729a21f4e24f31ab5c07109e392f3e9463beee58befc25ab8179c4cb51597c42 (image=quay.io/ceph/ceph:v18, name=adoring_cori, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 10 23:18:43 np0005480824 podman[74707]: 2025-10-11 03:18:43.711359762 +0000 UTC m=+0.189044923 container start 729a21f4e24f31ab5c07109e392f3e9463beee58befc25ab8179c4cb51597c42 (image=quay.io/ceph/ceph:v18, name=adoring_cori, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:18:43 np0005480824 podman[74707]: 2025-10-11 03:18:43.716366335 +0000 UTC m=+0.194051646 container attach 729a21f4e24f31ab5c07109e392f3e9463beee58befc25ab8179c4cb51597c42 (image=quay.io/ceph/ceph:v18, name=adoring_cori, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 10 23:18:44 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 10 23:18:44 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/582072683' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 10 23:18:44 np0005480824 adoring_cori[74724]: 
Oct 10 23:18:44 np0005480824 adoring_cori[74724]: {
Oct 10 23:18:44 np0005480824 adoring_cori[74724]:    "fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:18:44 np0005480824 adoring_cori[74724]:    "health": {
Oct 10 23:18:44 np0005480824 adoring_cori[74724]:        "status": "HEALTH_OK",
Oct 10 23:18:44 np0005480824 adoring_cori[74724]:        "checks": {},
Oct 10 23:18:44 np0005480824 adoring_cori[74724]:        "mutes": []
Oct 10 23:18:44 np0005480824 adoring_cori[74724]:    },
Oct 10 23:18:44 np0005480824 adoring_cori[74724]:    "election_epoch": 5,
Oct 10 23:18:44 np0005480824 adoring_cori[74724]:    "quorum": [
Oct 10 23:18:44 np0005480824 adoring_cori[74724]:        0
Oct 10 23:18:44 np0005480824 adoring_cori[74724]:    ],
Oct 10 23:18:44 np0005480824 adoring_cori[74724]:    "quorum_names": [
Oct 10 23:18:44 np0005480824 adoring_cori[74724]:        "compute-0"
Oct 10 23:18:44 np0005480824 adoring_cori[74724]:    ],
Oct 10 23:18:44 np0005480824 adoring_cori[74724]:    "quorum_age": 5,
Oct 10 23:18:44 np0005480824 adoring_cori[74724]:    "monmap": {
Oct 10 23:18:44 np0005480824 adoring_cori[74724]:        "epoch": 1,
Oct 10 23:18:44 np0005480824 adoring_cori[74724]:        "min_mon_release_name": "reef",
Oct 10 23:18:44 np0005480824 adoring_cori[74724]:        "num_mons": 1
Oct 10 23:18:44 np0005480824 adoring_cori[74724]:    },
Oct 10 23:18:44 np0005480824 adoring_cori[74724]:    "osdmap": {
Oct 10 23:18:44 np0005480824 adoring_cori[74724]:        "epoch": 1,
Oct 10 23:18:44 np0005480824 adoring_cori[74724]:        "num_osds": 0,
Oct 10 23:18:44 np0005480824 adoring_cori[74724]:        "num_up_osds": 0,
Oct 10 23:18:44 np0005480824 adoring_cori[74724]:        "osd_up_since": 0,
Oct 10 23:18:44 np0005480824 adoring_cori[74724]:        "num_in_osds": 0,
Oct 10 23:18:44 np0005480824 adoring_cori[74724]:        "osd_in_since": 0,
Oct 10 23:18:44 np0005480824 adoring_cori[74724]:        "num_remapped_pgs": 0
Oct 10 23:18:44 np0005480824 adoring_cori[74724]:    },
Oct 10 23:18:44 np0005480824 adoring_cori[74724]:    "pgmap": {
Oct 10 23:18:44 np0005480824 adoring_cori[74724]:        "pgs_by_state": [],
Oct 10 23:18:44 np0005480824 adoring_cori[74724]:        "num_pgs": 0,
Oct 10 23:18:44 np0005480824 adoring_cori[74724]:        "num_pools": 0,
Oct 10 23:18:44 np0005480824 adoring_cori[74724]:        "num_objects": 0,
Oct 10 23:18:44 np0005480824 adoring_cori[74724]:        "data_bytes": 0,
Oct 10 23:18:44 np0005480824 adoring_cori[74724]:        "bytes_used": 0,
Oct 10 23:18:44 np0005480824 adoring_cori[74724]:        "bytes_avail": 0,
Oct 10 23:18:44 np0005480824 adoring_cori[74724]:        "bytes_total": 0
Oct 10 23:18:44 np0005480824 adoring_cori[74724]:    },
Oct 10 23:18:44 np0005480824 adoring_cori[74724]:    "fsmap": {
Oct 10 23:18:44 np0005480824 adoring_cori[74724]:        "epoch": 1,
Oct 10 23:18:44 np0005480824 adoring_cori[74724]:        "by_rank": [],
Oct 10 23:18:44 np0005480824 adoring_cori[74724]:        "up:standby": 0
Oct 10 23:18:44 np0005480824 adoring_cori[74724]:    },
Oct 10 23:18:44 np0005480824 adoring_cori[74724]:    "mgrmap": {
Oct 10 23:18:44 np0005480824 adoring_cori[74724]:        "available": false,
Oct 10 23:18:44 np0005480824 adoring_cori[74724]:        "num_standbys": 0,
Oct 10 23:18:44 np0005480824 adoring_cori[74724]:        "modules": [
Oct 10 23:18:44 np0005480824 adoring_cori[74724]:            "iostat",
Oct 10 23:18:44 np0005480824 adoring_cori[74724]:            "nfs",
Oct 10 23:18:44 np0005480824 adoring_cori[74724]:            "restful"
Oct 10 23:18:44 np0005480824 adoring_cori[74724]:        ],
Oct 10 23:18:44 np0005480824 adoring_cori[74724]:        "services": {}
Oct 10 23:18:44 np0005480824 adoring_cori[74724]:    },
Oct 10 23:18:44 np0005480824 adoring_cori[74724]:    "servicemap": {
Oct 10 23:18:44 np0005480824 adoring_cori[74724]:        "epoch": 1,
Oct 10 23:18:44 np0005480824 adoring_cori[74724]:        "modified": "2025-10-11T03:18:35.640728+0000",
Oct 10 23:18:44 np0005480824 adoring_cori[74724]:        "services": {}
Oct 10 23:18:44 np0005480824 adoring_cori[74724]:    },
Oct 10 23:18:44 np0005480824 adoring_cori[74724]:    "progress_events": {}
Oct 10 23:18:44 np0005480824 adoring_cori[74724]: }
Oct 10 23:18:44 np0005480824 systemd[1]: libpod-729a21f4e24f31ab5c07109e392f3e9463beee58befc25ab8179c4cb51597c42.scope: Deactivated successfully.
Oct 10 23:18:44 np0005480824 podman[74750]: 2025-10-11 03:18:44.237121179 +0000 UTC m=+0.037331391 container died 729a21f4e24f31ab5c07109e392f3e9463beee58befc25ab8179c4cb51597c42 (image=quay.io/ceph/ceph:v18, name=adoring_cori, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:18:44 np0005480824 systemd[1]: var-lib-containers-storage-overlay-d98bc256ed5c862e776d5458910e407ec4ab0ca656dd59e52c0cdf44023ac51e-merged.mount: Deactivated successfully.
Oct 10 23:18:44 np0005480824 podman[74750]: 2025-10-11 03:18:44.491385348 +0000 UTC m=+0.291595540 container remove 729a21f4e24f31ab5c07109e392f3e9463beee58befc25ab8179c4cb51597c42 (image=quay.io/ceph/ceph:v18, name=adoring_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 10 23:18:44 np0005480824 systemd[1]: libpod-conmon-729a21f4e24f31ab5c07109e392f3e9463beee58befc25ab8179c4cb51597c42.scope: Deactivated successfully.
Oct 10 23:18:45 np0005480824 ceph-mgr[74617]: mgr[py] Loading python module 'devicehealth'
Oct 10 23:18:45 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-pdyrua[74613]: 2025-10-11T03:18:45.286+0000 7ff2c3f8e140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 10 23:18:45 np0005480824 ceph-mgr[74617]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 10 23:18:45 np0005480824 ceph-mgr[74617]: mgr[py] Loading python module 'diskprediction_local'
Oct 10 23:18:45 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-pdyrua[74613]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct 10 23:18:45 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-pdyrua[74613]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct 10 23:18:45 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-pdyrua[74613]:  from numpy import show_config as show_numpy_config
Oct 10 23:18:45 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-pdyrua[74613]: 2025-10-11T03:18:45.804+0000 7ff2c3f8e140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 10 23:18:45 np0005480824 ceph-mgr[74617]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 10 23:18:45 np0005480824 ceph-mgr[74617]: mgr[py] Loading python module 'influx'
Oct 10 23:18:46 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-pdyrua[74613]: 2025-10-11T03:18:46.041+0000 7ff2c3f8e140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 10 23:18:46 np0005480824 ceph-mgr[74617]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 10 23:18:46 np0005480824 ceph-mgr[74617]: mgr[py] Loading python module 'insights'
Oct 10 23:18:46 np0005480824 ceph-mgr[74617]: mgr[py] Loading python module 'iostat'
Oct 10 23:18:46 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-pdyrua[74613]: 2025-10-11T03:18:46.493+0000 7ff2c3f8e140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 10 23:18:46 np0005480824 ceph-mgr[74617]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 10 23:18:46 np0005480824 ceph-mgr[74617]: mgr[py] Loading python module 'k8sevents'
Oct 10 23:18:46 np0005480824 podman[74765]: 2025-10-11 03:18:46.609778373 +0000 UTC m=+0.069344489 container create bb80f4fb13b3cb01112cca505070124f6d3ca3ee20114d42f39bb6a0c2022482 (image=quay.io/ceph/ceph:v18, name=nice_gagarin, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 10 23:18:46 np0005480824 systemd[1]: Started libpod-conmon-bb80f4fb13b3cb01112cca505070124f6d3ca3ee20114d42f39bb6a0c2022482.scope.
Oct 10 23:18:46 np0005480824 podman[74765]: 2025-10-11 03:18:46.581494624 +0000 UTC m=+0.041060750 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:18:46 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:18:46 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9580b6e7dfcc4e10b3ff127d29e94956ef5f0f5569f5f566ec4ef050700a09dd/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 23:18:46 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9580b6e7dfcc4e10b3ff127d29e94956ef5f0f5569f5f566ec4ef050700a09dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:18:46 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9580b6e7dfcc4e10b3ff127d29e94956ef5f0f5569f5f566ec4ef050700a09dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:18:46 np0005480824 podman[74765]: 2025-10-11 03:18:46.713709518 +0000 UTC m=+0.173275684 container init bb80f4fb13b3cb01112cca505070124f6d3ca3ee20114d42f39bb6a0c2022482 (image=quay.io/ceph/ceph:v18, name=nice_gagarin, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:18:46 np0005480824 podman[74765]: 2025-10-11 03:18:46.724762001 +0000 UTC m=+0.184328107 container start bb80f4fb13b3cb01112cca505070124f6d3ca3ee20114d42f39bb6a0c2022482 (image=quay.io/ceph/ceph:v18, name=nice_gagarin, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:18:46 np0005480824 podman[74765]: 2025-10-11 03:18:46.72886875 +0000 UTC m=+0.188434926 container attach bb80f4fb13b3cb01112cca505070124f6d3ca3ee20114d42f39bb6a0c2022482 (image=quay.io/ceph/ceph:v18, name=nice_gagarin, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 10 23:18:47 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 10 23:18:47 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3088091662' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 10 23:18:47 np0005480824 nice_gagarin[74781]: 
Oct 10 23:18:47 np0005480824 nice_gagarin[74781]: {
Oct 10 23:18:47 np0005480824 nice_gagarin[74781]:    "fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:18:47 np0005480824 nice_gagarin[74781]:    "health": {
Oct 10 23:18:47 np0005480824 nice_gagarin[74781]:        "status": "HEALTH_OK",
Oct 10 23:18:47 np0005480824 nice_gagarin[74781]:        "checks": {},
Oct 10 23:18:47 np0005480824 nice_gagarin[74781]:        "mutes": []
Oct 10 23:18:47 np0005480824 nice_gagarin[74781]:    },
Oct 10 23:18:47 np0005480824 nice_gagarin[74781]:    "election_epoch": 5,
Oct 10 23:18:47 np0005480824 nice_gagarin[74781]:    "quorum": [
Oct 10 23:18:47 np0005480824 nice_gagarin[74781]:        0
Oct 10 23:18:47 np0005480824 nice_gagarin[74781]:    ],
Oct 10 23:18:47 np0005480824 nice_gagarin[74781]:    "quorum_names": [
Oct 10 23:18:47 np0005480824 nice_gagarin[74781]:        "compute-0"
Oct 10 23:18:47 np0005480824 nice_gagarin[74781]:    ],
Oct 10 23:18:47 np0005480824 nice_gagarin[74781]:    "quorum_age": 8,
Oct 10 23:18:47 np0005480824 nice_gagarin[74781]:    "monmap": {
Oct 10 23:18:47 np0005480824 nice_gagarin[74781]:        "epoch": 1,
Oct 10 23:18:47 np0005480824 nice_gagarin[74781]:        "min_mon_release_name": "reef",
Oct 10 23:18:47 np0005480824 nice_gagarin[74781]:        "num_mons": 1
Oct 10 23:18:47 np0005480824 nice_gagarin[74781]:    },
Oct 10 23:18:47 np0005480824 nice_gagarin[74781]:    "osdmap": {
Oct 10 23:18:47 np0005480824 nice_gagarin[74781]:        "epoch": 1,
Oct 10 23:18:47 np0005480824 nice_gagarin[74781]:        "num_osds": 0,
Oct 10 23:18:47 np0005480824 nice_gagarin[74781]:        "num_up_osds": 0,
Oct 10 23:18:47 np0005480824 nice_gagarin[74781]:        "osd_up_since": 0,
Oct 10 23:18:47 np0005480824 nice_gagarin[74781]:        "num_in_osds": 0,
Oct 10 23:18:47 np0005480824 nice_gagarin[74781]:        "osd_in_since": 0,
Oct 10 23:18:47 np0005480824 nice_gagarin[74781]:        "num_remapped_pgs": 0
Oct 10 23:18:47 np0005480824 nice_gagarin[74781]:    },
Oct 10 23:18:47 np0005480824 nice_gagarin[74781]:    "pgmap": {
Oct 10 23:18:47 np0005480824 nice_gagarin[74781]:        "pgs_by_state": [],
Oct 10 23:18:47 np0005480824 nice_gagarin[74781]:        "num_pgs": 0,
Oct 10 23:18:47 np0005480824 nice_gagarin[74781]:        "num_pools": 0,
Oct 10 23:18:47 np0005480824 nice_gagarin[74781]:        "num_objects": 0,
Oct 10 23:18:47 np0005480824 nice_gagarin[74781]:        "data_bytes": 0,
Oct 10 23:18:47 np0005480824 nice_gagarin[74781]:        "bytes_used": 0,
Oct 10 23:18:47 np0005480824 nice_gagarin[74781]:        "bytes_avail": 0,
Oct 10 23:18:47 np0005480824 nice_gagarin[74781]:        "bytes_total": 0
Oct 10 23:18:47 np0005480824 nice_gagarin[74781]:    },
Oct 10 23:18:47 np0005480824 nice_gagarin[74781]:    "fsmap": {
Oct 10 23:18:47 np0005480824 nice_gagarin[74781]:        "epoch": 1,
Oct 10 23:18:47 np0005480824 nice_gagarin[74781]:        "by_rank": [],
Oct 10 23:18:47 np0005480824 nice_gagarin[74781]:        "up:standby": 0
Oct 10 23:18:47 np0005480824 nice_gagarin[74781]:    },
Oct 10 23:18:47 np0005480824 nice_gagarin[74781]:    "mgrmap": {
Oct 10 23:18:47 np0005480824 nice_gagarin[74781]:        "available": false,
Oct 10 23:18:47 np0005480824 nice_gagarin[74781]:        "num_standbys": 0,
Oct 10 23:18:47 np0005480824 nice_gagarin[74781]:        "modules": [
Oct 10 23:18:47 np0005480824 nice_gagarin[74781]:            "iostat",
Oct 10 23:18:47 np0005480824 nice_gagarin[74781]:            "nfs",
Oct 10 23:18:47 np0005480824 nice_gagarin[74781]:            "restful"
Oct 10 23:18:47 np0005480824 nice_gagarin[74781]:        ],
Oct 10 23:18:47 np0005480824 nice_gagarin[74781]:        "services": {}
Oct 10 23:18:47 np0005480824 nice_gagarin[74781]:    },
Oct 10 23:18:47 np0005480824 nice_gagarin[74781]:    "servicemap": {
Oct 10 23:18:47 np0005480824 nice_gagarin[74781]:        "epoch": 1,
Oct 10 23:18:47 np0005480824 nice_gagarin[74781]:        "modified": "2025-10-11T03:18:35.640728+0000",
Oct 10 23:18:47 np0005480824 nice_gagarin[74781]:        "services": {}
Oct 10 23:18:47 np0005480824 nice_gagarin[74781]:    },
Oct 10 23:18:47 np0005480824 nice_gagarin[74781]:    "progress_events": {}
Oct 10 23:18:47 np0005480824 nice_gagarin[74781]: }
Oct 10 23:18:47 np0005480824 systemd[1]: libpod-bb80f4fb13b3cb01112cca505070124f6d3ca3ee20114d42f39bb6a0c2022482.scope: Deactivated successfully.
Oct 10 23:18:47 np0005480824 podman[74807]: 2025-10-11 03:18:47.155848948 +0000 UTC m=+0.023658318 container died bb80f4fb13b3cb01112cca505070124f6d3ca3ee20114d42f39bb6a0c2022482 (image=quay.io/ceph/ceph:v18, name=nice_gagarin, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 10 23:18:47 np0005480824 systemd[1]: var-lib-containers-storage-overlay-9580b6e7dfcc4e10b3ff127d29e94956ef5f0f5569f5f566ec4ef050700a09dd-merged.mount: Deactivated successfully.
Oct 10 23:18:47 np0005480824 podman[74807]: 2025-10-11 03:18:47.195938131 +0000 UTC m=+0.063747501 container remove bb80f4fb13b3cb01112cca505070124f6d3ca3ee20114d42f39bb6a0c2022482 (image=quay.io/ceph/ceph:v18, name=nice_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 10 23:18:47 np0005480824 systemd[1]: libpod-conmon-bb80f4fb13b3cb01112cca505070124f6d3ca3ee20114d42f39bb6a0c2022482.scope: Deactivated successfully.
Oct 10 23:18:48 np0005480824 ceph-mgr[74617]: mgr[py] Loading python module 'localpool'
Oct 10 23:18:48 np0005480824 ceph-mgr[74617]: mgr[py] Loading python module 'mds_autoscaler'
Oct 10 23:18:49 np0005480824 ceph-mgr[74617]: mgr[py] Loading python module 'mirroring'
Oct 10 23:18:49 np0005480824 podman[74822]: 2025-10-11 03:18:49.266369315 +0000 UTC m=+0.041544703 container create 7f9946838a1acafe0305307db7962a881f9d623d404f335c65fc4891cd90d3c9 (image=quay.io/ceph/ceph:v18, name=recursing_cray, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:18:49 np0005480824 systemd[1]: Started libpod-conmon-7f9946838a1acafe0305307db7962a881f9d623d404f335c65fc4891cd90d3c9.scope.
Oct 10 23:18:49 np0005480824 ceph-mgr[74617]: mgr[py] Loading python module 'nfs'
Oct 10 23:18:49 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:18:49 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23954280ff57e893f59a04d7004159b830518109b50641c8f87526b75044f2f5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:18:49 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23954280ff57e893f59a04d7004159b830518109b50641c8f87526b75044f2f5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 23:18:49 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23954280ff57e893f59a04d7004159b830518109b50641c8f87526b75044f2f5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:18:49 np0005480824 podman[74822]: 2025-10-11 03:18:49.335909398 +0000 UTC m=+0.111084766 container init 7f9946838a1acafe0305307db7962a881f9d623d404f335c65fc4891cd90d3c9 (image=quay.io/ceph/ceph:v18, name=recursing_cray, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 10 23:18:49 np0005480824 podman[74822]: 2025-10-11 03:18:49.243479788 +0000 UTC m=+0.018655166 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:18:49 np0005480824 podman[74822]: 2025-10-11 03:18:49.342134933 +0000 UTC m=+0.117310311 container start 7f9946838a1acafe0305307db7962a881f9d623d404f335c65fc4891cd90d3c9 (image=quay.io/ceph/ceph:v18, name=recursing_cray, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 10 23:18:49 np0005480824 podman[74822]: 2025-10-11 03:18:49.347294369 +0000 UTC m=+0.122469717 container attach 7f9946838a1acafe0305307db7962a881f9d623d404f335c65fc4891cd90d3c9 (image=quay.io/ceph/ceph:v18, name=recursing_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 10 23:18:49 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 10 23:18:49 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2910587701' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 10 23:18:49 np0005480824 recursing_cray[74838]: 
Oct 10 23:18:49 np0005480824 recursing_cray[74838]: {
Oct 10 23:18:49 np0005480824 recursing_cray[74838]:    "fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:18:49 np0005480824 recursing_cray[74838]:    "health": {
Oct 10 23:18:49 np0005480824 recursing_cray[74838]:        "status": "HEALTH_OK",
Oct 10 23:18:49 np0005480824 recursing_cray[74838]:        "checks": {},
Oct 10 23:18:49 np0005480824 recursing_cray[74838]:        "mutes": []
Oct 10 23:18:49 np0005480824 recursing_cray[74838]:    },
Oct 10 23:18:49 np0005480824 recursing_cray[74838]:    "election_epoch": 5,
Oct 10 23:18:49 np0005480824 recursing_cray[74838]:    "quorum": [
Oct 10 23:18:49 np0005480824 recursing_cray[74838]:        0
Oct 10 23:18:49 np0005480824 recursing_cray[74838]:    ],
Oct 10 23:18:49 np0005480824 recursing_cray[74838]:    "quorum_names": [
Oct 10 23:18:49 np0005480824 recursing_cray[74838]:        "compute-0"
Oct 10 23:18:49 np0005480824 recursing_cray[74838]:    ],
Oct 10 23:18:49 np0005480824 recursing_cray[74838]:    "quorum_age": 11,
Oct 10 23:18:49 np0005480824 recursing_cray[74838]:    "monmap": {
Oct 10 23:18:49 np0005480824 recursing_cray[74838]:        "epoch": 1,
Oct 10 23:18:49 np0005480824 recursing_cray[74838]:        "min_mon_release_name": "reef",
Oct 10 23:18:49 np0005480824 recursing_cray[74838]:        "num_mons": 1
Oct 10 23:18:49 np0005480824 recursing_cray[74838]:    },
Oct 10 23:18:49 np0005480824 recursing_cray[74838]:    "osdmap": {
Oct 10 23:18:49 np0005480824 recursing_cray[74838]:        "epoch": 1,
Oct 10 23:18:49 np0005480824 recursing_cray[74838]:        "num_osds": 0,
Oct 10 23:18:49 np0005480824 recursing_cray[74838]:        "num_up_osds": 0,
Oct 10 23:18:49 np0005480824 recursing_cray[74838]:        "osd_up_since": 0,
Oct 10 23:18:49 np0005480824 recursing_cray[74838]:        "num_in_osds": 0,
Oct 10 23:18:49 np0005480824 recursing_cray[74838]:        "osd_in_since": 0,
Oct 10 23:18:49 np0005480824 recursing_cray[74838]:        "num_remapped_pgs": 0
Oct 10 23:18:49 np0005480824 recursing_cray[74838]:    },
Oct 10 23:18:49 np0005480824 recursing_cray[74838]:    "pgmap": {
Oct 10 23:18:49 np0005480824 recursing_cray[74838]:        "pgs_by_state": [],
Oct 10 23:18:49 np0005480824 recursing_cray[74838]:        "num_pgs": 0,
Oct 10 23:18:49 np0005480824 recursing_cray[74838]:        "num_pools": 0,
Oct 10 23:18:49 np0005480824 recursing_cray[74838]:        "num_objects": 0,
Oct 10 23:18:49 np0005480824 recursing_cray[74838]:        "data_bytes": 0,
Oct 10 23:18:49 np0005480824 recursing_cray[74838]:        "bytes_used": 0,
Oct 10 23:18:49 np0005480824 recursing_cray[74838]:        "bytes_avail": 0,
Oct 10 23:18:49 np0005480824 recursing_cray[74838]:        "bytes_total": 0
Oct 10 23:18:49 np0005480824 recursing_cray[74838]:    },
Oct 10 23:18:49 np0005480824 recursing_cray[74838]:    "fsmap": {
Oct 10 23:18:49 np0005480824 recursing_cray[74838]:        "epoch": 1,
Oct 10 23:18:49 np0005480824 recursing_cray[74838]:        "by_rank": [],
Oct 10 23:18:49 np0005480824 recursing_cray[74838]:        "up:standby": 0
Oct 10 23:18:49 np0005480824 recursing_cray[74838]:    },
Oct 10 23:18:49 np0005480824 recursing_cray[74838]:    "mgrmap": {
Oct 10 23:18:49 np0005480824 recursing_cray[74838]:        "available": false,
Oct 10 23:18:49 np0005480824 recursing_cray[74838]:        "num_standbys": 0,
Oct 10 23:18:49 np0005480824 recursing_cray[74838]:        "modules": [
Oct 10 23:18:49 np0005480824 recursing_cray[74838]:            "iostat",
Oct 10 23:18:49 np0005480824 recursing_cray[74838]:            "nfs",
Oct 10 23:18:49 np0005480824 recursing_cray[74838]:            "restful"
Oct 10 23:18:49 np0005480824 recursing_cray[74838]:        ],
Oct 10 23:18:49 np0005480824 recursing_cray[74838]:        "services": {}
Oct 10 23:18:49 np0005480824 recursing_cray[74838]:    },
Oct 10 23:18:49 np0005480824 recursing_cray[74838]:    "servicemap": {
Oct 10 23:18:49 np0005480824 recursing_cray[74838]:        "epoch": 1,
Oct 10 23:18:49 np0005480824 recursing_cray[74838]:        "modified": "2025-10-11T03:18:35.640728+0000",
Oct 10 23:18:49 np0005480824 recursing_cray[74838]:        "services": {}
Oct 10 23:18:49 np0005480824 recursing_cray[74838]:    },
Oct 10 23:18:49 np0005480824 recursing_cray[74838]:    "progress_events": {}
Oct 10 23:18:49 np0005480824 recursing_cray[74838]: }
Oct 10 23:18:49 np0005480824 systemd[1]: libpod-7f9946838a1acafe0305307db7962a881f9d623d404f335c65fc4891cd90d3c9.scope: Deactivated successfully.
Oct 10 23:18:49 np0005480824 podman[74822]: 2025-10-11 03:18:49.737567085 +0000 UTC m=+0.512742443 container died 7f9946838a1acafe0305307db7962a881f9d623d404f335c65fc4891cd90d3c9 (image=quay.io/ceph/ceph:v18, name=recursing_cray, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:18:49 np0005480824 systemd[1]: var-lib-containers-storage-overlay-23954280ff57e893f59a04d7004159b830518109b50641c8f87526b75044f2f5-merged.mount: Deactivated successfully.
Oct 10 23:18:49 np0005480824 podman[74822]: 2025-10-11 03:18:49.797775071 +0000 UTC m=+0.572950419 container remove 7f9946838a1acafe0305307db7962a881f9d623d404f335c65fc4891cd90d3c9 (image=quay.io/ceph/ceph:v18, name=recursing_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:18:49 np0005480824 systemd[1]: libpod-conmon-7f9946838a1acafe0305307db7962a881f9d623d404f335c65fc4891cd90d3c9.scope: Deactivated successfully.
Oct 10 23:18:50 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-pdyrua[74613]: 2025-10-11T03:18:49.999+0000 7ff2c3f8e140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 10 23:18:50 np0005480824 ceph-mgr[74617]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 10 23:18:50 np0005480824 ceph-mgr[74617]: mgr[py] Loading python module 'orchestrator'
Oct 10 23:18:50 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-pdyrua[74613]: 2025-10-11T03:18:50.659+0000 7ff2c3f8e140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 10 23:18:50 np0005480824 ceph-mgr[74617]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 10 23:18:50 np0005480824 ceph-mgr[74617]: mgr[py] Loading python module 'osd_perf_query'
Oct 10 23:18:50 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-pdyrua[74613]: 2025-10-11T03:18:50.925+0000 7ff2c3f8e140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 10 23:18:50 np0005480824 ceph-mgr[74617]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 10 23:18:50 np0005480824 ceph-mgr[74617]: mgr[py] Loading python module 'osd_support'
Oct 10 23:18:51 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-pdyrua[74613]: 2025-10-11T03:18:51.157+0000 7ff2c3f8e140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 10 23:18:51 np0005480824 ceph-mgr[74617]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 10 23:18:51 np0005480824 ceph-mgr[74617]: mgr[py] Loading python module 'pg_autoscaler'
Oct 10 23:18:51 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-pdyrua[74613]: 2025-10-11T03:18:51.414+0000 7ff2c3f8e140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 10 23:18:51 np0005480824 ceph-mgr[74617]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 10 23:18:51 np0005480824 ceph-mgr[74617]: mgr[py] Loading python module 'progress'
Oct 10 23:18:51 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-pdyrua[74613]: 2025-10-11T03:18:51.646+0000 7ff2c3f8e140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 10 23:18:51 np0005480824 ceph-mgr[74617]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 10 23:18:51 np0005480824 ceph-mgr[74617]: mgr[py] Loading python module 'prometheus'
Oct 10 23:18:51 np0005480824 podman[74877]: 2025-10-11 03:18:51.917418129 +0000 UTC m=+0.093368547 container create f6c012f9f9ed3ab8900cd98310b20c2143a8c2681d84c1122183436f05daabd4 (image=quay.io/ceph/ceph:v18, name=practical_wilson, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 10 23:18:51 np0005480824 podman[74877]: 2025-10-11 03:18:51.8499306 +0000 UTC m=+0.025881028 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:18:51 np0005480824 systemd[1]: Started libpod-conmon-f6c012f9f9ed3ab8900cd98310b20c2143a8c2681d84c1122183436f05daabd4.scope.
Oct 10 23:18:51 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:18:52 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/564dedf83cbbd15e9d5212bbe83c790f2cfe2f5ee9a34977686495f8f54e7a3e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:18:52 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/564dedf83cbbd15e9d5212bbe83c790f2cfe2f5ee9a34977686495f8f54e7a3e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:18:52 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/564dedf83cbbd15e9d5212bbe83c790f2cfe2f5ee9a34977686495f8f54e7a3e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 23:18:52 np0005480824 podman[74877]: 2025-10-11 03:18:52.017695897 +0000 UTC m=+0.193646305 container init f6c012f9f9ed3ab8900cd98310b20c2143a8c2681d84c1122183436f05daabd4 (image=quay.io/ceph/ceph:v18, name=practical_wilson, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:18:52 np0005480824 podman[74877]: 2025-10-11 03:18:52.025843252 +0000 UTC m=+0.201793660 container start f6c012f9f9ed3ab8900cd98310b20c2143a8c2681d84c1122183436f05daabd4 (image=quay.io/ceph/ceph:v18, name=practical_wilson, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:18:52 np0005480824 podman[74877]: 2025-10-11 03:18:52.030343913 +0000 UTC m=+0.206294311 container attach f6c012f9f9ed3ab8900cd98310b20c2143a8c2681d84c1122183436f05daabd4 (image=quay.io/ceph/ceph:v18, name=practical_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 10 23:18:52 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 10 23:18:52 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2322442774' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 10 23:18:52 np0005480824 practical_wilson[74893]: 
Oct 10 23:18:52 np0005480824 practical_wilson[74893]: {
Oct 10 23:18:52 np0005480824 practical_wilson[74893]:    "fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:18:52 np0005480824 practical_wilson[74893]:    "health": {
Oct 10 23:18:52 np0005480824 practical_wilson[74893]:        "status": "HEALTH_OK",
Oct 10 23:18:52 np0005480824 practical_wilson[74893]:        "checks": {},
Oct 10 23:18:52 np0005480824 practical_wilson[74893]:        "mutes": []
Oct 10 23:18:52 np0005480824 practical_wilson[74893]:    },
Oct 10 23:18:52 np0005480824 practical_wilson[74893]:    "election_epoch": 5,
Oct 10 23:18:52 np0005480824 practical_wilson[74893]:    "quorum": [
Oct 10 23:18:52 np0005480824 practical_wilson[74893]:        0
Oct 10 23:18:52 np0005480824 practical_wilson[74893]:    ],
Oct 10 23:18:52 np0005480824 practical_wilson[74893]:    "quorum_names": [
Oct 10 23:18:52 np0005480824 practical_wilson[74893]:        "compute-0"
Oct 10 23:18:52 np0005480824 practical_wilson[74893]:    ],
Oct 10 23:18:52 np0005480824 practical_wilson[74893]:    "quorum_age": 13,
Oct 10 23:18:52 np0005480824 practical_wilson[74893]:    "monmap": {
Oct 10 23:18:52 np0005480824 practical_wilson[74893]:        "epoch": 1,
Oct 10 23:18:52 np0005480824 practical_wilson[74893]:        "min_mon_release_name": "reef",
Oct 10 23:18:52 np0005480824 practical_wilson[74893]:        "num_mons": 1
Oct 10 23:18:52 np0005480824 practical_wilson[74893]:    },
Oct 10 23:18:52 np0005480824 practical_wilson[74893]:    "osdmap": {
Oct 10 23:18:52 np0005480824 practical_wilson[74893]:        "epoch": 1,
Oct 10 23:18:52 np0005480824 practical_wilson[74893]:        "num_osds": 0,
Oct 10 23:18:52 np0005480824 practical_wilson[74893]:        "num_up_osds": 0,
Oct 10 23:18:52 np0005480824 practical_wilson[74893]:        "osd_up_since": 0,
Oct 10 23:18:52 np0005480824 practical_wilson[74893]:        "num_in_osds": 0,
Oct 10 23:18:52 np0005480824 practical_wilson[74893]:        "osd_in_since": 0,
Oct 10 23:18:52 np0005480824 practical_wilson[74893]:        "num_remapped_pgs": 0
Oct 10 23:18:52 np0005480824 practical_wilson[74893]:    },
Oct 10 23:18:52 np0005480824 practical_wilson[74893]:    "pgmap": {
Oct 10 23:18:52 np0005480824 practical_wilson[74893]:        "pgs_by_state": [],
Oct 10 23:18:52 np0005480824 practical_wilson[74893]:        "num_pgs": 0,
Oct 10 23:18:52 np0005480824 practical_wilson[74893]:        "num_pools": 0,
Oct 10 23:18:52 np0005480824 practical_wilson[74893]:        "num_objects": 0,
Oct 10 23:18:52 np0005480824 practical_wilson[74893]:        "data_bytes": 0,
Oct 10 23:18:52 np0005480824 practical_wilson[74893]:        "bytes_used": 0,
Oct 10 23:18:52 np0005480824 practical_wilson[74893]:        "bytes_avail": 0,
Oct 10 23:18:52 np0005480824 practical_wilson[74893]:        "bytes_total": 0
Oct 10 23:18:52 np0005480824 practical_wilson[74893]:    },
Oct 10 23:18:52 np0005480824 practical_wilson[74893]:    "fsmap": {
Oct 10 23:18:52 np0005480824 practical_wilson[74893]:        "epoch": 1,
Oct 10 23:18:52 np0005480824 practical_wilson[74893]:        "by_rank": [],
Oct 10 23:18:52 np0005480824 practical_wilson[74893]:        "up:standby": 0
Oct 10 23:18:52 np0005480824 practical_wilson[74893]:    },
Oct 10 23:18:52 np0005480824 practical_wilson[74893]:    "mgrmap": {
Oct 10 23:18:52 np0005480824 practical_wilson[74893]:        "available": false,
Oct 10 23:18:52 np0005480824 practical_wilson[74893]:        "num_standbys": 0,
Oct 10 23:18:52 np0005480824 practical_wilson[74893]:        "modules": [
Oct 10 23:18:52 np0005480824 practical_wilson[74893]:            "iostat",
Oct 10 23:18:52 np0005480824 practical_wilson[74893]:            "nfs",
Oct 10 23:18:52 np0005480824 practical_wilson[74893]:            "restful"
Oct 10 23:18:52 np0005480824 practical_wilson[74893]:        ],
Oct 10 23:18:52 np0005480824 practical_wilson[74893]:        "services": {}
Oct 10 23:18:52 np0005480824 practical_wilson[74893]:    },
Oct 10 23:18:52 np0005480824 practical_wilson[74893]:    "servicemap": {
Oct 10 23:18:52 np0005480824 practical_wilson[74893]:        "epoch": 1,
Oct 10 23:18:52 np0005480824 practical_wilson[74893]:        "modified": "2025-10-11T03:18:35.640728+0000",
Oct 10 23:18:52 np0005480824 practical_wilson[74893]:        "services": {}
Oct 10 23:18:52 np0005480824 practical_wilson[74893]:    },
Oct 10 23:18:52 np0005480824 practical_wilson[74893]:    "progress_events": {}
Oct 10 23:18:52 np0005480824 practical_wilson[74893]: }
Oct 10 23:18:52 np0005480824 systemd[1]: libpod-f6c012f9f9ed3ab8900cd98310b20c2143a8c2681d84c1122183436f05daabd4.scope: Deactivated successfully.
Oct 10 23:18:52 np0005480824 podman[74877]: 2025-10-11 03:18:52.423295848 +0000 UTC m=+0.599246246 container died f6c012f9f9ed3ab8900cd98310b20c2143a8c2681d84c1122183436f05daabd4 (image=quay.io/ceph/ceph:v18, name=practical_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 10 23:18:52 np0005480824 systemd[1]: var-lib-containers-storage-overlay-564dedf83cbbd15e9d5212bbe83c790f2cfe2f5ee9a34977686495f8f54e7a3e-merged.mount: Deactivated successfully.
Oct 10 23:18:52 np0005480824 podman[74877]: 2025-10-11 03:18:52.484561203 +0000 UTC m=+0.660511611 container remove f6c012f9f9ed3ab8900cd98310b20c2143a8c2681d84c1122183436f05daabd4 (image=quay.io/ceph/ceph:v18, name=practical_wilson, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 10 23:18:52 np0005480824 systemd[1]: libpod-conmon-f6c012f9f9ed3ab8900cd98310b20c2143a8c2681d84c1122183436f05daabd4.scope: Deactivated successfully.
Oct 10 23:18:52 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-pdyrua[74613]: 2025-10-11T03:18:52.671+0000 7ff2c3f8e140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 10 23:18:52 np0005480824 ceph-mgr[74617]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 10 23:18:52 np0005480824 ceph-mgr[74617]: mgr[py] Loading python module 'rbd_support'
Oct 10 23:18:52 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-pdyrua[74613]: 2025-10-11T03:18:52.962+0000 7ff2c3f8e140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 10 23:18:52 np0005480824 ceph-mgr[74617]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 10 23:18:52 np0005480824 ceph-mgr[74617]: mgr[py] Loading python module 'restful'
Oct 10 23:18:53 np0005480824 ceph-mgr[74617]: mgr[py] Loading python module 'rgw'
Oct 10 23:18:54 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-pdyrua[74613]: 2025-10-11T03:18:54.342+0000 7ff2c3f8e140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 10 23:18:54 np0005480824 ceph-mgr[74617]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 10 23:18:54 np0005480824 ceph-mgr[74617]: mgr[py] Loading python module 'rook'
Oct 10 23:18:54 np0005480824 podman[74932]: 2025-10-11 03:18:54.591393622 +0000 UTC m=+0.074898827 container create 66f0283ce773fd7ee341520c7510402690492f7f54175b6287bb8684c7f9a2f8 (image=quay.io/ceph/ceph:v18, name=silly_chandrasekhar, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:18:54 np0005480824 systemd[1]: Started libpod-conmon-66f0283ce773fd7ee341520c7510402690492f7f54175b6287bb8684c7f9a2f8.scope.
Oct 10 23:18:54 np0005480824 podman[74932]: 2025-10-11 03:18:54.556127257 +0000 UTC m=+0.039632562 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:18:54 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:18:54 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dcde6872a96021a527a21eb1360f71672566b96e77a60400c297837f03655d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:18:54 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dcde6872a96021a527a21eb1360f71672566b96e77a60400c297837f03655d2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 23:18:54 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dcde6872a96021a527a21eb1360f71672566b96e77a60400c297837f03655d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:18:54 np0005480824 podman[74932]: 2025-10-11 03:18:54.699323323 +0000 UTC m=+0.182828618 container init 66f0283ce773fd7ee341520c7510402690492f7f54175b6287bb8684c7f9a2f8 (image=quay.io/ceph/ceph:v18, name=silly_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 10 23:18:54 np0005480824 podman[74932]: 2025-10-11 03:18:54.707780557 +0000 UTC m=+0.191285792 container start 66f0283ce773fd7ee341520c7510402690492f7f54175b6287bb8684c7f9a2f8 (image=quay.io/ceph/ceph:v18, name=silly_chandrasekhar, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:18:54 np0005480824 podman[74932]: 2025-10-11 03:18:54.712308987 +0000 UTC m=+0.195814232 container attach 66f0283ce773fd7ee341520c7510402690492f7f54175b6287bb8684c7f9a2f8 (image=quay.io/ceph/ceph:v18, name=silly_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 10 23:18:55 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 10 23:18:55 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4019969563' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 10 23:18:55 np0005480824 silly_chandrasekhar[74948]: 
Oct 10 23:18:55 np0005480824 silly_chandrasekhar[74948]: {
Oct 10 23:18:55 np0005480824 silly_chandrasekhar[74948]:    "fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:18:55 np0005480824 silly_chandrasekhar[74948]:    "health": {
Oct 10 23:18:55 np0005480824 silly_chandrasekhar[74948]:        "status": "HEALTH_OK",
Oct 10 23:18:55 np0005480824 silly_chandrasekhar[74948]:        "checks": {},
Oct 10 23:18:55 np0005480824 silly_chandrasekhar[74948]:        "mutes": []
Oct 10 23:18:55 np0005480824 silly_chandrasekhar[74948]:    },
Oct 10 23:18:55 np0005480824 silly_chandrasekhar[74948]:    "election_epoch": 5,
Oct 10 23:18:55 np0005480824 silly_chandrasekhar[74948]:    "quorum": [
Oct 10 23:18:55 np0005480824 silly_chandrasekhar[74948]:        0
Oct 10 23:18:55 np0005480824 silly_chandrasekhar[74948]:    ],
Oct 10 23:18:55 np0005480824 silly_chandrasekhar[74948]:    "quorum_names": [
Oct 10 23:18:55 np0005480824 silly_chandrasekhar[74948]:        "compute-0"
Oct 10 23:18:55 np0005480824 silly_chandrasekhar[74948]:    ],
Oct 10 23:18:55 np0005480824 silly_chandrasekhar[74948]:    "quorum_age": 16,
Oct 10 23:18:55 np0005480824 silly_chandrasekhar[74948]:    "monmap": {
Oct 10 23:18:55 np0005480824 silly_chandrasekhar[74948]:        "epoch": 1,
Oct 10 23:18:55 np0005480824 silly_chandrasekhar[74948]:        "min_mon_release_name": "reef",
Oct 10 23:18:55 np0005480824 silly_chandrasekhar[74948]:        "num_mons": 1
Oct 10 23:18:55 np0005480824 silly_chandrasekhar[74948]:    },
Oct 10 23:18:55 np0005480824 silly_chandrasekhar[74948]:    "osdmap": {
Oct 10 23:18:55 np0005480824 silly_chandrasekhar[74948]:        "epoch": 1,
Oct 10 23:18:55 np0005480824 silly_chandrasekhar[74948]:        "num_osds": 0,
Oct 10 23:18:55 np0005480824 silly_chandrasekhar[74948]:        "num_up_osds": 0,
Oct 10 23:18:55 np0005480824 silly_chandrasekhar[74948]:        "osd_up_since": 0,
Oct 10 23:18:55 np0005480824 silly_chandrasekhar[74948]:        "num_in_osds": 0,
Oct 10 23:18:55 np0005480824 silly_chandrasekhar[74948]:        "osd_in_since": 0,
Oct 10 23:18:55 np0005480824 silly_chandrasekhar[74948]:        "num_remapped_pgs": 0
Oct 10 23:18:55 np0005480824 silly_chandrasekhar[74948]:    },
Oct 10 23:18:55 np0005480824 silly_chandrasekhar[74948]:    "pgmap": {
Oct 10 23:18:55 np0005480824 silly_chandrasekhar[74948]:        "pgs_by_state": [],
Oct 10 23:18:55 np0005480824 silly_chandrasekhar[74948]:        "num_pgs": 0,
Oct 10 23:18:55 np0005480824 silly_chandrasekhar[74948]:        "num_pools": 0,
Oct 10 23:18:55 np0005480824 silly_chandrasekhar[74948]:        "num_objects": 0,
Oct 10 23:18:55 np0005480824 silly_chandrasekhar[74948]:        "data_bytes": 0,
Oct 10 23:18:55 np0005480824 silly_chandrasekhar[74948]:        "bytes_used": 0,
Oct 10 23:18:55 np0005480824 silly_chandrasekhar[74948]:        "bytes_avail": 0,
Oct 10 23:18:55 np0005480824 silly_chandrasekhar[74948]:        "bytes_total": 0
Oct 10 23:18:55 np0005480824 silly_chandrasekhar[74948]:    },
Oct 10 23:18:55 np0005480824 silly_chandrasekhar[74948]:    "fsmap": {
Oct 10 23:18:55 np0005480824 silly_chandrasekhar[74948]:        "epoch": 1,
Oct 10 23:18:55 np0005480824 silly_chandrasekhar[74948]:        "by_rank": [],
Oct 10 23:18:55 np0005480824 silly_chandrasekhar[74948]:        "up:standby": 0
Oct 10 23:18:55 np0005480824 silly_chandrasekhar[74948]:    },
Oct 10 23:18:55 np0005480824 silly_chandrasekhar[74948]:    "mgrmap": {
Oct 10 23:18:55 np0005480824 silly_chandrasekhar[74948]:        "available": false,
Oct 10 23:18:55 np0005480824 silly_chandrasekhar[74948]:        "num_standbys": 0,
Oct 10 23:18:55 np0005480824 silly_chandrasekhar[74948]:        "modules": [
Oct 10 23:18:55 np0005480824 silly_chandrasekhar[74948]:            "iostat",
Oct 10 23:18:55 np0005480824 silly_chandrasekhar[74948]:            "nfs",
Oct 10 23:18:55 np0005480824 silly_chandrasekhar[74948]:            "restful"
Oct 10 23:18:55 np0005480824 silly_chandrasekhar[74948]:        ],
Oct 10 23:18:55 np0005480824 silly_chandrasekhar[74948]:        "services": {}
Oct 10 23:18:55 np0005480824 silly_chandrasekhar[74948]:    },
Oct 10 23:18:55 np0005480824 silly_chandrasekhar[74948]:    "servicemap": {
Oct 10 23:18:55 np0005480824 silly_chandrasekhar[74948]:        "epoch": 1,
Oct 10 23:18:55 np0005480824 silly_chandrasekhar[74948]:        "modified": "2025-10-11T03:18:35.640728+0000",
Oct 10 23:18:55 np0005480824 silly_chandrasekhar[74948]:        "services": {}
Oct 10 23:18:55 np0005480824 silly_chandrasekhar[74948]:    },
Oct 10 23:18:55 np0005480824 silly_chandrasekhar[74948]:    "progress_events": {}
Oct 10 23:18:55 np0005480824 silly_chandrasekhar[74948]: }
Oct 10 23:18:55 np0005480824 systemd[1]: libpod-66f0283ce773fd7ee341520c7510402690492f7f54175b6287bb8684c7f9a2f8.scope: Deactivated successfully.
Oct 10 23:18:55 np0005480824 podman[74974]: 2025-10-11 03:18:55.206717593 +0000 UTC m=+0.040447734 container died 66f0283ce773fd7ee341520c7510402690492f7f54175b6287bb8684c7f9a2f8 (image=quay.io/ceph/ceph:v18, name=silly_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 10 23:18:55 np0005480824 systemd[1]: var-lib-containers-storage-overlay-0dcde6872a96021a527a21eb1360f71672566b96e77a60400c297837f03655d2-merged.mount: Deactivated successfully.
Oct 10 23:18:55 np0005480824 podman[74974]: 2025-10-11 03:18:55.262231025 +0000 UTC m=+0.095961166 container remove 66f0283ce773fd7ee341520c7510402690492f7f54175b6287bb8684c7f9a2f8 (image=quay.io/ceph/ceph:v18, name=silly_chandrasekhar, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 10 23:18:55 np0005480824 systemd[1]: libpod-conmon-66f0283ce773fd7ee341520c7510402690492f7f54175b6287bb8684c7f9a2f8.scope: Deactivated successfully.
Oct 10 23:18:56 np0005480824 ceph-mgr[74617]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 10 23:18:56 np0005480824 ceph-mgr[74617]: mgr[py] Loading python module 'selftest'
Oct 10 23:18:56 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-pdyrua[74613]: 2025-10-11T03:18:56.389+0000 7ff2c3f8e140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 10 23:18:56 np0005480824 ceph-mgr[74617]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 10 23:18:56 np0005480824 ceph-mgr[74617]: mgr[py] Loading python module 'snap_schedule'
Oct 10 23:18:56 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-pdyrua[74613]: 2025-10-11T03:18:56.625+0000 7ff2c3f8e140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 10 23:18:56 np0005480824 ceph-mgr[74617]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 10 23:18:56 np0005480824 ceph-mgr[74617]: mgr[py] Loading python module 'stats'
Oct 10 23:18:56 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-pdyrua[74613]: 2025-10-11T03:18:56.866+0000 7ff2c3f8e140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 10 23:18:57 np0005480824 ceph-mgr[74617]: mgr[py] Loading python module 'status'
Oct 10 23:18:57 np0005480824 ceph-mgr[74617]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct 10 23:18:57 np0005480824 ceph-mgr[74617]: mgr[py] Loading python module 'telegraf'
Oct 10 23:18:57 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-pdyrua[74613]: 2025-10-11T03:18:57.373+0000 7ff2c3f8e140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct 10 23:18:57 np0005480824 podman[74989]: 2025-10-11 03:18:57.338048771 +0000 UTC m=+0.039052256 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:18:57 np0005480824 podman[74989]: 2025-10-11 03:18:57.475056642 +0000 UTC m=+0.176060077 container create 1427b1aaaf7901f8e864a5ad3fce0abd074e404e1ede665f3c38b07d29ee317d (image=quay.io/ceph/ceph:v18, name=wizardly_stonebraker, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Oct 10 23:18:57 np0005480824 systemd[1]: Started libpod-conmon-1427b1aaaf7901f8e864a5ad3fce0abd074e404e1ede665f3c38b07d29ee317d.scope.
Oct 10 23:18:57 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:18:57 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87686618669fc4257faa2cebdb322b33c7c04573791d9e8c69cf2f514ec292c6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 23:18:57 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87686618669fc4257faa2cebdb322b33c7c04573791d9e8c69cf2f514ec292c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:18:57 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87686618669fc4257faa2cebdb322b33c7c04573791d9e8c69cf2f514ec292c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:18:57 np0005480824 ceph-mgr[74617]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 10 23:18:57 np0005480824 ceph-mgr[74617]: mgr[py] Loading python module 'telemetry'
Oct 10 23:18:57 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-pdyrua[74613]: 2025-10-11T03:18:57.602+0000 7ff2c3f8e140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 10 23:18:58 np0005480824 ceph-mgr[74617]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 10 23:18:58 np0005480824 ceph-mgr[74617]: mgr[py] Loading python module 'test_orchestrator'
Oct 10 23:18:58 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-pdyrua[74613]: 2025-10-11T03:18:58.212+0000 7ff2c3f8e140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 10 23:18:58 np0005480824 ceph-mgr[74617]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 10 23:18:58 np0005480824 ceph-mgr[74617]: mgr[py] Loading python module 'volumes'
Oct 10 23:18:58 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-pdyrua[74613]: 2025-10-11T03:18:58.881+0000 7ff2c3f8e140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 10 23:18:59 np0005480824 ceph-mgr[74617]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 10 23:18:59 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-pdyrua[74613]: 2025-10-11T03:18:59.579+0000 7ff2c3f8e140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 10 23:18:59 np0005480824 ceph-mgr[74617]: mgr[py] Loading python module 'zabbix'
Oct 10 23:18:59 np0005480824 ceph-mgr[74617]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 10 23:18:59 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-pdyrua[74613]: 2025-10-11T03:18:59.824+0000 7ff2c3f8e140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 10 23:18:59 np0005480824 ceph-mgr[74617]: ms_deliver_dispatch: unhandled message 0x55ded463f1e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Oct 10 23:18:59 np0005480824 ceph-mon[74326]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.pdyrua
Oct 10 23:19:01 np0005480824 ceph-mgr[74617]: mgr handle_mgr_map Activating!
Oct 10 23:19:01 np0005480824 ceph-mgr[74617]: mgr handle_mgr_map I am now activating
Oct 10 23:19:01 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.pdyrua(active, starting, since 1.28456s)
Oct 10 23:19:01 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Oct 10 23:19:01 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/4175441126' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct 10 23:19:01 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).mds e1 all = 1
Oct 10 23:19:01 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Oct 10 23:19:01 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/4175441126' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 10 23:19:01 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Oct 10 23:19:01 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/4175441126' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct 10 23:19:01 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Oct 10 23:19:01 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/4175441126' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 10 23:19:01 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.pdyrua", "id": "compute-0.pdyrua"} v 0) v1
Oct 10 23:19:01 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/4175441126' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "mgr metadata", "who": "compute-0.pdyrua", "id": "compute-0.pdyrua"}]: dispatch
Oct 10 23:19:01 np0005480824 ceph-mgr[74617]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 23:19:01 np0005480824 ceph-mgr[74617]: mgr load Constructed class from module: balancer
Oct 10 23:19:01 np0005480824 ceph-mgr[74617]: [balancer INFO root] Starting
Oct 10 23:19:01 np0005480824 ceph-mgr[74617]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 23:19:01 np0005480824 ceph-mgr[74617]: mgr load Constructed class from module: crash
Oct 10 23:19:01 np0005480824 ceph-mgr[74617]: [balancer INFO root] Optimize plan auto_2025-10-11_03:19:01
Oct 10 23:19:01 np0005480824 ceph-mgr[74617]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 23:19:01 np0005480824 ceph-mgr[74617]: [balancer INFO root] do_upmap
Oct 10 23:19:01 np0005480824 ceph-mon[74326]: log_channel(cluster) log [INF] : Manager daemon compute-0.pdyrua is now available
Oct 10 23:19:01 np0005480824 ceph-mgr[74617]: [balancer INFO root] No pools available
Oct 10 23:19:01 np0005480824 ceph-mgr[74617]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 23:19:01 np0005480824 ceph-mgr[74617]: mgr load Constructed class from module: devicehealth
Oct 10 23:19:01 np0005480824 ceph-mgr[74617]: [devicehealth INFO root] Starting
Oct 10 23:19:01 np0005480824 ceph-mgr[74617]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 23:19:01 np0005480824 ceph-mgr[74617]: mgr load Constructed class from module: iostat
Oct 10 23:19:01 np0005480824 ceph-mgr[74617]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 23:19:01 np0005480824 ceph-mgr[74617]: mgr load Constructed class from module: nfs
Oct 10 23:19:01 np0005480824 ceph-mgr[74617]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 23:19:01 np0005480824 ceph-mgr[74617]: mgr load Constructed class from module: orchestrator
Oct 10 23:19:01 np0005480824 ceph-mgr[74617]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 23:19:01 np0005480824 ceph-mgr[74617]: mgr load Constructed class from module: pg_autoscaler
Oct 10 23:19:01 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 23:19:01 np0005480824 ceph-mgr[74617]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 23:19:01 np0005480824 ceph-mgr[74617]: mgr load Constructed class from module: progress
Oct 10 23:19:01 np0005480824 ceph-mgr[74617]: [progress INFO root] Loading...
Oct 10 23:19:01 np0005480824 ceph-mgr[74617]: [progress INFO root] No stored events to load
Oct 10 23:19:01 np0005480824 ceph-mgr[74617]: [progress INFO root] Loaded [] historic events
Oct 10 23:19:01 np0005480824 ceph-mgr[74617]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 23:19:01 np0005480824 ceph-mgr[74617]: [progress INFO root] Loaded OSDMap, ready.
Oct 10 23:19:01 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] recovery thread starting
Oct 10 23:19:01 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] starting setup
Oct 10 23:19:01 np0005480824 ceph-mgr[74617]: mgr load Constructed class from module: rbd_support
Oct 10 23:19:01 np0005480824 ceph-mgr[74617]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 23:19:01 np0005480824 ceph-mgr[74617]: mgr load Constructed class from module: restful
Oct 10 23:19:01 np0005480824 ceph-mgr[74617]: [restful INFO root] server_addr: :: server_port: 8003
Oct 10 23:19:01 np0005480824 ceph-mgr[74617]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 23:19:01 np0005480824 ceph-mgr[74617]: mgr load Constructed class from module: status
Oct 10 23:19:01 np0005480824 ceph-mgr[74617]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 23:19:01 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.pdyrua/mirror_snapshot_schedule"} v 0) v1
Oct 10 23:19:01 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/4175441126' entity='mgr.compute-0.pdyrua' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.pdyrua/mirror_snapshot_schedule"}]: dispatch
Oct 10 23:19:01 np0005480824 ceph-mgr[74617]: mgr load Constructed class from module: telemetry
Oct 10 23:19:01 np0005480824 ceph-mgr[74617]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 23:19:01 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 23:19:01 np0005480824 ceph-mgr[74617]: [restful WARNING root] server not running: no certificate configured
Oct 10 23:19:01 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Oct 10 23:19:01 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0) v1
Oct 10 23:19:01 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] PerfHandler: starting
Oct 10 23:19:01 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] TaskHandler: starting
Oct 10 23:19:01 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.pdyrua/trash_purge_schedule"} v 0) v1
Oct 10 23:19:01 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/4175441126' entity='mgr.compute-0.pdyrua' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.pdyrua/trash_purge_schedule"}]: dispatch
Oct 10 23:19:01 np0005480824 ceph-mgr[74617]: mgr load Constructed class from module: volumes
Oct 10 23:19:01 np0005480824 podman[74989]: 2025-10-11 03:19:01.70117215 +0000 UTC m=+4.402175625 container init 1427b1aaaf7901f8e864a5ad3fce0abd074e404e1ede665f3c38b07d29ee317d (image=quay.io/ceph/ceph:v18, name=wizardly_stonebraker, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:19:01 np0005480824 ceph-mon[74326]: Activating manager daemon compute-0.pdyrua
Oct 10 23:19:01 np0005480824 podman[74989]: 2025-10-11 03:19:01.710298221 +0000 UTC m=+4.411301616 container start 1427b1aaaf7901f8e864a5ad3fce0abd074e404e1ede665f3c38b07d29ee317d (image=quay.io/ceph/ceph:v18, name=wizardly_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 10 23:19:01 np0005480824 podman[74989]: 2025-10-11 03:19:01.718094048 +0000 UTC m=+4.419097473 container attach 1427b1aaaf7901f8e864a5ad3fce0abd074e404e1ede665f3c38b07d29ee317d (image=quay.io/ceph/ceph:v18, name=wizardly_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Oct 10 23:19:01 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/4175441126' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:01 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0) v1
Oct 10 23:19:01 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 23:19:01 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Oct 10 23:19:01 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] setup complete
Oct 10 23:19:01 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/4175441126' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:01 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0) v1
Oct 10 23:19:01 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/4175441126' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:02 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 10 23:19:02 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1466050662' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 10 23:19:02 np0005480824 wizardly_stonebraker[75006]: 
Oct 10 23:19:02 np0005480824 wizardly_stonebraker[75006]: {
Oct 10 23:19:02 np0005480824 wizardly_stonebraker[75006]:    "fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:19:02 np0005480824 wizardly_stonebraker[75006]:    "health": {
Oct 10 23:19:02 np0005480824 wizardly_stonebraker[75006]:        "status": "HEALTH_OK",
Oct 10 23:19:02 np0005480824 wizardly_stonebraker[75006]:        "checks": {},
Oct 10 23:19:02 np0005480824 wizardly_stonebraker[75006]:        "mutes": []
Oct 10 23:19:02 np0005480824 wizardly_stonebraker[75006]:    },
Oct 10 23:19:02 np0005480824 wizardly_stonebraker[75006]:    "election_epoch": 5,
Oct 10 23:19:02 np0005480824 wizardly_stonebraker[75006]:    "quorum": [
Oct 10 23:19:02 np0005480824 wizardly_stonebraker[75006]:        0
Oct 10 23:19:02 np0005480824 wizardly_stonebraker[75006]:    ],
Oct 10 23:19:02 np0005480824 wizardly_stonebraker[75006]:    "quorum_names": [
Oct 10 23:19:02 np0005480824 wizardly_stonebraker[75006]:        "compute-0"
Oct 10 23:19:02 np0005480824 wizardly_stonebraker[75006]:    ],
Oct 10 23:19:02 np0005480824 wizardly_stonebraker[75006]:    "quorum_age": 23,
Oct 10 23:19:02 np0005480824 wizardly_stonebraker[75006]:    "monmap": {
Oct 10 23:19:02 np0005480824 wizardly_stonebraker[75006]:        "epoch": 1,
Oct 10 23:19:02 np0005480824 wizardly_stonebraker[75006]:        "min_mon_release_name": "reef",
Oct 10 23:19:02 np0005480824 wizardly_stonebraker[75006]:        "num_mons": 1
Oct 10 23:19:02 np0005480824 wizardly_stonebraker[75006]:    },
Oct 10 23:19:02 np0005480824 wizardly_stonebraker[75006]:    "osdmap": {
Oct 10 23:19:02 np0005480824 wizardly_stonebraker[75006]:        "epoch": 1,
Oct 10 23:19:02 np0005480824 wizardly_stonebraker[75006]:        "num_osds": 0,
Oct 10 23:19:02 np0005480824 wizardly_stonebraker[75006]:        "num_up_osds": 0,
Oct 10 23:19:02 np0005480824 wizardly_stonebraker[75006]:        "osd_up_since": 0,
Oct 10 23:19:02 np0005480824 wizardly_stonebraker[75006]:        "num_in_osds": 0,
Oct 10 23:19:02 np0005480824 wizardly_stonebraker[75006]:        "osd_in_since": 0,
Oct 10 23:19:02 np0005480824 wizardly_stonebraker[75006]:        "num_remapped_pgs": 0
Oct 10 23:19:02 np0005480824 wizardly_stonebraker[75006]:    },
Oct 10 23:19:02 np0005480824 wizardly_stonebraker[75006]:    "pgmap": {
Oct 10 23:19:02 np0005480824 wizardly_stonebraker[75006]:        "pgs_by_state": [],
Oct 10 23:19:02 np0005480824 wizardly_stonebraker[75006]:        "num_pgs": 0,
Oct 10 23:19:02 np0005480824 wizardly_stonebraker[75006]:        "num_pools": 0,
Oct 10 23:19:02 np0005480824 wizardly_stonebraker[75006]:        "num_objects": 0,
Oct 10 23:19:02 np0005480824 wizardly_stonebraker[75006]:        "data_bytes": 0,
Oct 10 23:19:02 np0005480824 wizardly_stonebraker[75006]:        "bytes_used": 0,
Oct 10 23:19:02 np0005480824 wizardly_stonebraker[75006]:        "bytes_avail": 0,
Oct 10 23:19:02 np0005480824 wizardly_stonebraker[75006]:        "bytes_total": 0
Oct 10 23:19:02 np0005480824 wizardly_stonebraker[75006]:    },
Oct 10 23:19:02 np0005480824 wizardly_stonebraker[75006]:    "fsmap": {
Oct 10 23:19:02 np0005480824 wizardly_stonebraker[75006]:        "epoch": 1,
Oct 10 23:19:02 np0005480824 wizardly_stonebraker[75006]:        "by_rank": [],
Oct 10 23:19:02 np0005480824 wizardly_stonebraker[75006]:        "up:standby": 0
Oct 10 23:19:02 np0005480824 wizardly_stonebraker[75006]:    },
Oct 10 23:19:02 np0005480824 wizardly_stonebraker[75006]:    "mgrmap": {
Oct 10 23:19:02 np0005480824 wizardly_stonebraker[75006]:        "available": false,
Oct 10 23:19:02 np0005480824 wizardly_stonebraker[75006]:        "num_standbys": 0,
Oct 10 23:19:02 np0005480824 wizardly_stonebraker[75006]:        "modules": [
Oct 10 23:19:02 np0005480824 wizardly_stonebraker[75006]:            "iostat",
Oct 10 23:19:02 np0005480824 wizardly_stonebraker[75006]:            "nfs",
Oct 10 23:19:02 np0005480824 wizardly_stonebraker[75006]:            "restful"
Oct 10 23:19:02 np0005480824 wizardly_stonebraker[75006]:        ],
Oct 10 23:19:02 np0005480824 wizardly_stonebraker[75006]:        "services": {}
Oct 10 23:19:02 np0005480824 wizardly_stonebraker[75006]:    },
Oct 10 23:19:02 np0005480824 wizardly_stonebraker[75006]:    "servicemap": {
Oct 10 23:19:02 np0005480824 wizardly_stonebraker[75006]:        "epoch": 1,
Oct 10 23:19:02 np0005480824 wizardly_stonebraker[75006]:        "modified": "2025-10-11T03:18:35.640728+0000",
Oct 10 23:19:02 np0005480824 wizardly_stonebraker[75006]:        "services": {}
Oct 10 23:19:02 np0005480824 wizardly_stonebraker[75006]:    },
Oct 10 23:19:02 np0005480824 wizardly_stonebraker[75006]:    "progress_events": {}
Oct 10 23:19:02 np0005480824 wizardly_stonebraker[75006]: }
Oct 10 23:19:02 np0005480824 systemd[1]: libpod-1427b1aaaf7901f8e864a5ad3fce0abd074e404e1ede665f3c38b07d29ee317d.scope: Deactivated successfully.
Oct 10 23:19:02 np0005480824 podman[74989]: 2025-10-11 03:19:02.123202056 +0000 UTC m=+4.824205461 container died 1427b1aaaf7901f8e864a5ad3fce0abd074e404e1ede665f3c38b07d29ee317d (image=quay.io/ceph/ceph:v18, name=wizardly_stonebraker, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:19:02 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.pdyrua(active, since 2s)
Oct 10 23:19:02 np0005480824 systemd[1]: var-lib-containers-storage-overlay-87686618669fc4257faa2cebdb322b33c7c04573791d9e8c69cf2f514ec292c6-merged.mount: Deactivated successfully.
Oct 10 23:19:02 np0005480824 podman[74989]: 2025-10-11 03:19:02.175670488 +0000 UTC m=+4.876673903 container remove 1427b1aaaf7901f8e864a5ad3fce0abd074e404e1ede665f3c38b07d29ee317d (image=quay.io/ceph/ceph:v18, name=wizardly_stonebraker, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:19:02 np0005480824 systemd[1]: libpod-conmon-1427b1aaaf7901f8e864a5ad3fce0abd074e404e1ede665f3c38b07d29ee317d.scope: Deactivated successfully.
Oct 10 23:19:02 np0005480824 ceph-mon[74326]: Manager daemon compute-0.pdyrua is now available
Oct 10 23:19:02 np0005480824 ceph-mon[74326]: from='mgr.14102 192.168.122.100:0/4175441126' entity='mgr.compute-0.pdyrua' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.pdyrua/mirror_snapshot_schedule"}]: dispatch
Oct 10 23:19:02 np0005480824 ceph-mon[74326]: from='mgr.14102 192.168.122.100:0/4175441126' entity='mgr.compute-0.pdyrua' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.pdyrua/trash_purge_schedule"}]: dispatch
Oct 10 23:19:02 np0005480824 ceph-mon[74326]: from='mgr.14102 192.168.122.100:0/4175441126' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:02 np0005480824 ceph-mon[74326]: from='mgr.14102 192.168.122.100:0/4175441126' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:02 np0005480824 ceph-mon[74326]: from='mgr.14102 192.168.122.100:0/4175441126' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:03 np0005480824 ceph-mgr[74617]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 10 23:19:04 np0005480824 podman[75125]: 2025-10-11 03:19:04.263505382 +0000 UTC m=+0.054468685 container create d66716ab79f51771422ff09c9a2a65b971853aaa05d6a565792f26516de9a665 (image=quay.io/ceph/ceph:v18, name=sharp_khorana, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 10 23:19:04 np0005480824 systemd[1]: Started libpod-conmon-d66716ab79f51771422ff09c9a2a65b971853aaa05d6a565792f26516de9a665.scope.
Oct 10 23:19:04 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:19:04 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1d5165f5d0165143cb5df8a3d8a4ce5eddcaa108b0c33a844418b49e981285a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:04 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1d5165f5d0165143cb5df8a3d8a4ce5eddcaa108b0c33a844418b49e981285a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:04 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1d5165f5d0165143cb5df8a3d8a4ce5eddcaa108b0c33a844418b49e981285a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:04 np0005480824 podman[75125]: 2025-10-11 03:19:04.237963926 +0000 UTC m=+0.028927259 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:19:04 np0005480824 podman[75125]: 2025-10-11 03:19:04.361323195 +0000 UTC m=+0.152286578 container init d66716ab79f51771422ff09c9a2a65b971853aaa05d6a565792f26516de9a665 (image=quay.io/ceph/ceph:v18, name=sharp_khorana, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:19:04 np0005480824 podman[75125]: 2025-10-11 03:19:04.367773516 +0000 UTC m=+0.158736799 container start d66716ab79f51771422ff09c9a2a65b971853aaa05d6a565792f26516de9a665 (image=quay.io/ceph/ceph:v18, name=sharp_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Oct 10 23:19:04 np0005480824 podman[75125]: 2025-10-11 03:19:04.37130043 +0000 UTC m=+0.162263753 container attach d66716ab79f51771422ff09c9a2a65b971853aaa05d6a565792f26516de9a665 (image=quay.io/ceph/ceph:v18, name=sharp_khorana, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 10 23:19:04 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 10 23:19:04 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2629262204' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 10 23:19:04 np0005480824 sharp_khorana[75141]: 
Oct 10 23:19:04 np0005480824 sharp_khorana[75141]: {
Oct 10 23:19:04 np0005480824 sharp_khorana[75141]:    "fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:19:04 np0005480824 sharp_khorana[75141]:    "health": {
Oct 10 23:19:04 np0005480824 sharp_khorana[75141]:        "status": "HEALTH_OK",
Oct 10 23:19:04 np0005480824 sharp_khorana[75141]:        "checks": {},
Oct 10 23:19:04 np0005480824 sharp_khorana[75141]:        "mutes": []
Oct 10 23:19:04 np0005480824 sharp_khorana[75141]:    },
Oct 10 23:19:04 np0005480824 sharp_khorana[75141]:    "election_epoch": 5,
Oct 10 23:19:04 np0005480824 sharp_khorana[75141]:    "quorum": [
Oct 10 23:19:04 np0005480824 sharp_khorana[75141]:        0
Oct 10 23:19:04 np0005480824 sharp_khorana[75141]:    ],
Oct 10 23:19:04 np0005480824 sharp_khorana[75141]:    "quorum_names": [
Oct 10 23:19:04 np0005480824 sharp_khorana[75141]:        "compute-0"
Oct 10 23:19:04 np0005480824 sharp_khorana[75141]:    ],
Oct 10 23:19:04 np0005480824 sharp_khorana[75141]:    "quorum_age": 26,
Oct 10 23:19:04 np0005480824 sharp_khorana[75141]:    "monmap": {
Oct 10 23:19:04 np0005480824 sharp_khorana[75141]:        "epoch": 1,
Oct 10 23:19:04 np0005480824 sharp_khorana[75141]:        "min_mon_release_name": "reef",
Oct 10 23:19:04 np0005480824 sharp_khorana[75141]:        "num_mons": 1
Oct 10 23:19:04 np0005480824 sharp_khorana[75141]:    },
Oct 10 23:19:04 np0005480824 sharp_khorana[75141]:    "osdmap": {
Oct 10 23:19:04 np0005480824 sharp_khorana[75141]:        "epoch": 1,
Oct 10 23:19:04 np0005480824 sharp_khorana[75141]:        "num_osds": 0,
Oct 10 23:19:04 np0005480824 sharp_khorana[75141]:        "num_up_osds": 0,
Oct 10 23:19:04 np0005480824 sharp_khorana[75141]:        "osd_up_since": 0,
Oct 10 23:19:04 np0005480824 sharp_khorana[75141]:        "num_in_osds": 0,
Oct 10 23:19:04 np0005480824 sharp_khorana[75141]:        "osd_in_since": 0,
Oct 10 23:19:04 np0005480824 sharp_khorana[75141]:        "num_remapped_pgs": 0
Oct 10 23:19:04 np0005480824 sharp_khorana[75141]:    },
Oct 10 23:19:04 np0005480824 sharp_khorana[75141]:    "pgmap": {
Oct 10 23:19:04 np0005480824 sharp_khorana[75141]:        "pgs_by_state": [],
Oct 10 23:19:04 np0005480824 sharp_khorana[75141]:        "num_pgs": 0,
Oct 10 23:19:04 np0005480824 sharp_khorana[75141]:        "num_pools": 0,
Oct 10 23:19:04 np0005480824 sharp_khorana[75141]:        "num_objects": 0,
Oct 10 23:19:04 np0005480824 sharp_khorana[75141]:        "data_bytes": 0,
Oct 10 23:19:04 np0005480824 sharp_khorana[75141]:        "bytes_used": 0,
Oct 10 23:19:04 np0005480824 sharp_khorana[75141]:        "bytes_avail": 0,
Oct 10 23:19:04 np0005480824 sharp_khorana[75141]:        "bytes_total": 0
Oct 10 23:19:04 np0005480824 sharp_khorana[75141]:    },
Oct 10 23:19:04 np0005480824 sharp_khorana[75141]:    "fsmap": {
Oct 10 23:19:04 np0005480824 sharp_khorana[75141]:        "epoch": 1,
Oct 10 23:19:04 np0005480824 sharp_khorana[75141]:        "by_rank": [],
Oct 10 23:19:04 np0005480824 sharp_khorana[75141]:        "up:standby": 0
Oct 10 23:19:04 np0005480824 sharp_khorana[75141]:    },
Oct 10 23:19:04 np0005480824 sharp_khorana[75141]:    "mgrmap": {
Oct 10 23:19:04 np0005480824 sharp_khorana[75141]:        "available": true,
Oct 10 23:19:04 np0005480824 sharp_khorana[75141]:        "num_standbys": 0,
Oct 10 23:19:04 np0005480824 sharp_khorana[75141]:        "modules": [
Oct 10 23:19:04 np0005480824 sharp_khorana[75141]:            "iostat",
Oct 10 23:19:04 np0005480824 sharp_khorana[75141]:            "nfs",
Oct 10 23:19:04 np0005480824 sharp_khorana[75141]:            "restful"
Oct 10 23:19:04 np0005480824 sharp_khorana[75141]:        ],
Oct 10 23:19:04 np0005480824 sharp_khorana[75141]:        "services": {}
Oct 10 23:19:04 np0005480824 sharp_khorana[75141]:    },
Oct 10 23:19:04 np0005480824 sharp_khorana[75141]:    "servicemap": {
Oct 10 23:19:04 np0005480824 sharp_khorana[75141]:        "epoch": 1,
Oct 10 23:19:04 np0005480824 sharp_khorana[75141]:        "modified": "2025-10-11T03:18:35.640728+0000",
Oct 10 23:19:04 np0005480824 sharp_khorana[75141]:        "services": {}
Oct 10 23:19:04 np0005480824 sharp_khorana[75141]:    },
Oct 10 23:19:04 np0005480824 sharp_khorana[75141]:    "progress_events": {}
Oct 10 23:19:04 np0005480824 sharp_khorana[75141]: }
Oct 10 23:19:04 np0005480824 systemd[1]: libpod-d66716ab79f51771422ff09c9a2a65b971853aaa05d6a565792f26516de9a665.scope: Deactivated successfully.
Oct 10 23:19:04 np0005480824 podman[75125]: 2025-10-11 03:19:04.97601795 +0000 UTC m=+0.766981263 container died d66716ab79f51771422ff09c9a2a65b971853aaa05d6a565792f26516de9a665 (image=quay.io/ceph/ceph:v18, name=sharp_khorana, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:19:05 np0005480824 ceph-mgr[74617]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 10 23:19:05 np0005480824 systemd[1]: var-lib-containers-storage-overlay-f1d5165f5d0165143cb5df8a3d8a4ce5eddcaa108b0c33a844418b49e981285a-merged.mount: Deactivated successfully.
Oct 10 23:19:05 np0005480824 podman[75125]: 2025-10-11 03:19:05.608286859 +0000 UTC m=+1.399250142 container remove d66716ab79f51771422ff09c9a2a65b971853aaa05d6a565792f26516de9a665 (image=quay.io/ceph/ceph:v18, name=sharp_khorana, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:19:05 np0005480824 systemd[1]: libpod-conmon-d66716ab79f51771422ff09c9a2a65b971853aaa05d6a565792f26516de9a665.scope: Deactivated successfully.
Oct 10 23:19:05 np0005480824 podman[75182]: 2025-10-11 03:19:05.681235323 +0000 UTC m=+0.055007399 container create 59ef952b6b3682d492699ad7120d885bdac029b212a9e3c0269ea39ad5f28888 (image=quay.io/ceph/ceph:v18, name=lucid_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Oct 10 23:19:05 np0005480824 systemd[1]: Started libpod-conmon-59ef952b6b3682d492699ad7120d885bdac029b212a9e3c0269ea39ad5f28888.scope.
Oct 10 23:19:05 np0005480824 podman[75182]: 2025-10-11 03:19:05.647215721 +0000 UTC m=+0.020987827 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:19:05 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:19:05 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14d231525ab8273a7afd936b1b8c1ffcafda54d512d44ee7f166da2459dbe48d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:05 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14d231525ab8273a7afd936b1b8c1ffcafda54d512d44ee7f166da2459dbe48d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:05 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14d231525ab8273a7afd936b1b8c1ffcafda54d512d44ee7f166da2459dbe48d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:05 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14d231525ab8273a7afd936b1b8c1ffcafda54d512d44ee7f166da2459dbe48d/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:05 np0005480824 podman[75182]: 2025-10-11 03:19:05.84709899 +0000 UTC m=+0.220871066 container init 59ef952b6b3682d492699ad7120d885bdac029b212a9e3c0269ea39ad5f28888 (image=quay.io/ceph/ceph:v18, name=lucid_mcclintock, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:19:05 np0005480824 podman[75182]: 2025-10-11 03:19:05.856429997 +0000 UTC m=+0.230202063 container start 59ef952b6b3682d492699ad7120d885bdac029b212a9e3c0269ea39ad5f28888 (image=quay.io/ceph/ceph:v18, name=lucid_mcclintock, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:19:05 np0005480824 podman[75182]: 2025-10-11 03:19:05.885854597 +0000 UTC m=+0.259626713 container attach 59ef952b6b3682d492699ad7120d885bdac029b212a9e3c0269ea39ad5f28888 (image=quay.io/ceph/ceph:v18, name=lucid_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 10 23:19:06 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Oct 10 23:19:06 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2425004058' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 10 23:19:06 np0005480824 systemd[1]: libpod-59ef952b6b3682d492699ad7120d885bdac029b212a9e3c0269ea39ad5f28888.scope: Deactivated successfully.
Oct 10 23:19:06 np0005480824 conmon[75198]: conmon 59ef952b6b3682d49269 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-59ef952b6b3682d492699ad7120d885bdac029b212a9e3c0269ea39ad5f28888.scope/container/memory.events
Oct 10 23:19:06 np0005480824 podman[75182]: 2025-10-11 03:19:06.402976976 +0000 UTC m=+0.776749042 container died 59ef952b6b3682d492699ad7120d885bdac029b212a9e3c0269ea39ad5f28888 (image=quay.io/ceph/ceph:v18, name=lucid_mcclintock, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:19:06 np0005480824 ceph-mon[74326]: from='client.? 192.168.122.100:0/2425004058' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 10 23:19:06 np0005480824 systemd[1]: var-lib-containers-storage-overlay-14d231525ab8273a7afd936b1b8c1ffcafda54d512d44ee7f166da2459dbe48d-merged.mount: Deactivated successfully.
Oct 10 23:19:06 np0005480824 podman[75182]: 2025-10-11 03:19:06.845401133 +0000 UTC m=+1.219173209 container remove 59ef952b6b3682d492699ad7120d885bdac029b212a9e3c0269ea39ad5f28888 (image=quay.io/ceph/ceph:v18, name=lucid_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Oct 10 23:19:06 np0005480824 systemd[1]: libpod-conmon-59ef952b6b3682d492699ad7120d885bdac029b212a9e3c0269ea39ad5f28888.scope: Deactivated successfully.
Oct 10 23:19:07 np0005480824 podman[75235]: 2025-10-11 03:19:06.915427699 +0000 UTC m=+0.037113155 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:19:07 np0005480824 podman[75235]: 2025-10-11 03:19:07.094318271 +0000 UTC m=+0.216003707 container create 64376e21363ad676beeaa587c021105982f2aaf81aa9da4c0594a135ef648725 (image=quay.io/ceph/ceph:v18, name=exciting_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:19:07 np0005480824 ceph-mgr[74617]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 10 23:19:07 np0005480824 systemd[1]: Started libpod-conmon-64376e21363ad676beeaa587c021105982f2aaf81aa9da4c0594a135ef648725.scope.
Oct 10 23:19:07 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:19:07 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e35da09abe5b7bfb0a7e9b804844bf30e0f6a5c66e3cf7881ae3181bda8a161/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:07 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e35da09abe5b7bfb0a7e9b804844bf30e0f6a5c66e3cf7881ae3181bda8a161/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:07 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e35da09abe5b7bfb0a7e9b804844bf30e0f6a5c66e3cf7881ae3181bda8a161/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:07 np0005480824 podman[75235]: 2025-10-11 03:19:07.434750696 +0000 UTC m=+0.556436192 container init 64376e21363ad676beeaa587c021105982f2aaf81aa9da4c0594a135ef648725 (image=quay.io/ceph/ceph:v18, name=exciting_bartik, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 10 23:19:07 np0005480824 podman[75235]: 2025-10-11 03:19:07.443646361 +0000 UTC m=+0.565331777 container start 64376e21363ad676beeaa587c021105982f2aaf81aa9da4c0594a135ef648725 (image=quay.io/ceph/ceph:v18, name=exciting_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:19:07 np0005480824 podman[75235]: 2025-10-11 03:19:07.567949036 +0000 UTC m=+0.689634542 container attach 64376e21363ad676beeaa587c021105982f2aaf81aa9da4c0594a135ef648725 (image=quay.io/ceph/ceph:v18, name=exciting_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 10 23:19:08 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0) v1
Oct 10 23:19:08 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/302453000' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Oct 10 23:19:08 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/302453000' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Oct 10 23:19:08 np0005480824 ceph-mgr[74617]: mgr handle_mgr_map respawning because set of enabled modules changed!
Oct 10 23:19:08 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.pdyrua(active, since 8s)
Oct 10 23:19:08 np0005480824 systemd[1]: libpod-64376e21363ad676beeaa587c021105982f2aaf81aa9da4c0594a135ef648725.scope: Deactivated successfully.
Oct 10 23:19:08 np0005480824 podman[75235]: 2025-10-11 03:19:08.43610661 +0000 UTC m=+1.557792056 container died 64376e21363ad676beeaa587c021105982f2aaf81aa9da4c0594a135ef648725 (image=quay.io/ceph/ceph:v18, name=exciting_bartik, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 10 23:19:08 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-pdyrua[74613]: ignoring --setuser ceph since I am not root
Oct 10 23:19:08 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-pdyrua[74613]: ignoring --setgroup ceph since I am not root
Oct 10 23:19:08 np0005480824 ceph-mgr[74617]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Oct 10 23:19:08 np0005480824 ceph-mgr[74617]: pidfile_write: ignore empty --pid-file
Oct 10 23:19:08 np0005480824 ceph-mon[74326]: from='client.? 192.168.122.100:0/302453000' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Oct 10 23:19:08 np0005480824 systemd[1]: var-lib-containers-storage-overlay-8e35da09abe5b7bfb0a7e9b804844bf30e0f6a5c66e3cf7881ae3181bda8a161-merged.mount: Deactivated successfully.
Oct 10 23:19:08 np0005480824 ceph-mgr[74617]: mgr[py] Loading python module 'alerts'
Oct 10 23:19:08 np0005480824 podman[75235]: 2025-10-11 03:19:08.825366378 +0000 UTC m=+1.947051824 container remove 64376e21363ad676beeaa587c021105982f2aaf81aa9da4c0594a135ef648725 (image=quay.io/ceph/ceph:v18, name=exciting_bartik, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:19:08 np0005480824 systemd[1]: libpod-conmon-64376e21363ad676beeaa587c021105982f2aaf81aa9da4c0594a135ef648725.scope: Deactivated successfully.
Oct 10 23:19:08 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-pdyrua[74613]: 2025-10-11T03:19:08.947+0000 7fb82311d140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 10 23:19:08 np0005480824 ceph-mgr[74617]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 10 23:19:08 np0005480824 ceph-mgr[74617]: mgr[py] Loading python module 'balancer'
Oct 10 23:19:08 np0005480824 podman[75313]: 2025-10-11 03:19:08.959466513 +0000 UTC m=+0.110607703 container create 7ccf28c96cd99df88447a804fe9c0ead4b5b0ad9f4f752350d74389d4f95e0a3 (image=quay.io/ceph/ceph:v18, name=eloquent_bhabha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 10 23:19:08 np0005480824 podman[75313]: 2025-10-11 03:19:08.875852786 +0000 UTC m=+0.026994016 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:19:09 np0005480824 systemd[1]: Started libpod-conmon-7ccf28c96cd99df88447a804fe9c0ead4b5b0ad9f4f752350d74389d4f95e0a3.scope.
Oct 10 23:19:09 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:19:09 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3384ae485ab9849af941b410538bbf53716903d3da31e4f01f5c9713d1267b66/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:09 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3384ae485ab9849af941b410538bbf53716903d3da31e4f01f5c9713d1267b66/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:09 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3384ae485ab9849af941b410538bbf53716903d3da31e4f01f5c9713d1267b66/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:09 np0005480824 podman[75313]: 2025-10-11 03:19:09.130111167 +0000 UTC m=+0.281252357 container init 7ccf28c96cd99df88447a804fe9c0ead4b5b0ad9f4f752350d74389d4f95e0a3 (image=quay.io/ceph/ceph:v18, name=eloquent_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:19:09 np0005480824 podman[75313]: 2025-10-11 03:19:09.173664231 +0000 UTC m=+0.324805411 container start 7ccf28c96cd99df88447a804fe9c0ead4b5b0ad9f4f752350d74389d4f95e0a3 (image=quay.io/ceph/ceph:v18, name=eloquent_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Oct 10 23:19:09 np0005480824 podman[75313]: 2025-10-11 03:19:09.197434171 +0000 UTC m=+0.348575341 container attach 7ccf28c96cd99df88447a804fe9c0ead4b5b0ad9f4f752350d74389d4f95e0a3 (image=quay.io/ceph/ceph:v18, name=eloquent_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 10 23:19:09 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-pdyrua[74613]: 2025-10-11T03:19:09.200+0000 7fb82311d140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 10 23:19:09 np0005480824 ceph-mgr[74617]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 10 23:19:09 np0005480824 ceph-mgr[74617]: mgr[py] Loading python module 'cephadm'
Oct 10 23:19:09 np0005480824 ceph-mon[74326]: from='client.? 192.168.122.100:0/302453000' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Oct 10 23:19:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Oct 10 23:19:09 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2714735415' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 10 23:19:09 np0005480824 eloquent_bhabha[75330]: {
Oct 10 23:19:09 np0005480824 eloquent_bhabha[75330]:    "epoch": 4,
Oct 10 23:19:09 np0005480824 eloquent_bhabha[75330]:    "available": true,
Oct 10 23:19:09 np0005480824 eloquent_bhabha[75330]:    "active_name": "compute-0.pdyrua",
Oct 10 23:19:09 np0005480824 eloquent_bhabha[75330]:    "num_standby": 0
Oct 10 23:19:09 np0005480824 eloquent_bhabha[75330]: }
Oct 10 23:19:09 np0005480824 systemd[1]: libpod-7ccf28c96cd99df88447a804fe9c0ead4b5b0ad9f4f752350d74389d4f95e0a3.scope: Deactivated successfully.
Oct 10 23:19:09 np0005480824 podman[75313]: 2025-10-11 03:19:09.765002336 +0000 UTC m=+0.916143476 container died 7ccf28c96cd99df88447a804fe9c0ead4b5b0ad9f4f752350d74389d4f95e0a3 (image=quay.io/ceph/ceph:v18, name=eloquent_bhabha, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:19:09 np0005480824 systemd[1]: var-lib-containers-storage-overlay-3384ae485ab9849af941b410538bbf53716903d3da31e4f01f5c9713d1267b66-merged.mount: Deactivated successfully.
Oct 10 23:19:10 np0005480824 podman[75313]: 2025-10-11 03:19:10.033635417 +0000 UTC m=+1.184776607 container remove 7ccf28c96cd99df88447a804fe9c0ead4b5b0ad9f4f752350d74389d4f95e0a3 (image=quay.io/ceph/ceph:v18, name=eloquent_bhabha, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 10 23:19:10 np0005480824 systemd[1]: libpod-conmon-7ccf28c96cd99df88447a804fe9c0ead4b5b0ad9f4f752350d74389d4f95e0a3.scope: Deactivated successfully.
Oct 10 23:19:10 np0005480824 podman[75368]: 2025-10-11 03:19:10.177020348 +0000 UTC m=+0.109368651 container create 680397924ddb80c6601c9cfa753251f81fe93482f4faf472456307ef26dad5d4 (image=quay.io/ceph/ceph:v18, name=silly_villani, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:19:10 np0005480824 podman[75368]: 2025-10-11 03:19:10.107998538 +0000 UTC m=+0.040346921 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:19:10 np0005480824 systemd[1]: Started libpod-conmon-680397924ddb80c6601c9cfa753251f81fe93482f4faf472456307ef26dad5d4.scope.
Oct 10 23:19:10 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:19:10 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71f706910f3ade129e2a4d8caaa6589662dd985dddaf8258a0e78922102c3e9a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:10 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71f706910f3ade129e2a4d8caaa6589662dd985dddaf8258a0e78922102c3e9a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:10 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71f706910f3ade129e2a4d8caaa6589662dd985dddaf8258a0e78922102c3e9a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:10 np0005480824 podman[75368]: 2025-10-11 03:19:10.396001823 +0000 UTC m=+0.328350146 container init 680397924ddb80c6601c9cfa753251f81fe93482f4faf472456307ef26dad5d4 (image=quay.io/ceph/ceph:v18, name=silly_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Oct 10 23:19:10 np0005480824 podman[75368]: 2025-10-11 03:19:10.402910896 +0000 UTC m=+0.335259209 container start 680397924ddb80c6601c9cfa753251f81fe93482f4faf472456307ef26dad5d4 (image=quay.io/ceph/ceph:v18, name=silly_villani, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 10 23:19:10 np0005480824 podman[75368]: 2025-10-11 03:19:10.426403888 +0000 UTC m=+0.358752181 container attach 680397924ddb80c6601c9cfa753251f81fe93482f4faf472456307ef26dad5d4 (image=quay.io/ceph/ceph:v18, name=silly_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 10 23:19:11 np0005480824 ceph-mgr[74617]: mgr[py] Loading python module 'crash'
Oct 10 23:19:11 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-pdyrua[74613]: 2025-10-11T03:19:11.363+0000 7fb82311d140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 10 23:19:11 np0005480824 ceph-mgr[74617]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 10 23:19:11 np0005480824 ceph-mgr[74617]: mgr[py] Loading python module 'dashboard'
Oct 10 23:19:12 np0005480824 ceph-mgr[74617]: mgr[py] Loading python module 'devicehealth'
Oct 10 23:19:13 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-pdyrua[74613]: 2025-10-11T03:19:13.021+0000 7fb82311d140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 10 23:19:13 np0005480824 ceph-mgr[74617]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 10 23:19:13 np0005480824 ceph-mgr[74617]: mgr[py] Loading python module 'diskprediction_local'
Oct 10 23:19:13 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-pdyrua[74613]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct 10 23:19:13 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-pdyrua[74613]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct 10 23:19:13 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-pdyrua[74613]:  from numpy import show_config as show_numpy_config
Oct 10 23:19:13 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-pdyrua[74613]: 2025-10-11T03:19:13.585+0000 7fb82311d140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 10 23:19:13 np0005480824 ceph-mgr[74617]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 10 23:19:13 np0005480824 ceph-mgr[74617]: mgr[py] Loading python module 'influx'
Oct 10 23:19:13 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-pdyrua[74613]: 2025-10-11T03:19:13.818+0000 7fb82311d140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 10 23:19:13 np0005480824 ceph-mgr[74617]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 10 23:19:13 np0005480824 ceph-mgr[74617]: mgr[py] Loading python module 'insights'
Oct 10 23:19:14 np0005480824 ceph-mgr[74617]: mgr[py] Loading python module 'iostat'
Oct 10 23:19:14 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-pdyrua[74613]: 2025-10-11T03:19:14.280+0000 7fb82311d140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 10 23:19:14 np0005480824 ceph-mgr[74617]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 10 23:19:14 np0005480824 ceph-mgr[74617]: mgr[py] Loading python module 'k8sevents'
Oct 10 23:19:15 np0005480824 ceph-mgr[74617]: mgr[py] Loading python module 'localpool'
Oct 10 23:19:16 np0005480824 ceph-mgr[74617]: mgr[py] Loading python module 'mds_autoscaler'
Oct 10 23:19:16 np0005480824 ceph-mgr[74617]: mgr[py] Loading python module 'mirroring'
Oct 10 23:19:17 np0005480824 ceph-mgr[74617]: mgr[py] Loading python module 'nfs'
Oct 10 23:19:17 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-pdyrua[74613]: 2025-10-11T03:19:17.847+0000 7fb82311d140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 10 23:19:17 np0005480824 ceph-mgr[74617]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 10 23:19:17 np0005480824 ceph-mgr[74617]: mgr[py] Loading python module 'orchestrator'
Oct 10 23:19:18 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-pdyrua[74613]: 2025-10-11T03:19:18.539+0000 7fb82311d140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 10 23:19:18 np0005480824 ceph-mgr[74617]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 10 23:19:18 np0005480824 ceph-mgr[74617]: mgr[py] Loading python module 'osd_perf_query'
Oct 10 23:19:18 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-pdyrua[74613]: 2025-10-11T03:19:18.808+0000 7fb82311d140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 10 23:19:18 np0005480824 ceph-mgr[74617]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 10 23:19:18 np0005480824 ceph-mgr[74617]: mgr[py] Loading python module 'osd_support'
Oct 10 23:19:19 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-pdyrua[74613]: 2025-10-11T03:19:19.044+0000 7fb82311d140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 10 23:19:19 np0005480824 ceph-mgr[74617]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 10 23:19:19 np0005480824 ceph-mgr[74617]: mgr[py] Loading python module 'pg_autoscaler'
Oct 10 23:19:19 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-pdyrua[74613]: 2025-10-11T03:19:19.306+0000 7fb82311d140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 10 23:19:19 np0005480824 ceph-mgr[74617]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 10 23:19:19 np0005480824 ceph-mgr[74617]: mgr[py] Loading python module 'progress'
Oct 10 23:19:19 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-pdyrua[74613]: 2025-10-11T03:19:19.551+0000 7fb82311d140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 10 23:19:19 np0005480824 ceph-mgr[74617]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 10 23:19:19 np0005480824 ceph-mgr[74617]: mgr[py] Loading python module 'prometheus'
Oct 10 23:19:20 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-pdyrua[74613]: 2025-10-11T03:19:20.535+0000 7fb82311d140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 10 23:19:20 np0005480824 ceph-mgr[74617]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 10 23:19:20 np0005480824 ceph-mgr[74617]: mgr[py] Loading python module 'rbd_support'
Oct 10 23:19:20 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-pdyrua[74613]: 2025-10-11T03:19:20.836+0000 7fb82311d140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 10 23:19:20 np0005480824 ceph-mgr[74617]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 10 23:19:20 np0005480824 ceph-mgr[74617]: mgr[py] Loading python module 'restful'
Oct 10 23:19:21 np0005480824 ceph-mgr[74617]: mgr[py] Loading python module 'rgw'
Oct 10 23:19:22 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-pdyrua[74613]: 2025-10-11T03:19:22.242+0000 7fb82311d140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 10 23:19:22 np0005480824 ceph-mgr[74617]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 10 23:19:22 np0005480824 ceph-mgr[74617]: mgr[py] Loading python module 'rook'
Oct 10 23:19:24 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-pdyrua[74613]: 2025-10-11T03:19:24.227+0000 7fb82311d140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 10 23:19:24 np0005480824 ceph-mgr[74617]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 10 23:19:24 np0005480824 ceph-mgr[74617]: mgr[py] Loading python module 'selftest'
Oct 10 23:19:24 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-pdyrua[74613]: 2025-10-11T03:19:24.457+0000 7fb82311d140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 10 23:19:24 np0005480824 ceph-mgr[74617]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 10 23:19:24 np0005480824 ceph-mgr[74617]: mgr[py] Loading python module 'snap_schedule'
Oct 10 23:19:24 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-pdyrua[74613]: 2025-10-11T03:19:24.699+0000 7fb82311d140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 10 23:19:24 np0005480824 ceph-mgr[74617]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 10 23:19:24 np0005480824 ceph-mgr[74617]: mgr[py] Loading python module 'stats'
Oct 10 23:19:24 np0005480824 ceph-mgr[74617]: mgr[py] Loading python module 'status'
Oct 10 23:19:25 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-pdyrua[74613]: 2025-10-11T03:19:25.204+0000 7fb82311d140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct 10 23:19:25 np0005480824 ceph-mgr[74617]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct 10 23:19:25 np0005480824 ceph-mgr[74617]: mgr[py] Loading python module 'telegraf'
Oct 10 23:19:25 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-pdyrua[74613]: 2025-10-11T03:19:25.444+0000 7fb82311d140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 10 23:19:25 np0005480824 ceph-mgr[74617]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 10 23:19:25 np0005480824 ceph-mgr[74617]: mgr[py] Loading python module 'telemetry'
Oct 10 23:19:26 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-pdyrua[74613]: 2025-10-11T03:19:26.065+0000 7fb82311d140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 10 23:19:26 np0005480824 ceph-mgr[74617]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 10 23:19:26 np0005480824 ceph-mgr[74617]: mgr[py] Loading python module 'test_orchestrator'
Oct 10 23:19:26 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-pdyrua[74613]: 2025-10-11T03:19:26.766+0000 7fb82311d140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 10 23:19:26 np0005480824 ceph-mgr[74617]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 10 23:19:26 np0005480824 ceph-mgr[74617]: mgr[py] Loading python module 'volumes'
Oct 10 23:19:27 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-pdyrua[74613]: 2025-10-11T03:19:27.509+0000 7fb82311d140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 10 23:19:27 np0005480824 ceph-mgr[74617]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 10 23:19:27 np0005480824 ceph-mgr[74617]: mgr[py] Loading python module 'zabbix'
Oct 10 23:19:27 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-pdyrua[74613]: 2025-10-11T03:19:27.752+0000 7fb82311d140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 10 23:19:27 np0005480824 ceph-mgr[74617]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 10 23:19:27 np0005480824 ceph-mon[74326]: log_channel(cluster) log [INF] : Active manager daemon compute-0.pdyrua restarted
Oct 10 23:19:27 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Oct 10 23:19:27 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 10 23:19:27 np0005480824 ceph-mgr[74617]: ms_deliver_dispatch: unhandled message 0x56167c8b11e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Oct 10 23:19:27 np0005480824 ceph-mon[74326]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.pdyrua
Oct 10 23:19:27 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Oct 10 23:19:27 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Oct 10 23:19:27 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Oct 10 23:19:27 np0005480824 ceph-mgr[74617]: mgr handle_mgr_map Activating!
Oct 10 23:19:27 np0005480824 ceph-mgr[74617]: mgr handle_mgr_map I am now activating
Oct 10 23:19:27 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Oct 10 23:19:27 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.pdyrua(active, starting, since 0.0186454s)
Oct 10 23:19:27 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Oct 10 23:19:27 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 10 23:19:27 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.pdyrua", "id": "compute-0.pdyrua"} v 0) v1
Oct 10 23:19:27 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "mgr metadata", "who": "compute-0.pdyrua", "id": "compute-0.pdyrua"}]: dispatch
Oct 10 23:19:27 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Oct 10 23:19:27 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct 10 23:19:27 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).mds e1 all = 1
Oct 10 23:19:27 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Oct 10 23:19:27 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 10 23:19:27 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Oct 10 23:19:27 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct 10 23:19:27 np0005480824 ceph-mgr[74617]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 23:19:27 np0005480824 ceph-mgr[74617]: mgr load Constructed class from module: balancer
Oct 10 23:19:27 np0005480824 ceph-mgr[74617]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 23:19:27 np0005480824 ceph-mon[74326]: log_channel(cluster) log [INF] : Manager daemon compute-0.pdyrua is now available
Oct 10 23:19:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Starting
Oct 10 23:19:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Optimize plan auto_2025-10-11_03:19:27
Oct 10 23:19:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 23:19:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] do_upmap
Oct 10 23:19:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] No pools available
Oct 10 23:19:27 np0005480824 ceph-mgr[74617]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Oct 10 23:19:27 np0005480824 ceph-mgr[74617]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Oct 10 23:19:27 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0) v1
Oct 10 23:19:27 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:27 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0) v1
Oct 10 23:19:27 np0005480824 ceph-mon[74326]: Active manager daemon compute-0.pdyrua restarted
Oct 10 23:19:27 np0005480824 ceph-mon[74326]: Activating manager daemon compute-0.pdyrua
Oct 10 23:19:27 np0005480824 ceph-mon[74326]: Manager daemon compute-0.pdyrua is now available
Oct 10 23:19:27 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:27 np0005480824 ceph-mgr[74617]: mgr load Constructed class from module: cephadm
Oct 10 23:19:27 np0005480824 ceph-mgr[74617]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 23:19:27 np0005480824 ceph-mgr[74617]: mgr load Constructed class from module: crash
Oct 10 23:19:27 np0005480824 ceph-mgr[74617]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 23:19:27 np0005480824 ceph-mgr[74617]: mgr load Constructed class from module: devicehealth
Oct 10 23:19:27 np0005480824 ceph-mgr[74617]: [devicehealth INFO root] Starting
Oct 10 23:19:27 np0005480824 ceph-mgr[74617]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 23:19:27 np0005480824 ceph-mgr[74617]: mgr load Constructed class from module: iostat
Oct 10 23:19:27 np0005480824 ceph-mgr[74617]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 23:19:27 np0005480824 ceph-mgr[74617]: mgr load Constructed class from module: nfs
Oct 10 23:19:27 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct 10 23:19:27 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 10 23:19:27 np0005480824 ceph-mgr[74617]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 23:19:27 np0005480824 ceph-mgr[74617]: mgr load Constructed class from module: orchestrator
Oct 10 23:19:27 np0005480824 ceph-mgr[74617]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 23:19:27 np0005480824 ceph-mgr[74617]: mgr load Constructed class from module: pg_autoscaler
Oct 10 23:19:27 np0005480824 ceph-mgr[74617]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 23:19:27 np0005480824 ceph-mgr[74617]: mgr load Constructed class from module: progress
Oct 10 23:19:27 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct 10 23:19:27 np0005480824 ceph-mgr[74617]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 23:19:27 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 10 23:19:27 np0005480824 ceph-mgr[74617]: [progress INFO root] Loading...
Oct 10 23:19:27 np0005480824 ceph-mgr[74617]: [progress INFO root] No stored events to load
Oct 10 23:19:27 np0005480824 ceph-mgr[74617]: [progress INFO root] Loaded [] historic events
Oct 10 23:19:27 np0005480824 ceph-mgr[74617]: [progress INFO root] Loaded OSDMap, ready.
Oct 10 23:19:27 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 23:19:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] recovery thread starting
Oct 10 23:19:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] starting setup
Oct 10 23:19:27 np0005480824 ceph-mgr[74617]: mgr load Constructed class from module: rbd_support
Oct 10 23:19:27 np0005480824 ceph-mgr[74617]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 23:19:27 np0005480824 ceph-mgr[74617]: mgr load Constructed class from module: restful
Oct 10 23:19:27 np0005480824 ceph-mgr[74617]: [restful INFO root] server_addr: :: server_port: 8003
Oct 10 23:19:27 np0005480824 ceph-mgr[74617]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 23:19:27 np0005480824 ceph-mgr[74617]: mgr load Constructed class from module: status
Oct 10 23:19:27 np0005480824 ceph-mgr[74617]: [restful WARNING root] server not running: no certificate configured
Oct 10 23:19:27 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.pdyrua/mirror_snapshot_schedule"} v 0) v1
Oct 10 23:19:27 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.pdyrua/mirror_snapshot_schedule"}]: dispatch
Oct 10 23:19:27 np0005480824 ceph-mgr[74617]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 23:19:27 np0005480824 ceph-mgr[74617]: mgr load Constructed class from module: telemetry
Oct 10 23:19:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 23:19:27 np0005480824 ceph-mgr[74617]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 23:19:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Oct 10 23:19:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] PerfHandler: starting
Oct 10 23:19:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] TaskHandler: starting
Oct 10 23:19:27 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.pdyrua/trash_purge_schedule"} v 0) v1
Oct 10 23:19:27 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.pdyrua/trash_purge_schedule"}]: dispatch
Oct 10 23:19:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 23:19:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Oct 10 23:19:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] setup complete
Oct 10 23:19:27 np0005480824 ceph-mgr[74617]: mgr load Constructed class from module: volumes
Oct 10 23:19:28 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/cert}] v 0) v1
Oct 10 23:19:28 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:28 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/key}] v 0) v1
Oct 10 23:19:28 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:28 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019926865 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:19:28 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.pdyrua(active, since 1.02854s)
Oct 10 23:19:28 np0005480824 ceph-mgr[74617]: log_channel(audit) log [DBG] : from='client.14134 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Oct 10 23:19:28 np0005480824 ceph-mgr[74617]: log_channel(audit) log [DBG] : from='client.14134 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Oct 10 23:19:28 np0005480824 silly_villani[75384]: {
Oct 10 23:19:28 np0005480824 silly_villani[75384]:    "mgrmap_epoch": 6,
Oct 10 23:19:28 np0005480824 silly_villani[75384]:    "initialized": true
Oct 10 23:19:28 np0005480824 silly_villani[75384]: }
Oct 10 23:19:28 np0005480824 ceph-mon[74326]: Found migration_current of "None". Setting to last migration.
Oct 10 23:19:28 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:28 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:28 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.pdyrua/mirror_snapshot_schedule"}]: dispatch
Oct 10 23:19:28 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.pdyrua/trash_purge_schedule"}]: dispatch
Oct 10 23:19:28 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:28 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:28 np0005480824 systemd[1]: libpod-680397924ddb80c6601c9cfa753251f81fe93482f4faf472456307ef26dad5d4.scope: Deactivated successfully.
Oct 10 23:19:28 np0005480824 podman[75533]: 2025-10-11 03:19:28.879440894 +0000 UTC m=+0.029083953 container died 680397924ddb80c6601c9cfa753251f81fe93482f4faf472456307ef26dad5d4 (image=quay.io/ceph/ceph:v18, name=silly_villani, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:19:28 np0005480824 systemd[1]: var-lib-containers-storage-overlay-71f706910f3ade129e2a4d8caaa6589662dd985dddaf8258a0e78922102c3e9a-merged.mount: Deactivated successfully.
Oct 10 23:19:28 np0005480824 podman[75533]: 2025-10-11 03:19:28.92457486 +0000 UTC m=+0.074217899 container remove 680397924ddb80c6601c9cfa753251f81fe93482f4faf472456307ef26dad5d4 (image=quay.io/ceph/ceph:v18, name=silly_villani, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 10 23:19:28 np0005480824 systemd[1]: libpod-conmon-680397924ddb80c6601c9cfa753251f81fe93482f4faf472456307ef26dad5d4.scope: Deactivated successfully.
Oct 10 23:19:29 np0005480824 podman[75548]: 2025-10-11 03:19:29.020460652 +0000 UTC m=+0.061455850 container create cd16c9d6bf8a9827f52d3c9618c4809b1de62b28dcb902fea1790c9659ec9f6b (image=quay.io/ceph/ceph:v18, name=pedantic_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:19:29 np0005480824 systemd[1]: Started libpod-conmon-cd16c9d6bf8a9827f52d3c9618c4809b1de62b28dcb902fea1790c9659ec9f6b.scope.
Oct 10 23:19:29 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:19:29 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad5af6752c9a9788a7317a42b69bbb896a8e2d786a29dd6e7690e3a15df68a51/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:29 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad5af6752c9a9788a7317a42b69bbb896a8e2d786a29dd6e7690e3a15df68a51/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:29 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad5af6752c9a9788a7317a42b69bbb896a8e2d786a29dd6e7690e3a15df68a51/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:29 np0005480824 podman[75548]: 2025-10-11 03:19:28.991634607 +0000 UTC m=+0.032629865 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:19:29 np0005480824 podman[75548]: 2025-10-11 03:19:29.093288753 +0000 UTC m=+0.134283991 container init cd16c9d6bf8a9827f52d3c9618c4809b1de62b28dcb902fea1790c9659ec9f6b (image=quay.io/ceph/ceph:v18, name=pedantic_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 10 23:19:29 np0005480824 podman[75548]: 2025-10-11 03:19:29.100636027 +0000 UTC m=+0.141631235 container start cd16c9d6bf8a9827f52d3c9618c4809b1de62b28dcb902fea1790c9659ec9f6b (image=quay.io/ceph/ceph:v18, name=pedantic_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 10 23:19:29 np0005480824 podman[75548]: 2025-10-11 03:19:29.103848092 +0000 UTC m=+0.144843300 container attach cd16c9d6bf8a9827f52d3c9618c4809b1de62b28dcb902fea1790c9659ec9f6b (image=quay.io/ceph/ceph:v18, name=pedantic_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:19:29 np0005480824 ceph-mgr[74617]: [cephadm INFO cherrypy.error] [11/Oct/2025:03:19:29] ENGINE Bus STARTING
Oct 10 23:19:29 np0005480824 ceph-mgr[74617]: log_channel(cephadm) log [INF] : [11/Oct/2025:03:19:29] ENGINE Bus STARTING
Oct 10 23:19:29 np0005480824 ceph-mgr[74617]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 23:19:29 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0) v1
Oct 10 23:19:29 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:29 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct 10 23:19:29 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 10 23:19:29 np0005480824 systemd[1]: libpod-cd16c9d6bf8a9827f52d3c9618c4809b1de62b28dcb902fea1790c9659ec9f6b.scope: Deactivated successfully.
Oct 10 23:19:29 np0005480824 podman[75548]: 2025-10-11 03:19:29.670961595 +0000 UTC m=+0.711956803 container died cd16c9d6bf8a9827f52d3c9618c4809b1de62b28dcb902fea1790c9659ec9f6b (image=quay.io/ceph/ceph:v18, name=pedantic_davinci, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:19:29 np0005480824 systemd[1]: var-lib-containers-storage-overlay-ad5af6752c9a9788a7317a42b69bbb896a8e2d786a29dd6e7690e3a15df68a51-merged.mount: Deactivated successfully.
Oct 10 23:19:29 np0005480824 ceph-mgr[74617]: [cephadm INFO cherrypy.error] [11/Oct/2025:03:19:29] ENGINE Serving on http://192.168.122.100:8765
Oct 10 23:19:29 np0005480824 ceph-mgr[74617]: log_channel(cephadm) log [INF] : [11/Oct/2025:03:19:29] ENGINE Serving on http://192.168.122.100:8765
Oct 10 23:19:29 np0005480824 podman[75548]: 2025-10-11 03:19:29.711951311 +0000 UTC m=+0.752946519 container remove cd16c9d6bf8a9827f52d3c9618c4809b1de62b28dcb902fea1790c9659ec9f6b (image=quay.io/ceph/ceph:v18, name=pedantic_davinci, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:19:29 np0005480824 systemd[1]: libpod-conmon-cd16c9d6bf8a9827f52d3c9618c4809b1de62b28dcb902fea1790c9659ec9f6b.scope: Deactivated successfully.
Oct 10 23:19:29 np0005480824 ceph-mgr[74617]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 10 23:19:29 np0005480824 podman[75615]: 2025-10-11 03:19:29.781385702 +0000 UTC m=+0.044218423 container create 37069e8c662b6bc92d87a0bce1eb3acbbc651f2aab00cfdc23345cc215229b8a (image=quay.io/ceph/ceph:v18, name=optimistic_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:19:29 np0005480824 systemd[1]: Started libpod-conmon-37069e8c662b6bc92d87a0bce1eb3acbbc651f2aab00cfdc23345cc215229b8a.scope.
Oct 10 23:19:29 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:29 np0005480824 ceph-mgr[74617]: [cephadm INFO cherrypy.error] [11/Oct/2025:03:19:29] ENGINE Serving on https://192.168.122.100:7150
Oct 10 23:19:29 np0005480824 ceph-mgr[74617]: log_channel(cephadm) log [INF] : [11/Oct/2025:03:19:29] ENGINE Serving on https://192.168.122.100:7150
Oct 10 23:19:29 np0005480824 ceph-mgr[74617]: [cephadm INFO cherrypy.error] [11/Oct/2025:03:19:29] ENGINE Bus STARTED
Oct 10 23:19:29 np0005480824 ceph-mgr[74617]: log_channel(cephadm) log [INF] : [11/Oct/2025:03:19:29] ENGINE Bus STARTED
Oct 10 23:19:29 np0005480824 ceph-mgr[74617]: [cephadm INFO cherrypy.error] [11/Oct/2025:03:19:29] ENGINE Client ('192.168.122.100', 58442) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 10 23:19:29 np0005480824 ceph-mgr[74617]: log_channel(cephadm) log [INF] : [11/Oct/2025:03:19:29] ENGINE Client ('192.168.122.100', 58442) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 10 23:19:29 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct 10 23:19:29 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 10 23:19:29 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:19:29 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d17c9c91e2ad3d14a72983c3d2516ab29ed96e19573942c5ef4b39036cc24d20/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:29 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d17c9c91e2ad3d14a72983c3d2516ab29ed96e19573942c5ef4b39036cc24d20/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:29 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d17c9c91e2ad3d14a72983c3d2516ab29ed96e19573942c5ef4b39036cc24d20/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:29 np0005480824 podman[75615]: 2025-10-11 03:19:29.763693383 +0000 UTC m=+0.026526134 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:19:29 np0005480824 podman[75615]: 2025-10-11 03:19:29.869134309 +0000 UTC m=+0.131967070 container init 37069e8c662b6bc92d87a0bce1eb3acbbc651f2aab00cfdc23345cc215229b8a (image=quay.io/ceph/ceph:v18, name=optimistic_ptolemy, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 10 23:19:29 np0005480824 podman[75615]: 2025-10-11 03:19:29.876491933 +0000 UTC m=+0.139324654 container start 37069e8c662b6bc92d87a0bce1eb3acbbc651f2aab00cfdc23345cc215229b8a (image=quay.io/ceph/ceph:v18, name=optimistic_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 10 23:19:29 np0005480824 podman[75615]: 2025-10-11 03:19:29.879847872 +0000 UTC m=+0.142680603 container attach 37069e8c662b6bc92d87a0bce1eb3acbbc651f2aab00cfdc23345cc215229b8a (image=quay.io/ceph/ceph:v18, name=optimistic_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 10 23:19:30 np0005480824 ceph-mgr[74617]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 23:19:30 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0) v1
Oct 10 23:19:30 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:30 np0005480824 ceph-mgr[74617]: [cephadm INFO root] Set ssh ssh_user
Oct 10 23:19:30 np0005480824 ceph-mgr[74617]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Oct 10 23:19:30 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0) v1
Oct 10 23:19:30 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:30 np0005480824 ceph-mgr[74617]: [cephadm INFO root] Set ssh ssh_config
Oct 10 23:19:30 np0005480824 ceph-mgr[74617]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Oct 10 23:19:30 np0005480824 ceph-mgr[74617]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Oct 10 23:19:30 np0005480824 ceph-mgr[74617]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Oct 10 23:19:30 np0005480824 optimistic_ptolemy[75642]: ssh user set to ceph-admin. sudo will be used
Oct 10 23:19:30 np0005480824 systemd[1]: libpod-37069e8c662b6bc92d87a0bce1eb3acbbc651f2aab00cfdc23345cc215229b8a.scope: Deactivated successfully.
Oct 10 23:19:30 np0005480824 conmon[75642]: conmon 37069e8c662b6bc92d87 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-37069e8c662b6bc92d87a0bce1eb3acbbc651f2aab00cfdc23345cc215229b8a.scope/container/memory.events
Oct 10 23:19:30 np0005480824 podman[75615]: 2025-10-11 03:19:30.41470054 +0000 UTC m=+0.677533271 container died 37069e8c662b6bc92d87a0bce1eb3acbbc651f2aab00cfdc23345cc215229b8a (image=quay.io/ceph/ceph:v18, name=optimistic_ptolemy, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:19:30 np0005480824 systemd[1]: var-lib-containers-storage-overlay-d17c9c91e2ad3d14a72983c3d2516ab29ed96e19573942c5ef4b39036cc24d20-merged.mount: Deactivated successfully.
Oct 10 23:19:30 np0005480824 podman[75615]: 2025-10-11 03:19:30.463658598 +0000 UTC m=+0.726491339 container remove 37069e8c662b6bc92d87a0bce1eb3acbbc651f2aab00cfdc23345cc215229b8a (image=quay.io/ceph/ceph:v18, name=optimistic_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:19:30 np0005480824 systemd[1]: libpod-conmon-37069e8c662b6bc92d87a0bce1eb3acbbc651f2aab00cfdc23345cc215229b8a.scope: Deactivated successfully.
Oct 10 23:19:30 np0005480824 podman[75678]: 2025-10-11 03:19:30.543426612 +0000 UTC m=+0.051908177 container create 45759f35c84b8063b2a8edb7383b3ecdb316d3c06f301359a7b525e0611ef9b8 (image=quay.io/ceph/ceph:v18, name=lucid_bassi, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:19:30 np0005480824 systemd[1]: Started libpod-conmon-45759f35c84b8063b2a8edb7383b3ecdb316d3c06f301359a7b525e0611ef9b8.scope.
Oct 10 23:19:30 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:19:30 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2720d04ed4af9fe9cb0a41dcf12605672aaa781b72323d9e121ca580362ad6b8/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:30 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2720d04ed4af9fe9cb0a41dcf12605672aaa781b72323d9e121ca580362ad6b8/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:30 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2720d04ed4af9fe9cb0a41dcf12605672aaa781b72323d9e121ca580362ad6b8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:30 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2720d04ed4af9fe9cb0a41dcf12605672aaa781b72323d9e121ca580362ad6b8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:30 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2720d04ed4af9fe9cb0a41dcf12605672aaa781b72323d9e121ca580362ad6b8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:30 np0005480824 podman[75678]: 2025-10-11 03:19:30.614341662 +0000 UTC m=+0.122823267 container init 45759f35c84b8063b2a8edb7383b3ecdb316d3c06f301359a7b525e0611ef9b8 (image=quay.io/ceph/ceph:v18, name=lucid_bassi, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 10 23:19:30 np0005480824 podman[75678]: 2025-10-11 03:19:30.61878324 +0000 UTC m=+0.127264805 container start 45759f35c84b8063b2a8edb7383b3ecdb316d3c06f301359a7b525e0611ef9b8 (image=quay.io/ceph/ceph:v18, name=lucid_bassi, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:19:30 np0005480824 podman[75678]: 2025-10-11 03:19:30.62179589 +0000 UTC m=+0.130277535 container attach 45759f35c84b8063b2a8edb7383b3ecdb316d3c06f301359a7b525e0611ef9b8 (image=quay.io/ceph/ceph:v18, name=lucid_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:19:30 np0005480824 podman[75678]: 2025-10-11 03:19:30.528297401 +0000 UTC m=+0.036778996 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:19:30 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.pdyrua(active, since 2s)
Oct 10 23:19:30 np0005480824 ceph-mon[74326]: [11/Oct/2025:03:19:29] ENGINE Bus STARTING
Oct 10 23:19:30 np0005480824 ceph-mon[74326]: [11/Oct/2025:03:19:29] ENGINE Serving on http://192.168.122.100:8765
Oct 10 23:19:30 np0005480824 ceph-mon[74326]: [11/Oct/2025:03:19:29] ENGINE Serving on https://192.168.122.100:7150
Oct 10 23:19:30 np0005480824 ceph-mon[74326]: [11/Oct/2025:03:19:29] ENGINE Bus STARTED
Oct 10 23:19:30 np0005480824 ceph-mon[74326]: [11/Oct/2025:03:19:29] ENGINE Client ('192.168.122.100', 58442) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 10 23:19:30 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:30 np0005480824 ceph-mon[74326]: Set ssh ssh_user
Oct 10 23:19:30 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:30 np0005480824 ceph-mon[74326]: Set ssh ssh_config
Oct 10 23:19:30 np0005480824 ceph-mon[74326]: ssh user set to ceph-admin. sudo will be used
Oct 10 23:19:31 np0005480824 ceph-mgr[74617]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 23:19:31 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0) v1
Oct 10 23:19:31 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:31 np0005480824 ceph-mgr[74617]: [cephadm INFO root] Set ssh ssh_identity_key
Oct 10 23:19:31 np0005480824 ceph-mgr[74617]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Oct 10 23:19:31 np0005480824 ceph-mgr[74617]: [cephadm INFO root] Set ssh private key
Oct 10 23:19:31 np0005480824 ceph-mgr[74617]: log_channel(cephadm) log [INF] : Set ssh private key
Oct 10 23:19:31 np0005480824 systemd[1]: libpod-45759f35c84b8063b2a8edb7383b3ecdb316d3c06f301359a7b525e0611ef9b8.scope: Deactivated successfully.
Oct 10 23:19:31 np0005480824 conmon[75694]: conmon 45759f35c84b8063b2a8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-45759f35c84b8063b2a8edb7383b3ecdb316d3c06f301359a7b525e0611ef9b8.scope/container/memory.events
Oct 10 23:19:31 np0005480824 podman[75678]: 2025-10-11 03:19:31.197445919 +0000 UTC m=+0.705927494 container died 45759f35c84b8063b2a8edb7383b3ecdb316d3c06f301359a7b525e0611ef9b8 (image=quay.io/ceph/ceph:v18, name=lucid_bassi, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 10 23:19:31 np0005480824 systemd[1]: var-lib-containers-storage-overlay-2720d04ed4af9fe9cb0a41dcf12605672aaa781b72323d9e121ca580362ad6b8-merged.mount: Deactivated successfully.
Oct 10 23:19:31 np0005480824 podman[75678]: 2025-10-11 03:19:31.246681584 +0000 UTC m=+0.755163149 container remove 45759f35c84b8063b2a8edb7383b3ecdb316d3c06f301359a7b525e0611ef9b8 (image=quay.io/ceph/ceph:v18, name=lucid_bassi, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 10 23:19:31 np0005480824 systemd[1]: libpod-conmon-45759f35c84b8063b2a8edb7383b3ecdb316d3c06f301359a7b525e0611ef9b8.scope: Deactivated successfully.
Oct 10 23:19:31 np0005480824 podman[75731]: 2025-10-11 03:19:31.310982418 +0000 UTC m=+0.046966675 container create c97ae45583959a74586958680d0ad97cd3e9f734ef84bf8c47754a1d35e057f4 (image=quay.io/ceph/ceph:v18, name=gifted_ramanujan, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:19:31 np0005480824 systemd[1]: Started libpod-conmon-c97ae45583959a74586958680d0ad97cd3e9f734ef84bf8c47754a1d35e057f4.scope.
Oct 10 23:19:31 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:19:31 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71f3ac840f85e2f05f33173c217aa484a9a80265eec6404ee2325505b29a46cf/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:31 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71f3ac840f85e2f05f33173c217aa484a9a80265eec6404ee2325505b29a46cf/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:31 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71f3ac840f85e2f05f33173c217aa484a9a80265eec6404ee2325505b29a46cf/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:31 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71f3ac840f85e2f05f33173c217aa484a9a80265eec6404ee2325505b29a46cf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:31 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71f3ac840f85e2f05f33173c217aa484a9a80265eec6404ee2325505b29a46cf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:31 np0005480824 podman[75731]: 2025-10-11 03:19:31.2849852 +0000 UTC m=+0.020969447 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:19:31 np0005480824 podman[75731]: 2025-10-11 03:19:31.38949852 +0000 UTC m=+0.125482757 container init c97ae45583959a74586958680d0ad97cd3e9f734ef84bf8c47754a1d35e057f4 (image=quay.io/ceph/ceph:v18, name=gifted_ramanujan, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 10 23:19:31 np0005480824 podman[75731]: 2025-10-11 03:19:31.394363109 +0000 UTC m=+0.130347326 container start c97ae45583959a74586958680d0ad97cd3e9f734ef84bf8c47754a1d35e057f4 (image=quay.io/ceph/ceph:v18, name=gifted_ramanujan, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 10 23:19:31 np0005480824 podman[75731]: 2025-10-11 03:19:31.397699217 +0000 UTC m=+0.133683464 container attach c97ae45583959a74586958680d0ad97cd3e9f734ef84bf8c47754a1d35e057f4 (image=quay.io/ceph/ceph:v18, name=gifted_ramanujan, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:19:31 np0005480824 ceph-mgr[74617]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 10 23:19:31 np0005480824 ceph-mgr[74617]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 23:19:31 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0) v1
Oct 10 23:19:31 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:31 np0005480824 ceph-mgr[74617]: [cephadm INFO root] Set ssh ssh_identity_pub
Oct 10 23:19:31 np0005480824 ceph-mgr[74617]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Oct 10 23:19:31 np0005480824 systemd[1]: libpod-c97ae45583959a74586958680d0ad97cd3e9f734ef84bf8c47754a1d35e057f4.scope: Deactivated successfully.
Oct 10 23:19:31 np0005480824 conmon[75747]: conmon c97ae45583959a745869 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c97ae45583959a74586958680d0ad97cd3e9f734ef84bf8c47754a1d35e057f4.scope/container/memory.events
Oct 10 23:19:31 np0005480824 podman[75731]: 2025-10-11 03:19:31.910963263 +0000 UTC m=+0.646947520 container died c97ae45583959a74586958680d0ad97cd3e9f734ef84bf8c47754a1d35e057f4 (image=quay.io/ceph/ceph:v18, name=gifted_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 10 23:19:31 np0005480824 systemd[1]: var-lib-containers-storage-overlay-71f3ac840f85e2f05f33173c217aa484a9a80265eec6404ee2325505b29a46cf-merged.mount: Deactivated successfully.
Oct 10 23:19:31 np0005480824 podman[75731]: 2025-10-11 03:19:31.960513787 +0000 UTC m=+0.696498014 container remove c97ae45583959a74586958680d0ad97cd3e9f734ef84bf8c47754a1d35e057f4 (image=quay.io/ceph/ceph:v18, name=gifted_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 10 23:19:31 np0005480824 systemd[1]: libpod-conmon-c97ae45583959a74586958680d0ad97cd3e9f734ef84bf8c47754a1d35e057f4.scope: Deactivated successfully.
Oct 10 23:19:32 np0005480824 podman[75787]: 2025-10-11 03:19:32.014827047 +0000 UTC m=+0.036037347 container create 8ff1b53364343f188201dd5d3eb28c1da387cd4937ae879456eda55e69c9f8ab (image=quay.io/ceph/ceph:v18, name=amazing_khayyam, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 10 23:19:32 np0005480824 systemd[1]: Started libpod-conmon-8ff1b53364343f188201dd5d3eb28c1da387cd4937ae879456eda55e69c9f8ab.scope.
Oct 10 23:19:32 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:19:32 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d08d31ed4138d8126c020fd002c8e18061c6e889b3b32da19062ce376c5f593/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:32 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d08d31ed4138d8126c020fd002c8e18061c6e889b3b32da19062ce376c5f593/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:32 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d08d31ed4138d8126c020fd002c8e18061c6e889b3b32da19062ce376c5f593/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:32 np0005480824 podman[75787]: 2025-10-11 03:19:32.08625337 +0000 UTC m=+0.107463680 container init 8ff1b53364343f188201dd5d3eb28c1da387cd4937ae879456eda55e69c9f8ab (image=quay.io/ceph/ceph:v18, name=amazing_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:19:32 np0005480824 podman[75787]: 2025-10-11 03:19:32.090876742 +0000 UTC m=+0.112087032 container start 8ff1b53364343f188201dd5d3eb28c1da387cd4937ae879456eda55e69c9f8ab (image=quay.io/ceph/ceph:v18, name=amazing_khayyam, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 10 23:19:32 np0005480824 podman[75787]: 2025-10-11 03:19:32.093836742 +0000 UTC m=+0.115047042 container attach 8ff1b53364343f188201dd5d3eb28c1da387cd4937ae879456eda55e69c9f8ab (image=quay.io/ceph/ceph:v18, name=amazing_khayyam, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 10 23:19:32 np0005480824 podman[75787]: 2025-10-11 03:19:31.99685104 +0000 UTC m=+0.018061360 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:19:32 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:32 np0005480824 ceph-mon[74326]: Set ssh ssh_identity_key
Oct 10 23:19:32 np0005480824 ceph-mon[74326]: Set ssh private key
Oct 10 23:19:32 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:32 np0005480824 ceph-mgr[74617]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 23:19:32 np0005480824 amazing_khayyam[75803]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC7hAhbwtZbw5ZBjOR2BI7Wf9+BkyiKrTrODCG4YFDfKtvX/3xLDRWmF8be5bprrURpUWKeOr7g6lKU2COM3LspMxIC5amlASUMXfjb28P2Ak/RbNsMU0ulKsIzJHcR1k7i88dS6ypTJhXei4y/Dh2ZvHX5xw+ImlM8BgHu65Oapms2P5UDb9UPGwYhKuMsC6Wc1KSNEwIxu05VfZfDq3X3hRoQWPojSJEvBRM3ZP5/iSzOYl6YH8BNulskYXiF7cSMU62aytra93nboTEcnRTP+0AwI6cUEbAJkuk8F6efDz60nOMBbsDqFThqWAQmRCOmnvFswLf9OWeZwr7bI1VDIJLCiRap8GDS0Jqwjj1ogYlZuth85M3EZWTi2sYfDFuqSVx8Q57AugmJWqg7zx5Qr4Cm5dERQmBbgPSa840Q5B8WVgLNp6P2m871ZiCdBT0KcnoTlzh1YCgiJj3CcskT4gaHqYCsA/Z0PwVg3xFsDz5toaJz49DvN4xE4Wr4wkE= zuul@controller
Oct 10 23:19:32 np0005480824 systemd[1]: libpod-8ff1b53364343f188201dd5d3eb28c1da387cd4937ae879456eda55e69c9f8ab.scope: Deactivated successfully.
Oct 10 23:19:32 np0005480824 podman[75829]: 2025-10-11 03:19:32.717884843 +0000 UTC m=+0.034972788 container died 8ff1b53364343f188201dd5d3eb28c1da387cd4937ae879456eda55e69c9f8ab (image=quay.io/ceph/ceph:v18, name=amazing_khayyam, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:19:32 np0005480824 systemd[1]: var-lib-containers-storage-overlay-6d08d31ed4138d8126c020fd002c8e18061c6e889b3b32da19062ce376c5f593-merged.mount: Deactivated successfully.
Oct 10 23:19:32 np0005480824 podman[75829]: 2025-10-11 03:19:32.749092581 +0000 UTC m=+0.066180486 container remove 8ff1b53364343f188201dd5d3eb28c1da387cd4937ae879456eda55e69c9f8ab (image=quay.io/ceph/ceph:v18, name=amazing_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 10 23:19:32 np0005480824 systemd[1]: libpod-conmon-8ff1b53364343f188201dd5d3eb28c1da387cd4937ae879456eda55e69c9f8ab.scope: Deactivated successfully.
Oct 10 23:19:32 np0005480824 podman[75844]: 2025-10-11 03:19:32.817888694 +0000 UTC m=+0.043108943 container create c8c0fd837c460ec20b2018f6ff20370c91ec39de68cf033d562aa4d9860f930c (image=quay.io/ceph/ceph:v18, name=eager_tu, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:19:32 np0005480824 systemd[1]: Started libpod-conmon-c8c0fd837c460ec20b2018f6ff20370c91ec39de68cf033d562aa4d9860f930c.scope.
Oct 10 23:19:32 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:19:32 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fe5b2890798653f5774666be882d2d6067d20a6b208e4846c1c4bc05dfe299b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:32 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fe5b2890798653f5774666be882d2d6067d20a6b208e4846c1c4bc05dfe299b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:32 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fe5b2890798653f5774666be882d2d6067d20a6b208e4846c1c4bc05dfe299b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:32 np0005480824 podman[75844]: 2025-10-11 03:19:32.80078025 +0000 UTC m=+0.026000529 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:19:32 np0005480824 podman[75844]: 2025-10-11 03:19:32.897842424 +0000 UTC m=+0.123062683 container init c8c0fd837c460ec20b2018f6ff20370c91ec39de68cf033d562aa4d9860f930c (image=quay.io/ceph/ceph:v18, name=eager_tu, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 10 23:19:32 np0005480824 podman[75844]: 2025-10-11 03:19:32.908062844 +0000 UTC m=+0.133283113 container start c8c0fd837c460ec20b2018f6ff20370c91ec39de68cf033d562aa4d9860f930c (image=quay.io/ceph/ceph:v18, name=eager_tu, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Oct 10 23:19:32 np0005480824 podman[75844]: 2025-10-11 03:19:32.911920347 +0000 UTC m=+0.137140626 container attach c8c0fd837c460ec20b2018f6ff20370c91ec39de68cf033d562aa4d9860f930c (image=quay.io/ceph/ceph:v18, name=eager_tu, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:19:33 np0005480824 ceph-mon[74326]: Set ssh ssh_identity_pub
Oct 10 23:19:33 np0005480824 ceph-mgr[74617]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 23:19:33 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020053105 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:19:33 np0005480824 systemd[1]: Created slice User Slice of UID 42477.
Oct 10 23:19:33 np0005480824 systemd[1]: Starting User Runtime Directory /run/user/42477...
Oct 10 23:19:33 np0005480824 systemd-logind[782]: New session 21 of user ceph-admin.
Oct 10 23:19:33 np0005480824 systemd[1]: Finished User Runtime Directory /run/user/42477.
Oct 10 23:19:33 np0005480824 systemd[1]: Starting User Manager for UID 42477...
Oct 10 23:19:33 np0005480824 ceph-mgr[74617]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 10 23:19:33 np0005480824 systemd-logind[782]: New session 23 of user ceph-admin.
Oct 10 23:19:33 np0005480824 systemd[75890]: Queued start job for default target Main User Target.
Oct 10 23:19:33 np0005480824 systemd[75890]: Created slice User Application Slice.
Oct 10 23:19:33 np0005480824 systemd[75890]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 10 23:19:33 np0005480824 systemd[75890]: Started Daily Cleanup of User's Temporary Directories.
Oct 10 23:19:33 np0005480824 systemd[75890]: Reached target Paths.
Oct 10 23:19:33 np0005480824 systemd[75890]: Reached target Timers.
Oct 10 23:19:33 np0005480824 systemd[75890]: Starting D-Bus User Message Bus Socket...
Oct 10 23:19:33 np0005480824 systemd[75890]: Starting Create User's Volatile Files and Directories...
Oct 10 23:19:33 np0005480824 systemd[75890]: Listening on D-Bus User Message Bus Socket.
Oct 10 23:19:33 np0005480824 systemd[75890]: Reached target Sockets.
Oct 10 23:19:33 np0005480824 systemd[75890]: Finished Create User's Volatile Files and Directories.
Oct 10 23:19:33 np0005480824 systemd[75890]: Reached target Basic System.
Oct 10 23:19:33 np0005480824 systemd[75890]: Reached target Main User Target.
Oct 10 23:19:33 np0005480824 systemd[75890]: Startup finished in 179ms.
Oct 10 23:19:33 np0005480824 systemd[1]: Started User Manager for UID 42477.
Oct 10 23:19:33 np0005480824 systemd[1]: Started Session 21 of User ceph-admin.
Oct 10 23:19:33 np0005480824 systemd[1]: Started Session 23 of User ceph-admin.
Oct 10 23:19:34 np0005480824 systemd-logind[782]: New session 24 of user ceph-admin.
Oct 10 23:19:34 np0005480824 systemd[1]: Started Session 24 of User ceph-admin.
Oct 10 23:19:34 np0005480824 systemd-logind[782]: New session 25 of user ceph-admin.
Oct 10 23:19:34 np0005480824 systemd[1]: Started Session 25 of User ceph-admin.
Oct 10 23:19:35 np0005480824 ceph-mgr[74617]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Oct 10 23:19:35 np0005480824 ceph-mgr[74617]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Oct 10 23:19:35 np0005480824 systemd-logind[782]: New session 26 of user ceph-admin.
Oct 10 23:19:35 np0005480824 systemd[1]: Started Session 26 of User ceph-admin.
Oct 10 23:19:35 np0005480824 ceph-mon[74326]: Deploying cephadm binary to compute-0
Oct 10 23:19:35 np0005480824 systemd-logind[782]: New session 27 of user ceph-admin.
Oct 10 23:19:35 np0005480824 systemd[1]: Started Session 27 of User ceph-admin.
Oct 10 23:19:35 np0005480824 ceph-mgr[74617]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 10 23:19:36 np0005480824 systemd-logind[782]: New session 28 of user ceph-admin.
Oct 10 23:19:36 np0005480824 systemd[1]: Started Session 28 of User ceph-admin.
Oct 10 23:19:36 np0005480824 systemd-logind[782]: New session 29 of user ceph-admin.
Oct 10 23:19:36 np0005480824 systemd[1]: Started Session 29 of User ceph-admin.
Oct 10 23:19:36 np0005480824 systemd-logind[782]: New session 30 of user ceph-admin.
Oct 10 23:19:36 np0005480824 systemd[1]: Started Session 30 of User ceph-admin.
Oct 10 23:19:37 np0005480824 systemd-logind[782]: New session 31 of user ceph-admin.
Oct 10 23:19:37 np0005480824 systemd[1]: Started Session 31 of User ceph-admin.
Oct 10 23:19:37 np0005480824 ceph-mgr[74617]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 10 23:19:38 np0005480824 systemd-logind[782]: New session 32 of user ceph-admin.
Oct 10 23:19:38 np0005480824 systemd[1]: Started Session 32 of User ceph-admin.
Oct 10 23:19:38 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054711 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:19:38 np0005480824 systemd-logind[782]: New session 33 of user ceph-admin.
Oct 10 23:19:38 np0005480824 systemd[1]: Started Session 33 of User ceph-admin.
Oct 10 23:19:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct 10 23:19:39 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:39 np0005480824 ceph-mgr[74617]: [cephadm INFO root] Added host compute-0
Oct 10 23:19:39 np0005480824 ceph-mgr[74617]: log_channel(cephadm) log [INF] : Added host compute-0
Oct 10 23:19:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct 10 23:19:39 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 10 23:19:39 np0005480824 eager_tu[75860]: Added host 'compute-0' with addr '192.168.122.100'
Oct 10 23:19:39 np0005480824 systemd[1]: libpod-c8c0fd837c460ec20b2018f6ff20370c91ec39de68cf033d562aa4d9860f930c.scope: Deactivated successfully.
Oct 10 23:19:39 np0005480824 podman[76511]: 2025-10-11 03:19:39.282341545 +0000 UTC m=+0.034137227 container died c8c0fd837c460ec20b2018f6ff20370c91ec39de68cf033d562aa4d9860f930c (image=quay.io/ceph/ceph:v18, name=eager_tu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 10 23:19:39 np0005480824 systemd[1]: var-lib-containers-storage-overlay-8fe5b2890798653f5774666be882d2d6067d20a6b208e4846c1c4bc05dfe299b-merged.mount: Deactivated successfully.
Oct 10 23:19:39 np0005480824 podman[76511]: 2025-10-11 03:19:39.339064128 +0000 UTC m=+0.090859840 container remove c8c0fd837c460ec20b2018f6ff20370c91ec39de68cf033d562aa4d9860f930c (image=quay.io/ceph/ceph:v18, name=eager_tu, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Oct 10 23:19:39 np0005480824 systemd[1]: libpod-conmon-c8c0fd837c460ec20b2018f6ff20370c91ec39de68cf033d562aa4d9860f930c.scope: Deactivated successfully.
Oct 10 23:19:39 np0005480824 podman[76554]: 2025-10-11 03:19:39.455974167 +0000 UTC m=+0.075670237 container create f25f9fb360f5d17e6c33ebb0a8020cc485f329b7a73a4774e3122a4cd63b0fc2 (image=quay.io/ceph/ceph:v18, name=festive_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:19:39 np0005480824 systemd[1]: Started libpod-conmon-f25f9fb360f5d17e6c33ebb0a8020cc485f329b7a73a4774e3122a4cd63b0fc2.scope.
Oct 10 23:19:39 np0005480824 podman[76554]: 2025-10-11 03:19:39.42893639 +0000 UTC m=+0.048632549 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:19:39 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:19:39 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25e3940c8002e70644fc2b3c7b6d7e7cf5d9662f046b6732091bcaec00cc25b2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:39 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25e3940c8002e70644fc2b3c7b6d7e7cf5d9662f046b6732091bcaec00cc25b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:39 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25e3940c8002e70644fc2b3c7b6d7e7cf5d9662f046b6732091bcaec00cc25b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:39 np0005480824 podman[76554]: 2025-10-11 03:19:39.572688921 +0000 UTC m=+0.192385010 container init f25f9fb360f5d17e6c33ebb0a8020cc485f329b7a73a4774e3122a4cd63b0fc2 (image=quay.io/ceph/ceph:v18, name=festive_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 10 23:19:39 np0005480824 podman[76554]: 2025-10-11 03:19:39.588234433 +0000 UTC m=+0.207930512 container start f25f9fb360f5d17e6c33ebb0a8020cc485f329b7a73a4774e3122a4cd63b0fc2 (image=quay.io/ceph/ceph:v18, name=festive_edison, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:19:39 np0005480824 podman[76554]: 2025-10-11 03:19:39.599583634 +0000 UTC m=+0.219279733 container attach f25f9fb360f5d17e6c33ebb0a8020cc485f329b7a73a4774e3122a4cd63b0fc2 (image=quay.io/ceph/ceph:v18, name=festive_edison, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:19:39 np0005480824 ceph-mgr[74617]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 10 23:19:39 np0005480824 podman[76653]: 2025-10-11 03:19:39.821532928 +0000 UTC m=+0.040663319 container create 452bb881cbc2a6a8df5eb2a6ffdb6722a9eb9a145c0cab85fc3531ab02425df9 (image=quay.io/ceph/ceph:v18, name=gifted_volhard, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:19:39 np0005480824 systemd[1]: Started libpod-conmon-452bb881cbc2a6a8df5eb2a6ffdb6722a9eb9a145c0cab85fc3531ab02425df9.scope.
Oct 10 23:19:39 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:19:39 np0005480824 podman[76653]: 2025-10-11 03:19:39.887147697 +0000 UTC m=+0.106278138 container init 452bb881cbc2a6a8df5eb2a6ffdb6722a9eb9a145c0cab85fc3531ab02425df9 (image=quay.io/ceph/ceph:v18, name=gifted_volhard, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Oct 10 23:19:39 np0005480824 podman[76653]: 2025-10-11 03:19:39.896674979 +0000 UTC m=+0.115805340 container start 452bb881cbc2a6a8df5eb2a6ffdb6722a9eb9a145c0cab85fc3531ab02425df9 (image=quay.io/ceph/ceph:v18, name=gifted_volhard, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 10 23:19:39 np0005480824 podman[76653]: 2025-10-11 03:19:39.803279304 +0000 UTC m=+0.022409695 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:19:39 np0005480824 podman[76653]: 2025-10-11 03:19:39.901136328 +0000 UTC m=+0.120266709 container attach 452bb881cbc2a6a8df5eb2a6ffdb6722a9eb9a145c0cab85fc3531ab02425df9 (image=quay.io/ceph/ceph:v18, name=gifted_volhard, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True)
Oct 10 23:19:40 np0005480824 gifted_volhard[76669]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Oct 10 23:19:40 np0005480824 ceph-mgr[74617]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 23:19:40 np0005480824 systemd[1]: libpod-452bb881cbc2a6a8df5eb2a6ffdb6722a9eb9a145c0cab85fc3531ab02425df9.scope: Deactivated successfully.
Oct 10 23:19:40 np0005480824 conmon[76669]: conmon 452bb881cbc2a6a8df5e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-452bb881cbc2a6a8df5eb2a6ffdb6722a9eb9a145c0cab85fc3531ab02425df9.scope/container/memory.events
Oct 10 23:19:40 np0005480824 ceph-mgr[74617]: [cephadm INFO root] Saving service mon spec with placement count:5
Oct 10 23:19:40 np0005480824 ceph-mgr[74617]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Oct 10 23:19:40 np0005480824 podman[76653]: 2025-10-11 03:19:40.202783694 +0000 UTC m=+0.421914055 container died 452bb881cbc2a6a8df5eb2a6ffdb6722a9eb9a145c0cab85fc3531ab02425df9 (image=quay.io/ceph/ceph:v18, name=gifted_volhard, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:19:40 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:40 np0005480824 ceph-mon[74326]: Added host compute-0
Oct 10 23:19:40 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Oct 10 23:19:40 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:40 np0005480824 festive_edison[76620]: Scheduled mon update...
Oct 10 23:19:40 np0005480824 systemd[1]: var-lib-containers-storage-overlay-17d288c0b7284c63bca8935211099a086421947bf4d68fa39569b862142c2d9e-merged.mount: Deactivated successfully.
Oct 10 23:19:40 np0005480824 systemd[1]: libpod-f25f9fb360f5d17e6c33ebb0a8020cc485f329b7a73a4774e3122a4cd63b0fc2.scope: Deactivated successfully.
Oct 10 23:19:40 np0005480824 conmon[76620]: conmon f25f9fb360f5d17e6c33 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f25f9fb360f5d17e6c33ebb0a8020cc485f329b7a73a4774e3122a4cd63b0fc2.scope/container/memory.events
Oct 10 23:19:40 np0005480824 podman[76653]: 2025-10-11 03:19:40.253459757 +0000 UTC m=+0.472590118 container remove 452bb881cbc2a6a8df5eb2a6ffdb6722a9eb9a145c0cab85fc3531ab02425df9 (image=quay.io/ceph/ceph:v18, name=gifted_volhard, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:19:40 np0005480824 podman[76554]: 2025-10-11 03:19:40.257124044 +0000 UTC m=+0.876820113 container died f25f9fb360f5d17e6c33ebb0a8020cc485f329b7a73a4774e3122a4cd63b0fc2 (image=quay.io/ceph/ceph:v18, name=festive_edison, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 10 23:19:40 np0005480824 systemd[1]: libpod-conmon-452bb881cbc2a6a8df5eb2a6ffdb6722a9eb9a145c0cab85fc3531ab02425df9.scope: Deactivated successfully.
Oct 10 23:19:40 np0005480824 systemd[1]: var-lib-containers-storage-overlay-25e3940c8002e70644fc2b3c7b6d7e7cf5d9662f046b6732091bcaec00cc25b2-merged.mount: Deactivated successfully.
Oct 10 23:19:40 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0) v1
Oct 10 23:19:40 np0005480824 podman[76554]: 2025-10-11 03:19:40.315327877 +0000 UTC m=+0.935023946 container remove f25f9fb360f5d17e6c33ebb0a8020cc485f329b7a73a4774e3122a4cd63b0fc2 (image=quay.io/ceph/ceph:v18, name=festive_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 10 23:19:40 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:40 np0005480824 systemd[1]: libpod-conmon-f25f9fb360f5d17e6c33ebb0a8020cc485f329b7a73a4774e3122a4cd63b0fc2.scope: Deactivated successfully.
Oct 10 23:19:40 np0005480824 podman[76717]: 2025-10-11 03:19:40.398993955 +0000 UTC m=+0.052446051 container create f923975eb2fded699a4a55b229f04afc04a32ef48548fd0dacb4e48ed84c3b00 (image=quay.io/ceph/ceph:v18, name=hungry_goldberg, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:19:40 np0005480824 systemd[1]: Started libpod-conmon-f923975eb2fded699a4a55b229f04afc04a32ef48548fd0dacb4e48ed84c3b00.scope.
Oct 10 23:19:40 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:19:40 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b07f44e412c8de4a9c7269765c588cec480a5f1764f4693c3cc07bc3d295992/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:40 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b07f44e412c8de4a9c7269765c588cec480a5f1764f4693c3cc07bc3d295992/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:40 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b07f44e412c8de4a9c7269765c588cec480a5f1764f4693c3cc07bc3d295992/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:40 np0005480824 podman[76717]: 2025-10-11 03:19:40.381751858 +0000 UTC m=+0.035203954 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:19:40 np0005480824 podman[76717]: 2025-10-11 03:19:40.495262487 +0000 UTC m=+0.148714613 container init f923975eb2fded699a4a55b229f04afc04a32ef48548fd0dacb4e48ed84c3b00 (image=quay.io/ceph/ceph:v18, name=hungry_goldberg, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 10 23:19:40 np0005480824 podman[76717]: 2025-10-11 03:19:40.501077311 +0000 UTC m=+0.154529407 container start f923975eb2fded699a4a55b229f04afc04a32ef48548fd0dacb4e48ed84c3b00 (image=quay.io/ceph/ceph:v18, name=hungry_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:19:40 np0005480824 podman[76717]: 2025-10-11 03:19:40.504650676 +0000 UTC m=+0.158102762 container attach f923975eb2fded699a4a55b229f04afc04a32ef48548fd0dacb4e48ed84c3b00 (image=quay.io/ceph/ceph:v18, name=hungry_goldberg, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:19:40 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 10 23:19:40 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:41 np0005480824 ceph-mgr[74617]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 23:19:41 np0005480824 ceph-mgr[74617]: [cephadm INFO root] Saving service mgr spec with placement count:2
Oct 10 23:19:41 np0005480824 ceph-mgr[74617]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Oct 10 23:19:41 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct 10 23:19:41 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:41 np0005480824 hungry_goldberg[76781]: Scheduled mgr update...
Oct 10 23:19:41 np0005480824 systemd[1]: libpod-f923975eb2fded699a4a55b229f04afc04a32ef48548fd0dacb4e48ed84c3b00.scope: Deactivated successfully.
Oct 10 23:19:41 np0005480824 podman[76717]: 2025-10-11 03:19:41.049576051 +0000 UTC m=+0.703028147 container died f923975eb2fded699a4a55b229f04afc04a32ef48548fd0dacb4e48ed84c3b00 (image=quay.io/ceph/ceph:v18, name=hungry_goldberg, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:19:41 np0005480824 systemd[1]: var-lib-containers-storage-overlay-3b07f44e412c8de4a9c7269765c588cec480a5f1764f4693c3cc07bc3d295992-merged.mount: Deactivated successfully.
Oct 10 23:19:41 np0005480824 podman[76717]: 2025-10-11 03:19:41.091890192 +0000 UTC m=+0.745342288 container remove f923975eb2fded699a4a55b229f04afc04a32ef48548fd0dacb4e48ed84c3b00 (image=quay.io/ceph/ceph:v18, name=hungry_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 10 23:19:41 np0005480824 systemd[1]: libpod-conmon-f923975eb2fded699a4a55b229f04afc04a32ef48548fd0dacb4e48ed84c3b00.scope: Deactivated successfully.
Oct 10 23:19:41 np0005480824 podman[76967]: 2025-10-11 03:19:41.154127013 +0000 UTC m=+0.038302567 container create 5d38c23053230648ca12c8c77e77f2a5603e5bb11bd1a07689e393812ee99694 (image=quay.io/ceph/ceph:v18, name=goofy_greider, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:19:41 np0005480824 systemd[1]: Started libpod-conmon-5d38c23053230648ca12c8c77e77f2a5603e5bb11bd1a07689e393812ee99694.scope.
Oct 10 23:19:41 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:19:41 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e4d72edaf889591fc501a7ca769836d833ab87a81783c35bfaa8909c43fbac9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:41 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e4d72edaf889591fc501a7ca769836d833ab87a81783c35bfaa8909c43fbac9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:41 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e4d72edaf889591fc501a7ca769836d833ab87a81783c35bfaa8909c43fbac9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:41 np0005480824 ceph-mon[74326]: Saving service mon spec with placement count:5
Oct 10 23:19:41 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:41 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:41 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:41 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:41 np0005480824 podman[76967]: 2025-10-11 03:19:41.137902562 +0000 UTC m=+0.022078136 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:19:41 np0005480824 podman[76967]: 2025-10-11 03:19:41.236060874 +0000 UTC m=+0.120236448 container init 5d38c23053230648ca12c8c77e77f2a5603e5bb11bd1a07689e393812ee99694 (image=quay.io/ceph/ceph:v18, name=goofy_greider, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:19:41 np0005480824 podman[76967]: 2025-10-11 03:19:41.243230704 +0000 UTC m=+0.127406258 container start 5d38c23053230648ca12c8c77e77f2a5603e5bb11bd1a07689e393812ee99694 (image=quay.io/ceph/ceph:v18, name=goofy_greider, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 10 23:19:41 np0005480824 podman[76967]: 2025-10-11 03:19:41.246614674 +0000 UTC m=+0.130790278 container attach 5d38c23053230648ca12c8c77e77f2a5603e5bb11bd1a07689e393812ee99694 (image=quay.io/ceph/ceph:v18, name=goofy_greider, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 10 23:19:41 np0005480824 podman[77087]: 2025-10-11 03:19:41.610083882 +0000 UTC m=+0.072849236 container exec a848fe58749db588a5a4b8471e0c9916b9e4a1ccc899f04343e6491a43c45c05 (image=quay.io/ceph/ceph:v18, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:19:41 np0005480824 ceph-mgr[74617]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 10 23:19:41 np0005480824 ceph-mgr[74617]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 23:19:41 np0005480824 ceph-mgr[74617]: [cephadm INFO root] Saving service crash spec with placement *
Oct 10 23:19:41 np0005480824 ceph-mgr[74617]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Oct 10 23:19:41 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Oct 10 23:19:41 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:41 np0005480824 goofy_greider[77009]: Scheduled crash update...
Oct 10 23:19:41 np0005480824 systemd[1]: libpod-5d38c23053230648ca12c8c77e77f2a5603e5bb11bd1a07689e393812ee99694.scope: Deactivated successfully.
Oct 10 23:19:41 np0005480824 conmon[77009]: conmon 5d38c23053230648ca12 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5d38c23053230648ca12c8c77e77f2a5603e5bb11bd1a07689e393812ee99694.scope/container/memory.events
Oct 10 23:19:41 np0005480824 podman[76967]: 2025-10-11 03:19:41.833250638 +0000 UTC m=+0.717426192 container died 5d38c23053230648ca12c8c77e77f2a5603e5bb11bd1a07689e393812ee99694 (image=quay.io/ceph/ceph:v18, name=goofy_greider, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 10 23:19:41 np0005480824 systemd[1]: var-lib-containers-storage-overlay-3e4d72edaf889591fc501a7ca769836d833ab87a81783c35bfaa8909c43fbac9-merged.mount: Deactivated successfully.
Oct 10 23:19:41 np0005480824 podman[76967]: 2025-10-11 03:19:41.88398625 +0000 UTC m=+0.768161824 container remove 5d38c23053230648ca12c8c77e77f2a5603e5bb11bd1a07689e393812ee99694 (image=quay.io/ceph/ceph:v18, name=goofy_greider, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:19:41 np0005480824 systemd[1]: libpod-conmon-5d38c23053230648ca12c8c77e77f2a5603e5bb11bd1a07689e393812ee99694.scope: Deactivated successfully.
Oct 10 23:19:41 np0005480824 podman[77087]: 2025-10-11 03:19:41.91392355 +0000 UTC m=+0.376688894 container exec_died a848fe58749db588a5a4b8471e0c9916b9e4a1ccc899f04343e6491a43c45c05 (image=quay.io/ceph/ceph:v18, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mon-compute-0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 10 23:19:41 np0005480824 podman[77141]: 2025-10-11 03:19:41.954665412 +0000 UTC m=+0.051161793 container create ce1945ddff52912a56f580202439eb5413a580ac93692106412f7c868b0e5bc6 (image=quay.io/ceph/ceph:v18, name=stoic_gould, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 10 23:19:41 np0005480824 systemd[1]: Started libpod-conmon-ce1945ddff52912a56f580202439eb5413a580ac93692106412f7c868b0e5bc6.scope.
Oct 10 23:19:42 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:19:42 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9af4efb4486f6d87252f406189d133bcba1b165442fac85653e733c67794acf4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:42 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9af4efb4486f6d87252f406189d133bcba1b165442fac85653e733c67794acf4/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:42 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9af4efb4486f6d87252f406189d133bcba1b165442fac85653e733c67794acf4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:42 np0005480824 podman[77141]: 2025-10-11 03:19:41.937165241 +0000 UTC m=+0.033661632 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:19:42 np0005480824 podman[77141]: 2025-10-11 03:19:42.039126376 +0000 UTC m=+0.135622757 container init ce1945ddff52912a56f580202439eb5413a580ac93692106412f7c868b0e5bc6 (image=quay.io/ceph/ceph:v18, name=stoic_gould, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:19:42 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 10 23:19:42 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:42 np0005480824 podman[77141]: 2025-10-11 03:19:42.04674571 +0000 UTC m=+0.143242091 container start ce1945ddff52912a56f580202439eb5413a580ac93692106412f7c868b0e5bc6 (image=quay.io/ceph/ceph:v18, name=stoic_gould, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Oct 10 23:19:42 np0005480824 podman[77141]: 2025-10-11 03:19:42.051578486 +0000 UTC m=+0.148074867 container attach ce1945ddff52912a56f580202439eb5413a580ac93692106412f7c868b0e5bc6 (image=quay.io/ceph/ceph:v18, name=stoic_gould, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:19:42 np0005480824 ceph-mon[74326]: Saving service mgr spec with placement count:2
Oct 10 23:19:42 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:42 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:42 np0005480824 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 77323 (sysctl)
Oct 10 23:19:42 np0005480824 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Oct 10 23:19:42 np0005480824 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Oct 10 23:19:42 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0) v1
Oct 10 23:19:42 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2026784618' entity='client.admin' 
Oct 10 23:19:42 np0005480824 systemd[1]: libpod-ce1945ddff52912a56f580202439eb5413a580ac93692106412f7c868b0e5bc6.scope: Deactivated successfully.
Oct 10 23:19:42 np0005480824 podman[77141]: 2025-10-11 03:19:42.58278303 +0000 UTC m=+0.679279451 container died ce1945ddff52912a56f580202439eb5413a580ac93692106412f7c868b0e5bc6 (image=quay.io/ceph/ceph:v18, name=stoic_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507)
Oct 10 23:19:42 np0005480824 systemd[1]: var-lib-containers-storage-overlay-9af4efb4486f6d87252f406189d133bcba1b165442fac85653e733c67794acf4-merged.mount: Deactivated successfully.
Oct 10 23:19:42 np0005480824 podman[77141]: 2025-10-11 03:19:42.649512758 +0000 UTC m=+0.746009169 container remove ce1945ddff52912a56f580202439eb5413a580ac93692106412f7c868b0e5bc6 (image=quay.io/ceph/ceph:v18, name=stoic_gould, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 10 23:19:42 np0005480824 systemd[1]: libpod-conmon-ce1945ddff52912a56f580202439eb5413a580ac93692106412f7c868b0e5bc6.scope: Deactivated successfully.
Oct 10 23:19:42 np0005480824 podman[77344]: 2025-10-11 03:19:42.739974166 +0000 UTC m=+0.061710827 container create 1d26dae6782e1e598e681861a0f5d80426e5c838490a93a209624cfaf07d0c98 (image=quay.io/ceph/ceph:v18, name=optimistic_wozniak, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 10 23:19:42 np0005480824 systemd[1]: Started libpod-conmon-1d26dae6782e1e598e681861a0f5d80426e5c838490a93a209624cfaf07d0c98.scope.
Oct 10 23:19:42 np0005480824 podman[77344]: 2025-10-11 03:19:42.707175917 +0000 UTC m=+0.028912618 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:19:42 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:19:42 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5b8aadc61da2638abed27e0cd3cc5f5ba0d7ab0ad306c7537f5c93c5b84b4fc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:42 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5b8aadc61da2638abed27e0cd3cc5f5ba0d7ab0ad306c7537f5c93c5b84b4fc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:42 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5b8aadc61da2638abed27e0cd3cc5f5ba0d7ab0ad306c7537f5c93c5b84b4fc/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:42 np0005480824 podman[77344]: 2025-10-11 03:19:42.857868915 +0000 UTC m=+0.179605546 container init 1d26dae6782e1e598e681861a0f5d80426e5c838490a93a209624cfaf07d0c98 (image=quay.io/ceph/ceph:v18, name=optimistic_wozniak, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:19:42 np0005480824 podman[77344]: 2025-10-11 03:19:42.867851896 +0000 UTC m=+0.189588557 container start 1d26dae6782e1e598e681861a0f5d80426e5c838490a93a209624cfaf07d0c98 (image=quay.io/ceph/ceph:v18, name=optimistic_wozniak, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 10 23:19:42 np0005480824 podman[77344]: 2025-10-11 03:19:42.8725801 +0000 UTC m=+0.194316751 container attach 1d26dae6782e1e598e681861a0f5d80426e5c838490a93a209624cfaf07d0c98 (image=quay.io/ceph/ceph:v18, name=optimistic_wozniak, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 10 23:19:43 np0005480824 ceph-mgr[74617]: log_channel(audit) log [DBG] : from='client.14162 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 23:19:43 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0) v1
Oct 10 23:19:43 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:43 np0005480824 systemd[1]: libpod-1d26dae6782e1e598e681861a0f5d80426e5c838490a93a209624cfaf07d0c98.scope: Deactivated successfully.
Oct 10 23:19:43 np0005480824 podman[77344]: 2025-10-11 03:19:43.460054779 +0000 UTC m=+0.781791400 container died 1d26dae6782e1e598e681861a0f5d80426e5c838490a93a209624cfaf07d0c98 (image=quay.io/ceph/ceph:v18, name=optimistic_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 10 23:19:43 np0005480824 systemd[1]: var-lib-containers-storage-overlay-d5b8aadc61da2638abed27e0cd3cc5f5ba0d7ab0ad306c7537f5c93c5b84b4fc-merged.mount: Deactivated successfully.
Oct 10 23:19:43 np0005480824 podman[77344]: 2025-10-11 03:19:43.530894096 +0000 UTC m=+0.852630737 container remove 1d26dae6782e1e598e681861a0f5d80426e5c838490a93a209624cfaf07d0c98 (image=quay.io/ceph/ceph:v18, name=optimistic_wozniak, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:19:43 np0005480824 systemd[1]: libpod-conmon-1d26dae6782e1e598e681861a0f5d80426e5c838490a93a209624cfaf07d0c98.scope: Deactivated successfully.
Oct 10 23:19:43 np0005480824 ceph-mon[74326]: Saving service crash spec with placement *
Oct 10 23:19:43 np0005480824 ceph-mon[74326]: from='client.? 192.168.122.100:0/2026784618' entity='client.admin' 
Oct 10 23:19:43 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:43 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:19:43 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 10 23:19:43 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:43 np0005480824 podman[77534]: 2025-10-11 03:19:43.610030652 +0000 UTC m=+0.051382959 container create e4b35d38a0dbb9faf2ca515ebd5f1d981f63e343227f77299dc7e27a085f1151 (image=quay.io/ceph/ceph:v18, name=ecstatic_booth, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 10 23:19:43 np0005480824 systemd[1]: Started libpod-conmon-e4b35d38a0dbb9faf2ca515ebd5f1d981f63e343227f77299dc7e27a085f1151.scope.
Oct 10 23:19:43 np0005480824 podman[77534]: 2025-10-11 03:19:43.582486598 +0000 UTC m=+0.023838985 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:19:43 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:19:43 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45327c6c2ed8cacbdcfcb8fe8eb9986c7292e01c6ecee88575fa5be9ddecc446/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:43 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45327c6c2ed8cacbdcfcb8fe8eb9986c7292e01c6ecee88575fa5be9ddecc446/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:43 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45327c6c2ed8cacbdcfcb8fe8eb9986c7292e01c6ecee88575fa5be9ddecc446/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:43 np0005480824 podman[77534]: 2025-10-11 03:19:43.706405623 +0000 UTC m=+0.147758010 container init e4b35d38a0dbb9faf2ca515ebd5f1d981f63e343227f77299dc7e27a085f1151 (image=quay.io/ceph/ceph:v18, name=ecstatic_booth, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:19:43 np0005480824 podman[77534]: 2025-10-11 03:19:43.71997878 +0000 UTC m=+0.161331107 container start e4b35d38a0dbb9faf2ca515ebd5f1d981f63e343227f77299dc7e27a085f1151 (image=quay.io/ceph/ceph:v18, name=ecstatic_booth, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 10 23:19:43 np0005480824 podman[77534]: 2025-10-11 03:19:43.729754295 +0000 UTC m=+0.171106632 container attach e4b35d38a0dbb9faf2ca515ebd5f1d981f63e343227f77299dc7e27a085f1151 (image=quay.io/ceph/ceph:v18, name=ecstatic_booth, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:19:43 np0005480824 ceph-mgr[74617]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 10 23:19:44 np0005480824 podman[77713]: 2025-10-11 03:19:44.252990088 +0000 UTC m=+0.039994895 container create 0ed0363dc5660348386c800801033ff1d3693e3c6647171034c2c81dcd38cebf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_knuth, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 10 23:19:44 np0005480824 systemd[1]: Started libpod-conmon-0ed0363dc5660348386c800801033ff1d3693e3c6647171034c2c81dcd38cebf.scope.
Oct 10 23:19:44 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:19:44 np0005480824 podman[77713]: 2025-10-11 03:19:44.234466252 +0000 UTC m=+0.021471069 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:19:44 np0005480824 podman[77713]: 2025-10-11 03:19:44.338791864 +0000 UTC m=+0.125796671 container init 0ed0363dc5660348386c800801033ff1d3693e3c6647171034c2c81dcd38cebf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507)
Oct 10 23:19:44 np0005480824 podman[77713]: 2025-10-11 03:19:44.346202082 +0000 UTC m=+0.133206869 container start 0ed0363dc5660348386c800801033ff1d3693e3c6647171034c2c81dcd38cebf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 10 23:19:44 np0005480824 intelligent_knuth[77731]: 167 167
Oct 10 23:19:44 np0005480824 systemd[1]: libpod-0ed0363dc5660348386c800801033ff1d3693e3c6647171034c2c81dcd38cebf.scope: Deactivated successfully.
Oct 10 23:19:44 np0005480824 podman[77713]: 2025-10-11 03:19:44.353417856 +0000 UTC m=+0.140422693 container attach 0ed0363dc5660348386c800801033ff1d3693e3c6647171034c2c81dcd38cebf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 10 23:19:44 np0005480824 podman[77713]: 2025-10-11 03:19:44.35398652 +0000 UTC m=+0.140991317 container died 0ed0363dc5660348386c800801033ff1d3693e3c6647171034c2c81dcd38cebf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 10 23:19:44 np0005480824 ceph-mgr[74617]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 23:19:44 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct 10 23:19:44 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:44 np0005480824 ceph-mgr[74617]: [cephadm INFO root] Added label _admin to host compute-0
Oct 10 23:19:44 np0005480824 ceph-mgr[74617]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Oct 10 23:19:44 np0005480824 ecstatic_booth[77573]: Added label _admin to host compute-0
Oct 10 23:19:44 np0005480824 systemd[1]: var-lib-containers-storage-overlay-4415ced712e2c1d849cecbe85f4bacbc0f9afb157724c141b9289496dbd5cd68-merged.mount: Deactivated successfully.
Oct 10 23:19:44 np0005480824 podman[77713]: 2025-10-11 03:19:44.398923222 +0000 UTC m=+0.185928009 container remove 0ed0363dc5660348386c800801033ff1d3693e3c6647171034c2c81dcd38cebf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_knuth, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 10 23:19:44 np0005480824 systemd[1]: libpod-conmon-0ed0363dc5660348386c800801033ff1d3693e3c6647171034c2c81dcd38cebf.scope: Deactivated successfully.
Oct 10 23:19:44 np0005480824 systemd[1]: libpod-e4b35d38a0dbb9faf2ca515ebd5f1d981f63e343227f77299dc7e27a085f1151.scope: Deactivated successfully.
Oct 10 23:19:44 np0005480824 podman[77534]: 2025-10-11 03:19:44.416370102 +0000 UTC m=+0.857722419 container died e4b35d38a0dbb9faf2ca515ebd5f1d981f63e343227f77299dc7e27a085f1151 (image=quay.io/ceph/ceph:v18, name=ecstatic_booth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 10 23:19:44 np0005480824 systemd[1]: var-lib-containers-storage-overlay-45327c6c2ed8cacbdcfcb8fe8eb9986c7292e01c6ecee88575fa5be9ddecc446-merged.mount: Deactivated successfully.
Oct 10 23:19:44 np0005480824 podman[77534]: 2025-10-11 03:19:44.467010213 +0000 UTC m=+0.908362510 container remove e4b35d38a0dbb9faf2ca515ebd5f1d981f63e343227f77299dc7e27a085f1151 (image=quay.io/ceph/ceph:v18, name=ecstatic_booth, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Oct 10 23:19:44 np0005480824 systemd[1]: libpod-conmon-e4b35d38a0dbb9faf2ca515ebd5f1d981f63e343227f77299dc7e27a085f1151.scope: Deactivated successfully.
Oct 10 23:19:44 np0005480824 podman[77767]: 2025-10-11 03:19:44.539275783 +0000 UTC m=+0.049229607 container create 899701d8813dbe056f527a44b3cf76ebf9df035b412168f6e8e4eb2204e2216e (image=quay.io/ceph/ceph:v18, name=beautiful_jackson, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:19:44 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:44 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:44 np0005480824 systemd[1]: Started libpod-conmon-899701d8813dbe056f527a44b3cf76ebf9df035b412168f6e8e4eb2204e2216e.scope.
Oct 10 23:19:44 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:19:44 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff991e985ec792f89eb7884358a668eb3ffff623d9c3f066d7515127dd3282f9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:44 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff991e985ec792f89eb7884358a668eb3ffff623d9c3f066d7515127dd3282f9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:44 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff991e985ec792f89eb7884358a668eb3ffff623d9c3f066d7515127dd3282f9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:44 np0005480824 podman[77767]: 2025-10-11 03:19:44.516022243 +0000 UTC m=+0.025976087 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:19:44 np0005480824 podman[77767]: 2025-10-11 03:19:44.624973446 +0000 UTC m=+0.134927260 container init 899701d8813dbe056f527a44b3cf76ebf9df035b412168f6e8e4eb2204e2216e (image=quay.io/ceph/ceph:v18, name=beautiful_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Oct 10 23:19:44 np0005480824 podman[77767]: 2025-10-11 03:19:44.636862343 +0000 UTC m=+0.146816137 container start 899701d8813dbe056f527a44b3cf76ebf9df035b412168f6e8e4eb2204e2216e (image=quay.io/ceph/ceph:v18, name=beautiful_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:19:44 np0005480824 podman[77767]: 2025-10-11 03:19:44.640509461 +0000 UTC m=+0.150463275 container attach 899701d8813dbe056f527a44b3cf76ebf9df035b412168f6e8e4eb2204e2216e (image=quay.io/ceph/ceph:v18, name=beautiful_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 10 23:19:45 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target_autotune}] v 0) v1
Oct 10 23:19:45 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4005913559' entity='client.admin' 
Oct 10 23:19:45 np0005480824 systemd[1]: libpod-899701d8813dbe056f527a44b3cf76ebf9df035b412168f6e8e4eb2204e2216e.scope: Deactivated successfully.
Oct 10 23:19:45 np0005480824 podman[77767]: 2025-10-11 03:19:45.237910699 +0000 UTC m=+0.747864503 container died 899701d8813dbe056f527a44b3cf76ebf9df035b412168f6e8e4eb2204e2216e (image=quay.io/ceph/ceph:v18, name=beautiful_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:19:45 np0005480824 systemd[1]: var-lib-containers-storage-overlay-ff991e985ec792f89eb7884358a668eb3ffff623d9c3f066d7515127dd3282f9-merged.mount: Deactivated successfully.
Oct 10 23:19:45 np0005480824 podman[77767]: 2025-10-11 03:19:45.285792242 +0000 UTC m=+0.795746036 container remove 899701d8813dbe056f527a44b3cf76ebf9df035b412168f6e8e4eb2204e2216e (image=quay.io/ceph/ceph:v18, name=beautiful_jackson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:19:45 np0005480824 systemd[1]: libpod-conmon-899701d8813dbe056f527a44b3cf76ebf9df035b412168f6e8e4eb2204e2216e.scope: Deactivated successfully.
Oct 10 23:19:45 np0005480824 podman[77822]: 2025-10-11 03:19:45.343528644 +0000 UTC m=+0.040360924 container create c9a1d2001cf5cc27866730b07b995f35018945f8a88b116154a1785a3e6cd4dc (image=quay.io/ceph/ceph:v18, name=recursing_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 10 23:19:45 np0005480824 systemd[1]: Started libpod-conmon-c9a1d2001cf5cc27866730b07b995f35018945f8a88b116154a1785a3e6cd4dc.scope.
Oct 10 23:19:45 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:19:45 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb04bb12c153e301da07dac99e045c6541b3e7bdd8e1285449b1d786d1ba747a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:45 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb04bb12c153e301da07dac99e045c6541b3e7bdd8e1285449b1d786d1ba747a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:45 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb04bb12c153e301da07dac99e045c6541b3e7bdd8e1285449b1d786d1ba747a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:45 np0005480824 podman[77822]: 2025-10-11 03:19:45.323821529 +0000 UTC m=+0.020653839 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:19:45 np0005480824 podman[77822]: 2025-10-11 03:19:45.435126819 +0000 UTC m=+0.131959149 container init c9a1d2001cf5cc27866730b07b995f35018945f8a88b116154a1785a3e6cd4dc (image=quay.io/ceph/ceph:v18, name=recursing_edison, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True)
Oct 10 23:19:45 np0005480824 podman[77822]: 2025-10-11 03:19:45.441488783 +0000 UTC m=+0.138321063 container start c9a1d2001cf5cc27866730b07b995f35018945f8a88b116154a1785a3e6cd4dc (image=quay.io/ceph/ceph:v18, name=recursing_edison, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:19:45 np0005480824 podman[77822]: 2025-10-11 03:19:45.455366817 +0000 UTC m=+0.152199157 container attach c9a1d2001cf5cc27866730b07b995f35018945f8a88b116154a1785a3e6cd4dc (image=quay.io/ceph/ceph:v18, name=recursing_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 10 23:19:45 np0005480824 ceph-mon[74326]: Added label _admin to host compute-0
Oct 10 23:19:45 np0005480824 ceph-mon[74326]: from='client.? 192.168.122.100:0/4005913559' entity='client.admin' 
Oct 10 23:19:45 np0005480824 ceph-mgr[74617]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 10 23:19:46 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0) v1
Oct 10 23:19:46 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/335687843' entity='client.admin' 
Oct 10 23:19:46 np0005480824 recursing_edison[77838]: set mgr/dashboard/cluster/status
Oct 10 23:19:46 np0005480824 systemd[1]: libpod-c9a1d2001cf5cc27866730b07b995f35018945f8a88b116154a1785a3e6cd4dc.scope: Deactivated successfully.
Oct 10 23:19:46 np0005480824 podman[77822]: 2025-10-11 03:19:46.139119056 +0000 UTC m=+0.835951346 container died c9a1d2001cf5cc27866730b07b995f35018945f8a88b116154a1785a3e6cd4dc (image=quay.io/ceph/ceph:v18, name=recursing_edison, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:19:46 np0005480824 systemd[1]: var-lib-containers-storage-overlay-eb04bb12c153e301da07dac99e045c6541b3e7bdd8e1285449b1d786d1ba747a-merged.mount: Deactivated successfully.
Oct 10 23:19:46 np0005480824 podman[77822]: 2025-10-11 03:19:46.181106287 +0000 UTC m=+0.877938607 container remove c9a1d2001cf5cc27866730b07b995f35018945f8a88b116154a1785a3e6cd4dc (image=quay.io/ceph/ceph:v18, name=recursing_edison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:19:46 np0005480824 systemd[1]: libpod-conmon-c9a1d2001cf5cc27866730b07b995f35018945f8a88b116154a1785a3e6cd4dc.scope: Deactivated successfully.
Oct 10 23:19:46 np0005480824 podman[77883]: 2025-10-11 03:19:46.446021347 +0000 UTC m=+0.072008405 container create 4a42f0b1c1ada03482f7ba4f772f108197bf721ffecd584da5c98f73417ae345 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_franklin, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:19:46 np0005480824 systemd[1]: Started libpod-conmon-4a42f0b1c1ada03482f7ba4f772f108197bf721ffecd584da5c98f73417ae345.scope.
Oct 10 23:19:46 np0005480824 podman[77883]: 2025-10-11 03:19:46.414204861 +0000 UTC m=+0.040191979 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:19:46 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:19:46 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/079aea4f60b79cba26ca5283b9c8c5165be8a729cef5dd2caeafbd46dda1b5dc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:46 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/079aea4f60b79cba26ca5283b9c8c5165be8a729cef5dd2caeafbd46dda1b5dc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:46 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/079aea4f60b79cba26ca5283b9c8c5165be8a729cef5dd2caeafbd46dda1b5dc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:46 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/079aea4f60b79cba26ca5283b9c8c5165be8a729cef5dd2caeafbd46dda1b5dc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:46 np0005480824 podman[77883]: 2025-10-11 03:19:46.553811063 +0000 UTC m=+0.179798131 container init 4a42f0b1c1ada03482f7ba4f772f108197bf721ffecd584da5c98f73417ae345 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_franklin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:19:46 np0005480824 podman[77883]: 2025-10-11 03:19:46.560763381 +0000 UTC m=+0.186750419 container start 4a42f0b1c1ada03482f7ba4f772f108197bf721ffecd584da5c98f73417ae345 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_franklin, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:19:46 np0005480824 podman[77883]: 2025-10-11 03:19:46.565640928 +0000 UTC m=+0.191628006 container attach 4a42f0b1c1ada03482f7ba4f772f108197bf721ffecd584da5c98f73417ae345 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_franklin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 10 23:19:46 np0005480824 python3[77929]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 92cfe4d4-4917-5be1-9d00-73758793a62b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:19:46 np0005480824 podman[77930]: 2025-10-11 03:19:46.795652948 +0000 UTC m=+0.029836740 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:19:46 np0005480824 podman[77930]: 2025-10-11 03:19:46.998512754 +0000 UTC m=+0.232696506 container create 687c65e1de5380b254ec95fefc62204c8db8408f81429e4489dd7eea1925fe31 (image=quay.io/ceph/ceph:v18, name=bold_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 10 23:19:47 np0005480824 ceph-mon[74326]: from='client.? 192.168.122.100:0/335687843' entity='client.admin' 
Oct 10 23:19:47 np0005480824 systemd[1]: Started libpod-conmon-687c65e1de5380b254ec95fefc62204c8db8408f81429e4489dd7eea1925fe31.scope.
Oct 10 23:19:47 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:19:47 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfc30270d098c26d641e67c7b3b8ea17ec48078acd096d312f57bd329b2960da/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:47 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfc30270d098c26d641e67c7b3b8ea17ec48078acd096d312f57bd329b2960da/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:47 np0005480824 podman[77930]: 2025-10-11 03:19:47.212926678 +0000 UTC m=+0.447110520 container init 687c65e1de5380b254ec95fefc62204c8db8408f81429e4489dd7eea1925fe31 (image=quay.io/ceph/ceph:v18, name=bold_agnesi, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 10 23:19:47 np0005480824 podman[77930]: 2025-10-11 03:19:47.221629418 +0000 UTC m=+0.455813200 container start 687c65e1de5380b254ec95fefc62204c8db8408f81429e4489dd7eea1925fe31 (image=quay.io/ceph/ceph:v18, name=bold_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 10 23:19:47 np0005480824 podman[77930]: 2025-10-11 03:19:47.229662662 +0000 UTC m=+0.463846494 container attach 687c65e1de5380b254ec95fefc62204c8db8408f81429e4489dd7eea1925fe31 (image=quay.io/ceph/ceph:v18, name=bold_agnesi, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:19:47 np0005480824 ceph-mgr[74617]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Oct 10 23:19:47 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 23:19:47 np0005480824 ceph-mon[74326]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Oct 10 23:19:47 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0) v1
Oct 10 23:19:47 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2179296696' entity='client.admin' 
Oct 10 23:19:47 np0005480824 systemd[1]: libpod-687c65e1de5380b254ec95fefc62204c8db8408f81429e4489dd7eea1925fe31.scope: Deactivated successfully.
Oct 10 23:19:47 np0005480824 podman[77930]: 2025-10-11 03:19:47.836894426 +0000 UTC m=+1.071078168 container died 687c65e1de5380b254ec95fefc62204c8db8408f81429e4489dd7eea1925fe31 (image=quay.io/ceph/ceph:v18, name=bold_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 10 23:19:47 np0005480824 systemd[1]: var-lib-containers-storage-overlay-bfc30270d098c26d641e67c7b3b8ea17ec48078acd096d312f57bd329b2960da-merged.mount: Deactivated successfully.
Oct 10 23:19:47 np0005480824 podman[77930]: 2025-10-11 03:19:47.884972635 +0000 UTC m=+1.119156377 container remove 687c65e1de5380b254ec95fefc62204c8db8408f81429e4489dd7eea1925fe31 (image=quay.io/ceph/ceph:v18, name=bold_agnesi, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:19:47 np0005480824 systemd[1]: libpod-conmon-687c65e1de5380b254ec95fefc62204c8db8408f81429e4489dd7eea1925fe31.scope: Deactivated successfully.
Oct 10 23:19:48 np0005480824 ceph-mon[74326]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Oct 10 23:19:48 np0005480824 ceph-mon[74326]: from='client.? 192.168.122.100:0/2179296696' entity='client.admin' 
Oct 10 23:19:48 np0005480824 infallible_franklin[77899]: [
Oct 10 23:19:48 np0005480824 infallible_franklin[77899]:    {
Oct 10 23:19:48 np0005480824 infallible_franklin[77899]:        "available": false,
Oct 10 23:19:48 np0005480824 infallible_franklin[77899]:        "ceph_device": false,
Oct 10 23:19:48 np0005480824 infallible_franklin[77899]:        "device_id": "QEMU_DVD-ROM_QM00001",
Oct 10 23:19:48 np0005480824 infallible_franklin[77899]:        "lsm_data": {},
Oct 10 23:19:48 np0005480824 infallible_franklin[77899]:        "lvs": [],
Oct 10 23:19:48 np0005480824 infallible_franklin[77899]:        "path": "/dev/sr0",
Oct 10 23:19:48 np0005480824 infallible_franklin[77899]:        "rejected_reasons": [
Oct 10 23:19:48 np0005480824 infallible_franklin[77899]:            "Has a FileSystem",
Oct 10 23:19:48 np0005480824 infallible_franklin[77899]:            "Insufficient space (<5GB)"
Oct 10 23:19:48 np0005480824 infallible_franklin[77899]:        ],
Oct 10 23:19:48 np0005480824 infallible_franklin[77899]:        "sys_api": {
Oct 10 23:19:48 np0005480824 infallible_franklin[77899]:            "actuators": null,
Oct 10 23:19:48 np0005480824 infallible_franklin[77899]:            "device_nodes": "sr0",
Oct 10 23:19:48 np0005480824 infallible_franklin[77899]:            "devname": "sr0",
Oct 10 23:19:48 np0005480824 infallible_franklin[77899]:            "human_readable_size": "482.00 KB",
Oct 10 23:19:48 np0005480824 infallible_franklin[77899]:            "id_bus": "ata",
Oct 10 23:19:48 np0005480824 infallible_franklin[77899]:            "model": "QEMU DVD-ROM",
Oct 10 23:19:48 np0005480824 infallible_franklin[77899]:            "nr_requests": "2",
Oct 10 23:19:48 np0005480824 infallible_franklin[77899]:            "parent": "/dev/sr0",
Oct 10 23:19:48 np0005480824 infallible_franklin[77899]:            "partitions": {},
Oct 10 23:19:48 np0005480824 infallible_franklin[77899]:            "path": "/dev/sr0",
Oct 10 23:19:48 np0005480824 infallible_franklin[77899]:            "removable": "1",
Oct 10 23:19:48 np0005480824 infallible_franklin[77899]:            "rev": "2.5+",
Oct 10 23:19:48 np0005480824 infallible_franklin[77899]:            "ro": "0",
Oct 10 23:19:48 np0005480824 infallible_franklin[77899]:            "rotational": "0",
Oct 10 23:19:48 np0005480824 infallible_franklin[77899]:            "sas_address": "",
Oct 10 23:19:48 np0005480824 infallible_franklin[77899]:            "sas_device_handle": "",
Oct 10 23:19:48 np0005480824 infallible_franklin[77899]:            "scheduler_mode": "mq-deadline",
Oct 10 23:19:48 np0005480824 infallible_franklin[77899]:            "sectors": 0,
Oct 10 23:19:48 np0005480824 infallible_franklin[77899]:            "sectorsize": "2048",
Oct 10 23:19:48 np0005480824 infallible_franklin[77899]:            "size": 493568.0,
Oct 10 23:19:48 np0005480824 infallible_franklin[77899]:            "support_discard": "2048",
Oct 10 23:19:48 np0005480824 infallible_franklin[77899]:            "type": "disk",
Oct 10 23:19:48 np0005480824 infallible_franklin[77899]:            "vendor": "QEMU"
Oct 10 23:19:48 np0005480824 infallible_franklin[77899]:        }
Oct 10 23:19:48 np0005480824 infallible_franklin[77899]:    }
Oct 10 23:19:48 np0005480824 infallible_franklin[77899]: ]
Oct 10 23:19:48 np0005480824 systemd[1]: libpod-4a42f0b1c1ada03482f7ba4f772f108197bf721ffecd584da5c98f73417ae345.scope: Deactivated successfully.
Oct 10 23:19:48 np0005480824 systemd[1]: libpod-4a42f0b1c1ada03482f7ba4f772f108197bf721ffecd584da5c98f73417ae345.scope: Consumed 1.671s CPU time.
Oct 10 23:19:48 np0005480824 podman[77883]: 2025-10-11 03:19:48.185082903 +0000 UTC m=+1.811069951 container died 4a42f0b1c1ada03482f7ba4f772f108197bf721ffecd584da5c98f73417ae345 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:19:48 np0005480824 systemd[1]: var-lib-containers-storage-overlay-079aea4f60b79cba26ca5283b9c8c5165be8a729cef5dd2caeafbd46dda1b5dc-merged.mount: Deactivated successfully.
Oct 10 23:19:48 np0005480824 podman[77883]: 2025-10-11 03:19:48.238743985 +0000 UTC m=+1.864731043 container remove 4a42f0b1c1ada03482f7ba4f772f108197bf721ffecd584da5c98f73417ae345 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_franklin, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:19:48 np0005480824 systemd[1]: libpod-conmon-4a42f0b1c1ada03482f7ba4f772f108197bf721ffecd584da5c98f73417ae345.scope: Deactivated successfully.
Oct 10 23:19:48 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 10 23:19:48 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:48 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 10 23:19:48 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:48 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 10 23:19:48 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:48 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 10 23:19:48 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:48 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 10 23:19:48 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 10 23:19:48 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:19:48 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:19:48 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 10 23:19:48 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:19:48 np0005480824 ceph-mgr[74617]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Oct 10 23:19:48 np0005480824 ceph-mgr[74617]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Oct 10 23:19:48 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:19:48 np0005480824 ansible-async_wrapper.py[80202]: Invoked with j173977449056 30 /home/zuul/.ansible/tmp/ansible-tmp-1760152788.30087-33032-225440474030427/AnsiballZ_command.py _
Oct 10 23:19:48 np0005480824 ansible-async_wrapper.py[80253]: Starting module and watcher
Oct 10 23:19:48 np0005480824 ansible-async_wrapper.py[80253]: Start watching 80256 (30)
Oct 10 23:19:48 np0005480824 ansible-async_wrapper.py[80256]: Start module (80256)
Oct 10 23:19:48 np0005480824 ansible-async_wrapper.py[80202]: Return async_wrapper task started.
Oct 10 23:19:49 np0005480824 python3[80257]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 92cfe4d4-4917-5be1-9d00-73758793a62b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:19:49 np0005480824 podman[80311]: 2025-10-11 03:19:49.183428388 +0000 UTC m=+0.057317661 container create 923c5ea3fc1afa37232c7c261f07b077be7eee0bc88cc184f2b6390f32f691f8 (image=quay.io/ceph/ceph:v18, name=unruffled_hermann, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 10 23:19:49 np0005480824 systemd[1]: Started libpod-conmon-923c5ea3fc1afa37232c7c261f07b077be7eee0bc88cc184f2b6390f32f691f8.scope.
Oct 10 23:19:49 np0005480824 podman[80311]: 2025-10-11 03:19:49.160900426 +0000 UTC m=+0.034789719 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:19:49 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:19:49 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5c9e8d9d2a6c911fa9f71ef415cb74863f1e67e977d39c9bf5a03f5f8425e9e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:49 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5c9e8d9d2a6c911fa9f71ef415cb74863f1e67e977d39c9bf5a03f5f8425e9e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:49 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:49 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:49 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:49 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:49 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 10 23:19:49 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:19:49 np0005480824 ceph-mon[74326]: Updating compute-0:/etc/ceph/ceph.conf
Oct 10 23:19:49 np0005480824 podman[80311]: 2025-10-11 03:19:49.307073746 +0000 UTC m=+0.180963099 container init 923c5ea3fc1afa37232c7c261f07b077be7eee0bc88cc184f2b6390f32f691f8 (image=quay.io/ceph/ceph:v18, name=unruffled_hermann, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 10 23:19:49 np0005480824 podman[80311]: 2025-10-11 03:19:49.318345278 +0000 UTC m=+0.192234581 container start 923c5ea3fc1afa37232c7c261f07b077be7eee0bc88cc184f2b6390f32f691f8 (image=quay.io/ceph/ceph:v18, name=unruffled_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:19:49 np0005480824 podman[80311]: 2025-10-11 03:19:49.324464085 +0000 UTC m=+0.198353438 container attach 923c5ea3fc1afa37232c7c261f07b077be7eee0bc88cc184f2b6390f32f691f8 (image=quay.io/ceph/ceph:v18, name=unruffled_hermann, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 10 23:19:49 np0005480824 ceph-mgr[74617]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/92cfe4d4-4917-5be1-9d00-73758793a62b/config/ceph.conf
Oct 10 23:19:49 np0005480824 ceph-mgr[74617]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/92cfe4d4-4917-5be1-9d00-73758793a62b/config/ceph.conf
Oct 10 23:19:49 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 23:19:49 np0005480824 ceph-mgr[74617]: log_channel(audit) log [DBG] : from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 10 23:19:49 np0005480824 unruffled_hermann[80372]: 
Oct 10 23:19:49 np0005480824 unruffled_hermann[80372]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct 10 23:19:49 np0005480824 systemd[1]: libpod-923c5ea3fc1afa37232c7c261f07b077be7eee0bc88cc184f2b6390f32f691f8.scope: Deactivated successfully.
Oct 10 23:19:49 np0005480824 podman[80311]: 2025-10-11 03:19:49.923269418 +0000 UTC m=+0.797158681 container died 923c5ea3fc1afa37232c7c261f07b077be7eee0bc88cc184f2b6390f32f691f8 (image=quay.io/ceph/ceph:v18, name=unruffled_hermann, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 10 23:19:50 np0005480824 systemd[1]: var-lib-containers-storage-overlay-e5c9e8d9d2a6c911fa9f71ef415cb74863f1e67e977d39c9bf5a03f5f8425e9e-merged.mount: Deactivated successfully.
Oct 10 23:19:50 np0005480824 python3[80783]: ansible-ansible.legacy.async_status Invoked with jid=j173977449056.80202 mode=status _async_dir=/root/.ansible_async
Oct 10 23:19:50 np0005480824 podman[80311]: 2025-10-11 03:19:50.47972676 +0000 UTC m=+1.353616013 container remove 923c5ea3fc1afa37232c7c261f07b077be7eee0bc88cc184f2b6390f32f691f8 (image=quay.io/ceph/ceph:v18, name=unruffled_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 10 23:19:50 np0005480824 systemd[1]: libpod-conmon-923c5ea3fc1afa37232c7c261f07b077be7eee0bc88cc184f2b6390f32f691f8.scope: Deactivated successfully.
Oct 10 23:19:50 np0005480824 ansible-async_wrapper.py[80256]: Module complete (80256)
Oct 10 23:19:50 np0005480824 ceph-mgr[74617]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 10 23:19:50 np0005480824 ceph-mgr[74617]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 10 23:19:51 np0005480824 ceph-mon[74326]: Updating compute-0:/var/lib/ceph/92cfe4d4-4917-5be1-9d00-73758793a62b/config/ceph.conf
Oct 10 23:19:51 np0005480824 python3[81261]: ansible-ansible.legacy.async_status Invoked with jid=j173977449056.80202 mode=status _async_dir=/root/.ansible_async
Oct 10 23:19:51 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 23:19:52 np0005480824 python3[81448]: ansible-ansible.legacy.async_status Invoked with jid=j173977449056.80202 mode=cleanup _async_dir=/root/.ansible_async
Oct 10 23:19:52 np0005480824 ceph-mgr[74617]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/92cfe4d4-4917-5be1-9d00-73758793a62b/config/ceph.client.admin.keyring
Oct 10 23:19:52 np0005480824 ceph-mgr[74617]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/92cfe4d4-4917-5be1-9d00-73758793a62b/config/ceph.client.admin.keyring
Oct 10 23:19:52 np0005480824 ceph-mon[74326]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 10 23:19:52 np0005480824 python3[81674]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 23:19:53 np0005480824 python3[81877]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 92cfe4d4-4917-5be1-9d00-73758793a62b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:19:53 np0005480824 podman[81926]: 2025-10-11 03:19:53.265312151 +0000 UTC m=+0.055865276 container create 6207cde807c80b032d05c88cb4a07f3d05720a67e8c5fcb14ed48db22de918f6 (image=quay.io/ceph/ceph:v18, name=quirky_archimedes, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:19:53 np0005480824 systemd[1]: Started libpod-conmon-6207cde807c80b032d05c88cb4a07f3d05720a67e8c5fcb14ed48db22de918f6.scope.
Oct 10 23:19:53 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:19:53 np0005480824 podman[81926]: 2025-10-11 03:19:53.243520067 +0000 UTC m=+0.034073202 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:19:53 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6b748371ba9e1462a15de883d801b28bc9ccf1e5f8e97f6cc2f887ed8510968/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:53 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6b748371ba9e1462a15de883d801b28bc9ccf1e5f8e97f6cc2f887ed8510968/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:53 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6b748371ba9e1462a15de883d801b28bc9ccf1e5f8e97f6cc2f887ed8510968/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:53 np0005480824 podman[81926]: 2025-10-11 03:19:53.361071218 +0000 UTC m=+0.151624373 container init 6207cde807c80b032d05c88cb4a07f3d05720a67e8c5fcb14ed48db22de918f6 (image=quay.io/ceph/ceph:v18, name=quirky_archimedes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:19:53 np0005480824 podman[81926]: 2025-10-11 03:19:53.374232115 +0000 UTC m=+0.164785260 container start 6207cde807c80b032d05c88cb4a07f3d05720a67e8c5fcb14ed48db22de918f6 (image=quay.io/ceph/ceph:v18, name=quirky_archimedes, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 10 23:19:53 np0005480824 podman[81926]: 2025-10-11 03:19:53.381706135 +0000 UTC m=+0.172259290 container attach 6207cde807c80b032d05c88cb4a07f3d05720a67e8c5fcb14ed48db22de918f6 (image=quay.io/ceph/ceph:v18, name=quirky_archimedes, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:19:53 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 10 23:19:53 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:53 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 10 23:19:53 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:53 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 10 23:19:53 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:53 np0005480824 ceph-mgr[74617]: [progress INFO root] update: starting ev 6655379c-66f0-4556-ad09-198db6f79f10 (Updating crash deployment (+1 -> 1))
Oct 10 23:19:53 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Oct 10 23:19:53 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 10 23:19:53 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct 10 23:19:53 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:19:53 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:19:53 np0005480824 ceph-mgr[74617]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Oct 10 23:19:53 np0005480824 ceph-mgr[74617]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Oct 10 23:19:53 np0005480824 ceph-mon[74326]: Updating compute-0:/var/lib/ceph/92cfe4d4-4917-5be1-9d00-73758793a62b/config/ceph.client.admin.keyring
Oct 10 23:19:53 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:53 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:53 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:53 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 10 23:19:53 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct 10 23:19:53 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:19:53 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 23:19:53 np0005480824 ceph-mgr[74617]: log_channel(audit) log [DBG] : from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 10 23:19:53 np0005480824 quirky_archimedes[81968]: 
Oct 10 23:19:53 np0005480824 quirky_archimedes[81968]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct 10 23:19:53 np0005480824 systemd[1]: libpod-6207cde807c80b032d05c88cb4a07f3d05720a67e8c5fcb14ed48db22de918f6.scope: Deactivated successfully.
Oct 10 23:19:53 np0005480824 podman[81926]: 2025-10-11 03:19:53.956942769 +0000 UTC m=+0.747495904 container died 6207cde807c80b032d05c88cb4a07f3d05720a67e8c5fcb14ed48db22de918f6 (image=quay.io/ceph/ceph:v18, name=quirky_archimedes, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 10 23:19:53 np0005480824 ansible-async_wrapper.py[80253]: Done in kid B.
Oct 10 23:19:53 np0005480824 systemd[1]: var-lib-containers-storage-overlay-b6b748371ba9e1462a15de883d801b28bc9ccf1e5f8e97f6cc2f887ed8510968-merged.mount: Deactivated successfully.
Oct 10 23:19:54 np0005480824 podman[81926]: 2025-10-11 03:19:54.021225698 +0000 UTC m=+0.811778843 container remove 6207cde807c80b032d05c88cb4a07f3d05720a67e8c5fcb14ed48db22de918f6 (image=quay.io/ceph/ceph:v18, name=quirky_archimedes, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:19:54 np0005480824 systemd[1]: libpod-conmon-6207cde807c80b032d05c88cb4a07f3d05720a67e8c5fcb14ed48db22de918f6.scope: Deactivated successfully.
Oct 10 23:19:54 np0005480824 podman[82167]: 2025-10-11 03:19:54.1865844 +0000 UTC m=+0.066826110 container create 967fc2792b32107534a76f15c62fd0722b0e7cbad47dc5b582cb5643ea02ff33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_shannon, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:19:54 np0005480824 systemd[1]: Started libpod-conmon-967fc2792b32107534a76f15c62fd0722b0e7cbad47dc5b582cb5643ea02ff33.scope.
Oct 10 23:19:54 np0005480824 podman[82167]: 2025-10-11 03:19:54.156571208 +0000 UTC m=+0.036812988 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:19:54 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:19:54 np0005480824 podman[82167]: 2025-10-11 03:19:54.299519151 +0000 UTC m=+0.179760851 container init 967fc2792b32107534a76f15c62fd0722b0e7cbad47dc5b582cb5643ea02ff33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_shannon, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:19:54 np0005480824 podman[82167]: 2025-10-11 03:19:54.310922705 +0000 UTC m=+0.191164385 container start 967fc2792b32107534a76f15c62fd0722b0e7cbad47dc5b582cb5643ea02ff33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_shannon, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 10 23:19:54 np0005480824 podman[82167]: 2025-10-11 03:19:54.314497912 +0000 UTC m=+0.194739602 container attach 967fc2792b32107534a76f15c62fd0722b0e7cbad47dc5b582cb5643ea02ff33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:19:54 np0005480824 bold_shannon[82183]: 167 167
Oct 10 23:19:54 np0005480824 systemd[1]: libpod-967fc2792b32107534a76f15c62fd0722b0e7cbad47dc5b582cb5643ea02ff33.scope: Deactivated successfully.
Oct 10 23:19:54 np0005480824 podman[82167]: 2025-10-11 03:19:54.317240188 +0000 UTC m=+0.197481898 container died 967fc2792b32107534a76f15c62fd0722b0e7cbad47dc5b582cb5643ea02ff33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_shannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 10 23:19:54 np0005480824 systemd[1]: var-lib-containers-storage-overlay-c3bc711726c53e4216c55dd0b9ddd53fbc9010f2d84942cbbb7da0d2ac7d475f-merged.mount: Deactivated successfully.
Oct 10 23:19:54 np0005480824 podman[82167]: 2025-10-11 03:19:54.368188954 +0000 UTC m=+0.248430664 container remove 967fc2792b32107534a76f15c62fd0722b0e7cbad47dc5b582cb5643ea02ff33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_shannon, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 10 23:19:54 np0005480824 systemd[1]: libpod-conmon-967fc2792b32107534a76f15c62fd0722b0e7cbad47dc5b582cb5643ea02ff33.scope: Deactivated successfully.
Oct 10 23:19:54 np0005480824 systemd[1]: Reloading.
Oct 10 23:19:54 np0005480824 ceph-mon[74326]: Deploying daemon crash.compute-0 on compute-0
Oct 10 23:19:54 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:19:54 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:19:54 np0005480824 systemd[1]: Reloading.
Oct 10 23:19:54 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:19:54 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:19:54 np0005480824 python3[82263]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 92cfe4d4-4917-5be1-9d00-73758793a62b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:19:55 np0005480824 podman[82302]: 2025-10-11 03:19:55.060901049 +0000 UTC m=+0.100270416 container create 626f83b4ab78b87960d08216e6ec45e87b6b14e593839d6254149c8d0444f831 (image=quay.io/ceph/ceph:v18, name=gifted_ellis, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:19:55 np0005480824 podman[82302]: 2025-10-11 03:19:55.005074114 +0000 UTC m=+0.044443531 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:19:55 np0005480824 systemd[1]: Started libpod-conmon-626f83b4ab78b87960d08216e6ec45e87b6b14e593839d6254149c8d0444f831.scope.
Oct 10 23:19:55 np0005480824 systemd[1]: Starting Ceph crash.compute-0 for 92cfe4d4-4917-5be1-9d00-73758793a62b...
Oct 10 23:19:55 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:19:55 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fd2fea57bbac57cb1ca4d771b25eea15aa36d74af15006c6ade8384fed48ab2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:55 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fd2fea57bbac57cb1ca4d771b25eea15aa36d74af15006c6ade8384fed48ab2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:55 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fd2fea57bbac57cb1ca4d771b25eea15aa36d74af15006c6ade8384fed48ab2/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:55 np0005480824 podman[82302]: 2025-10-11 03:19:55.19714258 +0000 UTC m=+0.236511967 container init 626f83b4ab78b87960d08216e6ec45e87b6b14e593839d6254149c8d0444f831 (image=quay.io/ceph/ceph:v18, name=gifted_ellis, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 10 23:19:55 np0005480824 podman[82302]: 2025-10-11 03:19:55.215023431 +0000 UTC m=+0.254392798 container start 626f83b4ab78b87960d08216e6ec45e87b6b14e593839d6254149c8d0444f831 (image=quay.io/ceph/ceph:v18, name=gifted_ellis, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:19:55 np0005480824 podman[82302]: 2025-10-11 03:19:55.223561216 +0000 UTC m=+0.262930613 container attach 626f83b4ab78b87960d08216e6ec45e87b6b14e593839d6254149c8d0444f831 (image=quay.io/ceph/ceph:v18, name=gifted_ellis, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 10 23:19:55 np0005480824 podman[82374]: 2025-10-11 03:19:55.402267551 +0000 UTC m=+0.073312707 container create 90b5b5a031904a9fddf3ed90b02a240d3a0ebaaf670415137274ba4391b5f476 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-crash-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:19:55 np0005480824 podman[82374]: 2025-10-11 03:19:55.357520263 +0000 UTC m=+0.028565459 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:19:55 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d198f75a95ab803013afde8e75ee77508ca10e5ba92e21cc8d8e1b484bdd6d39/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:55 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d198f75a95ab803013afde8e75ee77508ca10e5ba92e21cc8d8e1b484bdd6d39/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:55 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d198f75a95ab803013afde8e75ee77508ca10e5ba92e21cc8d8e1b484bdd6d39/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:55 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d198f75a95ab803013afde8e75ee77508ca10e5ba92e21cc8d8e1b484bdd6d39/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:55 np0005480824 podman[82374]: 2025-10-11 03:19:55.523021869 +0000 UTC m=+0.194067015 container init 90b5b5a031904a9fddf3ed90b02a240d3a0ebaaf670415137274ba4391b5f476 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-crash-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:19:55 np0005480824 podman[82374]: 2025-10-11 03:19:55.533107092 +0000 UTC m=+0.204152238 container start 90b5b5a031904a9fddf3ed90b02a240d3a0ebaaf670415137274ba4391b5f476 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-crash-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 10 23:19:55 np0005480824 bash[82374]: 90b5b5a031904a9fddf3ed90b02a240d3a0ebaaf670415137274ba4391b5f476
Oct 10 23:19:55 np0005480824 systemd[1]: Started Ceph crash.compute-0 for 92cfe4d4-4917-5be1-9d00-73758793a62b.
Oct 10 23:19:55 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 10 23:19:55 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:55 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 10 23:19:55 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-crash-compute-0[82390]: INFO:ceph-crash:pinging cluster to exercise our key
Oct 10 23:19:55 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:55 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Oct 10 23:19:55 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 23:19:55 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:55 np0005480824 ceph-mgr[74617]: [progress INFO root] complete: finished ev 6655379c-66f0-4556-ad09-198db6f79f10 (Updating crash deployment (+1 -> 1))
Oct 10 23:19:55 np0005480824 ceph-mgr[74617]: [progress INFO root] Completed event 6655379c-66f0-4556-ad09-198db6f79f10 (Updating crash deployment (+1 -> 1)) in 2 seconds
Oct 10 23:19:55 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Oct 10 23:19:55 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:55 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev a1a7b861-c831-4ba1-9c27-554efef482a0 does not exist
Oct 10 23:19:55 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Oct 10 23:19:55 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:55 np0005480824 ceph-mgr[74617]: [progress INFO root] update: starting ev 15b98696-b188-4130-bdd6-c93afdab61af (Updating mgr deployment (+1 -> 2))
Oct 10 23:19:55 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.rhfxom", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Oct 10 23:19:55 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.rhfxom", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 10 23:19:55 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0) v1
Oct 10 23:19:55 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.rhfxom", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct 10 23:19:55 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 10 23:19:55 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 10 23:19:55 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:19:55 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:19:55 np0005480824 ceph-mgr[74617]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-0.rhfxom on compute-0
Oct 10 23:19:55 np0005480824 ceph-mgr[74617]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-0.rhfxom on compute-0
Oct 10 23:19:55 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3663022861' entity='client.admin' 
Oct 10 23:19:55 np0005480824 systemd[1]: libpod-626f83b4ab78b87960d08216e6ec45e87b6b14e593839d6254149c8d0444f831.scope: Deactivated successfully.
Oct 10 23:19:55 np0005480824 podman[82302]: 2025-10-11 03:19:55.953390064 +0000 UTC m=+0.992759431 container died 626f83b4ab78b87960d08216e6ec45e87b6b14e593839d6254149c8d0444f831 (image=quay.io/ceph/ceph:v18, name=gifted_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:19:55 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-crash-compute-0[82390]: 2025-10-11T03:19:55.976+0000 7f08d9b01640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Oct 10 23:19:55 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-crash-compute-0[82390]: 2025-10-11T03:19:55.976+0000 7f08d9b01640 -1 AuthRegistry(0x7f08d4066fe0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Oct 10 23:19:55 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-crash-compute-0[82390]: 2025-10-11T03:19:55.978+0000 7f08d9b01640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Oct 10 23:19:55 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-crash-compute-0[82390]: 2025-10-11T03:19:55.978+0000 7f08d9b01640 -1 AuthRegistry(0x7f08d9b00000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Oct 10 23:19:55 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-crash-compute-0[82390]: 2025-10-11T03:19:55.979+0000 7f08d37fe640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Oct 10 23:19:55 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-crash-compute-0[82390]: 2025-10-11T03:19:55.979+0000 7f08d9b01640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Oct 10 23:19:55 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-crash-compute-0[82390]: [errno 13] RADOS permission denied (error connecting to the cluster)
Oct 10 23:19:55 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-crash-compute-0[82390]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Oct 10 23:19:56 np0005480824 systemd[1]: var-lib-containers-storage-overlay-5fd2fea57bbac57cb1ca4d771b25eea15aa36d74af15006c6ade8384fed48ab2-merged.mount: Deactivated successfully.
Oct 10 23:19:56 np0005480824 podman[82302]: 2025-10-11 03:19:56.13091174 +0000 UTC m=+1.170281117 container remove 626f83b4ab78b87960d08216e6ec45e87b6b14e593839d6254149c8d0444f831 (image=quay.io/ceph/ceph:v18, name=gifted_ellis, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True)
Oct 10 23:19:56 np0005480824 systemd[1]: libpod-conmon-626f83b4ab78b87960d08216e6ec45e87b6b14e593839d6254149c8d0444f831.scope: Deactivated successfully.
Oct 10 23:19:56 np0005480824 python3[82566]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 92cfe4d4-4917-5be1-9d00-73758793a62b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:19:56 np0005480824 podman[82605]: 2025-10-11 03:19:56.557460424 +0000 UTC m=+0.056950433 container create d79a6defae4ca5af719ccf9c5e7da6263e4110229b6beb73eaad21d93afc0918 (image=quay.io/ceph/ceph:v18, name=youthful_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:19:56 np0005480824 podman[82613]: 2025-10-11 03:19:56.571900592 +0000 UTC m=+0.051112722 container create 4045180e11afd26e2c6365ba2a423374aeab3577cc154792ea2ad79ad2ab9a40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:19:56 np0005480824 systemd[1]: Started libpod-conmon-d79a6defae4ca5af719ccf9c5e7da6263e4110229b6beb73eaad21d93afc0918.scope.
Oct 10 23:19:56 np0005480824 systemd[1]: Started libpod-conmon-4045180e11afd26e2c6365ba2a423374aeab3577cc154792ea2ad79ad2ab9a40.scope.
Oct 10 23:19:56 np0005480824 podman[82605]: 2025-10-11 03:19:56.528816944 +0000 UTC m=+0.028306963 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:19:56 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:19:56 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:19:56 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7886649c39687f669022e744271ef5d79e2aa7979a386b0f86c549569b6ee93/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:56 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7886649c39687f669022e744271ef5d79e2aa7979a386b0f86c549569b6ee93/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:56 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7886649c39687f669022e744271ef5d79e2aa7979a386b0f86c549569b6ee93/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:56 np0005480824 podman[82613]: 2025-10-11 03:19:56.54941725 +0000 UTC m=+0.028629410 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:19:56 np0005480824 podman[82613]: 2025-10-11 03:19:56.654057591 +0000 UTC m=+0.133269781 container init 4045180e11afd26e2c6365ba2a423374aeab3577cc154792ea2ad79ad2ab9a40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_ride, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 10 23:19:56 np0005480824 podman[82605]: 2025-10-11 03:19:56.657009682 +0000 UTC m=+0.156499661 container init d79a6defae4ca5af719ccf9c5e7da6263e4110229b6beb73eaad21d93afc0918 (image=quay.io/ceph/ceph:v18, name=youthful_leakey, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:19:56 np0005480824 podman[82605]: 2025-10-11 03:19:56.664759818 +0000 UTC m=+0.164249797 container start d79a6defae4ca5af719ccf9c5e7da6263e4110229b6beb73eaad21d93afc0918 (image=quay.io/ceph/ceph:v18, name=youthful_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:19:56 np0005480824 podman[82613]: 2025-10-11 03:19:56.665142938 +0000 UTC m=+0.144355078 container start 4045180e11afd26e2c6365ba2a423374aeab3577cc154792ea2ad79ad2ab9a40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_ride, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 10 23:19:56 np0005480824 optimistic_ride[82640]: 167 167
Oct 10 23:19:56 np0005480824 systemd[1]: libpod-4045180e11afd26e2c6365ba2a423374aeab3577cc154792ea2ad79ad2ab9a40.scope: Deactivated successfully.
Oct 10 23:19:56 np0005480824 podman[82605]: 2025-10-11 03:19:56.672449344 +0000 UTC m=+0.171939343 container attach d79a6defae4ca5af719ccf9c5e7da6263e4110229b6beb73eaad21d93afc0918 (image=quay.io/ceph/ceph:v18, name=youthful_leakey, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:19:56 np0005480824 podman[82613]: 2025-10-11 03:19:56.679738519 +0000 UTC m=+0.158950649 container attach 4045180e11afd26e2c6365ba2a423374aeab3577cc154792ea2ad79ad2ab9a40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_ride, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 10 23:19:56 np0005480824 podman[82613]: 2025-10-11 03:19:56.680359734 +0000 UTC m=+0.159571864 container died 4045180e11afd26e2c6365ba2a423374aeab3577cc154792ea2ad79ad2ab9a40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:19:56 np0005480824 systemd[1]: var-lib-containers-storage-overlay-bca6a934d99a70c0f6fb5ea787d74ff9495b48bfe168fdd12e259b751d4fee02-merged.mount: Deactivated successfully.
Oct 10 23:19:56 np0005480824 podman[82613]: 2025-10-11 03:19:56.722237022 +0000 UTC m=+0.201449162 container remove 4045180e11afd26e2c6365ba2a423374aeab3577cc154792ea2ad79ad2ab9a40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_ride, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:19:56 np0005480824 systemd[1]: libpod-conmon-4045180e11afd26e2c6365ba2a423374aeab3577cc154792ea2ad79ad2ab9a40.scope: Deactivated successfully.
Oct 10 23:19:56 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:56 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:56 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:56 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:56 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:56 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.rhfxom", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 10 23:19:56 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.rhfxom", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct 10 23:19:56 np0005480824 ceph-mon[74326]: Deploying daemon mgr.compute-0.rhfxom on compute-0
Oct 10 23:19:56 np0005480824 ceph-mon[74326]: from='client.? 192.168.122.100:0/3663022861' entity='client.admin' 
Oct 10 23:19:56 np0005480824 systemd[1]: Reloading.
Oct 10 23:19:56 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:19:56 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:19:57 np0005480824 systemd[1]: Reloading.
Oct 10 23:19:57 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:19:57 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:19:57 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0) v1
Oct 10 23:19:57 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1987440359' entity='client.admin' 
Oct 10 23:19:57 np0005480824 podman[82605]: 2025-10-11 03:19:57.316234149 +0000 UTC m=+0.815724208 container died d79a6defae4ca5af719ccf9c5e7da6263e4110229b6beb73eaad21d93afc0918 (image=quay.io/ceph/ceph:v18, name=youthful_leakey, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:19:57 np0005480824 systemd[1]: libpod-d79a6defae4ca5af719ccf9c5e7da6263e4110229b6beb73eaad21d93afc0918.scope: Deactivated successfully.
Oct 10 23:19:57 np0005480824 systemd[1]: var-lib-containers-storage-overlay-f7886649c39687f669022e744271ef5d79e2aa7979a386b0f86c549569b6ee93-merged.mount: Deactivated successfully.
Oct 10 23:19:57 np0005480824 systemd[1]: Starting Ceph mgr.compute-0.rhfxom for 92cfe4d4-4917-5be1-9d00-73758793a62b...
Oct 10 23:19:57 np0005480824 podman[82605]: 2025-10-11 03:19:57.498861078 +0000 UTC m=+0.998351077 container remove d79a6defae4ca5af719ccf9c5e7da6263e4110229b6beb73eaad21d93afc0918 (image=quay.io/ceph/ceph:v18, name=youthful_leakey, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:19:57 np0005480824 systemd[1]: libpod-conmon-d79a6defae4ca5af719ccf9c5e7da6263e4110229b6beb73eaad21d93afc0918.scope: Deactivated successfully.
Oct 10 23:19:57 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 23:19:57 np0005480824 podman[82844]: 2025-10-11 03:19:57.840457535 +0000 UTC m=+0.082758044 container create 324816bc16369285e58a9e405e933457bbfb1bfaf0125468bcdf105533c62b06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-rhfxom, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 10 23:19:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:19:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:19:57 np0005480824 ceph-mgr[74617]: [progress INFO root] Writing back 1 completed events
Oct 10 23:19:57 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 10 23:19:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:19:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:19:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:19:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:19:57 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:57 np0005480824 podman[82844]: 2025-10-11 03:19:57.810031532 +0000 UTC m=+0.052332111 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:19:57 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfd5306c498a43c1f03a3ca81128a669b976eaf696e992aad42c3bd06b1f1942/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:57 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfd5306c498a43c1f03a3ca81128a669b976eaf696e992aad42c3bd06b1f1942/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:57 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfd5306c498a43c1f03a3ca81128a669b976eaf696e992aad42c3bd06b1f1942/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:57 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfd5306c498a43c1f03a3ca81128a669b976eaf696e992aad42c3bd06b1f1942/merged/var/lib/ceph/mgr/ceph-compute-0.rhfxom supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:57 np0005480824 podman[82844]: 2025-10-11 03:19:57.926277832 +0000 UTC m=+0.168578371 container init 324816bc16369285e58a9e405e933457bbfb1bfaf0125468bcdf105533c62b06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-rhfxom, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 10 23:19:57 np0005480824 podman[82844]: 2025-10-11 03:19:57.93281798 +0000 UTC m=+0.175118489 container start 324816bc16369285e58a9e405e933457bbfb1bfaf0125468bcdf105533c62b06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-rhfxom, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 10 23:19:57 np0005480824 bash[82844]: 324816bc16369285e58a9e405e933457bbfb1bfaf0125468bcdf105533c62b06
Oct 10 23:19:57 np0005480824 python3[82854]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 92cfe4d4-4917-5be1-9d00-73758793a62b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:19:57 np0005480824 systemd[1]: Started Ceph mgr.compute-0.rhfxom for 92cfe4d4-4917-5be1-9d00-73758793a62b.
Oct 10 23:19:57 np0005480824 ceph-mgr[82868]: set uid:gid to 167:167 (ceph:ceph)
Oct 10 23:19:57 np0005480824 ceph-mgr[82868]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Oct 10 23:19:57 np0005480824 ceph-mgr[82868]: pidfile_write: ignore empty --pid-file
Oct 10 23:19:58 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 10 23:19:58 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:58 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 10 23:19:58 np0005480824 podman[82869]: 2025-10-11 03:19:58.026510806 +0000 UTC m=+0.051187804 container create afe97d4b02b453ecf0855c7aef53b799a46da5fc440da9ffe727e4d91883bf3f (image=quay.io/ceph/ceph:v18, name=nifty_visvesvaraya, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:19:58 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:58 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct 10 23:19:58 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:58 np0005480824 ceph-mgr[74617]: [progress INFO root] complete: finished ev 15b98696-b188-4130-bdd6-c93afdab61af (Updating mgr deployment (+1 -> 2))
Oct 10 23:19:58 np0005480824 ceph-mgr[74617]: [progress INFO root] Completed event 15b98696-b188-4130-bdd6-c93afdab61af (Updating mgr deployment (+1 -> 2)) in 2 seconds
Oct 10 23:19:58 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct 10 23:19:58 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:58 np0005480824 systemd[1]: Started libpod-conmon-afe97d4b02b453ecf0855c7aef53b799a46da5fc440da9ffe727e4d91883bf3f.scope.
Oct 10 23:19:58 np0005480824 podman[82869]: 2025-10-11 03:19:58.004351042 +0000 UTC m=+0.029028120 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:19:58 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:19:58 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0104fd17fa81c59712526786785effe2adb56f553aba4df3dea573f997292cb2/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:58 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0104fd17fa81c59712526786785effe2adb56f553aba4df3dea573f997292cb2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:58 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0104fd17fa81c59712526786785effe2adb56f553aba4df3dea573f997292cb2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:19:58 np0005480824 ceph-mgr[82868]: mgr[py] Loading python module 'alerts'
Oct 10 23:19:58 np0005480824 podman[82869]: 2025-10-11 03:19:58.12714229 +0000 UTC m=+0.151819308 container init afe97d4b02b453ecf0855c7aef53b799a46da5fc440da9ffe727e4d91883bf3f (image=quay.io/ceph/ceph:v18, name=nifty_visvesvaraya, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:19:58 np0005480824 podman[82869]: 2025-10-11 03:19:58.14248555 +0000 UTC m=+0.167162548 container start afe97d4b02b453ecf0855c7aef53b799a46da5fc440da9ffe727e4d91883bf3f (image=quay.io/ceph/ceph:v18, name=nifty_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:19:58 np0005480824 podman[82869]: 2025-10-11 03:19:58.145877531 +0000 UTC m=+0.170554549 container attach afe97d4b02b453ecf0855c7aef53b799a46da5fc440da9ffe727e4d91883bf3f (image=quay.io/ceph/ceph:v18, name=nifty_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:19:58 np0005480824 ceph-mon[74326]: from='client.? 192.168.122.100:0/1987440359' entity='client.admin' 
Oct 10 23:19:58 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:58 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:58 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:58 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:58 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:58 np0005480824 ceph-mgr[82868]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 10 23:19:58 np0005480824 ceph-mgr[82868]: mgr[py] Loading python module 'balancer'
Oct 10 23:19:58 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-rhfxom[82864]: 2025-10-11T03:19:58.426+0000 7f10a812a140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 10 23:19:58 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:19:58 np0005480824 ceph-mgr[82868]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 10 23:19:58 np0005480824 ceph-mgr[82868]: mgr[py] Loading python module 'cephadm'
Oct 10 23:19:58 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-rhfxom[82864]: 2025-10-11T03:19:58.684+0000 7f10a812a140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 10 23:19:58 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0) v1
Oct 10 23:19:58 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2252772913' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Oct 10 23:19:59 np0005480824 podman[83151]: 2025-10-11 03:19:59.035298893 +0000 UTC m=+0.063847318 container exec a848fe58749db588a5a4b8471e0c9916b9e4a1ccc899f04343e6491a43c45c05 (image=quay.io/ceph/ceph:v18, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:19:59 np0005480824 podman[83151]: 2025-10-11 03:19:59.134489232 +0000 UTC m=+0.163037667 container exec_died a848fe58749db588a5a4b8471e0c9916b9e4a1ccc899f04343e6491a43c45c05 (image=quay.io/ceph/ceph:v18, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:19:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Oct 10 23:19:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 10 23:19:59 np0005480824 ceph-mon[74326]: from='client.? 192.168.122.100:0/2252772913' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Oct 10 23:19:59 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2252772913' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Oct 10 23:19:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Oct 10 23:19:59 np0005480824 nifty_visvesvaraya[82908]: set require_min_compat_client to mimic
Oct 10 23:19:59 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Oct 10 23:19:59 np0005480824 systemd[1]: libpod-afe97d4b02b453ecf0855c7aef53b799a46da5fc440da9ffe727e4d91883bf3f.scope: Deactivated successfully.
Oct 10 23:19:59 np0005480824 podman[83216]: 2025-10-11 03:19:59.37976392 +0000 UTC m=+0.032595746 container died afe97d4b02b453ecf0855c7aef53b799a46da5fc440da9ffe727e4d91883bf3f (image=quay.io/ceph/ceph:v18, name=nifty_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 10 23:19:59 np0005480824 systemd[1]: var-lib-containers-storage-overlay-0104fd17fa81c59712526786785effe2adb56f553aba4df3dea573f997292cb2-merged.mount: Deactivated successfully.
Oct 10 23:19:59 np0005480824 podman[83216]: 2025-10-11 03:19:59.43083507 +0000 UTC m=+0.083666806 container remove afe97d4b02b453ecf0855c7aef53b799a46da5fc440da9ffe727e4d91883bf3f (image=quay.io/ceph/ceph:v18, name=nifty_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Oct 10 23:19:59 np0005480824 systemd[1]: libpod-conmon-afe97d4b02b453ecf0855c7aef53b799a46da5fc440da9ffe727e4d91883bf3f.scope: Deactivated successfully.
Oct 10 23:19:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 10 23:19:59 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 10 23:19:59 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:19:59 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:19:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 10 23:19:59 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:19:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 10 23:19:59 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:59 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev b55d8011-a94d-4d04-874d-06382abbf047 does not exist
Oct 10 23:19:59 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 16f300a1-9b42-49af-ba41-de06f258a3ee does not exist
Oct 10 23:19:59 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 24d6ce9e-af4e-4142-ad8d-2772a82f9225 does not exist
Oct 10 23:19:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0) v1
Oct 10 23:19:59 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0) v1
Oct 10 23:19:59 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0) v1
Oct 10 23:19:59 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0) v1
Oct 10 23:19:59 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:19:59 np0005480824 ceph-mgr[74617]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Oct 10 23:19:59 np0005480824 ceph-mgr[74617]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Oct 10 23:19:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Oct 10 23:19:59 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 10 23:19:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Oct 10 23:19:59 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 10 23:19:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:19:59 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:19:59 np0005480824 ceph-mgr[74617]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Oct 10 23:19:59 np0005480824 ceph-mgr[74617]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Oct 10 23:19:59 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 23:20:00 np0005480824 python3[83396]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 92cfe4d4-4917-5be1-9d00-73758793a62b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:20:00 np0005480824 podman[83432]: 2025-10-11 03:20:00.16076376 +0000 UTC m=+0.039823860 container create cd5b7ded6ce707afff01342d96271dd4241b2a61fff21a7356a965fb58b9ae8d (image=quay.io/ceph/ceph:v18, name=stupefied_albattani, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:20:00 np0005480824 systemd[1]: Started libpod-conmon-cd5b7ded6ce707afff01342d96271dd4241b2a61fff21a7356a965fb58b9ae8d.scope.
Oct 10 23:20:00 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:20:00 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ecf65e86aa79e9e5f8ac65c4ad0a129dc0a0ccf5a033c45bec4d0a4561c06d2/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:00 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ecf65e86aa79e9e5f8ac65c4ad0a129dc0a0ccf5a033c45bec4d0a4561c06d2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:00 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ecf65e86aa79e9e5f8ac65c4ad0a129dc0a0ccf5a033c45bec4d0a4561c06d2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:00 np0005480824 podman[83432]: 2025-10-11 03:20:00.229059875 +0000 UTC m=+0.108119985 container init cd5b7ded6ce707afff01342d96271dd4241b2a61fff21a7356a965fb58b9ae8d (image=quay.io/ceph/ceph:v18, name=stupefied_albattani, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:20:00 np0005480824 podman[83432]: 2025-10-11 03:20:00.234845024 +0000 UTC m=+0.113905124 container start cd5b7ded6ce707afff01342d96271dd4241b2a61fff21a7356a965fb58b9ae8d (image=quay.io/ceph/ceph:v18, name=stupefied_albattani, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:20:00 np0005480824 podman[83432]: 2025-10-11 03:20:00.141558788 +0000 UTC m=+0.020618918 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:20:00 np0005480824 podman[83432]: 2025-10-11 03:20:00.238533214 +0000 UTC m=+0.117593334 container attach cd5b7ded6ce707afff01342d96271dd4241b2a61fff21a7356a965fb58b9ae8d (image=quay.io/ceph/ceph:v18, name=stupefied_albattani, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:20:00 np0005480824 ceph-mon[74326]: from='client.? 192.168.122.100:0/2252772913' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Oct 10 23:20:00 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:00 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:00 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:20:00 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:00 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:00 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:00 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:00 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:00 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 10 23:20:00 np0005480824 podman[83471]: 2025-10-11 03:20:00.317980847 +0000 UTC m=+0.052561967 container create f2f40f624871579884858837efce2c02021f4d345f11c4037fb175cae5a6f4ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 10 23:20:00 np0005480824 systemd[1]: Started libpod-conmon-f2f40f624871579884858837efce2c02021f4d345f11c4037fb175cae5a6f4ad.scope.
Oct 10 23:20:00 np0005480824 podman[83471]: 2025-10-11 03:20:00.293059867 +0000 UTC m=+0.027640987 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:20:00 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:20:00 np0005480824 podman[83471]: 2025-10-11 03:20:00.408254031 +0000 UTC m=+0.142835161 container init f2f40f624871579884858837efce2c02021f4d345f11c4037fb175cae5a6f4ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lamarr, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 10 23:20:00 np0005480824 podman[83471]: 2025-10-11 03:20:00.414347278 +0000 UTC m=+0.148928358 container start f2f40f624871579884858837efce2c02021f4d345f11c4037fb175cae5a6f4ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lamarr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:20:00 np0005480824 cranky_lamarr[83486]: 167 167
Oct 10 23:20:00 np0005480824 systemd[1]: libpod-f2f40f624871579884858837efce2c02021f4d345f11c4037fb175cae5a6f4ad.scope: Deactivated successfully.
Oct 10 23:20:00 np0005480824 podman[83471]: 2025-10-11 03:20:00.418416076 +0000 UTC m=+0.152997246 container attach f2f40f624871579884858837efce2c02021f4d345f11c4037fb175cae5a6f4ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 10 23:20:00 np0005480824 podman[83471]: 2025-10-11 03:20:00.421515381 +0000 UTC m=+0.156096501 container died f2f40f624871579884858837efce2c02021f4d345f11c4037fb175cae5a6f4ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lamarr, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:20:00 np0005480824 systemd[1]: var-lib-containers-storage-overlay-c22edfbc78ee9b1857d096c72dd56e534e7d4d2de5a728e5f72bcac3b3a956de-merged.mount: Deactivated successfully.
Oct 10 23:20:00 np0005480824 podman[83471]: 2025-10-11 03:20:00.475124112 +0000 UTC m=+0.209705202 container remove f2f40f624871579884858837efce2c02021f4d345f11c4037fb175cae5a6f4ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lamarr, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:20:00 np0005480824 systemd[1]: libpod-conmon-f2f40f624871579884858837efce2c02021f4d345f11c4037fb175cae5a6f4ad.scope: Deactivated successfully.
Oct 10 23:20:00 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 10 23:20:00 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:00 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 10 23:20:00 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:00 np0005480824 ceph-mgr[74617]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.pdyrua (unknown last config time)...
Oct 10 23:20:00 np0005480824 ceph-mgr[74617]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.pdyrua (unknown last config time)...
Oct 10 23:20:00 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.pdyrua", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Oct 10 23:20:00 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.pdyrua", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 10 23:20:00 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 10 23:20:00 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 10 23:20:00 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:20:00 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:20:00 np0005480824 ceph-mgr[74617]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.pdyrua on compute-0
Oct 10 23:20:00 np0005480824 ceph-mgr[74617]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.pdyrua on compute-0
Oct 10 23:20:00 np0005480824 ceph-mgr[82868]: mgr[py] Loading python module 'crash'
Oct 10 23:20:00 np0005480824 ceph-mgr[74617]: log_channel(audit) log [DBG] : from='client.14184 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 23:20:00 np0005480824 ceph-mgr[82868]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 10 23:20:00 np0005480824 ceph-mgr[82868]: mgr[py] Loading python module 'dashboard'
Oct 10 23:20:00 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-rhfxom[82864]: 2025-10-11T03:20:00.938+0000 7f10a812a140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 10 23:20:01 np0005480824 podman[83739]: 2025-10-11 03:20:01.184709953 +0000 UTC m=+0.056754189 container create 134b434f57276a342899bd757e0c611dafbf41fee674f6ee25b99bae1978a41e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 10 23:20:01 np0005480824 systemd[1]: Started libpod-conmon-134b434f57276a342899bd757e0c611dafbf41fee674f6ee25b99bae1978a41e.scope.
Oct 10 23:20:01 np0005480824 podman[83739]: 2025-10-11 03:20:01.157940698 +0000 UTC m=+0.029984934 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:20:01 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:20:01 np0005480824 podman[83739]: 2025-10-11 03:20:01.278499332 +0000 UTC m=+0.150543548 container init 134b434f57276a342899bd757e0c611dafbf41fee674f6ee25b99bae1978a41e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_banach, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:20:01 np0005480824 podman[83739]: 2025-10-11 03:20:01.286094824 +0000 UTC m=+0.158139020 container start 134b434f57276a342899bd757e0c611dafbf41fee674f6ee25b99bae1978a41e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_banach, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:20:01 np0005480824 gracious_banach[83763]: 167 167
Oct 10 23:20:01 np0005480824 podman[83739]: 2025-10-11 03:20:01.289028115 +0000 UTC m=+0.161072331 container attach 134b434f57276a342899bd757e0c611dafbf41fee674f6ee25b99bae1978a41e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_banach, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 10 23:20:01 np0005480824 systemd[1]: libpod-134b434f57276a342899bd757e0c611dafbf41fee674f6ee25b99bae1978a41e.scope: Deactivated successfully.
Oct 10 23:20:01 np0005480824 conmon[83763]: conmon 134b434f57276a342899 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-134b434f57276a342899bd757e0c611dafbf41fee674f6ee25b99bae1978a41e.scope/container/memory.events
Oct 10 23:20:01 np0005480824 podman[83739]: 2025-10-11 03:20:01.290934361 +0000 UTC m=+0.162978567 container died 134b434f57276a342899bd757e0c611dafbf41fee674f6ee25b99bae1978a41e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_banach, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 10 23:20:01 np0005480824 systemd[1]: var-lib-containers-storage-overlay-3e4ac2be4969e87a8db8b6cc72c468419a386935d7c061c16ae3167484e1d75c-merged.mount: Deactivated successfully.
Oct 10 23:20:01 np0005480824 podman[83739]: 2025-10-11 03:20:01.351687764 +0000 UTC m=+0.223731960 container remove 134b434f57276a342899bd757e0c611dafbf41fee674f6ee25b99bae1978a41e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_banach, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:20:01 np0005480824 systemd[1]: libpod-conmon-134b434f57276a342899bd757e0c611dafbf41fee674f6ee25b99bae1978a41e.scope: Deactivated successfully.
Oct 10 23:20:01 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct 10 23:20:01 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:01 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct 10 23:20:01 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:01 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct 10 23:20:01 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:01 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct 10 23:20:01 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:01 np0005480824 ceph-mgr[74617]: [cephadm INFO root] Added host compute-0
Oct 10 23:20:01 np0005480824 ceph-mgr[74617]: log_channel(cephadm) log [INF] : Added host compute-0
Oct 10 23:20:01 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 10 23:20:01 np0005480824 ceph-mgr[74617]: [cephadm INFO root] Saving service mon spec with placement compute-0
Oct 10 23:20:01 np0005480824 ceph-mgr[74617]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0
Oct 10 23:20:01 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Oct 10 23:20:01 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:01 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 10 23:20:01 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:01 np0005480824 ceph-mgr[74617]: [cephadm INFO root] Saving service mgr spec with placement compute-0
Oct 10 23:20:01 np0005480824 ceph-mgr[74617]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0
Oct 10 23:20:01 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct 10 23:20:01 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:01 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:01 np0005480824 ceph-mgr[74617]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Oct 10 23:20:01 np0005480824 ceph-mgr[74617]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Oct 10 23:20:01 np0005480824 ceph-mgr[74617]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0
Oct 10 23:20:01 np0005480824 ceph-mgr[74617]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0
Oct 10 23:20:01 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0) v1
Oct 10 23:20:01 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:01 np0005480824 stupefied_albattani[83463]: Added host 'compute-0' with addr '192.168.122.100'
Oct 10 23:20:01 np0005480824 stupefied_albattani[83463]: Scheduled mon update...
Oct 10 23:20:01 np0005480824 stupefied_albattani[83463]: Scheduled mgr update...
Oct 10 23:20:01 np0005480824 stupefied_albattani[83463]: Scheduled osd.default_drive_group update...
Oct 10 23:20:01 np0005480824 podman[83432]: 2025-10-11 03:20:01.482069294 +0000 UTC m=+1.361129394 container died cd5b7ded6ce707afff01342d96271dd4241b2a61fff21a7356a965fb58b9ae8d (image=quay.io/ceph/ceph:v18, name=stupefied_albattani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:20:01 np0005480824 systemd[1]: libpod-cd5b7ded6ce707afff01342d96271dd4241b2a61fff21a7356a965fb58b9ae8d.scope: Deactivated successfully.
Oct 10 23:20:01 np0005480824 systemd[1]: var-lib-containers-storage-overlay-5ecf65e86aa79e9e5f8ac65c4ad0a129dc0a0ccf5a033c45bec4d0a4561c06d2-merged.mount: Deactivated successfully.
Oct 10 23:20:01 np0005480824 podman[83432]: 2025-10-11 03:20:01.529894406 +0000 UTC m=+1.408954506 container remove cd5b7ded6ce707afff01342d96271dd4241b2a61fff21a7356a965fb58b9ae8d (image=quay.io/ceph/ceph:v18, name=stupefied_albattani, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Oct 10 23:20:01 np0005480824 systemd[1]: libpod-conmon-cd5b7ded6ce707afff01342d96271dd4241b2a61fff21a7356a965fb58b9ae8d.scope: Deactivated successfully.
Oct 10 23:20:01 np0005480824 ceph-mon[74326]: Reconfiguring mon.compute-0 (unknown last config time)...
Oct 10 23:20:01 np0005480824 ceph-mon[74326]: Reconfiguring daemon mon.compute-0 on compute-0
Oct 10 23:20:01 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:01 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:01 np0005480824 ceph-mon[74326]: Reconfiguring mgr.compute-0.pdyrua (unknown last config time)...
Oct 10 23:20:01 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.pdyrua", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 10 23:20:01 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:01 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:01 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:01 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:01 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:01 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:01 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:01 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:01 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:01 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 23:20:01 np0005480824 python3[83936]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 92cfe4d4-4917-5be1-9d00-73758793a62b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:20:02 np0005480824 podman[83962]: 2025-10-11 03:20:02.079228328 +0000 UTC m=+0.061904203 container create 9689ca1a950f93158f10817777d9cdce0cbeb240f25c1eea31b193f23f0dbff3 (image=quay.io/ceph/ceph:v18, name=busy_mclaren, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 10 23:20:02 np0005480824 systemd[1]: Started libpod-conmon-9689ca1a950f93158f10817777d9cdce0cbeb240f25c1eea31b193f23f0dbff3.scope.
Oct 10 23:20:02 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:20:02 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fc23388ea7acb7995c78782adf64b10b3c55dd3d6f73f6b44bf38d20dd12098/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:02 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fc23388ea7acb7995c78782adf64b10b3c55dd3d6f73f6b44bf38d20dd12098/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:02 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fc23388ea7acb7995c78782adf64b10b3c55dd3d6f73f6b44bf38d20dd12098/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:02 np0005480824 podman[83962]: 2025-10-11 03:20:02.145311549 +0000 UTC m=+0.127987404 container init 9689ca1a950f93158f10817777d9cdce0cbeb240f25c1eea31b193f23f0dbff3 (image=quay.io/ceph/ceph:v18, name=busy_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 10 23:20:02 np0005480824 podman[83962]: 2025-10-11 03:20:02.052505053 +0000 UTC m=+0.035180998 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:20:02 np0005480824 podman[83962]: 2025-10-11 03:20:02.151261002 +0000 UTC m=+0.133936867 container start 9689ca1a950f93158f10817777d9cdce0cbeb240f25c1eea31b193f23f0dbff3 (image=quay.io/ceph/ceph:v18, name=busy_mclaren, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 10 23:20:02 np0005480824 podman[83962]: 2025-10-11 03:20:02.15532763 +0000 UTC m=+0.138003485 container attach 9689ca1a950f93158f10817777d9cdce0cbeb240f25c1eea31b193f23f0dbff3 (image=quay.io/ceph/ceph:v18, name=busy_mclaren, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 10 23:20:02 np0005480824 podman[84026]: 2025-10-11 03:20:02.378230089 +0000 UTC m=+0.071875332 container exec a848fe58749db588a5a4b8471e0c9916b9e4a1ccc899f04343e6491a43c45c05 (image=quay.io/ceph/ceph:v18, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 10 23:20:02 np0005480824 ceph-mgr[82868]: mgr[py] Loading python module 'devicehealth'
Oct 10 23:20:02 np0005480824 podman[84026]: 2025-10-11 03:20:02.481940386 +0000 UTC m=+0.175585629 container exec_died a848fe58749db588a5a4b8471e0c9916b9e4a1ccc899f04343e6491a43c45c05 (image=quay.io/ceph/ceph:v18, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 10 23:20:02 np0005480824 ceph-mon[74326]: Reconfiguring daemon mgr.compute-0.pdyrua on compute-0
Oct 10 23:20:02 np0005480824 ceph-mon[74326]: Added host compute-0
Oct 10 23:20:02 np0005480824 ceph-mon[74326]: Saving service mon spec with placement compute-0
Oct 10 23:20:02 np0005480824 ceph-mon[74326]: Saving service mgr spec with placement compute-0
Oct 10 23:20:02 np0005480824 ceph-mon[74326]: Marking host: compute-0 for OSDSpec preview refresh.
Oct 10 23:20:02 np0005480824 ceph-mon[74326]: Saving service osd.default_drive_group spec with placement compute-0
Oct 10 23:20:02 np0005480824 ceph-mgr[82868]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 10 23:20:02 np0005480824 ceph-mgr[82868]: mgr[py] Loading python module 'diskprediction_local'
Oct 10 23:20:02 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-rhfxom[82864]: 2025-10-11T03:20:02.704+0000 7f10a812a140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 10 23:20:02 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 10 23:20:02 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:02 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 10 23:20:02 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:02 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 10 23:20:02 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:02 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 10 23:20:02 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:02 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:20:02 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:20:02 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 10 23:20:02 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:20:02 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 10 23:20:02 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Oct 10 23:20:02 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1271766419' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 10 23:20:02 np0005480824 busy_mclaren[83991]: 
Oct 10 23:20:02 np0005480824 busy_mclaren[83991]: {"fsid":"92cfe4d4-4917-5be1-9d00-73758793a62b","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":84,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-10-11T03:18:35.640728+0000","services":{}},"progress_events":{}}
Oct 10 23:20:02 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:02 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 2344251d-471e-42fb-9f24-20948438ce3f does not exist
Oct 10 23:20:02 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Oct 10 23:20:02 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:02 np0005480824 ceph-mgr[74617]: [progress INFO root] update: starting ev 5075284b-414a-4176-874e-3fd1eaf4e88f (Updating mgr deployment (-1 -> 1))
Oct 10 23:20:02 np0005480824 ceph-mgr[74617]: [cephadm INFO cephadm.serve] Removing daemon mgr.compute-0.rhfxom from compute-0 -- ports [8765]
Oct 10 23:20:02 np0005480824 ceph-mgr[74617]: log_channel(cephadm) log [INF] : Removing daemon mgr.compute-0.rhfxom from compute-0 -- ports [8765]
Oct 10 23:20:02 np0005480824 podman[83962]: 2025-10-11 03:20:02.805133201 +0000 UTC m=+0.787809036 container died 9689ca1a950f93158f10817777d9cdce0cbeb240f25c1eea31b193f23f0dbff3 (image=quay.io/ceph/ceph:v18, name=busy_mclaren, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 10 23:20:02 np0005480824 systemd[1]: libpod-9689ca1a950f93158f10817777d9cdce0cbeb240f25c1eea31b193f23f0dbff3.scope: Deactivated successfully.
Oct 10 23:20:02 np0005480824 systemd[1]: var-lib-containers-storage-overlay-9fc23388ea7acb7995c78782adf64b10b3c55dd3d6f73f6b44bf38d20dd12098-merged.mount: Deactivated successfully.
Oct 10 23:20:02 np0005480824 podman[83962]: 2025-10-11 03:20:02.860987966 +0000 UTC m=+0.843663811 container remove 9689ca1a950f93158f10817777d9cdce0cbeb240f25c1eea31b193f23f0dbff3 (image=quay.io/ceph/ceph:v18, name=busy_mclaren, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 10 23:20:02 np0005480824 ceph-mgr[74617]: [progress INFO root] Writing back 2 completed events
Oct 10 23:20:02 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 10 23:20:02 np0005480824 systemd[1]: libpod-conmon-9689ca1a950f93158f10817777d9cdce0cbeb240f25c1eea31b193f23f0dbff3.scope: Deactivated successfully.
Oct 10 23:20:02 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:03 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-rhfxom[82864]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct 10 23:20:03 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-rhfxom[82864]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct 10 23:20:03 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-rhfxom[82864]:  from numpy import show_config as show_numpy_config
Oct 10 23:20:03 np0005480824 ceph-mgr[82868]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 10 23:20:03 np0005480824 ceph-mgr[82868]: mgr[py] Loading python module 'influx'
Oct 10 23:20:03 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-rhfxom[82864]: 2025-10-11T03:20:03.274+0000 7f10a812a140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 10 23:20:03 np0005480824 systemd[1]: Stopping Ceph mgr.compute-0.rhfxom for 92cfe4d4-4917-5be1-9d00-73758793a62b...
Oct 10 23:20:03 np0005480824 podman[84313]: 2025-10-11 03:20:03.564803848 +0000 UTC m=+0.088072862 container died 324816bc16369285e58a9e405e933457bbfb1bfaf0125468bcdf105533c62b06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-rhfxom, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:20:03 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:20:03 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:03 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:03 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:03 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:03 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:20:03 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:03 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:03 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:03 np0005480824 systemd[1]: var-lib-containers-storage-overlay-dfd5306c498a43c1f03a3ca81128a669b976eaf696e992aad42c3bd06b1f1942-merged.mount: Deactivated successfully.
Oct 10 23:20:03 np0005480824 podman[84313]: 2025-10-11 03:20:03.613106181 +0000 UTC m=+0.136375195 container remove 324816bc16369285e58a9e405e933457bbfb1bfaf0125468bcdf105533c62b06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-rhfxom, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:20:03 np0005480824 bash[84313]: ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-rhfxom
Oct 10 23:20:03 np0005480824 systemd[1]: ceph-92cfe4d4-4917-5be1-9d00-73758793a62b@mgr.compute-0.rhfxom.service: Main process exited, code=exited, status=143/n/a
Oct 10 23:20:03 np0005480824 systemd[1]: ceph-92cfe4d4-4917-5be1-9d00-73758793a62b@mgr.compute-0.rhfxom.service: Failed with result 'exit-code'.
Oct 10 23:20:03 np0005480824 systemd[1]: Stopped Ceph mgr.compute-0.rhfxom for 92cfe4d4-4917-5be1-9d00-73758793a62b.
Oct 10 23:20:03 np0005480824 systemd[1]: ceph-92cfe4d4-4917-5be1-9d00-73758793a62b@mgr.compute-0.rhfxom.service: Consumed 6.628s CPU time.
Oct 10 23:20:03 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 23:20:03 np0005480824 systemd[1]: Reloading.
Oct 10 23:20:03 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:20:03 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:20:04 np0005480824 ceph-mgr[74617]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.compute-0.rhfxom
Oct 10 23:20:04 np0005480824 ceph-mgr[74617]: log_channel(cephadm) log [INF] : Removing key for mgr.compute-0.rhfxom
Oct 10 23:20:04 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.compute-0.rhfxom"} v 0) v1
Oct 10 23:20:04 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.rhfxom"}]: dispatch
Oct 10 23:20:04 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.rhfxom"}]': finished
Oct 10 23:20:04 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct 10 23:20:04 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:04 np0005480824 ceph-mgr[74617]: [progress INFO root] complete: finished ev 5075284b-414a-4176-874e-3fd1eaf4e88f (Updating mgr deployment (-1 -> 1))
Oct 10 23:20:04 np0005480824 ceph-mgr[74617]: [progress INFO root] Completed event 5075284b-414a-4176-874e-3fd1eaf4e88f (Updating mgr deployment (-1 -> 1)) in 1 seconds
Oct 10 23:20:04 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct 10 23:20:04 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:04 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev b5b1ac17-026c-4f85-9f06-a0ab24003857 does not exist
Oct 10 23:20:04 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 10 23:20:04 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 23:20:04 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 10 23:20:04 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:20:04 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:20:04 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:20:04 np0005480824 ceph-mon[74326]: Removing daemon mgr.compute-0.rhfxom from compute-0 -- ports [8765]
Oct 10 23:20:04 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.rhfxom"}]: dispatch
Oct 10 23:20:04 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.rhfxom"}]': finished
Oct 10 23:20:04 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:04 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:04 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:20:04 np0005480824 podman[84546]: 2025-10-11 03:20:04.748752933 +0000 UTC m=+0.054326149 container create bb32300b2c31f565eea50c478b64bd7b44885a2a3378ef9da470702b7fb423f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 10 23:20:04 np0005480824 systemd[1]: Started libpod-conmon-bb32300b2c31f565eea50c478b64bd7b44885a2a3378ef9da470702b7fb423f4.scope.
Oct 10 23:20:04 np0005480824 podman[84546]: 2025-10-11 03:20:04.721742803 +0000 UTC m=+0.027316069 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:20:04 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:20:04 np0005480824 podman[84546]: 2025-10-11 03:20:04.851471517 +0000 UTC m=+0.157044803 container init bb32300b2c31f565eea50c478b64bd7b44885a2a3378ef9da470702b7fb423f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:20:04 np0005480824 podman[84546]: 2025-10-11 03:20:04.86362674 +0000 UTC m=+0.169199956 container start bb32300b2c31f565eea50c478b64bd7b44885a2a3378ef9da470702b7fb423f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_villani, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 10 23:20:04 np0005480824 podman[84546]: 2025-10-11 03:20:04.868047016 +0000 UTC m=+0.173620282 container attach bb32300b2c31f565eea50c478b64bd7b44885a2a3378ef9da470702b7fb423f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_villani, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 10 23:20:04 np0005480824 distracted_villani[84562]: 167 167
Oct 10 23:20:04 np0005480824 systemd[1]: libpod-bb32300b2c31f565eea50c478b64bd7b44885a2a3378ef9da470702b7fb423f4.scope: Deactivated successfully.
Oct 10 23:20:04 np0005480824 podman[84546]: 2025-10-11 03:20:04.87275819 +0000 UTC m=+0.178331406 container died bb32300b2c31f565eea50c478b64bd7b44885a2a3378ef9da470702b7fb423f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_villani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 10 23:20:04 np0005480824 systemd[1]: var-lib-containers-storage-overlay-fac29fe57c361839764a20afa2d2e07bb7f7157fd71a6d8b6f514e7b48243471-merged.mount: Deactivated successfully.
Oct 10 23:20:04 np0005480824 podman[84546]: 2025-10-11 03:20:04.930426199 +0000 UTC m=+0.235999415 container remove bb32300b2c31f565eea50c478b64bd7b44885a2a3378ef9da470702b7fb423f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 10 23:20:04 np0005480824 systemd[1]: libpod-conmon-bb32300b2c31f565eea50c478b64bd7b44885a2a3378ef9da470702b7fb423f4.scope: Deactivated successfully.
Oct 10 23:20:05 np0005480824 podman[84585]: 2025-10-11 03:20:05.181758032 +0000 UTC m=+0.072915337 container create 605146dc17f1d990c02b964f14cccd7ec973fcb91ba86cd5a45063ca5e75d2e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 10 23:20:05 np0005480824 systemd[1]: Started libpod-conmon-605146dc17f1d990c02b964f14cccd7ec973fcb91ba86cd5a45063ca5e75d2e8.scope.
Oct 10 23:20:05 np0005480824 podman[84585]: 2025-10-11 03:20:05.152362494 +0000 UTC m=+0.043519859 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:20:05 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:20:05 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/170b103b08b86a644ccfdfff705a70f930b47e814d91142626dc0b08c617a4c8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:05 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/170b103b08b86a644ccfdfff705a70f930b47e814d91142626dc0b08c617a4c8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:05 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/170b103b08b86a644ccfdfff705a70f930b47e814d91142626dc0b08c617a4c8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:05 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/170b103b08b86a644ccfdfff705a70f930b47e814d91142626dc0b08c617a4c8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:05 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/170b103b08b86a644ccfdfff705a70f930b47e814d91142626dc0b08c617a4c8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:05 np0005480824 podman[84585]: 2025-10-11 03:20:05.312276806 +0000 UTC m=+0.203434161 container init 605146dc17f1d990c02b964f14cccd7ec973fcb91ba86cd5a45063ca5e75d2e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_goldberg, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:20:05 np0005480824 podman[84585]: 2025-10-11 03:20:05.323005994 +0000 UTC m=+0.214163299 container start 605146dc17f1d990c02b964f14cccd7ec973fcb91ba86cd5a45063ca5e75d2e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_goldberg, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:20:05 np0005480824 podman[84585]: 2025-10-11 03:20:05.32777815 +0000 UTC m=+0.218935525 container attach 605146dc17f1d990c02b964f14cccd7ec973fcb91ba86cd5a45063ca5e75d2e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_goldberg, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 10 23:20:05 np0005480824 ceph-mon[74326]: Removing key for mgr.compute-0.rhfxom
Oct 10 23:20:05 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 23:20:06 np0005480824 mystifying_goldberg[84602]: --> passed data devices: 0 physical, 3 LVM
Oct 10 23:20:06 np0005480824 mystifying_goldberg[84602]: --> relative data size: 1.0
Oct 10 23:20:06 np0005480824 mystifying_goldberg[84602]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 10 23:20:06 np0005480824 mystifying_goldberg[84602]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 1d0d82ce-20ea-470d-959e-f67202028a60
Oct 10 23:20:07 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "1d0d82ce-20ea-470d-959e-f67202028a60"} v 0) v1
Oct 10 23:20:07 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2556293264' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1d0d82ce-20ea-470d-959e-f67202028a60"}]: dispatch
Oct 10 23:20:07 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Oct 10 23:20:07 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 10 23:20:07 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2556293264' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "1d0d82ce-20ea-470d-959e-f67202028a60"}]': finished
Oct 10 23:20:07 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Oct 10 23:20:07 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Oct 10 23:20:07 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 10 23:20:07 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 10 23:20:07 np0005480824 ceph-mgr[74617]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 10 23:20:07 np0005480824 mystifying_goldberg[84602]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 10 23:20:07 np0005480824 lvm[84663]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 23:20:07 np0005480824 lvm[84663]: VG ceph_vg0 finished
Oct 10 23:20:07 np0005480824 mystifying_goldberg[84602]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Oct 10 23:20:07 np0005480824 mystifying_goldberg[84602]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Oct 10 23:20:07 np0005480824 mystifying_goldberg[84602]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 10 23:20:07 np0005480824 mystifying_goldberg[84602]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct 10 23:20:07 np0005480824 mystifying_goldberg[84602]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Oct 10 23:20:07 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Oct 10 23:20:07 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/248132180' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 10 23:20:07 np0005480824 mystifying_goldberg[84602]: stderr: got monmap epoch 1
Oct 10 23:20:07 np0005480824 mystifying_goldberg[84602]: --> Creating keyring file for osd.0
Oct 10 23:20:07 np0005480824 mystifying_goldberg[84602]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Oct 10 23:20:07 np0005480824 mystifying_goldberg[84602]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Oct 10 23:20:07 np0005480824 mystifying_goldberg[84602]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 1d0d82ce-20ea-470d-959e-f67202028a60 --setuser ceph --setgroup ceph
Oct 10 23:20:07 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 23:20:07 np0005480824 ceph-mon[74326]: from='client.? 192.168.122.100:0/2556293264' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1d0d82ce-20ea-470d-959e-f67202028a60"}]: dispatch
Oct 10 23:20:07 np0005480824 ceph-mon[74326]: from='client.? 192.168.122.100:0/2556293264' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "1d0d82ce-20ea-470d-959e-f67202028a60"}]': finished
Oct 10 23:20:07 np0005480824 ceph-mgr[74617]: [progress INFO root] Writing back 3 completed events
Oct 10 23:20:07 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 10 23:20:07 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:08 np0005480824 ceph-mon[74326]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Oct 10 23:20:08 np0005480824 ceph-mon[74326]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct 10 23:20:08 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e4 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:20:08 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:08 np0005480824 ceph-mon[74326]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Oct 10 23:20:08 np0005480824 ceph-mon[74326]: Cluster is now healthy
Oct 10 23:20:09 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 23:20:11 np0005480824 mystifying_goldberg[84602]: stderr: 2025-10-11T03:20:07.842+0000 7f4d26510740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 10 23:20:11 np0005480824 mystifying_goldberg[84602]: stderr: 2025-10-11T03:20:07.842+0000 7f4d26510740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 10 23:20:11 np0005480824 mystifying_goldberg[84602]: stderr: 2025-10-11T03:20:07.842+0000 7f4d26510740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 10 23:20:11 np0005480824 mystifying_goldberg[84602]: stderr: 2025-10-11T03:20:07.842+0000 7f4d26510740 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Oct 10 23:20:11 np0005480824 mystifying_goldberg[84602]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Oct 10 23:20:11 np0005480824 mystifying_goldberg[84602]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 10 23:20:11 np0005480824 mystifying_goldberg[84602]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Oct 10 23:20:11 np0005480824 mystifying_goldberg[84602]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct 10 23:20:11 np0005480824 mystifying_goldberg[84602]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Oct 10 23:20:11 np0005480824 mystifying_goldberg[84602]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 10 23:20:11 np0005480824 mystifying_goldberg[84602]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 10 23:20:11 np0005480824 mystifying_goldberg[84602]: --> ceph-volume lvm activate successful for osd ID: 0
Oct 10 23:20:11 np0005480824 mystifying_goldberg[84602]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Oct 10 23:20:11 np0005480824 mystifying_goldberg[84602]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 10 23:20:11 np0005480824 mystifying_goldberg[84602]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 6875119e-c210-4ad1-aca9-6a8084a5ecc8
Oct 10 23:20:11 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8"} v 0) v1
Oct 10 23:20:11 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3723239274' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8"}]: dispatch
Oct 10 23:20:11 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Oct 10 23:20:11 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 10 23:20:11 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3723239274' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8"}]': finished
Oct 10 23:20:11 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Oct 10 23:20:11 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Oct 10 23:20:11 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 10 23:20:11 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 10 23:20:11 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 10 23:20:11 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 10 23:20:11 np0005480824 ceph-mgr[74617]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 10 23:20:11 np0005480824 ceph-mgr[74617]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 10 23:20:11 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 23:20:11 np0005480824 mystifying_goldberg[84602]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 10 23:20:11 np0005480824 lvm[85614]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct 10 23:20:11 np0005480824 lvm[85614]: VG ceph_vg1 finished
Oct 10 23:20:11 np0005480824 mystifying_goldberg[84602]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Oct 10 23:20:11 np0005480824 mystifying_goldberg[84602]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Oct 10 23:20:11 np0005480824 mystifying_goldberg[84602]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Oct 10 23:20:11 np0005480824 mystifying_goldberg[84602]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Oct 10 23:20:11 np0005480824 mystifying_goldberg[84602]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Oct 10 23:20:12 np0005480824 ceph-mon[74326]: from='client.? 192.168.122.100:0/3723239274' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8"}]: dispatch
Oct 10 23:20:12 np0005480824 ceph-mon[74326]: from='client.? 192.168.122.100:0/3723239274' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8"}]': finished
Oct 10 23:20:12 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Oct 10 23:20:12 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1526874541' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 10 23:20:12 np0005480824 mystifying_goldberg[84602]: stderr: got monmap epoch 1
Oct 10 23:20:12 np0005480824 mystifying_goldberg[84602]: --> Creating keyring file for osd.1
Oct 10 23:20:12 np0005480824 mystifying_goldberg[84602]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Oct 10 23:20:12 np0005480824 mystifying_goldberg[84602]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Oct 10 23:20:12 np0005480824 mystifying_goldberg[84602]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 6875119e-c210-4ad1-aca9-6a8084a5ecc8 --setuser ceph --setgroup ceph
Oct 10 23:20:13 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:20:13 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 23:20:15 np0005480824 mystifying_goldberg[84602]: stderr: 2025-10-11T03:20:12.462+0000 7fd3af957740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 10 23:20:15 np0005480824 mystifying_goldberg[84602]: stderr: 2025-10-11T03:20:12.462+0000 7fd3af957740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 10 23:20:15 np0005480824 mystifying_goldberg[84602]: stderr: 2025-10-11T03:20:12.462+0000 7fd3af957740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 10 23:20:15 np0005480824 mystifying_goldberg[84602]: stderr: 2025-10-11T03:20:12.462+0000 7fd3af957740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Oct 10 23:20:15 np0005480824 mystifying_goldberg[84602]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Oct 10 23:20:15 np0005480824 mystifying_goldberg[84602]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 10 23:20:15 np0005480824 mystifying_goldberg[84602]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Oct 10 23:20:15 np0005480824 mystifying_goldberg[84602]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Oct 10 23:20:15 np0005480824 mystifying_goldberg[84602]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Oct 10 23:20:15 np0005480824 mystifying_goldberg[84602]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Oct 10 23:20:15 np0005480824 mystifying_goldberg[84602]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 10 23:20:15 np0005480824 mystifying_goldberg[84602]: --> ceph-volume lvm activate successful for osd ID: 1
Oct 10 23:20:15 np0005480824 mystifying_goldberg[84602]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Oct 10 23:20:15 np0005480824 mystifying_goldberg[84602]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 10 23:20:15 np0005480824 mystifying_goldberg[84602]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new e86945e8-6909-4584-9098-cee0dfe9add4
Oct 10 23:20:15 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "e86945e8-6909-4584-9098-cee0dfe9add4"} v 0) v1
Oct 10 23:20:15 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1173820336' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "e86945e8-6909-4584-9098-cee0dfe9add4"}]: dispatch
Oct 10 23:20:15 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Oct 10 23:20:15 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 10 23:20:15 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1173820336' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "e86945e8-6909-4584-9098-cee0dfe9add4"}]': finished
Oct 10 23:20:15 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e6 e6: 3 total, 0 up, 3 in
Oct 10 23:20:15 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e6: 3 total, 0 up, 3 in
Oct 10 23:20:15 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 10 23:20:15 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 10 23:20:15 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 10 23:20:15 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 10 23:20:15 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 10 23:20:15 np0005480824 ceph-mgr[74617]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 10 23:20:15 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 23:20:15 np0005480824 ceph-mgr[74617]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 10 23:20:15 np0005480824 ceph-mgr[74617]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 10 23:20:15 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 23:20:15 np0005480824 lvm[86570]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct 10 23:20:15 np0005480824 lvm[86570]: VG ceph_vg2 finished
Oct 10 23:20:15 np0005480824 mystifying_goldberg[84602]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 10 23:20:15 np0005480824 mystifying_goldberg[84602]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Oct 10 23:20:15 np0005480824 mystifying_goldberg[84602]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg2/ceph_lv2
Oct 10 23:20:15 np0005480824 mystifying_goldberg[84602]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Oct 10 23:20:15 np0005480824 mystifying_goldberg[84602]: Running command: /usr/bin/ln -s /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Oct 10 23:20:15 np0005480824 mystifying_goldberg[84602]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Oct 10 23:20:16 np0005480824 ceph-mon[74326]: from='client.? 192.168.122.100:0/1173820336' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "e86945e8-6909-4584-9098-cee0dfe9add4"}]: dispatch
Oct 10 23:20:16 np0005480824 ceph-mon[74326]: from='client.? 192.168.122.100:0/1173820336' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "e86945e8-6909-4584-9098-cee0dfe9add4"}]': finished
Oct 10 23:20:16 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Oct 10 23:20:16 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2778222643' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 10 23:20:16 np0005480824 mystifying_goldberg[84602]: stderr: got monmap epoch 1
Oct 10 23:20:16 np0005480824 mystifying_goldberg[84602]: --> Creating keyring file for osd.2
Oct 10 23:20:16 np0005480824 mystifying_goldberg[84602]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Oct 10 23:20:16 np0005480824 mystifying_goldberg[84602]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Oct 10 23:20:16 np0005480824 mystifying_goldberg[84602]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid e86945e8-6909-4584-9098-cee0dfe9add4 --setuser ceph --setgroup ceph
Oct 10 23:20:17 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 23:20:18 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:20:18 np0005480824 mystifying_goldberg[84602]: stderr: 2025-10-11T03:20:16.416+0000 7f9fc2795740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 10 23:20:18 np0005480824 mystifying_goldberg[84602]: stderr: 2025-10-11T03:20:16.416+0000 7f9fc2795740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 10 23:20:18 np0005480824 mystifying_goldberg[84602]: stderr: 2025-10-11T03:20:16.416+0000 7f9fc2795740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 10 23:20:18 np0005480824 mystifying_goldberg[84602]: stderr: 2025-10-11T03:20:16.416+0000 7f9fc2795740 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Oct 10 23:20:18 np0005480824 mystifying_goldberg[84602]: --> ceph-volume lvm prepare successful for: ceph_vg2/ceph_lv2
Oct 10 23:20:18 np0005480824 mystifying_goldberg[84602]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct 10 23:20:18 np0005480824 mystifying_goldberg[84602]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Oct 10 23:20:18 np0005480824 mystifying_goldberg[84602]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Oct 10 23:20:18 np0005480824 mystifying_goldberg[84602]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Oct 10 23:20:18 np0005480824 mystifying_goldberg[84602]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Oct 10 23:20:18 np0005480824 mystifying_goldberg[84602]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct 10 23:20:19 np0005480824 mystifying_goldberg[84602]: --> ceph-volume lvm activate successful for osd ID: 2
Oct 10 23:20:19 np0005480824 mystifying_goldberg[84602]: --> ceph-volume lvm create successful for: ceph_vg2/ceph_lv2
Oct 10 23:20:19 np0005480824 systemd[1]: libpod-605146dc17f1d990c02b964f14cccd7ec973fcb91ba86cd5a45063ca5e75d2e8.scope: Deactivated successfully.
Oct 10 23:20:19 np0005480824 systemd[1]: libpod-605146dc17f1d990c02b964f14cccd7ec973fcb91ba86cd5a45063ca5e75d2e8.scope: Consumed 6.595s CPU time.
Oct 10 23:20:19 np0005480824 podman[87493]: 2025-10-11 03:20:19.094800371 +0000 UTC m=+0.032567605 container died 605146dc17f1d990c02b964f14cccd7ec973fcb91ba86cd5a45063ca5e75d2e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 10 23:20:19 np0005480824 systemd[1]: var-lib-containers-storage-overlay-170b103b08b86a644ccfdfff705a70f930b47e814d91142626dc0b08c617a4c8-merged.mount: Deactivated successfully.
Oct 10 23:20:19 np0005480824 podman[87493]: 2025-10-11 03:20:19.337293922 +0000 UTC m=+0.275061096 container remove 605146dc17f1d990c02b964f14cccd7ec973fcb91ba86cd5a45063ca5e75d2e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:20:19 np0005480824 systemd[1]: libpod-conmon-605146dc17f1d990c02b964f14cccd7ec973fcb91ba86cd5a45063ca5e75d2e8.scope: Deactivated successfully.
Oct 10 23:20:19 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 23:20:20 np0005480824 podman[87648]: 2025-10-11 03:20:20.09115912 +0000 UTC m=+0.057754392 container create c4a1e5dedbd936ab233b898c9e549d8f6a537af0c840a15d63b8bf587f9bb120 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_shamir, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:20:20 np0005480824 systemd[1]: Started libpod-conmon-c4a1e5dedbd936ab233b898c9e549d8f6a537af0c840a15d63b8bf587f9bb120.scope.
Oct 10 23:20:20 np0005480824 podman[87648]: 2025-10-11 03:20:20.061323091 +0000 UTC m=+0.027918393 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:20:20 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:20:20 np0005480824 podman[87648]: 2025-10-11 03:20:20.364380231 +0000 UTC m=+0.330975593 container init c4a1e5dedbd936ab233b898c9e549d8f6a537af0c840a15d63b8bf587f9bb120 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_shamir, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:20:20 np0005480824 podman[87648]: 2025-10-11 03:20:20.377120698 +0000 UTC m=+0.343715960 container start c4a1e5dedbd936ab233b898c9e549d8f6a537af0c840a15d63b8bf587f9bb120 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_shamir, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:20:20 np0005480824 objective_shamir[87664]: 167 167
Oct 10 23:20:20 np0005480824 systemd[1]: libpod-c4a1e5dedbd936ab233b898c9e549d8f6a537af0c840a15d63b8bf587f9bb120.scope: Deactivated successfully.
Oct 10 23:20:20 np0005480824 podman[87648]: 2025-10-11 03:20:20.554869138 +0000 UTC m=+0.521464500 container attach c4a1e5dedbd936ab233b898c9e549d8f6a537af0c840a15d63b8bf587f9bb120 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 10 23:20:20 np0005480824 podman[87648]: 2025-10-11 03:20:20.556285332 +0000 UTC m=+0.522880644 container died c4a1e5dedbd936ab233b898c9e549d8f6a537af0c840a15d63b8bf587f9bb120 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_shamir, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:20:20 np0005480824 systemd[1]: var-lib-containers-storage-overlay-1c4de43a83e38ec9af134aabcd4bef1092aa627812aaec3053b7db6f5e248a74-merged.mount: Deactivated successfully.
Oct 10 23:20:20 np0005480824 podman[87648]: 2025-10-11 03:20:20.673365562 +0000 UTC m=+0.639960844 container remove c4a1e5dedbd936ab233b898c9e549d8f6a537af0c840a15d63b8bf587f9bb120 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_shamir, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:20:20 np0005480824 systemd[1]: libpod-conmon-c4a1e5dedbd936ab233b898c9e549d8f6a537af0c840a15d63b8bf587f9bb120.scope: Deactivated successfully.
Oct 10 23:20:20 np0005480824 podman[87689]: 2025-10-11 03:20:20.917782639 +0000 UTC m=+0.076644357 container create a678de43c447f7d49b5d81fcef3fcfc200815c6809dc9053e0bbb4e3303e566b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_blackwell, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 10 23:20:20 np0005480824 systemd[1]: Started libpod-conmon-a678de43c447f7d49b5d81fcef3fcfc200815c6809dc9053e0bbb4e3303e566b.scope.
Oct 10 23:20:20 np0005480824 podman[87689]: 2025-10-11 03:20:20.888671687 +0000 UTC m=+0.047533455 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:20:21 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:20:21 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cdf07bf000595a1e540ef7710df8624a68bbc6ad382097308c327dde7597439/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:21 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cdf07bf000595a1e540ef7710df8624a68bbc6ad382097308c327dde7597439/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:21 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cdf07bf000595a1e540ef7710df8624a68bbc6ad382097308c327dde7597439/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:21 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cdf07bf000595a1e540ef7710df8624a68bbc6ad382097308c327dde7597439/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:21 np0005480824 podman[87689]: 2025-10-11 03:20:21.042454632 +0000 UTC m=+0.201316380 container init a678de43c447f7d49b5d81fcef3fcfc200815c6809dc9053e0bbb4e3303e566b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Oct 10 23:20:21 np0005480824 podman[87689]: 2025-10-11 03:20:21.055407724 +0000 UTC m=+0.214269402 container start a678de43c447f7d49b5d81fcef3fcfc200815c6809dc9053e0bbb4e3303e566b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_blackwell, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 10 23:20:21 np0005480824 podman[87689]: 2025-10-11 03:20:21.060346073 +0000 UTC m=+0.219207791 container attach a678de43c447f7d49b5d81fcef3fcfc200815c6809dc9053e0bbb4e3303e566b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:20:21 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]: {
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:    "0": [
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:        {
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:            "devices": [
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:                "/dev/loop3"
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:            ],
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:            "lv_name": "ceph_lv0",
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:            "lv_size": "21470642176",
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0d82ce-20ea-470d-959e-f67202028a60,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:            "lv_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:            "name": "ceph_lv0",
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:            "tags": {
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:                "ceph.block_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:                "ceph.cluster_name": "ceph",
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:                "ceph.crush_device_class": "",
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:                "ceph.encrypted": "0",
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:                "ceph.osd_fsid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:                "ceph.osd_id": "0",
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:                "ceph.type": "block",
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:                "ceph.vdo": "0"
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:            },
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:            "type": "block",
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:            "vg_name": "ceph_vg0"
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:        }
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:    ],
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:    "1": [
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:        {
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:            "devices": [
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:                "/dev/loop4"
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:            ],
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:            "lv_name": "ceph_lv1",
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:            "lv_size": "21470642176",
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6875119e-c210-4ad1-aca9-6a8084a5ecc8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:            "lv_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:            "name": "ceph_lv1",
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:            "tags": {
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:                "ceph.block_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:                "ceph.cluster_name": "ceph",
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:                "ceph.crush_device_class": "",
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:                "ceph.encrypted": "0",
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:                "ceph.osd_fsid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:                "ceph.osd_id": "1",
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:                "ceph.type": "block",
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:                "ceph.vdo": "0"
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:            },
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:            "type": "block",
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:            "vg_name": "ceph_vg1"
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:        }
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:    ],
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:    "2": [
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:        {
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:            "devices": [
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:                "/dev/loop5"
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:            ],
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:            "lv_name": "ceph_lv2",
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:            "lv_size": "21470642176",
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e86945e8-6909-4584-9098-cee0dfe9add4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:            "lv_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:            "name": "ceph_lv2",
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:            "tags": {
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:                "ceph.block_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:                "ceph.cluster_name": "ceph",
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:                "ceph.crush_device_class": "",
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:                "ceph.encrypted": "0",
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:                "ceph.osd_fsid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:                "ceph.osd_id": "2",
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:                "ceph.type": "block",
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:                "ceph.vdo": "0"
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:            },
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:            "type": "block",
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:            "vg_name": "ceph_vg2"
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:        }
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]:    ]
Oct 10 23:20:21 np0005480824 bold_blackwell[87705]: }
Oct 10 23:20:21 np0005480824 systemd[1]: libpod-a678de43c447f7d49b5d81fcef3fcfc200815c6809dc9053e0bbb4e3303e566b.scope: Deactivated successfully.
Oct 10 23:20:21 np0005480824 podman[87689]: 2025-10-11 03:20:21.906181285 +0000 UTC m=+1.065042963 container died a678de43c447f7d49b5d81fcef3fcfc200815c6809dc9053e0bbb4e3303e566b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_blackwell, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 10 23:20:21 np0005480824 systemd[1]: var-lib-containers-storage-overlay-7cdf07bf000595a1e540ef7710df8624a68bbc6ad382097308c327dde7597439-merged.mount: Deactivated successfully.
Oct 10 23:20:21 np0005480824 podman[87689]: 2025-10-11 03:20:21.97114514 +0000 UTC m=+1.130006848 container remove a678de43c447f7d49b5d81fcef3fcfc200815c6809dc9053e0bbb4e3303e566b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_blackwell, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 10 23:20:21 np0005480824 systemd[1]: libpod-conmon-a678de43c447f7d49b5d81fcef3fcfc200815c6809dc9053e0bbb4e3303e566b.scope: Deactivated successfully.
Oct 10 23:20:22 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) v1
Oct 10 23:20:22 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct 10 23:20:22 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:20:22 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:20:22 np0005480824 ceph-mgr[74617]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Oct 10 23:20:22 np0005480824 ceph-mgr[74617]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Oct 10 23:20:22 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct 10 23:20:22 np0005480824 podman[87870]: 2025-10-11 03:20:22.820304322 +0000 UTC m=+0.025531896 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:20:22 np0005480824 podman[87870]: 2025-10-11 03:20:22.931694415 +0000 UTC m=+0.136922019 container create 7bf6aed22bba3e3a3bece41b361fd4a50e6ab10675b577f3626763a1352456f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 10 23:20:23 np0005480824 systemd[1]: Started libpod-conmon-7bf6aed22bba3e3a3bece41b361fd4a50e6ab10675b577f3626763a1352456f2.scope.
Oct 10 23:20:23 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:20:23 np0005480824 podman[87870]: 2025-10-11 03:20:23.173208221 +0000 UTC m=+0.378435885 container init 7bf6aed22bba3e3a3bece41b361fd4a50e6ab10675b577f3626763a1352456f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 10 23:20:23 np0005480824 podman[87870]: 2025-10-11 03:20:23.189206167 +0000 UTC m=+0.394433731 container start 7bf6aed22bba3e3a3bece41b361fd4a50e6ab10675b577f3626763a1352456f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_heisenberg, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 10 23:20:23 np0005480824 frosty_heisenberg[87887]: 167 167
Oct 10 23:20:23 np0005480824 systemd[1]: libpod-7bf6aed22bba3e3a3bece41b361fd4a50e6ab10675b577f3626763a1352456f2.scope: Deactivated successfully.
Oct 10 23:20:23 np0005480824 podman[87870]: 2025-10-11 03:20:23.199885054 +0000 UTC m=+0.405112658 container attach 7bf6aed22bba3e3a3bece41b361fd4a50e6ab10675b577f3626763a1352456f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 10 23:20:23 np0005480824 conmon[87887]: conmon 7bf6aed22bba3e3a3bec <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7bf6aed22bba3e3a3bece41b361fd4a50e6ab10675b577f3626763a1352456f2.scope/container/memory.events
Oct 10 23:20:23 np0005480824 podman[87870]: 2025-10-11 03:20:23.201083163 +0000 UTC m=+0.406310737 container died 7bf6aed22bba3e3a3bece41b361fd4a50e6ab10675b577f3626763a1352456f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 10 23:20:23 np0005480824 systemd[1]: var-lib-containers-storage-overlay-a91903ed587b0426191c75f554ad3b171e269ba2167034beb74421e6048ac342-merged.mount: Deactivated successfully.
Oct 10 23:20:23 np0005480824 podman[87870]: 2025-10-11 03:20:23.424628217 +0000 UTC m=+0.629855841 container remove 7bf6aed22bba3e3a3bece41b361fd4a50e6ab10675b577f3626763a1352456f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_heisenberg, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 10 23:20:23 np0005480824 systemd[1]: libpod-conmon-7bf6aed22bba3e3a3bece41b361fd4a50e6ab10675b577f3626763a1352456f2.scope: Deactivated successfully.
Oct 10 23:20:23 np0005480824 ceph-mon[74326]: Deploying daemon osd.0 on compute-0
Oct 10 23:20:23 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:20:23 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 23:20:23 np0005480824 podman[87919]: 2025-10-11 03:20:23.802266472 +0000 UTC m=+0.061282996 container create 7938242999a0f04cda40d3ddbbe29f73b2f7e46fbc41ef84ef947bee7a432f60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-0-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:20:23 np0005480824 systemd[1]: Started libpod-conmon-7938242999a0f04cda40d3ddbbe29f73b2f7e46fbc41ef84ef947bee7a432f60.scope.
Oct 10 23:20:23 np0005480824 podman[87919]: 2025-10-11 03:20:23.7759944 +0000 UTC m=+0.035010924 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:20:23 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:20:23 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f670135cfe97c33be45ac8ed8b8721b5ce1ec49cfc33bd2bea2aef45a0d5829/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:23 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f670135cfe97c33be45ac8ed8b8721b5ce1ec49cfc33bd2bea2aef45a0d5829/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:23 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f670135cfe97c33be45ac8ed8b8721b5ce1ec49cfc33bd2bea2aef45a0d5829/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:23 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f670135cfe97c33be45ac8ed8b8721b5ce1ec49cfc33bd2bea2aef45a0d5829/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:23 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f670135cfe97c33be45ac8ed8b8721b5ce1ec49cfc33bd2bea2aef45a0d5829/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:23 np0005480824 podman[87919]: 2025-10-11 03:20:23.934201971 +0000 UTC m=+0.193218505 container init 7938242999a0f04cda40d3ddbbe29f73b2f7e46fbc41ef84ef947bee7a432f60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-0-activate-test, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:20:23 np0005480824 podman[87919]: 2025-10-11 03:20:23.949271873 +0000 UTC m=+0.208288367 container start 7938242999a0f04cda40d3ddbbe29f73b2f7e46fbc41ef84ef947bee7a432f60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-0-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:20:23 np0005480824 podman[87919]: 2025-10-11 03:20:23.962555653 +0000 UTC m=+0.221572147 container attach 7938242999a0f04cda40d3ddbbe29f73b2f7e46fbc41ef84ef947bee7a432f60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-0-activate-test, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:20:24 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-0-activate-test[87936]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Oct 10 23:20:24 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-0-activate-test[87936]:                            [--no-systemd] [--no-tmpfs]
Oct 10 23:20:24 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-0-activate-test[87936]: ceph-volume activate: error: unrecognized arguments: --bad-option
Oct 10 23:20:24 np0005480824 systemd[1]: libpod-7938242999a0f04cda40d3ddbbe29f73b2f7e46fbc41ef84ef947bee7a432f60.scope: Deactivated successfully.
Oct 10 23:20:24 np0005480824 podman[87919]: 2025-10-11 03:20:24.640172194 +0000 UTC m=+0.899188778 container died 7938242999a0f04cda40d3ddbbe29f73b2f7e46fbc41ef84ef947bee7a432f60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-0-activate-test, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 10 23:20:24 np0005480824 systemd[1]: var-lib-containers-storage-overlay-9f670135cfe97c33be45ac8ed8b8721b5ce1ec49cfc33bd2bea2aef45a0d5829-merged.mount: Deactivated successfully.
Oct 10 23:20:25 np0005480824 podman[87919]: 2025-10-11 03:20:25.409769432 +0000 UTC m=+1.668785946 container remove 7938242999a0f04cda40d3ddbbe29f73b2f7e46fbc41ef84ef947bee7a432f60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-0-activate-test, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 10 23:20:25 np0005480824 systemd[1]: libpod-conmon-7938242999a0f04cda40d3ddbbe29f73b2f7e46fbc41ef84ef947bee7a432f60.scope: Deactivated successfully.
Oct 10 23:20:25 np0005480824 systemd[1]: Reloading.
Oct 10 23:20:25 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 23:20:25 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:20:25 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:20:26 np0005480824 systemd[1]: Reloading.
Oct 10 23:20:26 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:20:26 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:20:26 np0005480824 systemd[1]: Starting Ceph osd.0 for 92cfe4d4-4917-5be1-9d00-73758793a62b...
Oct 10 23:20:26 np0005480824 podman[88099]: 2025-10-11 03:20:26.6259156 +0000 UTC m=+0.082965339 container create ce9a4cdb549592501519de6bd9ed67a7a3f9c1857010af7a4c9965bb201f56d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-0-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 10 23:20:26 np0005480824 podman[88099]: 2025-10-11 03:20:26.588272253 +0000 UTC m=+0.045322082 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:20:26 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:20:26 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ca06de8688c886957d4319db51f100ae6907dbc758a89571b1b9c356664bd97/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:26 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ca06de8688c886957d4319db51f100ae6907dbc758a89571b1b9c356664bd97/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:26 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ca06de8688c886957d4319db51f100ae6907dbc758a89571b1b9c356664bd97/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:26 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ca06de8688c886957d4319db51f100ae6907dbc758a89571b1b9c356664bd97/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:26 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ca06de8688c886957d4319db51f100ae6907dbc758a89571b1b9c356664bd97/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:26 np0005480824 podman[88099]: 2025-10-11 03:20:26.750419449 +0000 UTC m=+0.207469258 container init ce9a4cdb549592501519de6bd9ed67a7a3f9c1857010af7a4c9965bb201f56d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-0-activate, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:20:26 np0005480824 podman[88099]: 2025-10-11 03:20:26.76583149 +0000 UTC m=+0.222881269 container start ce9a4cdb549592501519de6bd9ed67a7a3f9c1857010af7a4c9965bb201f56d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-0-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:20:26 np0005480824 podman[88099]: 2025-10-11 03:20:26.774502939 +0000 UTC m=+0.231552718 container attach ce9a4cdb549592501519de6bd9ed67a7a3f9c1857010af7a4c9965bb201f56d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-0-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:20:27 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 23:20:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Optimize plan auto_2025-10-11_03:20:27
Oct 10 23:20:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 23:20:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] do_upmap
Oct 10 23:20:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] No pools available
Oct 10 23:20:27 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 23:20:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 23:20:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:20:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:20:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 23:20:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:20:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:20:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:20:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:20:27 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-0-activate[88116]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 10 23:20:27 np0005480824 bash[88099]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 10 23:20:27 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-0-activate[88116]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Oct 10 23:20:27 np0005480824 bash[88099]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Oct 10 23:20:27 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-0-activate[88116]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Oct 10 23:20:27 np0005480824 bash[88099]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Oct 10 23:20:27 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-0-activate[88116]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 10 23:20:27 np0005480824 bash[88099]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 10 23:20:27 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-0-activate[88116]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct 10 23:20:27 np0005480824 bash[88099]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct 10 23:20:27 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-0-activate[88116]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 10 23:20:27 np0005480824 bash[88099]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 10 23:20:27 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-0-activate[88116]: --> ceph-volume raw activate successful for osd ID: 0
Oct 10 23:20:27 np0005480824 bash[88099]: --> ceph-volume raw activate successful for osd ID: 0
Oct 10 23:20:27 np0005480824 systemd[1]: libpod-ce9a4cdb549592501519de6bd9ed67a7a3f9c1857010af7a4c9965bb201f56d6.scope: Deactivated successfully.
Oct 10 23:20:27 np0005480824 systemd[1]: libpod-ce9a4cdb549592501519de6bd9ed67a7a3f9c1857010af7a4c9965bb201f56d6.scope: Consumed 1.244s CPU time.
Oct 10 23:20:27 np0005480824 podman[88099]: 2025-10-11 03:20:27.992003053 +0000 UTC m=+1.449052832 container died ce9a4cdb549592501519de6bd9ed67a7a3f9c1857010af7a4c9965bb201f56d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-0-activate, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 10 23:20:28 np0005480824 systemd[1]: var-lib-containers-storage-overlay-3ca06de8688c886957d4319db51f100ae6907dbc758a89571b1b9c356664bd97-merged.mount: Deactivated successfully.
Oct 10 23:20:28 np0005480824 podman[88099]: 2025-10-11 03:20:28.072886811 +0000 UTC m=+1.529936570 container remove ce9a4cdb549592501519de6bd9ed67a7a3f9c1857010af7a4c9965bb201f56d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-0-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:20:28 np0005480824 podman[88305]: 2025-10-11 03:20:28.298976696 +0000 UTC m=+0.049804801 container create 47f64e87e5871d7c071a121b83455d21f2bfdfce6b6c64a3ceca78155daa9205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 10 23:20:28 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6dad0688fbe1c621afae846f1fc01dab43648014e323f8e599f101f39177a53/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:28 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6dad0688fbe1c621afae846f1fc01dab43648014e323f8e599f101f39177a53/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:28 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6dad0688fbe1c621afae846f1fc01dab43648014e323f8e599f101f39177a53/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:28 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6dad0688fbe1c621afae846f1fc01dab43648014e323f8e599f101f39177a53/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:28 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6dad0688fbe1c621afae846f1fc01dab43648014e323f8e599f101f39177a53/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:28 np0005480824 podman[88305]: 2025-10-11 03:20:28.278936144 +0000 UTC m=+0.029764269 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:20:28 np0005480824 podman[88305]: 2025-10-11 03:20:28.439768568 +0000 UTC m=+0.190596693 container init 47f64e87e5871d7c071a121b83455d21f2bfdfce6b6c64a3ceca78155daa9205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 10 23:20:28 np0005480824 podman[88305]: 2025-10-11 03:20:28.445250269 +0000 UTC m=+0.196078384 container start 47f64e87e5871d7c071a121b83455d21f2bfdfce6b6c64a3ceca78155daa9205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 10 23:20:28 np0005480824 bash[88305]: 47f64e87e5871d7c071a121b83455d21f2bfdfce6b6c64a3ceca78155daa9205
Oct 10 23:20:28 np0005480824 ceph-osd[88325]: set uid:gid to 167:167 (ceph:ceph)
Oct 10 23:20:28 np0005480824 ceph-osd[88325]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Oct 10 23:20:28 np0005480824 ceph-osd[88325]: pidfile_write: ignore empty --pid-file
Oct 10 23:20:28 np0005480824 ceph-osd[88325]: bdev(0x55b254e35800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 10 23:20:28 np0005480824 ceph-osd[88325]: bdev(0x55b254e35800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 10 23:20:28 np0005480824 ceph-osd[88325]: bdev(0x55b254e35800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 10 23:20:28 np0005480824 ceph-osd[88325]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 10 23:20:28 np0005480824 ceph-osd[88325]: bdev(0x55b255c6d800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 10 23:20:28 np0005480824 ceph-osd[88325]: bdev(0x55b255c6d800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 10 23:20:28 np0005480824 ceph-osd[88325]: bdev(0x55b255c6d800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 10 23:20:28 np0005480824 ceph-osd[88325]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Oct 10 23:20:28 np0005480824 ceph-osd[88325]: bdev(0x55b255c6d800 /var/lib/ceph/osd/ceph-0/block) close
Oct 10 23:20:28 np0005480824 systemd[1]: Started Ceph osd.0 for 92cfe4d4-4917-5be1-9d00-73758793a62b.
Oct 10 23:20:28 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 10 23:20:28 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:20:28 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:28 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 10 23:20:28 np0005480824 ceph-osd[88325]: bdev(0x55b254e35800 /var/lib/ceph/osd/ceph-0/block) close
Oct 10 23:20:28 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:28 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) v1
Oct 10 23:20:28 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct 10 23:20:28 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:20:28 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:20:28 np0005480824 ceph-mgr[74617]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Oct 10 23:20:28 np0005480824 ceph-mgr[74617]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: load: jerasure load: lrc 
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: bdev(0x55b255ceec00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: bdev(0x55b255ceec00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: bdev(0x55b255ceec00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: bdev(0x55b255ceec00 /var/lib/ceph/osd/ceph-0/block) close
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: bdev(0x55b255ceec00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: bdev(0x55b255ceec00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: bdev(0x55b255ceec00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: bdev(0x55b255ceec00 /var/lib/ceph/osd/ceph-0/block) close
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Oct 10 23:20:29 np0005480824 podman[88490]: 2025-10-11 03:20:29.557924419 +0000 UTC m=+0.102582152 container create cfc3aeb2aaac2ce03fc136e07d06104e2c5725babae70d4cc99d34c6d1b63725 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_panini, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: bdev(0x55b255ceec00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: bdev(0x55b255ceec00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: bdev(0x55b255ceec00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: bdev(0x55b255cef400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: bdev(0x55b255cef400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: bdev(0x55b255cef400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: bluefs mount
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: bluefs mount shared_bdev_used = 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: RocksDB version: 7.9.2
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Git sha 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: DB SUMMARY
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: DB Session ID:  NPAFA89TSZK9EDEPXXX5
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: CURRENT file:  CURRENT
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: IDENTITY file:  IDENTITY
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                         Options.error_if_exists: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                       Options.create_if_missing: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                         Options.paranoid_checks: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                                     Options.env: 0x55b255cbfc70
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                                Options.info_log: 0x55b254ebc8a0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.max_file_opening_threads: 16
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                              Options.statistics: (nil)
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                               Options.use_fsync: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                       Options.max_log_file_size: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                         Options.allow_fallocate: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                        Options.use_direct_reads: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.create_missing_column_families: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                              Options.db_log_dir: 
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                                 Options.wal_dir: db.wal
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.advise_random_on_open: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                    Options.write_buffer_manager: 0x55b255dc8460
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                            Options.rate_limiter: (nil)
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.unordered_write: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                               Options.row_cache: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                              Options.wal_filter: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.allow_ingest_behind: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.two_write_queues: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.manual_wal_flush: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.wal_compression: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.atomic_flush: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                 Options.log_readahead_size: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.allow_data_in_errors: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.db_host_id: __hostname__
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.max_background_jobs: 4
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.max_background_compactions: -1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.max_subcompactions: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                          Options.max_open_files: -1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                          Options.bytes_per_sync: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.max_background_flushes: -1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Compression algorithms supported:
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: #011kZSTD supported: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: #011kXpressCompression supported: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: #011kBZip2Compression supported: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: #011kLZ4Compression supported: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: #011kZlibCompression supported: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: #011kLZ4HCCompression supported: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: #011kSnappyCompression supported: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.compaction_filter: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b254ebc2c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b254ea91f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.compression: LZ4
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.num_levels: 7
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                           Options.bloom_locality: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                               Options.ttl: 2592000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                       Options.enable_blob_files: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                           Options.min_blob_size: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:           Options.merge_operator: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.compaction_filter: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b254ebc2c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b254ea91f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.compression: LZ4
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.num_levels: 7
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                           Options.bloom_locality: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                               Options.ttl: 2592000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                       Options.enable_blob_files: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                           Options.min_blob_size: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:           Options.merge_operator: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.compaction_filter: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b254ebc2c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b254ea91f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.compression: LZ4
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.num_levels: 7
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 23:20:29 np0005480824 podman[88490]: 2025-10-11 03:20:29.491893498 +0000 UTC m=+0.036551291 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                           Options.bloom_locality: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                               Options.ttl: 2592000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                       Options.enable_blob_files: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                           Options.min_blob_size: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:           Options.merge_operator: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.compaction_filter: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b254ebc2c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b254ea91f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.compression: LZ4
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.num_levels: 7
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                           Options.bloom_locality: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                               Options.ttl: 2592000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                       Options.enable_blob_files: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                           Options.min_blob_size: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:           Options.merge_operator: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.compaction_filter: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b254ebc2c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b254ea91f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.compression: LZ4
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.num_levels: 7
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                           Options.bloom_locality: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                               Options.ttl: 2592000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                       Options.enable_blob_files: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                           Options.min_blob_size: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:           Options.merge_operator: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.compaction_filter: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b254ebc2c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55b254ea91f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.compression: LZ4
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.num_levels: 7
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                           Options.bloom_locality: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                               Options.ttl: 2592000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                       Options.enable_blob_files: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                           Options.min_blob_size: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:           Options.merge_operator: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.compaction_filter: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b254ebc2c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55b254ea91f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.compression: LZ4
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.num_levels: 7
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                           Options.bloom_locality: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                               Options.ttl: 2592000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                       Options.enable_blob_files: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                           Options.min_blob_size: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:           Options.merge_operator: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.compaction_filter: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b254ebc240)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b254ea9090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.compression: LZ4
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.num_levels: 7
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                           Options.bloom_locality: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                               Options.ttl: 2592000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                       Options.enable_blob_files: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                           Options.min_blob_size: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:           Options.merge_operator: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.compaction_filter: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b254ebc240)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b254ea9090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.compression: LZ4
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.num_levels: 7
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                           Options.bloom_locality: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                               Options.ttl: 2592000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                       Options.enable_blob_files: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                           Options.min_blob_size: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:           Options.merge_operator: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.compaction_filter: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b254ebc240)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b254ea9090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.compression: LZ4
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.num_levels: 7
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                           Options.bloom_locality: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                               Options.ttl: 2592000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                       Options.enable_blob_files: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                           Options.min_blob_size: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: d999c8a1-d384-4a2d-9028-2b35b7535629
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760152829600969, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760152829601378, "job": 1, "event": "recovery_finished"}
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: freelist init
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: freelist _read_cfg
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: bluefs umount
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: bdev(0x55b255cef400 /var/lib/ceph/osd/ceph-0/block) close
Oct 10 23:20:29 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:29 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:29 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct 10 23:20:29 np0005480824 systemd[1]: Started libpod-conmon-cfc3aeb2aaac2ce03fc136e07d06104e2c5725babae70d4cc99d34c6d1b63725.scope.
Oct 10 23:20:29 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:20:29 np0005480824 podman[88490]: 2025-10-11 03:20:29.77845601 +0000 UTC m=+0.323113743 container init cfc3aeb2aaac2ce03fc136e07d06104e2c5725babae70d4cc99d34c6d1b63725 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_panini, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 10 23:20:29 np0005480824 podman[88490]: 2025-10-11 03:20:29.787511268 +0000 UTC m=+0.332168991 container start cfc3aeb2aaac2ce03fc136e07d06104e2c5725babae70d4cc99d34c6d1b63725 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_panini, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:20:29 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 23:20:29 np0005480824 interesting_panini[88700]: 167 167
Oct 10 23:20:29 np0005480824 systemd[1]: libpod-cfc3aeb2aaac2ce03fc136e07d06104e2c5725babae70d4cc99d34c6d1b63725.scope: Deactivated successfully.
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: bdev(0x55b255cef400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: bdev(0x55b255cef400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: bdev(0x55b255cef400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: bluefs mount
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: bluefs mount shared_bdev_used = 4718592
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: RocksDB version: 7.9.2
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Git sha 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: DB SUMMARY
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: DB Session ID:  NPAFA89TSZK9EDEPXXX4
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: CURRENT file:  CURRENT
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: IDENTITY file:  IDENTITY
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                         Options.error_if_exists: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                       Options.create_if_missing: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                         Options.paranoid_checks: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                                     Options.env: 0x55b255e703f0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                                Options.info_log: 0x55b254ebc620
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.max_file_opening_threads: 16
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                              Options.statistics: (nil)
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                               Options.use_fsync: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                       Options.max_log_file_size: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                         Options.allow_fallocate: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                        Options.use_direct_reads: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.create_missing_column_families: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                              Options.db_log_dir: 
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                                 Options.wal_dir: db.wal
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.advise_random_on_open: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                    Options.write_buffer_manager: 0x55b255dc8460
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                            Options.rate_limiter: (nil)
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.unordered_write: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                               Options.row_cache: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                              Options.wal_filter: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.allow_ingest_behind: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.two_write_queues: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.manual_wal_flush: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.wal_compression: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.atomic_flush: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                 Options.log_readahead_size: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.allow_data_in_errors: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.db_host_id: __hostname__
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.max_background_jobs: 4
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.max_background_compactions: -1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.max_subcompactions: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                          Options.max_open_files: -1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                          Options.bytes_per_sync: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.max_background_flushes: -1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Compression algorithms supported:
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: #011kZSTD supported: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: #011kXpressCompression supported: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: #011kBZip2Compression supported: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: #011kLZ4Compression supported: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: #011kZlibCompression supported: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: #011kLZ4HCCompression supported: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: #011kSnappyCompression supported: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.compaction_filter: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b254ebca20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b254ea91f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.compression: LZ4
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.num_levels: 7
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                           Options.bloom_locality: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                               Options.ttl: 2592000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                       Options.enable_blob_files: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                           Options.min_blob_size: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:           Options.merge_operator: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.compaction_filter: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b254ebca20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55b254ea91f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.compression: LZ4
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.num_levels: 7
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 23:20:29 np0005480824 podman[88490]: 2025-10-11 03:20:29.838914166 +0000 UTC m=+0.383571889 container attach cfc3aeb2aaac2ce03fc136e07d06104e2c5725babae70d4cc99d34c6d1b63725 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_panini, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:20:29 np0005480824 podman[88490]: 2025-10-11 03:20:29.840115755 +0000 UTC m=+0.384773488 container died cfc3aeb2aaac2ce03fc136e07d06104e2c5725babae70d4cc99d34c6d1b63725 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_panini, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                           Options.bloom_locality: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                               Options.ttl: 2592000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                       Options.enable_blob_files: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                           Options.min_blob_size: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:           Options.merge_operator: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.compaction_filter: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b254ebca20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55b254ea91f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.compression: LZ4
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.num_levels: 7
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                           Options.bloom_locality: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                               Options.ttl: 2592000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                       Options.enable_blob_files: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                           Options.min_blob_size: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:           Options.merge_operator: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.compaction_filter: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b254ebca20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55b254ea91f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.compression: LZ4
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.num_levels: 7
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                           Options.bloom_locality: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                               Options.ttl: 2592000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                       Options.enable_blob_files: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                           Options.min_blob_size: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:           Options.merge_operator: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.compaction_filter: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b254ebca20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b254ea91f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.compression: LZ4
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.num_levels: 7
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                           Options.bloom_locality: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                               Options.ttl: 2592000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                       Options.enable_blob_files: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                           Options.min_blob_size: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:           Options.merge_operator: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.compaction_filter: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b254ebca20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b254ea91f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.compression: LZ4
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.num_levels: 7
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                           Options.bloom_locality: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                               Options.ttl: 2592000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                       Options.enable_blob_files: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                           Options.min_blob_size: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:           Options.merge_operator: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.compaction_filter: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b254ebca20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55b254ea91f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.compression: LZ4
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.num_levels: 7
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                           Options.bloom_locality: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                               Options.ttl: 2592000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                       Options.enable_blob_files: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                           Options.min_blob_size: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:           Options.merge_operator: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.compaction_filter: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b254ebc380)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55b254ea9090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.compression: LZ4
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.num_levels: 7
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                           Options.bloom_locality: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                               Options.ttl: 2592000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                       Options.enable_blob_files: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                           Options.min_blob_size: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:           Options.merge_operator: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.compaction_filter: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b254ebc380)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55b254ea9090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.compression: LZ4
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.num_levels: 7
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                           Options.bloom_locality: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                               Options.ttl: 2592000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                       Options.enable_blob_files: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                           Options.min_blob_size: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:           Options.merge_operator: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.compaction_filter: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b254ebc380)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b254ea9090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.compression: LZ4
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.num_levels: 7
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                           Options.bloom_locality: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                               Options.ttl: 2592000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                       Options.enable_blob_files: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                           Options.min_blob_size: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: d999c8a1-d384-4a2d-9028-2b35b7535629
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760152829858781, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760152829892163, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760152829, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d999c8a1-d384-4a2d-9028-2b35b7535629", "db_session_id": "NPAFA89TSZK9EDEPXXX4", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760152829901276, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760152829, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d999c8a1-d384-4a2d-9028-2b35b7535629", "db_session_id": "NPAFA89TSZK9EDEPXXX4", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760152829904756, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760152829, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d999c8a1-d384-4a2d-9028-2b35b7535629", "db_session_id": "NPAFA89TSZK9EDEPXXX4", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760152829906489, "job": 1, "event": "recovery_finished"}
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Oct 10 23:20:29 np0005480824 systemd[1]: var-lib-containers-storage-overlay-8ae788e81ab46ff3acb132296f9918dc0eab57e513ce7bc959fdbc76f41a4972-merged.mount: Deactivated successfully.
Oct 10 23:20:29 np0005480824 podman[88490]: 2025-10-11 03:20:29.929032636 +0000 UTC m=+0.473690359 container remove cfc3aeb2aaac2ce03fc136e07d06104e2c5725babae70d4cc99d34c6d1b63725 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_panini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55b255016000
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: DB pointer 0x55b255db1a00
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.033       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.033       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.033       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.033       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55b254ea91f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55b254ea91f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55b254ea91f0#2 capacity: 460.80 MB usag
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: _get_class not permitted to load lua
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: _get_class not permitted to load sdk
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: _get_class not permitted to load test_remote_reads
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: osd.0 0 load_pgs
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: osd.0 0 load_pgs opened 0 pgs
Oct 10 23:20:29 np0005480824 ceph-osd[88325]: osd.0 0 log_to_monitors true
Oct 10 23:20:29 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-0[88321]: 2025-10-11T03:20:29.945+0000 7efd4dc58740 -1 osd.0 0 log_to_monitors true
Oct 10 23:20:29 np0005480824 systemd[1]: libpod-conmon-cfc3aeb2aaac2ce03fc136e07d06104e2c5725babae70d4cc99d34c6d1b63725.scope: Deactivated successfully.
Oct 10 23:20:29 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0) v1
Oct 10 23:20:29 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3904836249,v1:192.168.122.100:6803/3904836249]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Oct 10 23:20:30 np0005480824 podman[88947]: 2025-10-11 03:20:30.182311937 +0000 UTC m=+0.044522424 container create aff2bc89e43039c43893e349e4a694656c98ad8d38b419264e2ed88225eb8494 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-1-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Oct 10 23:20:30 np0005480824 systemd[1]: Started libpod-conmon-aff2bc89e43039c43893e349e4a694656c98ad8d38b419264e2ed88225eb8494.scope.
Oct 10 23:20:30 np0005480824 podman[88947]: 2025-10-11 03:20:30.164271682 +0000 UTC m=+0.026482209 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:20:30 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:20:30 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6bd37cae4cacce63d07c56525399b4b07c66cb56ed9e243bf007e0261280d10/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:30 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6bd37cae4cacce63d07c56525399b4b07c66cb56ed9e243bf007e0261280d10/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:30 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6bd37cae4cacce63d07c56525399b4b07c66cb56ed9e243bf007e0261280d10/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:30 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6bd37cae4cacce63d07c56525399b4b07c66cb56ed9e243bf007e0261280d10/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:30 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6bd37cae4cacce63d07c56525399b4b07c66cb56ed9e243bf007e0261280d10/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:30 np0005480824 podman[88947]: 2025-10-11 03:20:30.279902498 +0000 UTC m=+0.142113005 container init aff2bc89e43039c43893e349e4a694656c98ad8d38b419264e2ed88225eb8494 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-1-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:20:30 np0005480824 podman[88947]: 2025-10-11 03:20:30.293506076 +0000 UTC m=+0.155716563 container start aff2bc89e43039c43893e349e4a694656c98ad8d38b419264e2ed88225eb8494 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-1-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 10 23:20:30 np0005480824 podman[88947]: 2025-10-11 03:20:30.297512041 +0000 UTC m=+0.159722618 container attach aff2bc89e43039c43893e349e4a694656c98ad8d38b419264e2ed88225eb8494 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-1-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:20:30 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Oct 10 23:20:30 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 10 23:20:30 np0005480824 ceph-mon[74326]: Deploying daemon osd.1 on compute-0
Oct 10 23:20:30 np0005480824 ceph-mon[74326]: from='osd.0 [v2:192.168.122.100:6802/3904836249,v1:192.168.122.100:6803/3904836249]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Oct 10 23:20:30 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3904836249,v1:192.168.122.100:6803/3904836249]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Oct 10 23:20:30 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e7 e7: 3 total, 0 up, 3 in
Oct 10 23:20:30 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e7: 3 total, 0 up, 3 in
Oct 10 23:20:30 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Oct 10 23:20:30 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3904836249,v1:192.168.122.100:6803/3904836249]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct 10 23:20:30 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.0195 at location {host=compute-0,root=default}
Oct 10 23:20:30 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 10 23:20:30 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 10 23:20:30 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 10 23:20:30 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 10 23:20:30 np0005480824 ceph-mgr[74617]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 10 23:20:30 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 10 23:20:30 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 23:20:30 np0005480824 ceph-mgr[74617]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 10 23:20:30 np0005480824 ceph-mgr[74617]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 10 23:20:30 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Oct 10 23:20:30 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Oct 10 23:20:30 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-1-activate-test[88963]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Oct 10 23:20:30 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-1-activate-test[88963]:                            [--no-systemd] [--no-tmpfs]
Oct 10 23:20:30 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-1-activate-test[88963]: ceph-volume activate: error: unrecognized arguments: --bad-option
Oct 10 23:20:30 np0005480824 systemd[1]: libpod-aff2bc89e43039c43893e349e4a694656c98ad8d38b419264e2ed88225eb8494.scope: Deactivated successfully.
Oct 10 23:20:30 np0005480824 podman[88947]: 2025-10-11 03:20:30.982655804 +0000 UTC m=+0.844866291 container died aff2bc89e43039c43893e349e4a694656c98ad8d38b419264e2ed88225eb8494 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-1-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 10 23:20:31 np0005480824 systemd[1]: var-lib-containers-storage-overlay-c6bd37cae4cacce63d07c56525399b4b07c66cb56ed9e243bf007e0261280d10-merged.mount: Deactivated successfully.
Oct 10 23:20:31 np0005480824 podman[88947]: 2025-10-11 03:20:31.034227276 +0000 UTC m=+0.896437763 container remove aff2bc89e43039c43893e349e4a694656c98ad8d38b419264e2ed88225eb8494 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-1-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 10 23:20:31 np0005480824 systemd[1]: libpod-conmon-aff2bc89e43039c43893e349e4a694656c98ad8d38b419264e2ed88225eb8494.scope: Deactivated successfully.
Oct 10 23:20:31 np0005480824 systemd[1]: Reloading.
Oct 10 23:20:31 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:20:31 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:20:31 np0005480824 systemd[1]: Reloading.
Oct 10 23:20:31 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Oct 10 23:20:31 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 10 23:20:31 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3904836249,v1:192.168.122.100:6803/3904836249]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct 10 23:20:31 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e8 e8: 3 total, 0 up, 3 in
Oct 10 23:20:31 np0005480824 ceph-osd[88325]: osd.0 0 done with init, starting boot process
Oct 10 23:20:31 np0005480824 ceph-osd[88325]: osd.0 0 start_boot
Oct 10 23:20:31 np0005480824 ceph-osd[88325]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Oct 10 23:20:31 np0005480824 ceph-osd[88325]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Oct 10 23:20:31 np0005480824 ceph-osd[88325]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Oct 10 23:20:31 np0005480824 ceph-osd[88325]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Oct 10 23:20:31 np0005480824 ceph-osd[88325]: osd.0 0  bench count 12288000 bsize 4 KiB
Oct 10 23:20:31 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e8: 3 total, 0 up, 3 in
Oct 10 23:20:31 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 10 23:20:31 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 10 23:20:31 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 10 23:20:31 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 10 23:20:31 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 10 23:20:31 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 23:20:31 np0005480824 ceph-mgr[74617]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 10 23:20:31 np0005480824 ceph-mgr[74617]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 10 23:20:31 np0005480824 ceph-mgr[74617]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 10 23:20:31 np0005480824 ceph-mon[74326]: from='osd.0 [v2:192.168.122.100:6802/3904836249,v1:192.168.122.100:6803/3904836249]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Oct 10 23:20:31 np0005480824 ceph-mon[74326]: from='osd.0 [v2:192.168.122.100:6802/3904836249,v1:192.168.122.100:6803/3904836249]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct 10 23:20:31 np0005480824 ceph-mgr[74617]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3904836249; not ready for session (expect reconnect)
Oct 10 23:20:31 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:20:31 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:20:31 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 10 23:20:31 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 10 23:20:31 np0005480824 ceph-mgr[74617]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 10 23:20:31 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 23:20:31 np0005480824 systemd[1]: Starting Ceph osd.1 for 92cfe4d4-4917-5be1-9d00-73758793a62b...
Oct 10 23:20:32 np0005480824 podman[89124]: 2025-10-11 03:20:32.080736351 +0000 UTC m=+0.052903145 container create c7667d46685b08d3b47ac555714b3845da02facba9bffce1ae1aadacb84eb49a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-1-activate, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 10 23:20:32 np0005480824 podman[89124]: 2025-10-11 03:20:32.051011405 +0000 UTC m=+0.023178249 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:20:32 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:20:32 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1587c258e8d18f451b2c458ebbcd432e353b7fbde38ef104681a5b5ec4c3f27/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:32 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1587c258e8d18f451b2c458ebbcd432e353b7fbde38ef104681a5b5ec4c3f27/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:32 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1587c258e8d18f451b2c458ebbcd432e353b7fbde38ef104681a5b5ec4c3f27/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:32 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1587c258e8d18f451b2c458ebbcd432e353b7fbde38ef104681a5b5ec4c3f27/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:32 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1587c258e8d18f451b2c458ebbcd432e353b7fbde38ef104681a5b5ec4c3f27/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:32 np0005480824 podman[89124]: 2025-10-11 03:20:32.212982146 +0000 UTC m=+0.185148960 container init c7667d46685b08d3b47ac555714b3845da02facba9bffce1ae1aadacb84eb49a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-1-activate, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:20:32 np0005480824 podman[89124]: 2025-10-11 03:20:32.221536783 +0000 UTC m=+0.193703577 container start c7667d46685b08d3b47ac555714b3845da02facba9bffce1ae1aadacb84eb49a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-1-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:20:32 np0005480824 podman[89124]: 2025-10-11 03:20:32.245870548 +0000 UTC m=+0.218037362 container attach c7667d46685b08d3b47ac555714b3845da02facba9bffce1ae1aadacb84eb49a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-1-activate, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:20:32 np0005480824 ceph-mgr[74617]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3904836249; not ready for session (expect reconnect)
Oct 10 23:20:32 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 10 23:20:32 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 10 23:20:32 np0005480824 ceph-mgr[74617]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 10 23:20:32 np0005480824 ceph-mon[74326]: from='osd.0 [v2:192.168.122.100:6802/3904836249,v1:192.168.122.100:6803/3904836249]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct 10 23:20:33 np0005480824 python3[89181]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 92cfe4d4-4917-5be1-9d00-73758793a62b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:20:33 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-1-activate[89140]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 10 23:20:33 np0005480824 bash[89124]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 10 23:20:33 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-1-activate[89140]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Oct 10 23:20:33 np0005480824 bash[89124]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Oct 10 23:20:33 np0005480824 podman[89198]: 2025-10-11 03:20:33.260626109 +0000 UTC m=+0.052363732 container create 89c56deaea404055e2ccd39903fe7e55325e47d6b1a9459e9701705a8016a85c (image=quay.io/ceph/ceph:v18, name=sleepy_bohr, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:20:33 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-1-activate[89140]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Oct 10 23:20:33 np0005480824 bash[89124]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Oct 10 23:20:33 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-1-activate[89140]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Oct 10 23:20:33 np0005480824 bash[89124]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Oct 10 23:20:33 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-1-activate[89140]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Oct 10 23:20:33 np0005480824 bash[89124]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Oct 10 23:20:33 np0005480824 systemd[1]: Started libpod-conmon-89c56deaea404055e2ccd39903fe7e55325e47d6b1a9459e9701705a8016a85c.scope.
Oct 10 23:20:33 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-1-activate[89140]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 10 23:20:33 np0005480824 bash[89124]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 10 23:20:33 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-1-activate[89140]: --> ceph-volume raw activate successful for osd ID: 1
Oct 10 23:20:33 np0005480824 bash[89124]: --> ceph-volume raw activate successful for osd ID: 1
Oct 10 23:20:33 np0005480824 podman[89198]: 2025-10-11 03:20:33.234302995 +0000 UTC m=+0.026040698 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:20:33 np0005480824 podman[89124]: 2025-10-11 03:20:33.351791995 +0000 UTC m=+1.323958789 container died c7667d46685b08d3b47ac555714b3845da02facba9bffce1ae1aadacb84eb49a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-1-activate, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:20:33 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:20:33 np0005480824 systemd[1]: libpod-c7667d46685b08d3b47ac555714b3845da02facba9bffce1ae1aadacb84eb49a.scope: Deactivated successfully.
Oct 10 23:20:33 np0005480824 systemd[1]: libpod-c7667d46685b08d3b47ac555714b3845da02facba9bffce1ae1aadacb84eb49a.scope: Consumed 1.134s CPU time.
Oct 10 23:20:33 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/137bfb45a39db990ec9ba75d1496ca3289060e7eb0375c8e7382c6d8a5d5fa21/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:33 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/137bfb45a39db990ec9ba75d1496ca3289060e7eb0375c8e7382c6d8a5d5fa21/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:33 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/137bfb45a39db990ec9ba75d1496ca3289060e7eb0375c8e7382c6d8a5d5fa21/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:33 np0005480824 podman[89198]: 2025-10-11 03:20:33.396897511 +0000 UTC m=+0.188635184 container init 89c56deaea404055e2ccd39903fe7e55325e47d6b1a9459e9701705a8016a85c (image=quay.io/ceph/ceph:v18, name=sleepy_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 10 23:20:33 np0005480824 systemd[1]: var-lib-containers-storage-overlay-c1587c258e8d18f451b2c458ebbcd432e353b7fbde38ef104681a5b5ec4c3f27-merged.mount: Deactivated successfully.
Oct 10 23:20:33 np0005480824 podman[89198]: 2025-10-11 03:20:33.405992391 +0000 UTC m=+0.197730014 container start 89c56deaea404055e2ccd39903fe7e55325e47d6b1a9459e9701705a8016a85c (image=quay.io/ceph/ceph:v18, name=sleepy_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 10 23:20:33 np0005480824 podman[89198]: 2025-10-11 03:20:33.427002337 +0000 UTC m=+0.218740140 container attach 89c56deaea404055e2ccd39903fe7e55325e47d6b1a9459e9701705a8016a85c (image=quay.io/ceph/ceph:v18, name=sleepy_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:20:33 np0005480824 podman[89124]: 2025-10-11 03:20:33.495689101 +0000 UTC m=+1.467855895 container remove c7667d46685b08d3b47ac555714b3845da02facba9bffce1ae1aadacb84eb49a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-1-activate, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:20:33 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:20:33 np0005480824 ceph-mgr[74617]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3904836249; not ready for session (expect reconnect)
Oct 10 23:20:33 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 10 23:20:33 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 10 23:20:33 np0005480824 ceph-mgr[74617]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 10 23:20:33 np0005480824 podman[89363]: 2025-10-11 03:20:33.739688577 +0000 UTC m=+0.047887234 container create 159562a3a15003ca855365b6b045a9e9a594d967d5f41c2a02d35e99d6a005e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:20:33 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 23:20:33 np0005480824 podman[89363]: 2025-10-11 03:20:33.722978825 +0000 UTC m=+0.031177492 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:20:33 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1bb59bd4b28a8d28c79be83607365e79659d0ec269b9b1be7322faaf13c1152/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:33 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1bb59bd4b28a8d28c79be83607365e79659d0ec269b9b1be7322faaf13c1152/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:33 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1bb59bd4b28a8d28c79be83607365e79659d0ec269b9b1be7322faaf13c1152/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:33 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1bb59bd4b28a8d28c79be83607365e79659d0ec269b9b1be7322faaf13c1152/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:33 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1bb59bd4b28a8d28c79be83607365e79659d0ec269b9b1be7322faaf13c1152/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:33 np0005480824 podman[89363]: 2025-10-11 03:20:33.870248752 +0000 UTC m=+0.178447439 container init 159562a3a15003ca855365b6b045a9e9a594d967d5f41c2a02d35e99d6a005e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:20:33 np0005480824 podman[89363]: 2025-10-11 03:20:33.877870746 +0000 UTC m=+0.186069403 container start 159562a3a15003ca855365b6b045a9e9a594d967d5f41c2a02d35e99d6a005e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:20:33 np0005480824 bash[89363]: 159562a3a15003ca855365b6b045a9e9a594d967d5f41c2a02d35e99d6a005e2
Oct 10 23:20:33 np0005480824 systemd[1]: Started Ceph osd.1 for 92cfe4d4-4917-5be1-9d00-73758793a62b.
Oct 10 23:20:33 np0005480824 ceph-osd[89401]: set uid:gid to 167:167 (ceph:ceph)
Oct 10 23:20:33 np0005480824 ceph-osd[89401]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Oct 10 23:20:33 np0005480824 ceph-osd[89401]: pidfile_write: ignore empty --pid-file
Oct 10 23:20:33 np0005480824 ceph-osd[89401]: bdev(0x55dbdc435800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 10 23:20:33 np0005480824 ceph-osd[89401]: bdev(0x55dbdc435800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 10 23:20:33 np0005480824 ceph-osd[89401]: bdev(0x55dbdc435800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 10 23:20:33 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 10 23:20:33 np0005480824 ceph-osd[89401]: bdev(0x55dbdd26d800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 10 23:20:33 np0005480824 ceph-osd[89401]: bdev(0x55dbdd26d800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 10 23:20:33 np0005480824 ceph-osd[89401]: bdev(0x55dbdd26d800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 10 23:20:33 np0005480824 ceph-osd[89401]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Oct 10 23:20:33 np0005480824 ceph-osd[89401]: bdev(0x55dbdd26d800 /var/lib/ceph/osd/ceph-1/block) close
Oct 10 23:20:33 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 10 23:20:33 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:33 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 10 23:20:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Oct 10 23:20:34 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/932048479' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 10 23:20:34 np0005480824 sleepy_bohr[89305]: 
Oct 10 23:20:34 np0005480824 sleepy_bohr[89305]: {"fsid":"92cfe4d4-4917-5be1-9d00-73758793a62b","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":115,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":8,"num_osds":3,"num_up_osds":0,"osd_up_since":0,"num_in_osds":3,"osd_in_since":1760152815,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-10-11T03:20:29.789535+0000","services":{}},"progress_events":{}}
Oct 10 23:20:34 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0) v1
Oct 10 23:20:34 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Oct 10 23:20:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:20:34 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:20:34 np0005480824 ceph-mgr[74617]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-0
Oct 10 23:20:34 np0005480824 ceph-mgr[74617]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-0
Oct 10 23:20:34 np0005480824 systemd[1]: libpod-89c56deaea404055e2ccd39903fe7e55325e47d6b1a9459e9701705a8016a85c.scope: Deactivated successfully.
Oct 10 23:20:34 np0005480824 podman[89198]: 2025-10-11 03:20:34.047221055 +0000 UTC m=+0.838958688 container died 89c56deaea404055e2ccd39903fe7e55325e47d6b1a9459e9701705a8016a85c (image=quay.io/ceph/ceph:v18, name=sleepy_bohr, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:20:34 np0005480824 systemd[1]: var-lib-containers-storage-overlay-137bfb45a39db990ec9ba75d1496ca3289060e7eb0375c8e7382c6d8a5d5fa21-merged.mount: Deactivated successfully.
Oct 10 23:20:34 np0005480824 podman[89198]: 2025-10-11 03:20:34.173551177 +0000 UTC m=+0.965288870 container remove 89c56deaea404055e2ccd39903fe7e55325e47d6b1a9459e9701705a8016a85c (image=quay.io/ceph/ceph:v18, name=sleepy_bohr, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:20:34 np0005480824 ceph-osd[89401]: bdev(0x55dbdc435800 /var/lib/ceph/osd/ceph-1/block) close
Oct 10 23:20:34 np0005480824 systemd[1]: libpod-conmon-89c56deaea404055e2ccd39903fe7e55325e47d6b1a9459e9701705a8016a85c.scope: Deactivated successfully.
Oct 10 23:20:34 np0005480824 ceph-osd[89401]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Oct 10 23:20:34 np0005480824 ceph-osd[89401]: load: jerasure load: lrc 
Oct 10 23:20:34 np0005480824 ceph-osd[89401]: bdev(0x55dbdd2eec00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 10 23:20:34 np0005480824 ceph-osd[89401]: bdev(0x55dbdd2eec00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 10 23:20:34 np0005480824 ceph-osd[89401]: bdev(0x55dbdd2eec00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 10 23:20:34 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 10 23:20:34 np0005480824 ceph-osd[89401]: bdev(0x55dbdd2eec00 /var/lib/ceph/osd/ceph-1/block) close
Oct 10 23:20:34 np0005480824 ceph-mgr[74617]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3904836249; not ready for session (expect reconnect)
Oct 10 23:20:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 10 23:20:34 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 10 23:20:34 np0005480824 ceph-mgr[74617]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 10 23:20:34 np0005480824 podman[89575]: 2025-10-11 03:20:34.664515192 +0000 UTC m=+0.061705327 container create 40c2fc98d4009e03427a68ab9c2e97587136aa2bdee7caac414150d22404f5f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_golick, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:20:34 np0005480824 podman[89575]: 2025-10-11 03:20:34.63080883 +0000 UTC m=+0.027998965 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:20:34 np0005480824 systemd[1]: Started libpod-conmon-40c2fc98d4009e03427a68ab9c2e97587136aa2bdee7caac414150d22404f5f9.scope.
Oct 10 23:20:34 np0005480824 ceph-osd[89401]: bdev(0x55dbdd2eec00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 10 23:20:34 np0005480824 ceph-osd[89401]: bdev(0x55dbdd2eec00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 10 23:20:34 np0005480824 ceph-osd[89401]: bdev(0x55dbdd2eec00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 10 23:20:34 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 10 23:20:34 np0005480824 ceph-osd[89401]: bdev(0x55dbdd2eec00 /var/lib/ceph/osd/ceph-1/block) close
Oct 10 23:20:34 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:20:34 np0005480824 podman[89575]: 2025-10-11 03:20:34.800713213 +0000 UTC m=+0.197903368 container init 40c2fc98d4009e03427a68ab9c2e97587136aa2bdee7caac414150d22404f5f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_golick, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 10 23:20:34 np0005480824 podman[89575]: 2025-10-11 03:20:34.810323414 +0000 UTC m=+0.207513539 container start 40c2fc98d4009e03427a68ab9c2e97587136aa2bdee7caac414150d22404f5f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_golick, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 10 23:20:34 np0005480824 frosty_golick[89591]: 167 167
Oct 10 23:20:34 np0005480824 systemd[1]: libpod-40c2fc98d4009e03427a68ab9c2e97587136aa2bdee7caac414150d22404f5f9.scope: Deactivated successfully.
Oct 10 23:20:34 np0005480824 podman[89575]: 2025-10-11 03:20:34.834025394 +0000 UTC m=+0.231215549 container attach 40c2fc98d4009e03427a68ab9c2e97587136aa2bdee7caac414150d22404f5f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_golick, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 10 23:20:34 np0005480824 podman[89575]: 2025-10-11 03:20:34.834488056 +0000 UTC m=+0.231678201 container died 40c2fc98d4009e03427a68ab9c2e97587136aa2bdee7caac414150d22404f5f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_golick, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:20:34 np0005480824 systemd[1]: var-lib-containers-storage-overlay-0aa9e5f70bc5f5a3899d507bbd80ed65bf6837d62f452e8c6834372fc2c92d27-merged.mount: Deactivated successfully.
Oct 10 23:20:34 np0005480824 podman[89575]: 2025-10-11 03:20:34.947253762 +0000 UTC m=+0.344443937 container remove 40c2fc98d4009e03427a68ab9c2e97587136aa2bdee7caac414150d22404f5f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_golick, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:20:34 np0005480824 systemd[1]: libpod-conmon-40c2fc98d4009e03427a68ab9c2e97587136aa2bdee7caac414150d22404f5f9.scope: Deactivated successfully.
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Oct 10 23:20:35 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:35 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:35 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Oct 10 23:20:35 np0005480824 ceph-mon[74326]: Deploying daemon osd.2 on compute-0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: bdev(0x55dbdd2eec00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: bdev(0x55dbdd2eec00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: bdev(0x55dbdd2eec00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: bdev(0x55dbdd2ef400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: bdev(0x55dbdd2ef400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: bdev(0x55dbdd2ef400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: bluefs mount
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: bluefs mount shared_bdev_used = 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: RocksDB version: 7.9.2
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Git sha 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: DB SUMMARY
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: DB Session ID:  IAHZU32WD0MU2TF0U8BS
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: CURRENT file:  CURRENT
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: IDENTITY file:  IDENTITY
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                         Options.error_if_exists: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                       Options.create_if_missing: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                         Options.paranoid_checks: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                                     Options.env: 0x55dbdd2bfc70
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                                Options.info_log: 0x55dbdc4bc8a0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.max_file_opening_threads: 16
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                              Options.statistics: (nil)
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                               Options.use_fsync: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                       Options.max_log_file_size: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                         Options.allow_fallocate: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                        Options.use_direct_reads: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.create_missing_column_families: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                              Options.db_log_dir: 
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                                 Options.wal_dir: db.wal
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.advise_random_on_open: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                    Options.write_buffer_manager: 0x55dbdd3c8460
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                            Options.rate_limiter: (nil)
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.unordered_write: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                               Options.row_cache: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                              Options.wal_filter: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.allow_ingest_behind: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.two_write_queues: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.manual_wal_flush: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.wal_compression: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.atomic_flush: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                 Options.log_readahead_size: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.allow_data_in_errors: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.db_host_id: __hostname__
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.max_background_jobs: 4
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.max_background_compactions: -1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.max_subcompactions: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                          Options.max_open_files: -1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                          Options.bytes_per_sync: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.max_background_flushes: -1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Compression algorithms supported:
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: #011kZSTD supported: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: #011kXpressCompression supported: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: #011kBZip2Compression supported: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: #011kLZ4Compression supported: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: #011kZlibCompression supported: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: #011kLZ4HCCompression supported: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: #011kSnappyCompression supported: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.compaction_filter: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55dbdc4bc2c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55dbdc4a91f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.compression: LZ4
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.num_levels: 7
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                           Options.bloom_locality: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                               Options.ttl: 2592000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                       Options.enable_blob_files: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                           Options.min_blob_size: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:           Options.merge_operator: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.compaction_filter: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55dbdc4bc2c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55dbdc4a91f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.compression: LZ4
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.num_levels: 7
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                           Options.bloom_locality: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                               Options.ttl: 2592000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                       Options.enable_blob_files: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                           Options.min_blob_size: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:           Options.merge_operator: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.compaction_filter: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55dbdc4bc2c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55dbdc4a91f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.compression: LZ4
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.num_levels: 7
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                           Options.bloom_locality: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                               Options.ttl: 2592000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                       Options.enable_blob_files: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                           Options.min_blob_size: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:           Options.merge_operator: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.compaction_filter: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55dbdc4bc2c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55dbdc4a91f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.compression: LZ4
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.num_levels: 7
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                           Options.bloom_locality: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                               Options.ttl: 2592000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                       Options.enable_blob_files: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                           Options.min_blob_size: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:           Options.merge_operator: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.compaction_filter: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55dbdc4bc2c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55dbdc4a91f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.compression: LZ4
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.num_levels: 7
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                           Options.bloom_locality: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                               Options.ttl: 2592000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                       Options.enable_blob_files: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                           Options.min_blob_size: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:           Options.merge_operator: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.compaction_filter: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55dbdc4bc2c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55dbdc4a91f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.compression: LZ4
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.num_levels: 7
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                           Options.bloom_locality: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                               Options.ttl: 2592000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                       Options.enable_blob_files: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                           Options.min_blob_size: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:           Options.merge_operator: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.compaction_filter: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55dbdc4bc2c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55dbdc4a91f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.compression: LZ4
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.num_levels: 7
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                           Options.bloom_locality: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                               Options.ttl: 2592000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                       Options.enable_blob_files: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                           Options.min_blob_size: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:           Options.merge_operator: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.compaction_filter: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55dbdc4bc240)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55dbdc4a9090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.compression: LZ4
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.num_levels: 7
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                           Options.bloom_locality: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                               Options.ttl: 2592000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                       Options.enable_blob_files: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                           Options.min_blob_size: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:           Options.merge_operator: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.compaction_filter: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55dbdc4bc240)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55dbdc4a9090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.compression: LZ4
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.num_levels: 7
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                           Options.bloom_locality: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                               Options.ttl: 2592000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                       Options.enable_blob_files: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                           Options.min_blob_size: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:           Options.merge_operator: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.compaction_filter: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55dbdc4bc240)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55dbdc4a9090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.compression: LZ4
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.num_levels: 7
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                           Options.bloom_locality: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                               Options.ttl: 2592000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                       Options.enable_blob_files: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                           Options.min_blob_size: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 281211f7-47f1-4b72-a60e-5b066863d33b
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760152835060720, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760152835061090, "job": 1, "event": "recovery_finished"}
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: freelist init
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: freelist _read_cfg
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: bluefs umount
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: bdev(0x55dbdd2ef400 /var/lib/ceph/osd/ceph-1/block) close
Oct 10 23:20:35 np0005480824 podman[89824]: 2025-10-11 03:20:35.227744828 +0000 UTC m=+0.055268953 container create 659f6ff6c1819fddd9b7e810331dd6d7c4de51eae1ec676f2ec62daf0ec7eeee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-2-activate-test, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:20:35 np0005480824 systemd[1]: Started libpod-conmon-659f6ff6c1819fddd9b7e810331dd6d7c4de51eae1ec676f2ec62daf0ec7eeee.scope.
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: bdev(0x55dbdd2ef400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: bdev(0x55dbdd2ef400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: bdev(0x55dbdd2ef400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: bluefs mount
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct 10 23:20:35 np0005480824 podman[89824]: 2025-10-11 03:20:35.200811888 +0000 UTC m=+0.028336053 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: bluefs mount shared_bdev_used = 4718592
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: RocksDB version: 7.9.2
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Git sha 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: DB SUMMARY
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: DB Session ID:  IAHZU32WD0MU2TF0U8BT
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: CURRENT file:  CURRENT
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: IDENTITY file:  IDENTITY
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                         Options.error_if_exists: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                       Options.create_if_missing: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                         Options.paranoid_checks: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                                     Options.env: 0x55dbdd482380
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                                Options.info_log: 0x55dbdc4b2b40
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.max_file_opening_threads: 16
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                              Options.statistics: (nil)
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                               Options.use_fsync: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                       Options.max_log_file_size: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                         Options.allow_fallocate: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                        Options.use_direct_reads: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.create_missing_column_families: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                              Options.db_log_dir: 
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                                 Options.wal_dir: db.wal
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.advise_random_on_open: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                    Options.write_buffer_manager: 0x55dbdd3c86e0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                            Options.rate_limiter: (nil)
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.unordered_write: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                               Options.row_cache: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                              Options.wal_filter: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.allow_ingest_behind: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.two_write_queues: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.manual_wal_flush: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.wal_compression: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.atomic_flush: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                 Options.log_readahead_size: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.allow_data_in_errors: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.db_host_id: __hostname__
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.max_background_jobs: 4
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.max_background_compactions: -1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.max_subcompactions: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                          Options.max_open_files: -1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                          Options.bytes_per_sync: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.max_background_flushes: -1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Compression algorithms supported:
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: 	kZSTD supported: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: 	kXpressCompression supported: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: 	kBZip2Compression supported: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: 	kLZ4Compression supported: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: 	kZlibCompression supported: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: 	kLZ4HCCompression supported: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: 	kSnappyCompression supported: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.compaction_filter: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55dbdc4b3180)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55dbdc4a91f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.compression: LZ4
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.num_levels: 7
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                           Options.bloom_locality: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                               Options.ttl: 2592000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                       Options.enable_blob_files: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                           Options.min_blob_size: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:           Options.merge_operator: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.compaction_filter: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55dbdc4b3180)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55dbdc4a91f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.compression: LZ4
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.num_levels: 7
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 23:20:35 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                           Options.bloom_locality: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                               Options.ttl: 2592000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                       Options.enable_blob_files: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                           Options.min_blob_size: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:           Options.merge_operator: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.compaction_filter: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55dbdc4b3180)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55dbdc4a91f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.compression: LZ4
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.num_levels: 7
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                           Options.bloom_locality: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                               Options.ttl: 2592000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                       Options.enable_blob_files: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                           Options.min_blob_size: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:           Options.merge_operator: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.compaction_filter: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55dbdc4b3180)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55dbdc4a91f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.compression: LZ4
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.num_levels: 7
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 23:20:35 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bf3f6916ecc960a9a82a2ecfdc791343e73e1ab5da6c4b707b7177cb96b134d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                           Options.bloom_locality: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 23:20:35 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bf3f6916ecc960a9a82a2ecfdc791343e73e1ab5da6c4b707b7177cb96b134d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                               Options.ttl: 2592000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                       Options.enable_blob_files: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                           Options.min_blob_size: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:           Options.merge_operator: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.compaction_filter: None
Oct 10 23:20:35 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bf3f6916ecc960a9a82a2ecfdc791343e73e1ab5da6c4b707b7177cb96b134d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:35 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bf3f6916ecc960a9a82a2ecfdc791343e73e1ab5da6c4b707b7177cb96b134d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:35 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bf3f6916ecc960a9a82a2ecfdc791343e73e1ab5da6c4b707b7177cb96b134d/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55dbdc4b3180)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55dbdc4a91f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.compression: LZ4
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.num_levels: 7
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                           Options.bloom_locality: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                               Options.ttl: 2592000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                       Options.enable_blob_files: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                           Options.min_blob_size: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:           Options.merge_operator: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.compaction_filter: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55dbdc4b3180)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55dbdc4a91f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.compression: LZ4
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.num_levels: 7
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                           Options.bloom_locality: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                               Options.ttl: 2592000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                       Options.enable_blob_files: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                           Options.min_blob_size: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:           Options.merge_operator: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.compaction_filter: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55dbdc4b3180)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55dbdc4a91f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.compression: LZ4
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.num_levels: 7
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                           Options.bloom_locality: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                               Options.ttl: 2592000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                       Options.enable_blob_files: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                           Options.min_blob_size: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:           Options.merge_operator: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.compaction_filter: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55dbdc4b3120)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55dbdc4a9090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.compression: LZ4
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.num_levels: 7
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                           Options.bloom_locality: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                               Options.ttl: 2592000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                       Options.enable_blob_files: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                           Options.min_blob_size: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:           Options.merge_operator: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.compaction_filter: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55dbdc4b3120)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55dbdc4a9090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.compression: LZ4
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.num_levels: 7
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 23:20:35 np0005480824 podman[89824]: 2025-10-11 03:20:35.349459609 +0000 UTC m=+0.176983764 container init 659f6ff6c1819fddd9b7e810331dd6d7c4de51eae1ec676f2ec62daf0ec7eeee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-2-activate-test, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                           Options.bloom_locality: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                               Options.ttl: 2592000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                       Options.enable_blob_files: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                           Options.min_blob_size: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:           Options.merge_operator: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.compaction_filter: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55dbdc4b3120)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55dbdc4a9090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.compression: LZ4
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.num_levels: 7
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                           Options.bloom_locality: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                               Options.ttl: 2592000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                       Options.enable_blob_files: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                           Options.min_blob_size: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 281211f7-47f1-4b72-a60e-5b066863d33b
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760152835341404, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760152835360160, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760152835, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "281211f7-47f1-4b72-a60e-5b066863d33b", "db_session_id": "IAHZU32WD0MU2TF0U8BT", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct 10 23:20:35 np0005480824 podman[89824]: 2025-10-11 03:20:35.36237365 +0000 UTC m=+0.189897775 container start 659f6ff6c1819fddd9b7e810331dd6d7c4de51eae1ec676f2ec62daf0ec7eeee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-2-activate-test, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760152835363869, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760152835, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "281211f7-47f1-4b72-a60e-5b066863d33b", "db_session_id": "IAHZU32WD0MU2TF0U8BT", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Oct 10 23:20:35 np0005480824 podman[89824]: 2025-10-11 03:20:35.372188647 +0000 UTC m=+0.199712772 container attach 659f6ff6c1819fddd9b7e810331dd6d7c4de51eae1ec676f2ec62daf0ec7eeee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-2-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760152835374471, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760152835, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "281211f7-47f1-4b72-a60e-5b066863d33b", "db_session_id": "IAHZU32WD0MU2TF0U8BT", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760152835377912, "job": 1, "event": "recovery_finished"}
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55dbdc617c00
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: DB pointer 0x55dbdd3b1a00
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55dbdc4a91f0#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55dbdc4a91f0#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: _get_class not permitted to load lua
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: _get_class not permitted to load sdk
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: _get_class not permitted to load test_remote_reads
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: osd.1 0 load_pgs
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: osd.1 0 load_pgs opened 0 pgs
Oct 10 23:20:35 np0005480824 ceph-osd[89401]: osd.1 0 log_to_monitors true
Oct 10 23:20:35 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-1[89397]: 2025-10-11T03:20:35.443+0000 7f1489df1740 -1 osd.1 0 log_to_monitors true
Oct 10 23:20:35 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0) v1
Oct 10 23:20:35 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/3058790338,v1:192.168.122.100:6807/3058790338]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Oct 10 23:20:35 np0005480824 ceph-osd[88325]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 26.740 iops: 6845.354 elapsed_sec: 0.438
Oct 10 23:20:35 np0005480824 ceph-osd[88325]: log_channel(cluster) log [WRN] : OSD bench result of 6845.353734 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 10 23:20:35 np0005480824 ceph-osd[88325]: osd.0 0 waiting for initial osdmap
Oct 10 23:20:35 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-0[88321]: 2025-10-11T03:20:35.501+0000 7efd4a3ef640 -1 osd.0 0 waiting for initial osdmap
Oct 10 23:20:35 np0005480824 ceph-osd[88325]: osd.0 8 crush map has features 288514050185494528, adjusting msgr requires for clients
Oct 10 23:20:35 np0005480824 ceph-osd[88325]: osd.0 8 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Oct 10 23:20:35 np0005480824 ceph-osd[88325]: osd.0 8 crush map has features 3314932999778484224, adjusting msgr requires for osds
Oct 10 23:20:35 np0005480824 ceph-osd[88325]: osd.0 8 check_osdmap_features require_osd_release unknown -> reef
Oct 10 23:20:35 np0005480824 ceph-osd[88325]: osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 10 23:20:35 np0005480824 ceph-osd[88325]: osd.0 8 set_numa_affinity not setting numa affinity
Oct 10 23:20:35 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-0[88321]: 2025-10-11T03:20:35.524+0000 7efd45200640 -1 osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 10 23:20:35 np0005480824 ceph-osd[88325]: osd.0 8 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Oct 10 23:20:35 np0005480824 ceph-mgr[74617]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3904836249; not ready for session (expect reconnect)
Oct 10 23:20:35 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 10 23:20:35 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 10 23:20:35 np0005480824 ceph-mgr[74617]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 10 23:20:35 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 23:20:35 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-2-activate-test[89841]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Oct 10 23:20:35 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-2-activate-test[89841]:                            [--no-systemd] [--no-tmpfs]
Oct 10 23:20:35 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-2-activate-test[89841]: ceph-volume activate: error: unrecognized arguments: --bad-option
Oct 10 23:20:35 np0005480824 systemd[1]: libpod-659f6ff6c1819fddd9b7e810331dd6d7c4de51eae1ec676f2ec62daf0ec7eeee.scope: Deactivated successfully.
Oct 10 23:20:35 np0005480824 podman[89824]: 2025-10-11 03:20:35.988121842 +0000 UTC m=+0.815646017 container died 659f6ff6c1819fddd9b7e810331dd6d7c4de51eae1ec676f2ec62daf0ec7eeee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-2-activate-test, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 10 23:20:36 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Oct 10 23:20:36 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 10 23:20:36 np0005480824 ceph-mon[74326]: from='osd.1 [v2:192.168.122.100:6806/3058790338,v1:192.168.122.100:6807/3058790338]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Oct 10 23:20:36 np0005480824 ceph-mon[74326]: OSD bench result of 6845.353734 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 10 23:20:36 np0005480824 systemd[1]: var-lib-containers-storage-overlay-9bf3f6916ecc960a9a82a2ecfdc791343e73e1ab5da6c4b707b7177cb96b134d-merged.mount: Deactivated successfully.
Oct 10 23:20:36 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/3058790338,v1:192.168.122.100:6807/3058790338]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Oct 10 23:20:36 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e9 e9: 3 total, 1 up, 3 in
Oct 10 23:20:36 np0005480824 ceph-mon[74326]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/3904836249,v1:192.168.122.100:6803/3904836249] boot
Oct 10 23:20:36 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e9: 3 total, 1 up, 3 in
Oct 10 23:20:36 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Oct 10 23:20:36 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/3058790338,v1:192.168.122.100:6807/3058790338]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct 10 23:20:36 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e9 create-or-move crush item name 'osd.1' initial_weight 0.0195 at location {host=compute-0,root=default}
Oct 10 23:20:36 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 10 23:20:36 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 10 23:20:36 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 10 23:20:36 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 10 23:20:36 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 10 23:20:36 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 23:20:36 np0005480824 ceph-mgr[74617]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 10 23:20:36 np0005480824 ceph-mgr[74617]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 10 23:20:36 np0005480824 ceph-osd[88325]: osd.0 9 state: booting -> active
Oct 10 23:20:36 np0005480824 podman[89824]: 2025-10-11 03:20:36.053471195 +0000 UTC m=+0.880995320 container remove 659f6ff6c1819fddd9b7e810331dd6d7c4de51eae1ec676f2ec62daf0ec7eeee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-2-activate-test, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 10 23:20:36 np0005480824 systemd[1]: libpod-conmon-659f6ff6c1819fddd9b7e810331dd6d7c4de51eae1ec676f2ec62daf0ec7eeee.scope: Deactivated successfully.
Oct 10 23:20:36 np0005480824 systemd[1]: Reloading.
Oct 10 23:20:36 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:20:36 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:20:36 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Oct 10 23:20:36 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Oct 10 23:20:36 np0005480824 systemd[1]: Reloading.
Oct 10 23:20:36 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:20:36 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:20:37 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Oct 10 23:20:37 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 10 23:20:37 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/3058790338,v1:192.168.122.100:6807/3058790338]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct 10 23:20:37 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e10 e10: 3 total, 1 up, 3 in
Oct 10 23:20:37 np0005480824 ceph-osd[89401]: osd.1 0 done with init, starting boot process
Oct 10 23:20:37 np0005480824 ceph-osd[89401]: osd.1 0 start_boot
Oct 10 23:20:37 np0005480824 ceph-osd[89401]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Oct 10 23:20:37 np0005480824 ceph-osd[89401]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Oct 10 23:20:37 np0005480824 ceph-osd[89401]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Oct 10 23:20:37 np0005480824 ceph-osd[89401]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Oct 10 23:20:37 np0005480824 ceph-osd[89401]: osd.1 0  bench count 12288000 bsize 4 KiB
Oct 10 23:20:37 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e10: 3 total, 1 up, 3 in
Oct 10 23:20:37 np0005480824 systemd[1]: Starting Ceph osd.2 for 92cfe4d4-4917-5be1-9d00-73758793a62b...
Oct 10 23:20:37 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 10 23:20:37 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 10 23:20:37 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 10 23:20:37 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 23:20:37 np0005480824 ceph-mgr[74617]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 10 23:20:37 np0005480824 ceph-mgr[74617]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 10 23:20:37 np0005480824 ceph-mon[74326]: from='osd.1 [v2:192.168.122.100:6806/3058790338,v1:192.168.122.100:6807/3058790338]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Oct 10 23:20:37 np0005480824 ceph-mon[74326]: osd.0 [v2:192.168.122.100:6802/3904836249,v1:192.168.122.100:6803/3904836249] boot
Oct 10 23:20:37 np0005480824 ceph-mon[74326]: from='osd.1 [v2:192.168.122.100:6806/3058790338,v1:192.168.122.100:6807/3058790338]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct 10 23:20:37 np0005480824 ceph-mgr[74617]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3058790338; not ready for session (expect reconnect)
Oct 10 23:20:37 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 10 23:20:37 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 10 23:20:37 np0005480824 ceph-mgr[74617]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 10 23:20:37 np0005480824 podman[90217]: 2025-10-11 03:20:37.355370902 +0000 UTC m=+0.076994306 container create 809ed17dee576f81b5c62b469b40277f32cf5afedb2fa8ca62463598c431389a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-2-activate, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 10 23:20:37 np0005480824 podman[90217]: 2025-10-11 03:20:37.305470531 +0000 UTC m=+0.027093995 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:20:37 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:20:37 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d94ca3bb948c2c13134182223a823c6f2478370bb3b6203f714f7a336e10dbd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:37 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d94ca3bb948c2c13134182223a823c6f2478370bb3b6203f714f7a336e10dbd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:37 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d94ca3bb948c2c13134182223a823c6f2478370bb3b6203f714f7a336e10dbd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:37 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d94ca3bb948c2c13134182223a823c6f2478370bb3b6203f714f7a336e10dbd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:37 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d94ca3bb948c2c13134182223a823c6f2478370bb3b6203f714f7a336e10dbd/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:37 np0005480824 podman[90217]: 2025-10-11 03:20:37.456814266 +0000 UTC m=+0.178437720 container init 809ed17dee576f81b5c62b469b40277f32cf5afedb2fa8ca62463598c431389a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-2-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:20:37 np0005480824 podman[90217]: 2025-10-11 03:20:37.4624013 +0000 UTC m=+0.184024674 container start 809ed17dee576f81b5c62b469b40277f32cf5afedb2fa8ca62463598c431389a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-2-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:20:37 np0005480824 podman[90217]: 2025-10-11 03:20:37.478714812 +0000 UTC m=+0.200338286 container attach 809ed17dee576f81b5c62b469b40277f32cf5afedb2fa8ca62463598c431389a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-2-activate, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 10 23:20:37 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v36: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Oct 10 23:20:37 np0005480824 ceph-mgr[74617]: [devicehealth INFO root] creating mgr pool
Oct 10 23:20:37 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0) v1
Oct 10 23:20:37 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Oct 10 23:20:38 np0005480824 ceph-mgr[74617]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3058790338; not ready for session (expect reconnect)
Oct 10 23:20:38 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Oct 10 23:20:38 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e10 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 10 23:20:38 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 10 23:20:38 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 10 23:20:38 np0005480824 ceph-mgr[74617]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 10 23:20:38 np0005480824 ceph-mon[74326]: from='osd.1 [v2:192.168.122.100:6806/3058790338,v1:192.168.122.100:6807/3058790338]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct 10 23:20:38 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Oct 10 23:20:38 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Oct 10 23:20:38 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e11 e11: 3 total, 1 up, 3 in
Oct 10 23:20:38 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e11 crush map has features 3314933000852226048, adjusting msgr requires
Oct 10 23:20:38 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Oct 10 23:20:38 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Oct 10 23:20:38 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Oct 10 23:20:38 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e11: 3 total, 1 up, 3 in
Oct 10 23:20:38 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 10 23:20:38 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 10 23:20:38 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 10 23:20:38 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 23:20:38 np0005480824 ceph-mgr[74617]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 10 23:20:38 np0005480824 ceph-mgr[74617]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 10 23:20:38 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0) v1
Oct 10 23:20:38 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Oct 10 23:20:38 np0005480824 ceph-osd[88325]: osd.0 11 crush map has features 288514051259236352, adjusting msgr requires for clients
Oct 10 23:20:38 np0005480824 ceph-osd[88325]: osd.0 11 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Oct 10 23:20:38 np0005480824 ceph-osd[88325]: osd.0 11 crush map has features 3314933000852226048, adjusting msgr requires for osds
Oct 10 23:20:38 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-2-activate[90232]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct 10 23:20:38 np0005480824 bash[90217]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct 10 23:20:38 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-2-activate[90232]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Oct 10 23:20:38 np0005480824 bash[90217]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Oct 10 23:20:38 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-2-activate[90232]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Oct 10 23:20:38 np0005480824 bash[90217]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Oct 10 23:20:38 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-2-activate[90232]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Oct 10 23:20:38 np0005480824 bash[90217]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Oct 10 23:20:38 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-2-activate[90232]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Oct 10 23:20:38 np0005480824 bash[90217]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Oct 10 23:20:38 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-2-activate[90232]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct 10 23:20:38 np0005480824 bash[90217]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct 10 23:20:38 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-2-activate[90232]: --> ceph-volume raw activate successful for osd ID: 2
Oct 10 23:20:38 np0005480824 bash[90217]: --> ceph-volume raw activate successful for osd ID: 2
Oct 10 23:20:38 np0005480824 systemd[1]: libpod-809ed17dee576f81b5c62b469b40277f32cf5afedb2fa8ca62463598c431389a.scope: Deactivated successfully.
Oct 10 23:20:38 np0005480824 systemd[1]: libpod-809ed17dee576f81b5c62b469b40277f32cf5afedb2fa8ca62463598c431389a.scope: Consumed 1.045s CPU time.
Oct 10 23:20:38 np0005480824 conmon[90232]: conmon 809ed17dee576f81b5c6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-809ed17dee576f81b5c62b469b40277f32cf5afedb2fa8ca62463598c431389a.scope/container/memory.events
Oct 10 23:20:38 np0005480824 podman[90217]: 2025-10-11 03:20:38.497309046 +0000 UTC m=+1.218932420 container died 809ed17dee576f81b5c62b469b40277f32cf5afedb2fa8ca62463598c431389a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-2-activate, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 10 23:20:38 np0005480824 systemd[1]: var-lib-containers-storage-overlay-0d94ca3bb948c2c13134182223a823c6f2478370bb3b6203f714f7a336e10dbd-merged.mount: Deactivated successfully.
Oct 10 23:20:38 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e11 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:20:38 np0005480824 podman[90217]: 2025-10-11 03:20:38.614076068 +0000 UTC m=+1.335699472 container remove 809ed17dee576f81b5c62b469b40277f32cf5afedb2fa8ca62463598c431389a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-2-activate, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 10 23:20:38 np0005480824 podman[90424]: 2025-10-11 03:20:38.885146867 +0000 UTC m=+0.043317644 container create 1ea030e74696bd87ca10210c9afc9c6b7cb96088b5f077a47c4d25c6a9cd0784 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-2, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 10 23:20:38 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4100cf42122ec3830809bd529ebb34e2cc0afc34e1a230e4e71c8a71502941b8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:38 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4100cf42122ec3830809bd529ebb34e2cc0afc34e1a230e4e71c8a71502941b8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:38 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4100cf42122ec3830809bd529ebb34e2cc0afc34e1a230e4e71c8a71502941b8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:38 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4100cf42122ec3830809bd529ebb34e2cc0afc34e1a230e4e71c8a71502941b8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:38 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4100cf42122ec3830809bd529ebb34e2cc0afc34e1a230e4e71c8a71502941b8/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:38 np0005480824 podman[90424]: 2025-10-11 03:20:38.868392974 +0000 UTC m=+0.026563761 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:20:38 np0005480824 podman[90424]: 2025-10-11 03:20:38.980477733 +0000 UTC m=+0.138648500 container init 1ea030e74696bd87ca10210c9afc9c6b7cb96088b5f077a47c4d25c6a9cd0784 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-2, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 10 23:20:38 np0005480824 podman[90424]: 2025-10-11 03:20:38.987208335 +0000 UTC m=+0.145379102 container start 1ea030e74696bd87ca10210c9afc9c6b7cb96088b5f077a47c4d25c6a9cd0784 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Oct 10 23:20:39 np0005480824 bash[90424]: 1ea030e74696bd87ca10210c9afc9c6b7cb96088b5f077a47c4d25c6a9cd0784
Oct 10 23:20:39 np0005480824 systemd[1]: Started Ceph osd.2 for 92cfe4d4-4917-5be1-9d00-73758793a62b.
Oct 10 23:20:39 np0005480824 ceph-osd[90443]: set uid:gid to 167:167 (ceph:ceph)
Oct 10 23:20:39 np0005480824 ceph-osd[90443]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Oct 10 23:20:39 np0005480824 ceph-osd[90443]: pidfile_write: ignore empty --pid-file
Oct 10 23:20:39 np0005480824 ceph-osd[90443]: bdev(0x5607b0b6d800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 10 23:20:39 np0005480824 ceph-osd[90443]: bdev(0x5607b0b6d800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 10 23:20:39 np0005480824 ceph-osd[90443]: bdev(0x5607b0b6d800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 10 23:20:39 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 10 23:20:39 np0005480824 ceph-osd[90443]: bdev(0x5607b19a5800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 10 23:20:39 np0005480824 ceph-osd[90443]: bdev(0x5607b19a5800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 10 23:20:39 np0005480824 ceph-osd[90443]: bdev(0x5607b19a5800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 10 23:20:39 np0005480824 ceph-osd[90443]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Oct 10 23:20:39 np0005480824 ceph-osd[90443]: bdev(0x5607b19a5800 /var/lib/ceph/osd/ceph-2/block) close
Oct 10 23:20:39 np0005480824 ceph-mgr[74617]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3058790338; not ready for session (expect reconnect)
Oct 10 23:20:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 10 23:20:39 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 10 23:20:39 np0005480824 ceph-mgr[74617]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 10 23:20:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 10 23:20:39 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Oct 10 23:20:39 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Oct 10 23:20:39 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 10 23:20:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Oct 10 23:20:39 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:39 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Oct 10 23:20:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e12 e12: 3 total, 1 up, 3 in
Oct 10 23:20:39 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e12: 3 total, 1 up, 3 in
Oct 10 23:20:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 10 23:20:39 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 10 23:20:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 10 23:20:39 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 23:20:39 np0005480824 ceph-mgr[74617]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 10 23:20:39 np0005480824 ceph-mgr[74617]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 10 23:20:39 np0005480824 ceph-osd[90443]: bdev(0x5607b0b6d800 /var/lib/ceph/osd/ceph-2/block) close
Oct 10 23:20:39 np0005480824 ceph-osd[90443]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Oct 10 23:20:39 np0005480824 ceph-osd[90443]: load: jerasure load: lrc 
Oct 10 23:20:39 np0005480824 ceph-osd[90443]: bdev(0x5607b1a38c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 10 23:20:39 np0005480824 ceph-osd[90443]: bdev(0x5607b1a38c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 10 23:20:39 np0005480824 ceph-osd[90443]: bdev(0x5607b1a38c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 10 23:20:39 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 10 23:20:39 np0005480824 ceph-osd[90443]: bdev(0x5607b1a38c00 /var/lib/ceph/osd/ceph-2/block) close
Oct 10 23:20:39 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v39: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Oct 10 23:20:39 np0005480824 ceph-osd[90443]: bdev(0x5607b1a38c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 10 23:20:39 np0005480824 ceph-osd[90443]: bdev(0x5607b1a38c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 10 23:20:39 np0005480824 ceph-osd[90443]: bdev(0x5607b1a38c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 10 23:20:39 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 10 23:20:39 np0005480824 ceph-osd[90443]: bdev(0x5607b1a38c00 /var/lib/ceph/osd/ceph-2/block) close
Oct 10 23:20:39 np0005480824 podman[90600]: 2025-10-11 03:20:39.88878294 +0000 UTC m=+0.067112337 container create f7ebc0c980320f8415b1881df9907648b0214f4fda496a05ebc6723368885f55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_lumiere, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 10 23:20:39 np0005480824 systemd[1]: Started libpod-conmon-f7ebc0c980320f8415b1881df9907648b0214f4fda496a05ebc6723368885f55.scope.
Oct 10 23:20:39 np0005480824 podman[90600]: 2025-10-11 03:20:39.851962933 +0000 UTC m=+0.030292350 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:20:39 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:20:39 np0005480824 podman[90600]: 2025-10-11 03:20:39.987733143 +0000 UTC m=+0.166062520 container init f7ebc0c980320f8415b1881df9907648b0214f4fda496a05ebc6723368885f55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:20:40 np0005480824 podman[90600]: 2025-10-11 03:20:40.004401894 +0000 UTC m=+0.182731301 container start f7ebc0c980320f8415b1881df9907648b0214f4fda496a05ebc6723368885f55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_lumiere, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:20:40 np0005480824 podman[90600]: 2025-10-11 03:20:40.009294923 +0000 UTC m=+0.187624320 container attach f7ebc0c980320f8415b1881df9907648b0214f4fda496a05ebc6723368885f55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_lumiere, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:20:40 np0005480824 objective_lumiere[90620]: 167 167
Oct 10 23:20:40 np0005480824 systemd[1]: libpod-f7ebc0c980320f8415b1881df9907648b0214f4fda496a05ebc6723368885f55.scope: Deactivated successfully.
Oct 10 23:20:40 np0005480824 podman[90600]: 2025-10-11 03:20:40.01294268 +0000 UTC m=+0.191272067 container died f7ebc0c980320f8415b1881df9907648b0214f4fda496a05ebc6723368885f55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_lumiere, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 10 23:20:40 np0005480824 systemd[1]: var-lib-containers-storage-overlay-0b8d863fbef54a7d303d157dea71de39e0f7ddffb423b1d879a1544b828249b9-merged.mount: Deactivated successfully.
Oct 10 23:20:40 np0005480824 ceph-mgr[74617]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3058790338; not ready for session (expect reconnect)
Oct 10 23:20:40 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 10 23:20:40 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 10 23:20:40 np0005480824 ceph-mgr[74617]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 10 23:20:40 np0005480824 podman[90600]: 2025-10-11 03:20:40.082894815 +0000 UTC m=+0.261224192 container remove f7ebc0c980320f8415b1881df9907648b0214f4fda496a05ebc6723368885f55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:20:40 np0005480824 systemd[1]: libpod-conmon-f7ebc0c980320f8415b1881df9907648b0214f4fda496a05ebc6723368885f55.scope: Deactivated successfully.
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: bdev(0x5607b1a38c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: bdev(0x5607b1a38c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: bdev(0x5607b1a38c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: bdev(0x5607b1a39400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: bdev(0x5607b1a39400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: bdev(0x5607b1a39400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 10 23:20:40 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:40 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:40 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: bluefs mount
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: bluefs mount shared_bdev_used = 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: RocksDB version: 7.9.2
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Git sha 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: DB SUMMARY
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: DB Session ID:  ZAHYVX2U3HOW4PVPYCYM
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: CURRENT file:  CURRENT
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: IDENTITY file:  IDENTITY
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                         Options.error_if_exists: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                       Options.create_if_missing: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                         Options.paranoid_checks: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                                     Options.env: 0x5607b19f7d50
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                                Options.info_log: 0x5607b0bf8a40
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.max_file_opening_threads: 16
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                              Options.statistics: (nil)
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                               Options.use_fsync: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                       Options.max_log_file_size: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                         Options.allow_fallocate: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                        Options.use_direct_reads: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.create_missing_column_families: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                              Options.db_log_dir: 
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                                 Options.wal_dir: db.wal
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.advise_random_on_open: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                    Options.write_buffer_manager: 0x5607b1b08460
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                            Options.rate_limiter: (nil)
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.unordered_write: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                               Options.row_cache: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                              Options.wal_filter: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.allow_ingest_behind: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.two_write_queues: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.manual_wal_flush: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.wal_compression: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.atomic_flush: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                 Options.log_readahead_size: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.allow_data_in_errors: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.db_host_id: __hostname__
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.max_background_jobs: 4
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.max_background_compactions: -1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.max_subcompactions: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                          Options.max_open_files: -1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                          Options.bytes_per_sync: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.max_background_flushes: -1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Compression algorithms supported:
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: 	kZSTD supported: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: 	kXpressCompression supported: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: 	kBZip2Compression supported: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: 	kLZ4Compression supported: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: 	kZlibCompression supported: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: 	kLZ4HCCompression supported: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: 	kSnappyCompression supported: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.compaction_filter: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5607b0bf90e0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5607b0be0dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.compression: LZ4
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.num_levels: 7
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                           Options.bloom_locality: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                               Options.ttl: 2592000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                       Options.enable_blob_files: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                           Options.min_blob_size: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:           Options.merge_operator: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.compaction_filter: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5607b0bf90e0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5607b0be0dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.compression: LZ4
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.num_levels: 7
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                           Options.bloom_locality: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                               Options.ttl: 2592000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                       Options.enable_blob_files: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                           Options.min_blob_size: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:           Options.merge_operator: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.compaction_filter: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5607b0bf90e0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5607b0be0dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.compression: LZ4
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.num_levels: 7
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                           Options.bloom_locality: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                               Options.ttl: 2592000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                       Options.enable_blob_files: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                           Options.min_blob_size: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:           Options.merge_operator: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.compaction_filter: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5607b0bf90e0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5607b0be0dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.compression: LZ4
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.num_levels: 7
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                           Options.bloom_locality: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                               Options.ttl: 2592000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                       Options.enable_blob_files: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                           Options.min_blob_size: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:           Options.merge_operator: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.compaction_filter: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5607b0bf90e0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5607b0be0dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.compression: LZ4
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.num_levels: 7
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                           Options.bloom_locality: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                               Options.ttl: 2592000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                       Options.enable_blob_files: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                           Options.min_blob_size: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:           Options.merge_operator: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.compaction_filter: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5607b0bf90e0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5607b0be0dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.compression: LZ4
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.num_levels: 7
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                           Options.bloom_locality: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                               Options.ttl: 2592000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                       Options.enable_blob_files: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                           Options.min_blob_size: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:           Options.merge_operator: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.compaction_filter: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5607b0bf90e0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5607b0be0dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.compression: LZ4
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.num_levels: 7
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                           Options.bloom_locality: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                               Options.ttl: 2592000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                       Options.enable_blob_files: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                           Options.min_blob_size: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:           Options.merge_operator: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.compaction_filter: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5607b0bf9080)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5607b0be0430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.compression: LZ4
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.num_levels: 7
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                           Options.bloom_locality: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                               Options.ttl: 2592000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                       Options.enable_blob_files: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                           Options.min_blob_size: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:           Options.merge_operator: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.compaction_filter: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5607b0bf9080)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5607b0be0430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.compression: LZ4
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.num_levels: 7
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                           Options.bloom_locality: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                               Options.ttl: 2592000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                       Options.enable_blob_files: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                           Options.min_blob_size: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:           Options.merge_operator: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.compaction_filter: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5607b0bf9080)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5607b0be0430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.compression: LZ4
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.num_levels: 7
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                           Options.bloom_locality: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                               Options.ttl: 2592000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                       Options.enable_blob_files: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                           Options.min_blob_size: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: f7219b39-450b-4c4a-9b76-553df00e9556
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760152840191632, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760152840191813, "job": 1, "event": "recovery_finished"}
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: freelist init
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: freelist _read_cfg
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: bluefs umount
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: bdev(0x5607b1a39400 /var/lib/ceph/osd/ceph-2/block) close
Oct 10 23:20:40 np0005480824 podman[90839]: 2025-10-11 03:20:40.260783739 +0000 UTC m=+0.047600587 container create 77782f51a3089a1d1b7fc113e1d7cfc31acd6474d1eb732c5a8a2d6a9b7e900e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 10 23:20:40 np0005480824 systemd[1]: Started libpod-conmon-77782f51a3089a1d1b7fc113e1d7cfc31acd6474d1eb732c5a8a2d6a9b7e900e.scope.
Oct 10 23:20:40 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:20:40 np0005480824 podman[90839]: 2025-10-11 03:20:40.238299788 +0000 UTC m=+0.025116656 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:20:40 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b5ce7b05726521358bd8c96aa0d894928819b42d4288297f6cc65e1e352b34d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:40 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b5ce7b05726521358bd8c96aa0d894928819b42d4288297f6cc65e1e352b34d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:40 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b5ce7b05726521358bd8c96aa0d894928819b42d4288297f6cc65e1e352b34d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:40 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b5ce7b05726521358bd8c96aa0d894928819b42d4288297f6cc65e1e352b34d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:40 np0005480824 podman[90839]: 2025-10-11 03:20:40.356377062 +0000 UTC m=+0.143193910 container init 77782f51a3089a1d1b7fc113e1d7cfc31acd6474d1eb732c5a8a2d6a9b7e900e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_turing, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:20:40 np0005480824 podman[90839]: 2025-10-11 03:20:40.364514118 +0000 UTC m=+0.151330976 container start 77782f51a3089a1d1b7fc113e1d7cfc31acd6474d1eb732c5a8a2d6a9b7e900e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_turing, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:20:40 np0005480824 podman[90839]: 2025-10-11 03:20:40.373026163 +0000 UTC m=+0.159843011 container attach 77782f51a3089a1d1b7fc113e1d7cfc31acd6474d1eb732c5a8a2d6a9b7e900e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_turing, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 10 23:20:40 np0005480824 ceph-osd[89401]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 28.076 iops: 7187.346 elapsed_sec: 0.417
Oct 10 23:20:40 np0005480824 ceph-osd[89401]: log_channel(cluster) log [WRN] : OSD bench result of 7187.346062 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 10 23:20:40 np0005480824 ceph-osd[89401]: osd.1 0 waiting for initial osdmap
Oct 10 23:20:40 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-1[89397]: 2025-10-11T03:20:40.389+0000 7f1485d71640 -1 osd.1 0 waiting for initial osdmap
Oct 10 23:20:40 np0005480824 ceph-osd[89401]: osd.1 12 crush map has features 288514051259236352, adjusting msgr requires for clients
Oct 10 23:20:40 np0005480824 ceph-osd[89401]: osd.1 12 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Oct 10 23:20:40 np0005480824 ceph-osd[89401]: osd.1 12 crush map has features 3314933000852226048, adjusting msgr requires for osds
Oct 10 23:20:40 np0005480824 ceph-osd[89401]: osd.1 12 check_osdmap_features require_osd_release unknown -> reef
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: bdev(0x5607b1a39400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: bdev(0x5607b1a39400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: bdev(0x5607b1a39400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: bluefs mount
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: bluefs mount shared_bdev_used = 4718592
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct 10 23:20:40 np0005480824 ceph-osd[89401]: osd.1 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 10 23:20:40 np0005480824 ceph-osd[89401]: osd.1 12 set_numa_affinity not setting numa affinity
Oct 10 23:20:40 np0005480824 ceph-osd[89401]: osd.1 12 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial
Oct 10 23:20:40 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-1[89397]: 2025-10-11T03:20:40.430+0000 7f1481399640 -1 osd.1 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: RocksDB version: 7.9.2
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Git sha 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: DB SUMMARY
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: DB Session ID:  ZAHYVX2U3HOW4PVPYCYN
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: CURRENT file:  CURRENT
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: IDENTITY file:  IDENTITY
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                         Options.error_if_exists: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                       Options.create_if_missing: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                         Options.paranoid_checks: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                                     Options.env: 0x5607b1bb83f0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                                Options.info_log: 0x5607b0bf9ec0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.max_file_opening_threads: 16
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                              Options.statistics: (nil)
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                               Options.use_fsync: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                       Options.max_log_file_size: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                         Options.allow_fallocate: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                        Options.use_direct_reads: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.create_missing_column_families: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                              Options.db_log_dir: 
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                                 Options.wal_dir: db.wal
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.advise_random_on_open: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                    Options.write_buffer_manager: 0x5607b1b08a00
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                            Options.rate_limiter: (nil)
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.unordered_write: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                               Options.row_cache: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                              Options.wal_filter: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.allow_ingest_behind: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.two_write_queues: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.manual_wal_flush: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.wal_compression: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.atomic_flush: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                 Options.log_readahead_size: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.allow_data_in_errors: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.db_host_id: __hostname__
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.max_background_jobs: 4
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.max_background_compactions: -1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.max_subcompactions: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                          Options.max_open_files: -1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                          Options.bytes_per_sync: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.max_background_flushes: -1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Compression algorithms supported:
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: #011kZSTD supported: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: #011kXpressCompression supported: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: #011kBZip2Compression supported: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: #011kLZ4Compression supported: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: #011kZlibCompression supported: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: #011kLZ4HCCompression supported: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: #011kSnappyCompression supported: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.compaction_filter: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5607b0c77380)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5607b0be0dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.compression: LZ4
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.num_levels: 7
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                           Options.bloom_locality: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                               Options.ttl: 2592000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                       Options.enable_blob_files: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                           Options.min_blob_size: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:           Options.merge_operator: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.compaction_filter: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5607b0c77380)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5607b0be0dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.compression: LZ4
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.num_levels: 7
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                           Options.bloom_locality: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                               Options.ttl: 2592000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                       Options.enable_blob_files: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                           Options.min_blob_size: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:           Options.merge_operator: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.compaction_filter: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5607b0c77380)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5607b0be0dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.compression: LZ4
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.num_levels: 7
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                           Options.bloom_locality: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                               Options.ttl: 2592000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                       Options.enable_blob_files: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                           Options.min_blob_size: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:           Options.merge_operator: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.compaction_filter: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5607b0c77380)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5607b0be0dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.compression: LZ4
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.num_levels: 7
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                           Options.bloom_locality: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                               Options.ttl: 2592000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                       Options.enable_blob_files: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                           Options.min_blob_size: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:           Options.merge_operator: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.compaction_filter: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5607b0c77380)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5607b0be0dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.compression: LZ4
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.num_levels: 7
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                           Options.bloom_locality: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                               Options.ttl: 2592000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                       Options.enable_blob_files: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                           Options.min_blob_size: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:           Options.merge_operator: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.compaction_filter: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5607b0c77380)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5607b0be0dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.compression: LZ4
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.num_levels: 7
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                           Options.bloom_locality: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                               Options.ttl: 2592000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                       Options.enable_blob_files: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                           Options.min_blob_size: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:           Options.merge_operator: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.compaction_filter: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5607b0c77380)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5607b0be0dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.compression: LZ4
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.num_levels: 7
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                           Options.bloom_locality: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                               Options.ttl: 2592000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                       Options.enable_blob_files: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                           Options.min_blob_size: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:           Options.merge_operator: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.compaction_filter: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5607b19f3ae0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5607b0be0430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.compression: LZ4
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.num_levels: 7
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                           Options.bloom_locality: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                               Options.ttl: 2592000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                       Options.enable_blob_files: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                           Options.min_blob_size: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:           Options.merge_operator: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.compaction_filter: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5607b19f3ae0)
      cache_index_and_filter_blocks: 1
      cache_index_and_filter_blocks_with_high_priority: 0
      pin_l0_filter_and_index_blocks_in_cache: 0
      pin_top_level_index_and_filter: 1
      index_type: 0
      data_block_index_type: 0
      index_shortening: 1
      data_block_hash_table_util_ratio: 0.750000
      checksum: 4
      no_block_cache: 0
      block_cache: 0x5607b0be0430
      block_cache_name: BinnedLRUCache
      block_cache_options:
        capacity : 536870912
        num_shard_bits : 4
        strict_capacity_limit : 0
        high_pri_pool_ratio: 0.000
      block_cache_compressed: (nil)
      persistent_cache: (nil)
      block_size: 4096
      block_size_deviation: 10
      block_restart_interval: 16
      index_block_restart_interval: 1
      metadata_block_size: 4096
      partition_filters: 0
      use_delta_encoding: 1
      filter_policy: bloomfilter
      whole_key_filtering: 1
      verify_compression: 0
      read_amp_bytes_per_bit: 0
      format_version: 5
      enable_index_compression: 1
      block_align: 0
      max_auto_readahead_size: 262144
      prepopulate_block_cache: 0
      initial_auto_readahead_size: 8192
      num_file_reads_for_auto_readahead: 2
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.compression: LZ4
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.num_levels: 7
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                           Options.bloom_locality: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                               Options.ttl: 2592000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                       Options.enable_blob_files: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                           Options.min_blob_size: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:           Options.merge_operator: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.compaction_filter: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5607b19f3ae0)
      cache_index_and_filter_blocks: 1
      cache_index_and_filter_blocks_with_high_priority: 0
      pin_l0_filter_and_index_blocks_in_cache: 0
      pin_top_level_index_and_filter: 1
      index_type: 0
      data_block_index_type: 0
      index_shortening: 1
      data_block_hash_table_util_ratio: 0.750000
      checksum: 4
      no_block_cache: 0
      block_cache: 0x5607b0be0430
      block_cache_name: BinnedLRUCache
      block_cache_options:
        capacity : 536870912
        num_shard_bits : 4
        strict_capacity_limit : 0
        high_pri_pool_ratio: 0.000
      block_cache_compressed: (nil)
      persistent_cache: (nil)
      block_size: 4096
      block_size_deviation: 10
      block_restart_interval: 16
      index_block_restart_interval: 1
      metadata_block_size: 4096
      partition_filters: 0
      use_delta_encoding: 1
      filter_policy: bloomfilter
      whole_key_filtering: 1
      verify_compression: 0
      read_amp_bytes_per_bit: 0
      format_version: 5
      enable_index_compression: 1
      block_align: 0
      max_auto_readahead_size: 262144
      prepopulate_block_cache: 0
      initial_auto_readahead_size: 8192
      num_file_reads_for_auto_readahead: 2
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.compression: LZ4
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.num_levels: 7
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                           Options.bloom_locality: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                               Options.ttl: 2592000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                       Options.enable_blob_files: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                           Options.min_blob_size: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: f7219b39-450b-4c4a-9b76-553df00e9556
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760152840470002, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760152840473964, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760152840, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f7219b39-450b-4c4a-9b76-553df00e9556", "db_session_id": "ZAHYVX2U3HOW4PVPYCYN", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760152840477205, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760152840, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f7219b39-450b-4c4a-9b76-553df00e9556", "db_session_id": "ZAHYVX2U3HOW4PVPYCYN", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760152840480905, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760152840, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f7219b39-450b-4c4a-9b76-553df00e9556", "db_session_id": "ZAHYVX2U3HOW4PVPYCYN", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760152840483766, "job": 1, "event": "recovery_finished"}
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5607b1bc5c00
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: DB pointer 0x5607b0c1ba00
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5607b0be0dd0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 8.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5607b0be0dd0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 8.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: _get_class not permitted to load lua
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: _get_class not permitted to load sdk
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: _get_class not permitted to load test_remote_reads
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: osd.2 0 load_pgs
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: osd.2 0 load_pgs opened 0 pgs
Oct 10 23:20:40 np0005480824 ceph-osd[90443]: osd.2 0 log_to_monitors true
Oct 10 23:20:40 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-2[90439]: 2025-10-11T03:20:40.516+0000 7fa2ce1ed740 -1 osd.2 0 log_to_monitors true
Oct 10 23:20:40 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0) v1
Oct 10 23:20:40 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2572473479,v1:192.168.122.100:6811/2572473479]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Oct 10 23:20:41 np0005480824 ceph-mgr[74617]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3058790338; not ready for session (expect reconnect)
Oct 10 23:20:41 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 10 23:20:41 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 10 23:20:41 np0005480824 ceph-mgr[74617]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 10 23:20:41 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Oct 10 23:20:41 np0005480824 ceph-mon[74326]: OSD bench result of 7187.346062 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 10 23:20:41 np0005480824 ceph-mon[74326]: from='osd.2 [v2:192.168.122.100:6810/2572473479,v1:192.168.122.100:6811/2572473479]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Oct 10 23:20:41 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2572473479,v1:192.168.122.100:6811/2572473479]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Oct 10 23:20:41 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e13 e13: 3 total, 2 up, 3 in
Oct 10 23:20:41 np0005480824 ceph-mon[74326]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6806/3058790338,v1:192.168.122.100:6807/3058790338] boot
Oct 10 23:20:41 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e13: 3 total, 2 up, 3 in
Oct 10 23:20:41 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Oct 10 23:20:41 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2572473479,v1:192.168.122.100:6811/2572473479]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct 10 23:20:41 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e13 create-or-move crush item name 'osd.2' initial_weight 0.0195 at location {host=compute-0,root=default}
Oct 10 23:20:41 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 10 23:20:41 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 10 23:20:41 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 10 23:20:41 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 23:20:41 np0005480824 ceph-mgr[74617]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 10 23:20:41 np0005480824 ceph-osd[89401]: osd.1 13 state: booting -> active
Oct 10 23:20:41 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 13 pg[1.0( empty local-lis/les=0/0 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=13) [1] r=0 lpr=13 pi=[11,13)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:20:41 np0005480824 great_turing[90856]: {
Oct 10 23:20:41 np0005480824 great_turing[90856]:    "1d0d82ce-20ea-470d-959e-f67202028a60": {
Oct 10 23:20:41 np0005480824 great_turing[90856]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:20:41 np0005480824 great_turing[90856]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 10 23:20:41 np0005480824 great_turing[90856]:        "osd_id": 0,
Oct 10 23:20:41 np0005480824 great_turing[90856]:        "osd_uuid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:20:41 np0005480824 great_turing[90856]:        "type": "bluestore"
Oct 10 23:20:41 np0005480824 great_turing[90856]:    },
Oct 10 23:20:41 np0005480824 great_turing[90856]:    "6875119e-c210-4ad1-aca9-6a8084a5ecc8": {
Oct 10 23:20:41 np0005480824 great_turing[90856]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:20:41 np0005480824 great_turing[90856]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 10 23:20:41 np0005480824 great_turing[90856]:        "osd_id": 1,
Oct 10 23:20:41 np0005480824 great_turing[90856]:        "osd_uuid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:20:41 np0005480824 great_turing[90856]:        "type": "bluestore"
Oct 10 23:20:41 np0005480824 great_turing[90856]:    },
Oct 10 23:20:41 np0005480824 great_turing[90856]:    "e86945e8-6909-4584-9098-cee0dfe9add4": {
Oct 10 23:20:41 np0005480824 great_turing[90856]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:20:41 np0005480824 great_turing[90856]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 10 23:20:41 np0005480824 great_turing[90856]:        "osd_id": 2,
Oct 10 23:20:41 np0005480824 great_turing[90856]:        "osd_uuid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:20:41 np0005480824 great_turing[90856]:        "type": "bluestore"
Oct 10 23:20:41 np0005480824 great_turing[90856]:    }
Oct 10 23:20:41 np0005480824 great_turing[90856]: }
Oct 10 23:20:41 np0005480824 systemd[1]: libpod-77782f51a3089a1d1b7fc113e1d7cfc31acd6474d1eb732c5a8a2d6a9b7e900e.scope: Deactivated successfully.
Oct 10 23:20:41 np0005480824 conmon[90856]: conmon 77782f51a3089a1d1b7f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-77782f51a3089a1d1b7fc113e1d7cfc31acd6474d1eb732c5a8a2d6a9b7e900e.scope/container/memory.events
Oct 10 23:20:41 np0005480824 podman[90839]: 2025-10-11 03:20:41.360723772 +0000 UTC m=+1.147540640 container died 77782f51a3089a1d1b7fc113e1d7cfc31acd6474d1eb732c5a8a2d6a9b7e900e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_turing, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:20:41 np0005480824 systemd[1]: var-lib-containers-storage-overlay-6b5ce7b05726521358bd8c96aa0d894928819b42d4288297f6cc65e1e352b34d-merged.mount: Deactivated successfully.
Oct 10 23:20:41 np0005480824 podman[90839]: 2025-10-11 03:20:41.409494677 +0000 UTC m=+1.196311525 container remove 77782f51a3089a1d1b7fc113e1d7cfc31acd6474d1eb732c5a8a2d6a9b7e900e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 10 23:20:41 np0005480824 systemd[1]: libpod-conmon-77782f51a3089a1d1b7fc113e1d7cfc31acd6474d1eb732c5a8a2d6a9b7e900e.scope: Deactivated successfully.
Oct 10 23:20:41 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 10 23:20:41 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:41 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 10 23:20:41 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:41 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Oct 10 23:20:41 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Oct 10 23:20:41 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v41: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Oct 10 23:20:42 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Oct 10 23:20:42 np0005480824 ceph-mon[74326]: from='osd.2 [v2:192.168.122.100:6810/2572473479,v1:192.168.122.100:6811/2572473479]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Oct 10 23:20:42 np0005480824 ceph-mon[74326]: osd.1 [v2:192.168.122.100:6806/3058790338,v1:192.168.122.100:6807/3058790338] boot
Oct 10 23:20:42 np0005480824 ceph-mon[74326]: from='osd.2 [v2:192.168.122.100:6810/2572473479,v1:192.168.122.100:6811/2572473479]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct 10 23:20:42 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:42 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:42 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2572473479,v1:192.168.122.100:6811/2572473479]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct 10 23:20:42 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e14 e14: 3 total, 2 up, 3 in
Oct 10 23:20:42 np0005480824 ceph-osd[90443]: osd.2 0 done with init, starting boot process
Oct 10 23:20:42 np0005480824 ceph-osd[90443]: osd.2 0 start_boot
Oct 10 23:20:42 np0005480824 ceph-osd[90443]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Oct 10 23:20:42 np0005480824 ceph-osd[90443]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Oct 10 23:20:42 np0005480824 ceph-osd[90443]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Oct 10 23:20:42 np0005480824 ceph-osd[90443]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Oct 10 23:20:42 np0005480824 ceph-osd[90443]: osd.2 0  bench count 12288000 bsize 4 KiB
Oct 10 23:20:42 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 2 up, 3 in
Oct 10 23:20:42 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 10 23:20:42 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 23:20:42 np0005480824 ceph-mgr[74617]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 10 23:20:42 np0005480824 ceph-mgr[74617]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2572473479; not ready for session (expect reconnect)
Oct 10 23:20:42 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 10 23:20:42 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 23:20:42 np0005480824 ceph-mgr[74617]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 10 23:20:42 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 14 pg[1.0( empty local-lis/les=13/14 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=13) [1] r=0 lpr=13 pi=[11,13)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:20:42 np0005480824 ceph-mgr[74617]: [devicehealth INFO root] creating main.db for devicehealth
Oct 10 23:20:42 np0005480824 podman[91337]: 2025-10-11 03:20:42.49990268 +0000 UTC m=+0.115964354 container exec a848fe58749db588a5a4b8471e0c9916b9e4a1ccc899f04343e6491a43c45c05 (image=quay.io/ceph/ceph:v18, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:20:42 np0005480824 podman[91337]: 2025-10-11 03:20:42.63405127 +0000 UTC m=+0.250112904 container exec_died a848fe58749db588a5a4b8471e0c9916b9e4a1ccc899f04343e6491a43c45c05 (image=quay.io/ceph/ceph:v18, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:20:42 np0005480824 ceph-mgr[74617]: [devicehealth INFO root] Check health
Oct 10 23:20:42 np0005480824 ceph-mgr[74617]: [devicehealth ERROR root] Fail to parse JSON result from daemon osd.2 ()
Oct 10 23:20:42 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Oct 10 23:20:42 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 10 23:20:42 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Oct 10 23:20:42 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Oct 10 23:20:43 np0005480824 ceph-mgr[74617]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2572473479; not ready for session (expect reconnect)
Oct 10 23:20:43 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Oct 10 23:20:43 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 10 23:20:43 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 23:20:43 np0005480824 ceph-mgr[74617]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 10 23:20:43 np0005480824 ceph-mon[74326]: from='osd.2 [v2:192.168.122.100:6810/2572473479,v1:192.168.122.100:6811/2572473479]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct 10 23:20:43 np0005480824 ceph-mon[74326]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Oct 10 23:20:43 np0005480824 ceph-mon[74326]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Oct 10 23:20:43 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e15 e15: 3 total, 2 up, 3 in
Oct 10 23:20:43 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 2 up, 3 in
Oct 10 23:20:43 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 10 23:20:43 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 23:20:43 np0005480824 ceph-mgr[74617]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 10 23:20:43 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 10 23:20:43 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:43 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 10 23:20:43 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:43 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e15 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:20:43 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v44: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 10 23:20:44 np0005480824 ceph-mgr[74617]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2572473479; not ready for session (expect reconnect)
Oct 10 23:20:44 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 10 23:20:44 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 23:20:44 np0005480824 ceph-mgr[74617]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 10 23:20:44 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:44 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:44 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.pdyrua(active, since 76s)
Oct 10 23:20:45 np0005480824 podman[91739]: 2025-10-11 03:20:45.009215747 +0000 UTC m=+0.056968983 container create 9dd09e7285cc59cbe4c701537c0e93c690859e71b75a82d64c477c0a8f558e05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_boyd, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:20:45 np0005480824 systemd[1]: Started libpod-conmon-9dd09e7285cc59cbe4c701537c0e93c690859e71b75a82d64c477c0a8f558e05.scope.
Oct 10 23:20:45 np0005480824 podman[91739]: 2025-10-11 03:20:44.976470268 +0000 UTC m=+0.024223494 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:20:45 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:20:45 np0005480824 podman[91739]: 2025-10-11 03:20:45.150427388 +0000 UTC m=+0.198180634 container init 9dd09e7285cc59cbe4c701537c0e93c690859e71b75a82d64c477c0a8f558e05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_boyd, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:20:45 np0005480824 podman[91739]: 2025-10-11 03:20:45.158707927 +0000 UTC m=+0.206461153 container start 9dd09e7285cc59cbe4c701537c0e93c690859e71b75a82d64c477c0a8f558e05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_boyd, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:20:45 np0005480824 compassionate_boyd[91756]: 167 167
Oct 10 23:20:45 np0005480824 systemd[1]: libpod-9dd09e7285cc59cbe4c701537c0e93c690859e71b75a82d64c477c0a8f558e05.scope: Deactivated successfully.
Oct 10 23:20:45 np0005480824 podman[91739]: 2025-10-11 03:20:45.187815018 +0000 UTC m=+0.235568344 container attach 9dd09e7285cc59cbe4c701537c0e93c690859e71b75a82d64c477c0a8f558e05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 10 23:20:45 np0005480824 podman[91739]: 2025-10-11 03:20:45.188918005 +0000 UTC m=+0.236671251 container died 9dd09e7285cc59cbe4c701537c0e93c690859e71b75a82d64c477c0a8f558e05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:20:45 np0005480824 systemd[1]: var-lib-containers-storage-overlay-d355f5c7ab2220b93e5625a5ea0fba2e9d4cf9971c907d92590f4f6fe501065d-merged.mount: Deactivated successfully.
Oct 10 23:20:45 np0005480824 ceph-mgr[74617]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2572473479; not ready for session (expect reconnect)
Oct 10 23:20:45 np0005480824 ceph-mgr[74617]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 10 23:20:45 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 10 23:20:45 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 23:20:45 np0005480824 podman[91739]: 2025-10-11 03:20:45.324193464 +0000 UTC m=+0.371946690 container remove 9dd09e7285cc59cbe4c701537c0e93c690859e71b75a82d64c477c0a8f558e05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_boyd, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 10 23:20:45 np0005480824 systemd[1]: libpod-conmon-9dd09e7285cc59cbe4c701537c0e93c690859e71b75a82d64c477c0a8f558e05.scope: Deactivated successfully.
Oct 10 23:20:45 np0005480824 podman[91781]: 2025-10-11 03:20:45.524140829 +0000 UTC m=+0.053350246 container create 7d54803f7c0ff58806d497cb4ca8c0d8a2f7fb9ddcaace6563d9f12d9f0488d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_murdock, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:20:45 np0005480824 systemd[1]: Started libpod-conmon-7d54803f7c0ff58806d497cb4ca8c0d8a2f7fb9ddcaace6563d9f12d9f0488d0.scope.
Oct 10 23:20:45 np0005480824 podman[91781]: 2025-10-11 03:20:45.504695781 +0000 UTC m=+0.033905188 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:20:45 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:20:45 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f0f901a6d28c817ed0abda27a05033ca20a0573142bd02c699b0b38fb89f404/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:45 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f0f901a6d28c817ed0abda27a05033ca20a0573142bd02c699b0b38fb89f404/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:45 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f0f901a6d28c817ed0abda27a05033ca20a0573142bd02c699b0b38fb89f404/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:45 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f0f901a6d28c817ed0abda27a05033ca20a0573142bd02c699b0b38fb89f404/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:45 np0005480824 podman[91781]: 2025-10-11 03:20:45.664554741 +0000 UTC m=+0.193764228 container init 7d54803f7c0ff58806d497cb4ca8c0d8a2f7fb9ddcaace6563d9f12d9f0488d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:20:45 np0005480824 podman[91781]: 2025-10-11 03:20:45.676208592 +0000 UTC m=+0.205418039 container start 7d54803f7c0ff58806d497cb4ca8c0d8a2f7fb9ddcaace6563d9f12d9f0488d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_murdock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:20:45 np0005480824 podman[91781]: 2025-10-11 03:20:45.700019935 +0000 UTC m=+0.229229332 container attach 7d54803f7c0ff58806d497cb4ca8c0d8a2f7fb9ddcaace6563d9f12d9f0488d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 10 23:20:45 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v45: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 10 23:20:46 np0005480824 ceph-osd[90443]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 25.452 iops: 6515.799 elapsed_sec: 0.460
Oct 10 23:20:46 np0005480824 ceph-osd[90443]: log_channel(cluster) log [WRN] : OSD bench result of 6515.799271 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 10 23:20:46 np0005480824 ceph-osd[90443]: osd.2 0 waiting for initial osdmap
Oct 10 23:20:46 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-2[90439]: 2025-10-11T03:20:46.215+0000 7fa2ca16d640 -1 osd.2 0 waiting for initial osdmap
Oct 10 23:20:46 np0005480824 ceph-osd[90443]: osd.2 15 crush map has features 288514051259236352, adjusting msgr requires for clients
Oct 10 23:20:46 np0005480824 ceph-osd[90443]: osd.2 15 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Oct 10 23:20:46 np0005480824 ceph-osd[90443]: osd.2 15 crush map has features 3314933000852226048, adjusting msgr requires for osds
Oct 10 23:20:46 np0005480824 ceph-osd[90443]: osd.2 15 check_osdmap_features require_osd_release unknown -> reef
Oct 10 23:20:46 np0005480824 ceph-osd[90443]: osd.2 15 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 10 23:20:46 np0005480824 ceph-osd[90443]: osd.2 15 set_numa_affinity not setting numa affinity
Oct 10 23:20:46 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-osd-2[90439]: 2025-10-11T03:20:46.244+0000 7fa2c5795640 -1 osd.2 15 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 10 23:20:46 np0005480824 ceph-osd[90443]: osd.2 15 _collect_metadata loop5:  no unique device id for loop5: fallback method has no model nor serial
Oct 10 23:20:46 np0005480824 ceph-mgr[74617]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2572473479; not ready for session (expect reconnect)
Oct 10 23:20:46 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 10 23:20:46 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 23:20:46 np0005480824 ceph-mgr[74617]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 10 23:20:46 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Oct 10 23:20:46 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e16 e16: 3 total, 3 up, 3 in
Oct 10 23:20:46 np0005480824 ceph-mon[74326]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.100:6810/2572473479,v1:192.168.122.100:6811/2572473479] boot
Oct 10 23:20:46 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 3 up, 3 in
Oct 10 23:20:46 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 10 23:20:46 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 23:20:46 np0005480824 ceph-osd[90443]: osd.2 16 state: booting -> active
Oct 10 23:20:47 np0005480824 cool_murdock[91797]: [
Oct 10 23:20:47 np0005480824 cool_murdock[91797]:    {
Oct 10 23:20:47 np0005480824 cool_murdock[91797]:        "available": false,
Oct 10 23:20:47 np0005480824 cool_murdock[91797]:        "ceph_device": false,
Oct 10 23:20:47 np0005480824 cool_murdock[91797]:        "device_id": "QEMU_DVD-ROM_QM00001",
Oct 10 23:20:47 np0005480824 cool_murdock[91797]:        "lsm_data": {},
Oct 10 23:20:47 np0005480824 cool_murdock[91797]:        "lvs": [],
Oct 10 23:20:47 np0005480824 cool_murdock[91797]:        "path": "/dev/sr0",
Oct 10 23:20:47 np0005480824 cool_murdock[91797]:        "rejected_reasons": [
Oct 10 23:20:47 np0005480824 cool_murdock[91797]:            "Has a FileSystem",
Oct 10 23:20:47 np0005480824 cool_murdock[91797]:            "Insufficient space (<5GB)"
Oct 10 23:20:47 np0005480824 cool_murdock[91797]:        ],
Oct 10 23:20:47 np0005480824 cool_murdock[91797]:        "sys_api": {
Oct 10 23:20:47 np0005480824 cool_murdock[91797]:            "actuators": null,
Oct 10 23:20:47 np0005480824 cool_murdock[91797]:            "device_nodes": "sr0",
Oct 10 23:20:47 np0005480824 cool_murdock[91797]:            "devname": "sr0",
Oct 10 23:20:47 np0005480824 cool_murdock[91797]:            "human_readable_size": "482.00 KB",
Oct 10 23:20:47 np0005480824 cool_murdock[91797]:            "id_bus": "ata",
Oct 10 23:20:47 np0005480824 cool_murdock[91797]:            "model": "QEMU DVD-ROM",
Oct 10 23:20:47 np0005480824 cool_murdock[91797]:            "nr_requests": "2",
Oct 10 23:20:47 np0005480824 cool_murdock[91797]:            "parent": "/dev/sr0",
Oct 10 23:20:47 np0005480824 cool_murdock[91797]:            "partitions": {},
Oct 10 23:20:47 np0005480824 cool_murdock[91797]:            "path": "/dev/sr0",
Oct 10 23:20:47 np0005480824 cool_murdock[91797]:            "removable": "1",
Oct 10 23:20:47 np0005480824 cool_murdock[91797]:            "rev": "2.5+",
Oct 10 23:20:47 np0005480824 cool_murdock[91797]:            "ro": "0",
Oct 10 23:20:47 np0005480824 cool_murdock[91797]:            "rotational": "0",
Oct 10 23:20:47 np0005480824 cool_murdock[91797]:            "sas_address": "",
Oct 10 23:20:47 np0005480824 cool_murdock[91797]:            "sas_device_handle": "",
Oct 10 23:20:47 np0005480824 cool_murdock[91797]:            "scheduler_mode": "mq-deadline",
Oct 10 23:20:47 np0005480824 cool_murdock[91797]:            "sectors": 0,
Oct 10 23:20:47 np0005480824 cool_murdock[91797]:            "sectorsize": "2048",
Oct 10 23:20:47 np0005480824 cool_murdock[91797]:            "size": 493568.0,
Oct 10 23:20:47 np0005480824 cool_murdock[91797]:            "support_discard": "2048",
Oct 10 23:20:47 np0005480824 cool_murdock[91797]:            "type": "disk",
Oct 10 23:20:47 np0005480824 cool_murdock[91797]:            "vendor": "QEMU"
Oct 10 23:20:47 np0005480824 cool_murdock[91797]:        }
Oct 10 23:20:47 np0005480824 cool_murdock[91797]:    }
Oct 10 23:20:47 np0005480824 cool_murdock[91797]: ]
Oct 10 23:20:47 np0005480824 systemd[1]: libpod-7d54803f7c0ff58806d497cb4ca8c0d8a2f7fb9ddcaace6563d9f12d9f0488d0.scope: Deactivated successfully.
Oct 10 23:20:47 np0005480824 systemd[1]: libpod-7d54803f7c0ff58806d497cb4ca8c0d8a2f7fb9ddcaace6563d9f12d9f0488d0.scope: Consumed 1.436s CPU time.
Oct 10 23:20:47 np0005480824 podman[91781]: 2025-10-11 03:20:47.066711428 +0000 UTC m=+1.595920865 container died 7d54803f7c0ff58806d497cb4ca8c0d8a2f7fb9ddcaace6563d9f12d9f0488d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:20:47 np0005480824 systemd[1]: var-lib-containers-storage-overlay-2f0f901a6d28c817ed0abda27a05033ca20a0573142bd02c699b0b38fb89f404-merged.mount: Deactivated successfully.
Oct 10 23:20:47 np0005480824 podman[91781]: 2025-10-11 03:20:47.128112234 +0000 UTC m=+1.657321641 container remove 7d54803f7c0ff58806d497cb4ca8c0d8a2f7fb9ddcaace6563d9f12d9f0488d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_murdock, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Oct 10 23:20:47 np0005480824 systemd[1]: libpod-conmon-7d54803f7c0ff58806d497cb4ca8c0d8a2f7fb9ddcaace6563d9f12d9f0488d0.scope: Deactivated successfully.
Oct 10 23:20:47 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 10 23:20:47 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:47 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 10 23:20:47 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:47 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) v1
Oct 10 23:20:47 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct 10 23:20:47 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) v1
Oct 10 23:20:47 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Oct 10 23:20:47 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) v1
Oct 10 23:20:47 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Oct 10 23:20:47 np0005480824 ceph-mgr[74617]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43699k
Oct 10 23:20:47 np0005480824 ceph-mgr[74617]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43699k
Oct 10 23:20:47 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Oct 10 23:20:47 np0005480824 ceph-mgr[74617]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44747844: error parsing value: Value '44747844' is below minimum 939524096
Oct 10 23:20:47 np0005480824 ceph-mgr[74617]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44747844: error parsing value: Value '44747844' is below minimum 939524096
Oct 10 23:20:47 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:20:47 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:20:47 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 10 23:20:47 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:20:47 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 10 23:20:47 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:47 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 7e34ea86-9c83-4d06-855f-4653b4a638bc does not exist
Oct 10 23:20:47 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev a87ca41d-5dad-45cd-a1ad-119c0fb24ce1 does not exist
Oct 10 23:20:47 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 0397422c-9ff0-4a52-838a-45774a3e7e3a does not exist
Oct 10 23:20:47 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 10 23:20:47 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 23:20:47 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 10 23:20:47 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:20:47 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:20:47 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:20:47 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Oct 10 23:20:47 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e17 e17: 3 total, 3 up, 3 in
Oct 10 23:20:47 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 3 up, 3 in
Oct 10 23:20:47 np0005480824 ceph-mon[74326]: OSD bench result of 6515.799271 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 10 23:20:47 np0005480824 ceph-mon[74326]: osd.2 [v2:192.168.122.100:6810/2572473479,v1:192.168.122.100:6811/2572473479] boot
Oct 10 23:20:47 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:47 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:47 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct 10 23:20:47 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Oct 10 23:20:47 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Oct 10 23:20:47 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:20:47 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:47 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:20:47 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v48: 1 pgs: 1 active+clean; 449 KiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 10 23:20:47 np0005480824 podman[93781]: 2025-10-11 03:20:47.951718312 +0000 UTC m=+0.068573624 container create d672cdd42b2cfb5de0891bc0045b95d5bdbb9f723dba27ae584b9e5c50e882c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:20:47 np0005480824 systemd[1]: Started libpod-conmon-d672cdd42b2cfb5de0891bc0045b95d5bdbb9f723dba27ae584b9e5c50e882c3.scope.
Oct 10 23:20:48 np0005480824 podman[93781]: 2025-10-11 03:20:47.92423694 +0000 UTC m=+0.041092312 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:20:48 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:20:48 np0005480824 podman[93781]: 2025-10-11 03:20:48.053897251 +0000 UTC m=+0.170752623 container init d672cdd42b2cfb5de0891bc0045b95d5bdbb9f723dba27ae584b9e5c50e882c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_nash, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 10 23:20:48 np0005480824 podman[93781]: 2025-10-11 03:20:48.065373909 +0000 UTC m=+0.182229221 container start d672cdd42b2cfb5de0891bc0045b95d5bdbb9f723dba27ae584b9e5c50e882c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_nash, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 10 23:20:48 np0005480824 podman[93781]: 2025-10-11 03:20:48.070113691 +0000 UTC m=+0.186969063 container attach d672cdd42b2cfb5de0891bc0045b95d5bdbb9f723dba27ae584b9e5c50e882c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:20:48 np0005480824 vibrant_nash[93798]: 167 167
Oct 10 23:20:48 np0005480824 systemd[1]: libpod-d672cdd42b2cfb5de0891bc0045b95d5bdbb9f723dba27ae584b9e5c50e882c3.scope: Deactivated successfully.
Oct 10 23:20:48 np0005480824 conmon[93798]: conmon d672cdd42b2cfb5de089 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d672cdd42b2cfb5de0891bc0045b95d5bdbb9f723dba27ae584b9e5c50e882c3.scope/container/memory.events
Oct 10 23:20:48 np0005480824 podman[93781]: 2025-10-11 03:20:48.076392047 +0000 UTC m=+0.193247369 container died d672cdd42b2cfb5de0891bc0045b95d5bdbb9f723dba27ae584b9e5c50e882c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_nash, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:20:48 np0005480824 systemd[1]: var-lib-containers-storage-overlay-0fb17ac9474f35e7de66900212779a42649f0565e9edc6b3031717596452805d-merged.mount: Deactivated successfully.
Oct 10 23:20:48 np0005480824 podman[93781]: 2025-10-11 03:20:48.119749271 +0000 UTC m=+0.236604553 container remove d672cdd42b2cfb5de0891bc0045b95d5bdbb9f723dba27ae584b9e5c50e882c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_nash, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:20:48 np0005480824 systemd[1]: libpod-conmon-d672cdd42b2cfb5de0891bc0045b95d5bdbb9f723dba27ae584b9e5c50e882c3.scope: Deactivated successfully.
Oct 10 23:20:48 np0005480824 podman[93821]: 2025-10-11 03:20:48.310043121 +0000 UTC m=+0.046517669 container create 12f27e0ee93e433e9de37f55392212adc98d285432f0b845ea6b637f6d998219 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_newton, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 10 23:20:48 np0005480824 systemd[1]: Started libpod-conmon-12f27e0ee93e433e9de37f55392212adc98d285432f0b845ea6b637f6d998219.scope.
Oct 10 23:20:48 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:20:48 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea9f6a5e63b0fcfb5e753c47af362df602dc2aff44936efcb417488dbf28dc7b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:48 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea9f6a5e63b0fcfb5e753c47af362df602dc2aff44936efcb417488dbf28dc7b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:48 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea9f6a5e63b0fcfb5e753c47af362df602dc2aff44936efcb417488dbf28dc7b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:48 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea9f6a5e63b0fcfb5e753c47af362df602dc2aff44936efcb417488dbf28dc7b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:48 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea9f6a5e63b0fcfb5e753c47af362df602dc2aff44936efcb417488dbf28dc7b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:48 np0005480824 podman[93821]: 2025-10-11 03:20:48.28905376 +0000 UTC m=+0.025528328 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:20:48 np0005480824 podman[93821]: 2025-10-11 03:20:48.392247563 +0000 UTC m=+0.128722131 container init 12f27e0ee93e433e9de37f55392212adc98d285432f0b845ea6b637f6d998219 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_newton, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 10 23:20:48 np0005480824 podman[93821]: 2025-10-11 03:20:48.402978594 +0000 UTC m=+0.139453142 container start 12f27e0ee93e433e9de37f55392212adc98d285432f0b845ea6b637f6d998219 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_newton, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 10 23:20:48 np0005480824 podman[93821]: 2025-10-11 03:20:48.406449545 +0000 UTC m=+0.142924093 container attach 12f27e0ee93e433e9de37f55392212adc98d285432f0b845ea6b637f6d998219 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 10 23:20:48 np0005480824 ceph-mon[74326]: Adjusting osd_memory_target on compute-0 to 43699k
Oct 10 23:20:48 np0005480824 ceph-mon[74326]: Unable to set osd_memory_target on compute-0 to 44747844: error parsing value: Value '44747844' is below minimum 939524096
Oct 10 23:20:48 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e17 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:20:49 np0005480824 nervous_newton[93838]: --> passed data devices: 0 physical, 3 LVM
Oct 10 23:20:49 np0005480824 nervous_newton[93838]: --> relative data size: 1.0
Oct 10 23:20:49 np0005480824 nervous_newton[93838]: --> All data devices are unavailable
Oct 10 23:20:49 np0005480824 systemd[1]: libpod-12f27e0ee93e433e9de37f55392212adc98d285432f0b845ea6b637f6d998219.scope: Deactivated successfully.
Oct 10 23:20:49 np0005480824 systemd[1]: libpod-12f27e0ee93e433e9de37f55392212adc98d285432f0b845ea6b637f6d998219.scope: Consumed 1.085s CPU time.
Oct 10 23:20:49 np0005480824 podman[93821]: 2025-10-11 03:20:49.538107616 +0000 UTC m=+1.274582254 container died 12f27e0ee93e433e9de37f55392212adc98d285432f0b845ea6b637f6d998219 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:20:49 np0005480824 systemd[1]: var-lib-containers-storage-overlay-ea9f6a5e63b0fcfb5e753c47af362df602dc2aff44936efcb417488dbf28dc7b-merged.mount: Deactivated successfully.
Oct 10 23:20:49 np0005480824 podman[93821]: 2025-10-11 03:20:49.637282915 +0000 UTC m=+1.373757493 container remove 12f27e0ee93e433e9de37f55392212adc98d285432f0b845ea6b637f6d998219 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_newton, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:20:49 np0005480824 systemd[1]: libpod-conmon-12f27e0ee93e433e9de37f55392212adc98d285432f0b845ea6b637f6d998219.scope: Deactivated successfully.
Oct 10 23:20:49 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v49: 1 pgs: 1 active+clean; 449 KiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 10 23:20:50 np0005480824 podman[94021]: 2025-10-11 03:20:50.393531088 +0000 UTC m=+0.050131014 container create 734a6bc44e025fe0fd4bddfc870ae30b80c09002590bcba0611b1f377915333c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_elbakyan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:20:50 np0005480824 systemd[1]: Started libpod-conmon-734a6bc44e025fe0fd4bddfc870ae30b80c09002590bcba0611b1f377915333c.scope.
Oct 10 23:20:50 np0005480824 podman[94021]: 2025-10-11 03:20:50.371732169 +0000 UTC m=+0.028332115 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:20:50 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:20:50 np0005480824 podman[94021]: 2025-10-11 03:20:50.4911421 +0000 UTC m=+0.147742066 container init 734a6bc44e025fe0fd4bddfc870ae30b80c09002590bcba0611b1f377915333c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_elbakyan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 10 23:20:50 np0005480824 podman[94021]: 2025-10-11 03:20:50.501632925 +0000 UTC m=+0.158232861 container start 734a6bc44e025fe0fd4bddfc870ae30b80c09002590bcba0611b1f377915333c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:20:50 np0005480824 podman[94021]: 2025-10-11 03:20:50.505154898 +0000 UTC m=+0.161754824 container attach 734a6bc44e025fe0fd4bddfc870ae30b80c09002590bcba0611b1f377915333c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_elbakyan, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:20:50 np0005480824 unruffled_elbakyan[94037]: 167 167
Oct 10 23:20:50 np0005480824 systemd[1]: libpod-734a6bc44e025fe0fd4bddfc870ae30b80c09002590bcba0611b1f377915333c.scope: Deactivated successfully.
Oct 10 23:20:50 np0005480824 podman[94021]: 2025-10-11 03:20:50.508641239 +0000 UTC m=+0.165241235 container died 734a6bc44e025fe0fd4bddfc870ae30b80c09002590bcba0611b1f377915333c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_elbakyan, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 10 23:20:50 np0005480824 systemd[1]: var-lib-containers-storage-overlay-4ccaabfa1d6b8cdab2ea53331a44dbf8a2291ee8be8183f652308efa8bb8d48e-merged.mount: Deactivated successfully.
Oct 10 23:20:50 np0005480824 podman[94021]: 2025-10-11 03:20:50.567296601 +0000 UTC m=+0.223896537 container remove 734a6bc44e025fe0fd4bddfc870ae30b80c09002590bcba0611b1f377915333c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:20:50 np0005480824 systemd[1]: libpod-conmon-734a6bc44e025fe0fd4bddfc870ae30b80c09002590bcba0611b1f377915333c.scope: Deactivated successfully.
Oct 10 23:20:50 np0005480824 podman[94060]: 2025-10-11 03:20:50.818744941 +0000 UTC m=+0.064856548 container create 4881f1a77f70cefcab6d3b9ab564283d18fd87f2fca8ea6ecdccc06234b286c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_lichterman, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:20:50 np0005480824 systemd[1]: Started libpod-conmon-4881f1a77f70cefcab6d3b9ab564283d18fd87f2fca8ea6ecdccc06234b286c4.scope.
Oct 10 23:20:50 np0005480824 podman[94060]: 2025-10-11 03:20:50.796175983 +0000 UTC m=+0.042287560 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:20:50 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:20:50 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1179103a373f687a4e6fcb2775e41089d6f1ae6a37b2e6765ead3ac4c377da5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:50 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1179103a373f687a4e6fcb2775e41089d6f1ae6a37b2e6765ead3ac4c377da5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:50 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1179103a373f687a4e6fcb2775e41089d6f1ae6a37b2e6765ead3ac4c377da5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:50 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1179103a373f687a4e6fcb2775e41089d6f1ae6a37b2e6765ead3ac4c377da5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:50 np0005480824 podman[94060]: 2025-10-11 03:20:50.913407314 +0000 UTC m=+0.159518881 container init 4881f1a77f70cefcab6d3b9ab564283d18fd87f2fca8ea6ecdccc06234b286c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_lichterman, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:20:50 np0005480824 podman[94060]: 2025-10-11 03:20:50.926344237 +0000 UTC m=+0.172455804 container start 4881f1a77f70cefcab6d3b9ab564283d18fd87f2fca8ea6ecdccc06234b286c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:20:50 np0005480824 podman[94060]: 2025-10-11 03:20:50.930447413 +0000 UTC m=+0.176559010 container attach 4881f1a77f70cefcab6d3b9ab564283d18fd87f2fca8ea6ecdccc06234b286c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_lichterman, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]: {
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:    "0": [
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:        {
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:            "devices": [
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:                "/dev/loop3"
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:            ],
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:            "lv_name": "ceph_lv0",
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:            "lv_size": "21470642176",
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0d82ce-20ea-470d-959e-f67202028a60,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:            "lv_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:            "name": "ceph_lv0",
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:            "tags": {
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:                "ceph.block_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:                "ceph.cluster_name": "ceph",
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:                "ceph.crush_device_class": "",
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:                "ceph.encrypted": "0",
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:                "ceph.osd_fsid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:                "ceph.osd_id": "0",
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:                "ceph.type": "block",
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:                "ceph.vdo": "0"
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:            },
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:            "type": "block",
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:            "vg_name": "ceph_vg0"
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:        }
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:    ],
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:    "1": [
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:        {
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:            "devices": [
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:                "/dev/loop4"
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:            ],
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:            "lv_name": "ceph_lv1",
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:            "lv_size": "21470642176",
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6875119e-c210-4ad1-aca9-6a8084a5ecc8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:            "lv_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:            "name": "ceph_lv1",
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:            "tags": {
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:                "ceph.block_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:                "ceph.cluster_name": "ceph",
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:                "ceph.crush_device_class": "",
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:                "ceph.encrypted": "0",
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:                "ceph.osd_fsid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:                "ceph.osd_id": "1",
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:                "ceph.type": "block",
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:                "ceph.vdo": "0"
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:            },
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:            "type": "block",
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:            "vg_name": "ceph_vg1"
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:        }
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:    ],
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:    "2": [
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:        {
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:            "devices": [
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:                "/dev/loop5"
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:            ],
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:            "lv_name": "ceph_lv2",
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:            "lv_size": "21470642176",
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e86945e8-6909-4584-9098-cee0dfe9add4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:            "lv_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:            "name": "ceph_lv2",
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:            "tags": {
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:                "ceph.block_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:                "ceph.cluster_name": "ceph",
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:                "ceph.crush_device_class": "",
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:                "ceph.encrypted": "0",
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:                "ceph.osd_fsid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:                "ceph.osd_id": "2",
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:                "ceph.type": "block",
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:                "ceph.vdo": "0"
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:            },
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:            "type": "block",
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:            "vg_name": "ceph_vg2"
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:        }
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]:    ]
Oct 10 23:20:51 np0005480824 friendly_lichterman[94077]: }
Oct 10 23:20:51 np0005480824 systemd[1]: libpod-4881f1a77f70cefcab6d3b9ab564283d18fd87f2fca8ea6ecdccc06234b286c4.scope: Deactivated successfully.
Oct 10 23:20:51 np0005480824 podman[94060]: 2025-10-11 03:20:51.736372526 +0000 UTC m=+0.982484123 container died 4881f1a77f70cefcab6d3b9ab564283d18fd87f2fca8ea6ecdccc06234b286c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_lichterman, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:20:51 np0005480824 systemd[1]: var-lib-containers-storage-overlay-c1179103a373f687a4e6fcb2775e41089d6f1ae6a37b2e6765ead3ac4c377da5-merged.mount: Deactivated successfully.
Oct 10 23:20:51 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Oct 10 23:20:51 np0005480824 podman[94060]: 2025-10-11 03:20:51.849972223 +0000 UTC m=+1.096083820 container remove 4881f1a77f70cefcab6d3b9ab564283d18fd87f2fca8ea6ecdccc06234b286c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_lichterman, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Oct 10 23:20:51 np0005480824 systemd[1]: libpod-conmon-4881f1a77f70cefcab6d3b9ab564283d18fd87f2fca8ea6ecdccc06234b286c4.scope: Deactivated successfully.
Oct 10 23:20:52 np0005480824 podman[94240]: 2025-10-11 03:20:52.537039168 +0000 UTC m=+0.051502205 container create 8876b4da28ac759216a40aeb397ab699f0e74fc432af8215c723cf7adc0a4630 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_brown, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 10 23:20:52 np0005480824 systemd[1]: Started libpod-conmon-8876b4da28ac759216a40aeb397ab699f0e74fc432af8215c723cf7adc0a4630.scope.
Oct 10 23:20:52 np0005480824 podman[94240]: 2025-10-11 03:20:52.509808592 +0000 UTC m=+0.024271699 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:20:52 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:20:52 np0005480824 podman[94240]: 2025-10-11 03:20:52.632716506 +0000 UTC m=+0.147179603 container init 8876b4da28ac759216a40aeb397ab699f0e74fc432af8215c723cf7adc0a4630 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_brown, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:20:52 np0005480824 podman[94240]: 2025-10-11 03:20:52.644616394 +0000 UTC m=+0.159079451 container start 8876b4da28ac759216a40aeb397ab699f0e74fc432af8215c723cf7adc0a4630 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_brown, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:20:52 np0005480824 friendly_brown[94256]: 167 167
Oct 10 23:20:52 np0005480824 podman[94240]: 2025-10-11 03:20:52.650383478 +0000 UTC m=+0.164846585 container attach 8876b4da28ac759216a40aeb397ab699f0e74fc432af8215c723cf7adc0a4630 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Oct 10 23:20:52 np0005480824 systemd[1]: libpod-8876b4da28ac759216a40aeb397ab699f0e74fc432af8215c723cf7adc0a4630.scope: Deactivated successfully.
Oct 10 23:20:52 np0005480824 podman[94240]: 2025-10-11 03:20:52.651510735 +0000 UTC m=+0.165973782 container died 8876b4da28ac759216a40aeb397ab699f0e74fc432af8215c723cf7adc0a4630 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_brown, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 10 23:20:52 np0005480824 systemd[1]: var-lib-containers-storage-overlay-ecfbc86b4e619acba875cb53e63d4d45117c1ff33fda67df5860d0d1f454abb2-merged.mount: Deactivated successfully.
Oct 10 23:20:52 np0005480824 podman[94240]: 2025-10-11 03:20:52.698951114 +0000 UTC m=+0.213414171 container remove 8876b4da28ac759216a40aeb397ab699f0e74fc432af8215c723cf7adc0a4630 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:20:52 np0005480824 systemd[1]: libpod-conmon-8876b4da28ac759216a40aeb397ab699f0e74fc432af8215c723cf7adc0a4630.scope: Deactivated successfully.
Oct 10 23:20:52 np0005480824 podman[94280]: 2025-10-11 03:20:52.929448983 +0000 UTC m=+0.065999784 container create a781ada0404e89d67f3e57165b665f074f2fd7ae30a7c3e8a3bf8b22334dd65b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 10 23:20:52 np0005480824 systemd[1]: Started libpod-conmon-a781ada0404e89d67f3e57165b665f074f2fd7ae30a7c3e8a3bf8b22334dd65b.scope.
Oct 10 23:20:52 np0005480824 podman[94280]: 2025-10-11 03:20:52.900784234 +0000 UTC m=+0.037335105 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:20:52 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:20:53 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/203015571e6d020ca5626119e7c98fa4f8cc2d160101149acac2d4523cb8adc8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:53 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/203015571e6d020ca5626119e7c98fa4f8cc2d160101149acac2d4523cb8adc8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:53 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/203015571e6d020ca5626119e7c98fa4f8cc2d160101149acac2d4523cb8adc8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:53 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/203015571e6d020ca5626119e7c98fa4f8cc2d160101149acac2d4523cb8adc8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:53 np0005480824 podman[94280]: 2025-10-11 03:20:53.022350207 +0000 UTC m=+0.158901118 container init a781ada0404e89d67f3e57165b665f074f2fd7ae30a7c3e8a3bf8b22334dd65b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_bouman, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 10 23:20:53 np0005480824 podman[94280]: 2025-10-11 03:20:53.031164542 +0000 UTC m=+0.167715373 container start a781ada0404e89d67f3e57165b665f074f2fd7ae30a7c3e8a3bf8b22334dd65b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_bouman, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:20:53 np0005480824 podman[94280]: 2025-10-11 03:20:53.037263724 +0000 UTC m=+0.173814515 container attach a781ada0404e89d67f3e57165b665f074f2fd7ae30a7c3e8a3bf8b22334dd65b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 10 23:20:53 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e17 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:20:53 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Oct 10 23:20:54 np0005480824 pensive_bouman[94297]: {
Oct 10 23:20:54 np0005480824 pensive_bouman[94297]:    "1d0d82ce-20ea-470d-959e-f67202028a60": {
Oct 10 23:20:54 np0005480824 pensive_bouman[94297]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:20:54 np0005480824 pensive_bouman[94297]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 10 23:20:54 np0005480824 pensive_bouman[94297]:        "osd_id": 0,
Oct 10 23:20:54 np0005480824 pensive_bouman[94297]:        "osd_uuid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:20:54 np0005480824 pensive_bouman[94297]:        "type": "bluestore"
Oct 10 23:20:54 np0005480824 pensive_bouman[94297]:    },
Oct 10 23:20:54 np0005480824 pensive_bouman[94297]:    "6875119e-c210-4ad1-aca9-6a8084a5ecc8": {
Oct 10 23:20:54 np0005480824 pensive_bouman[94297]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:20:54 np0005480824 pensive_bouman[94297]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 10 23:20:54 np0005480824 pensive_bouman[94297]:        "osd_id": 1,
Oct 10 23:20:54 np0005480824 pensive_bouman[94297]:        "osd_uuid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:20:54 np0005480824 pensive_bouman[94297]:        "type": "bluestore"
Oct 10 23:20:54 np0005480824 pensive_bouman[94297]:    },
Oct 10 23:20:54 np0005480824 pensive_bouman[94297]:    "e86945e8-6909-4584-9098-cee0dfe9add4": {
Oct 10 23:20:54 np0005480824 pensive_bouman[94297]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:20:54 np0005480824 pensive_bouman[94297]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 10 23:20:54 np0005480824 pensive_bouman[94297]:        "osd_id": 2,
Oct 10 23:20:54 np0005480824 pensive_bouman[94297]:        "osd_uuid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:20:54 np0005480824 pensive_bouman[94297]:        "type": "bluestore"
Oct 10 23:20:54 np0005480824 pensive_bouman[94297]:    }
Oct 10 23:20:54 np0005480824 pensive_bouman[94297]: }
Oct 10 23:20:54 np0005480824 systemd[1]: libpod-a781ada0404e89d67f3e57165b665f074f2fd7ae30a7c3e8a3bf8b22334dd65b.scope: Deactivated successfully.
Oct 10 23:20:54 np0005480824 systemd[1]: libpod-a781ada0404e89d67f3e57165b665f074f2fd7ae30a7c3e8a3bf8b22334dd65b.scope: Consumed 1.058s CPU time.
Oct 10 23:20:54 np0005480824 podman[94280]: 2025-10-11 03:20:54.080458588 +0000 UTC m=+1.217009489 container died a781ada0404e89d67f3e57165b665f074f2fd7ae30a7c3e8a3bf8b22334dd65b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_bouman, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:20:54 np0005480824 systemd[1]: var-lib-containers-storage-overlay-203015571e6d020ca5626119e7c98fa4f8cc2d160101149acac2d4523cb8adc8-merged.mount: Deactivated successfully.
Oct 10 23:20:54 np0005480824 podman[94280]: 2025-10-11 03:20:54.139117498 +0000 UTC m=+1.275668289 container remove a781ada0404e89d67f3e57165b665f074f2fd7ae30a7c3e8a3bf8b22334dd65b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_bouman, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 10 23:20:54 np0005480824 systemd[1]: libpod-conmon-a781ada0404e89d67f3e57165b665f074f2fd7ae30a7c3e8a3bf8b22334dd65b.scope: Deactivated successfully.
Oct 10 23:20:54 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 10 23:20:54 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:54 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 10 23:20:54 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:55 np0005480824 podman[94558]: 2025-10-11 03:20:55.124787976 +0000 UTC m=+0.047511172 container exec a848fe58749db588a5a4b8471e0c9916b9e4a1ccc899f04343e6491a43c45c05 (image=quay.io/ceph/ceph:v18, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:20:55 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:55 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:55 np0005480824 podman[94558]: 2025-10-11 03:20:55.243050042 +0000 UTC m=+0.165773248 container exec_died a848fe58749db588a5a4b8471e0c9916b9e4a1ccc899f04343e6491a43c45c05 (image=quay.io/ceph/ceph:v18, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 10 23:20:55 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 10 23:20:55 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:55 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 10 23:20:55 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:55 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:20:55 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:20:55 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 10 23:20:55 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:20:55 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 10 23:20:55 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:55 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev df638a1a-c5db-455f-afe1-9a8dd104dbef does not exist
Oct 10 23:20:55 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev b6e50968-2267-4406-a6fb-b033b1ba8119 does not exist
Oct 10 23:20:55 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev f20b215c-e0ee-4e40-820b-843480940308 does not exist
Oct 10 23:20:55 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Oct 10 23:20:55 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 10 23:20:55 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 23:20:55 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 10 23:20:55 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:20:55 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:20:55 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:20:56 np0005480824 podman[94822]: 2025-10-11 03:20:56.527503035 +0000 UTC m=+0.062605514 container create 8ffbef1b0004ffd494dc2fee2ff73c6a5037ede20a4d8fc5cc710372e8906557 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_torvalds, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 10 23:20:56 np0005480824 systemd[1]: Started libpod-conmon-8ffbef1b0004ffd494dc2fee2ff73c6a5037ede20a4d8fc5cc710372e8906557.scope.
Oct 10 23:20:56 np0005480824 podman[94822]: 2025-10-11 03:20:56.498076337 +0000 UTC m=+0.033178876 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:20:56 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:20:56 np0005480824 podman[94822]: 2025-10-11 03:20:56.646797704 +0000 UTC m=+0.181900233 container init 8ffbef1b0004ffd494dc2fee2ff73c6a5037ede20a4d8fc5cc710372e8906557 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_torvalds, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 10 23:20:56 np0005480824 podman[94822]: 2025-10-11 03:20:56.659762668 +0000 UTC m=+0.194865147 container start 8ffbef1b0004ffd494dc2fee2ff73c6a5037ede20a4d8fc5cc710372e8906557 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_torvalds, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 10 23:20:56 np0005480824 kind_torvalds[94839]: 167 167
Oct 10 23:20:56 np0005480824 podman[94822]: 2025-10-11 03:20:56.6650308 +0000 UTC m=+0.200133289 container attach 8ffbef1b0004ffd494dc2fee2ff73c6a5037ede20a4d8fc5cc710372e8906557 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Oct 10 23:20:56 np0005480824 systemd[1]: libpod-8ffbef1b0004ffd494dc2fee2ff73c6a5037ede20a4d8fc5cc710372e8906557.scope: Deactivated successfully.
Oct 10 23:20:56 np0005480824 podman[94822]: 2025-10-11 03:20:56.665762008 +0000 UTC m=+0.200864477 container died 8ffbef1b0004ffd494dc2fee2ff73c6a5037ede20a4d8fc5cc710372e8906557 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_torvalds, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 10 23:20:56 np0005480824 systemd[1]: var-lib-containers-storage-overlay-7b7f44954a18c57e492831e2168e4be7b06c7fac740496701a086eb109a52d01-merged.mount: Deactivated successfully.
Oct 10 23:20:56 np0005480824 podman[94822]: 2025-10-11 03:20:56.71588043 +0000 UTC m=+0.250982899 container remove 8ffbef1b0004ffd494dc2fee2ff73c6a5037ede20a4d8fc5cc710372e8906557 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Oct 10 23:20:56 np0005480824 systemd[1]: libpod-conmon-8ffbef1b0004ffd494dc2fee2ff73c6a5037ede20a4d8fc5cc710372e8906557.scope: Deactivated successfully.
Oct 10 23:20:56 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:56 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:56 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:20:56 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:20:56 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:20:56 np0005480824 podman[94863]: 2025-10-11 03:20:56.906770743 +0000 UTC m=+0.047147973 container create 9181bb746cc6ed3892c8ad8c464038cda8c8f4728c0ee23c847ff65daa20d608 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_solomon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Oct 10 23:20:56 np0005480824 systemd[1]: Started libpod-conmon-9181bb746cc6ed3892c8ad8c464038cda8c8f4728c0ee23c847ff65daa20d608.scope.
Oct 10 23:20:56 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:20:56 np0005480824 podman[94863]: 2025-10-11 03:20:56.885837574 +0000 UTC m=+0.026214814 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:20:56 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/957ec41282835b8f7e003091dc20db5f94c05e9f58a686dba1ea7c727a4ffbc1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:56 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/957ec41282835b8f7e003091dc20db5f94c05e9f58a686dba1ea7c727a4ffbc1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:56 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/957ec41282835b8f7e003091dc20db5f94c05e9f58a686dba1ea7c727a4ffbc1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:56 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/957ec41282835b8f7e003091dc20db5f94c05e9f58a686dba1ea7c727a4ffbc1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:56 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/957ec41282835b8f7e003091dc20db5f94c05e9f58a686dba1ea7c727a4ffbc1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:57 np0005480824 podman[94863]: 2025-10-11 03:20:57.007969089 +0000 UTC m=+0.148346309 container init 9181bb746cc6ed3892c8ad8c464038cda8c8f4728c0ee23c847ff65daa20d608 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_solomon, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 10 23:20:57 np0005480824 podman[94863]: 2025-10-11 03:20:57.021576328 +0000 UTC m=+0.161953528 container start 9181bb746cc6ed3892c8ad8c464038cda8c8f4728c0ee23c847ff65daa20d608 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:20:57 np0005480824 podman[94863]: 2025-10-11 03:20:57.025246654 +0000 UTC m=+0.165623864 container attach 9181bb746cc6ed3892c8ad8c464038cda8c8f4728c0ee23c847ff65daa20d608 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_solomon, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 10 23:20:57 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Oct 10 23:20:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:20:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:20:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:20:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:20:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:20:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:20:58 np0005480824 mystifying_solomon[94880]: --> passed data devices: 0 physical, 3 LVM
Oct 10 23:20:58 np0005480824 mystifying_solomon[94880]: --> relative data size: 1.0
Oct 10 23:20:58 np0005480824 mystifying_solomon[94880]: --> All data devices are unavailable
Oct 10 23:20:58 np0005480824 systemd[1]: libpod-9181bb746cc6ed3892c8ad8c464038cda8c8f4728c0ee23c847ff65daa20d608.scope: Deactivated successfully.
Oct 10 23:20:58 np0005480824 systemd[1]: libpod-9181bb746cc6ed3892c8ad8c464038cda8c8f4728c0ee23c847ff65daa20d608.scope: Consumed 1.016s CPU time.
Oct 10 23:20:58 np0005480824 podman[94909]: 2025-10-11 03:20:58.152070972 +0000 UTC m=+0.037774524 container died 9181bb746cc6ed3892c8ad8c464038cda8c8f4728c0ee23c847ff65daa20d608 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_solomon, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:20:58 np0005480824 systemd[1]: var-lib-containers-storage-overlay-957ec41282835b8f7e003091dc20db5f94c05e9f58a686dba1ea7c727a4ffbc1-merged.mount: Deactivated successfully.
Oct 10 23:20:58 np0005480824 podman[94909]: 2025-10-11 03:20:58.507642335 +0000 UTC m=+0.393345897 container remove 9181bb746cc6ed3892c8ad8c464038cda8c8f4728c0ee23c847ff65daa20d608 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_solomon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:20:58 np0005480824 systemd[1]: libpod-conmon-9181bb746cc6ed3892c8ad8c464038cda8c8f4728c0ee23c847ff65daa20d608.scope: Deactivated successfully.
Oct 10 23:20:58 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e17 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:20:59 np0005480824 podman[95065]: 2025-10-11 03:20:59.128939783 +0000 UTC m=+0.039396302 container create a1c053c787fa55d89580da766b657a31da690ba6dc4f89820f895261d487046e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 10 23:20:59 np0005480824 systemd[1]: Started libpod-conmon-a1c053c787fa55d89580da766b657a31da690ba6dc4f89820f895261d487046e.scope.
Oct 10 23:20:59 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:20:59 np0005480824 podman[95065]: 2025-10-11 03:20:59.110463471 +0000 UTC m=+0.020920020 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:20:59 np0005480824 podman[95065]: 2025-10-11 03:20:59.209545808 +0000 UTC m=+0.120002347 container init a1c053c787fa55d89580da766b657a31da690ba6dc4f89820f895261d487046e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_nightingale, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:20:59 np0005480824 podman[95065]: 2025-10-11 03:20:59.220148416 +0000 UTC m=+0.130604925 container start a1c053c787fa55d89580da766b657a31da690ba6dc4f89820f895261d487046e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_nightingale, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:20:59 np0005480824 podman[95065]: 2025-10-11 03:20:59.223501625 +0000 UTC m=+0.133958164 container attach a1c053c787fa55d89580da766b657a31da690ba6dc4f89820f895261d487046e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_nightingale, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0)
Oct 10 23:20:59 np0005480824 nice_nightingale[95081]: 167 167
Oct 10 23:20:59 np0005480824 systemd[1]: libpod-a1c053c787fa55d89580da766b657a31da690ba6dc4f89820f895261d487046e.scope: Deactivated successfully.
Oct 10 23:20:59 np0005480824 podman[95065]: 2025-10-11 03:20:59.225113152 +0000 UTC m=+0.135569661 container died a1c053c787fa55d89580da766b657a31da690ba6dc4f89820f895261d487046e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_nightingale, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:20:59 np0005480824 systemd[1]: var-lib-containers-storage-overlay-271ce97736112e1463480327034f83f78055049ee992b6c75b5631846a13efba-merged.mount: Deactivated successfully.
Oct 10 23:20:59 np0005480824 podman[95065]: 2025-10-11 03:20:59.265119717 +0000 UTC m=+0.175576226 container remove a1c053c787fa55d89580da766b657a31da690ba6dc4f89820f895261d487046e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 10 23:20:59 np0005480824 systemd[1]: libpod-conmon-a1c053c787fa55d89580da766b657a31da690ba6dc4f89820f895261d487046e.scope: Deactivated successfully.
Oct 10 23:20:59 np0005480824 podman[95106]: 2025-10-11 03:20:59.48465177 +0000 UTC m=+0.074754598 container create 4e6c980cbfaa6277b08457144cad0197d99890eee1607f17320945ab32e67a3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_easley, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:20:59 np0005480824 systemd[1]: Started libpod-conmon-4e6c980cbfaa6277b08457144cad0197d99890eee1607f17320945ab32e67a3d.scope.
Oct 10 23:20:59 np0005480824 podman[95106]: 2025-10-11 03:20:59.455849618 +0000 UTC m=+0.045952486 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:20:59 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:20:59 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7e2b4dbc8f44133a4068af63b22afece55025a9a904636ff8c9b5e0b5feb817/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:59 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7e2b4dbc8f44133a4068af63b22afece55025a9a904636ff8c9b5e0b5feb817/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:59 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7e2b4dbc8f44133a4068af63b22afece55025a9a904636ff8c9b5e0b5feb817/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:59 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7e2b4dbc8f44133a4068af63b22afece55025a9a904636ff8c9b5e0b5feb817/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:20:59 np0005480824 podman[95106]: 2025-10-11 03:20:59.59111454 +0000 UTC m=+0.181217348 container init 4e6c980cbfaa6277b08457144cad0197d99890eee1607f17320945ab32e67a3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:20:59 np0005480824 podman[95106]: 2025-10-11 03:20:59.599315322 +0000 UTC m=+0.189418150 container start 4e6c980cbfaa6277b08457144cad0197d99890eee1607f17320945ab32e67a3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_easley, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 10 23:20:59 np0005480824 podman[95106]: 2025-10-11 03:20:59.6035079 +0000 UTC m=+0.193610738 container attach 4e6c980cbfaa6277b08457144cad0197d99890eee1607f17320945ab32e67a3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_easley, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:20:59 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Oct 10 23:21:00 np0005480824 sharp_easley[95122]: {
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:    "0": [
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:        {
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:            "devices": [
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:                "/dev/loop3"
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:            ],
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:            "lv_name": "ceph_lv0",
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:            "lv_size": "21470642176",
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0d82ce-20ea-470d-959e-f67202028a60,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:            "lv_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:            "name": "ceph_lv0",
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:            "tags": {
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:                "ceph.block_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:                "ceph.cluster_name": "ceph",
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:                "ceph.crush_device_class": "",
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:                "ceph.encrypted": "0",
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:                "ceph.osd_fsid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:                "ceph.osd_id": "0",
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:                "ceph.type": "block",
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:                "ceph.vdo": "0"
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:            },
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:            "type": "block",
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:            "vg_name": "ceph_vg0"
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:        }
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:    ],
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:    "1": [
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:        {
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:            "devices": [
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:                "/dev/loop4"
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:            ],
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:            "lv_name": "ceph_lv1",
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:            "lv_size": "21470642176",
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6875119e-c210-4ad1-aca9-6a8084a5ecc8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:            "lv_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:            "name": "ceph_lv1",
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:            "tags": {
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:                "ceph.block_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:                "ceph.cluster_name": "ceph",
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:                "ceph.crush_device_class": "",
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:                "ceph.encrypted": "0",
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:                "ceph.osd_fsid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:                "ceph.osd_id": "1",
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:                "ceph.type": "block",
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:                "ceph.vdo": "0"
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:            },
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:            "type": "block",
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:            "vg_name": "ceph_vg1"
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:        }
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:    ],
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:    "2": [
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:        {
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:            "devices": [
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:                "/dev/loop5"
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:            ],
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:            "lv_name": "ceph_lv2",
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:            "lv_size": "21470642176",
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e86945e8-6909-4584-9098-cee0dfe9add4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:            "lv_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:            "name": "ceph_lv2",
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:            "tags": {
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:                "ceph.block_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:                "ceph.cluster_name": "ceph",
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:                "ceph.crush_device_class": "",
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:                "ceph.encrypted": "0",
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:                "ceph.osd_fsid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:                "ceph.osd_id": "2",
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:                "ceph.type": "block",
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:                "ceph.vdo": "0"
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:            },
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:            "type": "block",
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:            "vg_name": "ceph_vg2"
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:        }
Oct 10 23:21:00 np0005480824 sharp_easley[95122]:    ]
Oct 10 23:21:00 np0005480824 sharp_easley[95122]: }
Oct 10 23:21:00 np0005480824 systemd[1]: libpod-4e6c980cbfaa6277b08457144cad0197d99890eee1607f17320945ab32e67a3d.scope: Deactivated successfully.
Oct 10 23:21:00 np0005480824 podman[95106]: 2025-10-11 03:21:00.301196813 +0000 UTC m=+0.891299591 container died 4e6c980cbfaa6277b08457144cad0197d99890eee1607f17320945ab32e67a3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:21:00 np0005480824 systemd[1]: var-lib-containers-storage-overlay-e7e2b4dbc8f44133a4068af63b22afece55025a9a904636ff8c9b5e0b5feb817-merged.mount: Deactivated successfully.
Oct 10 23:21:00 np0005480824 podman[95106]: 2025-10-11 03:21:00.372953272 +0000 UTC m=+0.963056070 container remove 4e6c980cbfaa6277b08457144cad0197d99890eee1607f17320945ab32e67a3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_easley, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 10 23:21:00 np0005480824 systemd[1]: libpod-conmon-4e6c980cbfaa6277b08457144cad0197d99890eee1607f17320945ab32e67a3d.scope: Deactivated successfully.
Oct 10 23:21:01 np0005480824 podman[95283]: 2025-10-11 03:21:01.131946609 +0000 UTC m=+0.037526779 container create 41d33384b2ca152150c46217994ccb4c410dd86e5d91486367aa1a1ca06b6a9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_chebyshev, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:21:01 np0005480824 systemd[1]: Started libpod-conmon-41d33384b2ca152150c46217994ccb4c410dd86e5d91486367aa1a1ca06b6a9a.scope.
Oct 10 23:21:01 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:21:01 np0005480824 podman[95283]: 2025-10-11 03:21:01.210400133 +0000 UTC m=+0.115980353 container init 41d33384b2ca152150c46217994ccb4c410dd86e5d91486367aa1a1ca06b6a9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:21:01 np0005480824 podman[95283]: 2025-10-11 03:21:01.116483097 +0000 UTC m=+0.022063277 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:21:01 np0005480824 podman[95283]: 2025-10-11 03:21:01.216775332 +0000 UTC m=+0.122355512 container start 41d33384b2ca152150c46217994ccb4c410dd86e5d91486367aa1a1ca06b6a9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:21:01 np0005480824 podman[95283]: 2025-10-11 03:21:01.220005978 +0000 UTC m=+0.125586178 container attach 41d33384b2ca152150c46217994ccb4c410dd86e5d91486367aa1a1ca06b6a9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 10 23:21:01 np0005480824 romantic_chebyshev[95299]: 167 167
Oct 10 23:21:01 np0005480824 systemd[1]: libpod-41d33384b2ca152150c46217994ccb4c410dd86e5d91486367aa1a1ca06b6a9a.scope: Deactivated successfully.
Oct 10 23:21:01 np0005480824 podman[95283]: 2025-10-11 03:21:01.223572841 +0000 UTC m=+0.129153011 container died 41d33384b2ca152150c46217994ccb4c410dd86e5d91486367aa1a1ca06b6a9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_chebyshev, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 10 23:21:01 np0005480824 systemd[1]: var-lib-containers-storage-overlay-2342d0c0a09642a5ce4f130c0cc656f44c2aa5f4248078c1b1c84b972b76b9c3-merged.mount: Deactivated successfully.
Oct 10 23:21:01 np0005480824 podman[95283]: 2025-10-11 03:21:01.26457793 +0000 UTC m=+0.170158110 container remove 41d33384b2ca152150c46217994ccb4c410dd86e5d91486367aa1a1ca06b6a9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 10 23:21:01 np0005480824 systemd[1]: libpod-conmon-41d33384b2ca152150c46217994ccb4c410dd86e5d91486367aa1a1ca06b6a9a.scope: Deactivated successfully.
Oct 10 23:21:01 np0005480824 podman[95323]: 2025-10-11 03:21:01.475707966 +0000 UTC m=+0.065706197 container create f2b3b465a660cda0d5bbf75101632119b7809403c5fafc51ec7994fcea9f9e77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 10 23:21:01 np0005480824 systemd[1]: Started libpod-conmon-f2b3b465a660cda0d5bbf75101632119b7809403c5fafc51ec7994fcea9f9e77.scope.
Oct 10 23:21:01 np0005480824 podman[95323]: 2025-10-11 03:21:01.446477063 +0000 UTC m=+0.036475344 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:21:01 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:21:01 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/837c9fe8ae7d333ce8c16b7764d9c4df030448ef4f70311e957c10932663281a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:01 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/837c9fe8ae7d333ce8c16b7764d9c4df030448ef4f70311e957c10932663281a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:01 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/837c9fe8ae7d333ce8c16b7764d9c4df030448ef4f70311e957c10932663281a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:01 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/837c9fe8ae7d333ce8c16b7764d9c4df030448ef4f70311e957c10932663281a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:01 np0005480824 podman[95323]: 2025-10-11 03:21:01.581983271 +0000 UTC m=+0.171981542 container init f2b3b465a660cda0d5bbf75101632119b7809403c5fafc51ec7994fcea9f9e77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_elbakyan, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:21:01 np0005480824 podman[95323]: 2025-10-11 03:21:01.596193993 +0000 UTC m=+0.186192214 container start f2b3b465a660cda0d5bbf75101632119b7809403c5fafc51ec7994fcea9f9e77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_elbakyan, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 10 23:21:01 np0005480824 podman[95323]: 2025-10-11 03:21:01.600634508 +0000 UTC m=+0.190632789 container attach f2b3b465a660cda0d5bbf75101632119b7809403c5fafc51ec7994fcea9f9e77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 10 23:21:01 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Oct 10 23:21:02 np0005480824 friendly_elbakyan[95339]: {
Oct 10 23:21:02 np0005480824 friendly_elbakyan[95339]:    "1d0d82ce-20ea-470d-959e-f67202028a60": {
Oct 10 23:21:02 np0005480824 friendly_elbakyan[95339]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:21:02 np0005480824 friendly_elbakyan[95339]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 10 23:21:02 np0005480824 friendly_elbakyan[95339]:        "osd_id": 0,
Oct 10 23:21:02 np0005480824 friendly_elbakyan[95339]:        "osd_uuid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:21:02 np0005480824 friendly_elbakyan[95339]:        "type": "bluestore"
Oct 10 23:21:02 np0005480824 friendly_elbakyan[95339]:    },
Oct 10 23:21:02 np0005480824 friendly_elbakyan[95339]:    "6875119e-c210-4ad1-aca9-6a8084a5ecc8": {
Oct 10 23:21:02 np0005480824 friendly_elbakyan[95339]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:21:02 np0005480824 friendly_elbakyan[95339]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 10 23:21:02 np0005480824 friendly_elbakyan[95339]:        "osd_id": 1,
Oct 10 23:21:02 np0005480824 friendly_elbakyan[95339]:        "osd_uuid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:21:02 np0005480824 friendly_elbakyan[95339]:        "type": "bluestore"
Oct 10 23:21:02 np0005480824 friendly_elbakyan[95339]:    },
Oct 10 23:21:02 np0005480824 friendly_elbakyan[95339]:    "e86945e8-6909-4584-9098-cee0dfe9add4": {
Oct 10 23:21:02 np0005480824 friendly_elbakyan[95339]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:21:02 np0005480824 friendly_elbakyan[95339]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 10 23:21:02 np0005480824 friendly_elbakyan[95339]:        "osd_id": 2,
Oct 10 23:21:02 np0005480824 friendly_elbakyan[95339]:        "osd_uuid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:21:02 np0005480824 friendly_elbakyan[95339]:        "type": "bluestore"
Oct 10 23:21:02 np0005480824 friendly_elbakyan[95339]:    }
Oct 10 23:21:02 np0005480824 friendly_elbakyan[95339]: }
Oct 10 23:21:02 np0005480824 systemd[1]: libpod-f2b3b465a660cda0d5bbf75101632119b7809403c5fafc51ec7994fcea9f9e77.scope: Deactivated successfully.
Oct 10 23:21:02 np0005480824 systemd[1]: libpod-f2b3b465a660cda0d5bbf75101632119b7809403c5fafc51ec7994fcea9f9e77.scope: Consumed 1.120s CPU time.
Oct 10 23:21:02 np0005480824 podman[95323]: 2025-10-11 03:21:02.706915135 +0000 UTC m=+1.296913326 container died f2b3b465a660cda0d5bbf75101632119b7809403c5fafc51ec7994fcea9f9e77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:21:02 np0005480824 systemd[1]: var-lib-containers-storage-overlay-837c9fe8ae7d333ce8c16b7764d9c4df030448ef4f70311e957c10932663281a-merged.mount: Deactivated successfully.
Oct 10 23:21:02 np0005480824 podman[95323]: 2025-10-11 03:21:02.762195957 +0000 UTC m=+1.352194148 container remove f2b3b465a660cda0d5bbf75101632119b7809403c5fafc51ec7994fcea9f9e77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:21:02 np0005480824 systemd[1]: libpod-conmon-f2b3b465a660cda0d5bbf75101632119b7809403c5fafc51ec7994fcea9f9e77.scope: Deactivated successfully.
Oct 10 23:21:02 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 10 23:21:02 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:21:02 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 10 23:21:02 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:21:02 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:21:02 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:21:03 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e17 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:21:03 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Oct 10 23:21:04 np0005480824 python3[95461]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 92cfe4d4-4917-5be1-9d00-73758793a62b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:21:04 np0005480824 podman[95463]: 2025-10-11 03:21:04.578922217 +0000 UTC m=+0.068687608 container create c7bca53d54ab5b8919d9b52a85fd87b3d8c784ffa3e1829a95a84cbec7232075 (image=quay.io/ceph/ceph:v18, name=nice_lichterman, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 10 23:21:04 np0005480824 systemd[1]: Started libpod-conmon-c7bca53d54ab5b8919d9b52a85fd87b3d8c784ffa3e1829a95a84cbec7232075.scope.
Oct 10 23:21:04 np0005480824 podman[95463]: 2025-10-11 03:21:04.550446261 +0000 UTC m=+0.040211691 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:21:04 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:21:04 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2847ec71d340fa405984f18b04f867f8b8c1290b959cb5115624f96e67dd7d91/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:04 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2847ec71d340fa405984f18b04f867f8b8c1290b959cb5115624f96e67dd7d91/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:04 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2847ec71d340fa405984f18b04f867f8b8c1290b959cb5115624f96e67dd7d91/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:04 np0005480824 podman[95463]: 2025-10-11 03:21:04.683921153 +0000 UTC m=+0.173686603 container init c7bca53d54ab5b8919d9b52a85fd87b3d8c784ffa3e1829a95a84cbec7232075 (image=quay.io/ceph/ceph:v18, name=nice_lichterman, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Oct 10 23:21:04 np0005480824 podman[95463]: 2025-10-11 03:21:04.6970799 +0000 UTC m=+0.186845290 container start c7bca53d54ab5b8919d9b52a85fd87b3d8c784ffa3e1829a95a84cbec7232075 (image=quay.io/ceph/ceph:v18, name=nice_lichterman, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:21:04 np0005480824 podman[95463]: 2025-10-11 03:21:04.702028756 +0000 UTC m=+0.191794206 container attach c7bca53d54ab5b8919d9b52a85fd87b3d8c784ffa3e1829a95a84cbec7232075 (image=quay.io/ceph/ceph:v18, name=nice_lichterman, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 10 23:21:05 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Oct 10 23:21:05 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1531981657' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 10 23:21:05 np0005480824 nice_lichterman[95479]: 
Oct 10 23:21:05 np0005480824 nice_lichterman[95479]: {"fsid":"92cfe4d4-4917-5be1-9d00-73758793a62b","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":146,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":17,"num_osds":3,"num_up_osds":3,"osd_up_since":1760152846,"num_in_osds":3,"osd_in_since":1760152815,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":1}],"num_pgs":1,"num_pools":1,"num_objects":2,"data_bytes":459280,"bytes_used":922333184,"bytes_avail":63489593344,"bytes_total":64411926528},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-10-11T03:20:29.789535+0000","services":{}},"progress_events":{}}
Oct 10 23:21:05 np0005480824 systemd[1]: libpod-c7bca53d54ab5b8919d9b52a85fd87b3d8c784ffa3e1829a95a84cbec7232075.scope: Deactivated successfully.
Oct 10 23:21:05 np0005480824 podman[95463]: 2025-10-11 03:21:05.350753245 +0000 UTC m=+0.840518645 container died c7bca53d54ab5b8919d9b52a85fd87b3d8c784ffa3e1829a95a84cbec7232075 (image=quay.io/ceph/ceph:v18, name=nice_lichterman, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Oct 10 23:21:05 np0005480824 systemd[1]: var-lib-containers-storage-overlay-2847ec71d340fa405984f18b04f867f8b8c1290b959cb5115624f96e67dd7d91-merged.mount: Deactivated successfully.
Oct 10 23:21:05 np0005480824 podman[95463]: 2025-10-11 03:21:05.408917254 +0000 UTC m=+0.898682654 container remove c7bca53d54ab5b8919d9b52a85fd87b3d8c784ffa3e1829a95a84cbec7232075 (image=quay.io/ceph/ceph:v18, name=nice_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True)
Oct 10 23:21:05 np0005480824 systemd[1]: libpod-conmon-c7bca53d54ab5b8919d9b52a85fd87b3d8c784ffa3e1829a95a84cbec7232075.scope: Deactivated successfully.
Oct 10 23:21:05 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Oct 10 23:21:05 np0005480824 python3[95541]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 92cfe4d4-4917-5be1-9d00-73758793a62b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:21:06 np0005480824 podman[95542]: 2025-10-11 03:21:06.066019319 +0000 UTC m=+0.069526817 container create 138b123f8f7f500975eddc4a2c447f40156716da6576e974178c6c6c909ba97c (image=quay.io/ceph/ceph:v18, name=stoic_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 10 23:21:06 np0005480824 systemd[1]: Started libpod-conmon-138b123f8f7f500975eddc4a2c447f40156716da6576e974178c6c6c909ba97c.scope.
Oct 10 23:21:06 np0005480824 podman[95542]: 2025-10-11 03:21:06.033922969 +0000 UTC m=+0.037430527 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:21:06 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:21:06 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d18176c611113de35ed2bb4fa03297ed7a86486e5271170319316e70c87849b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:06 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d18176c611113de35ed2bb4fa03297ed7a86486e5271170319316e70c87849b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:06 np0005480824 podman[95542]: 2025-10-11 03:21:06.171368533 +0000 UTC m=+0.174876071 container init 138b123f8f7f500975eddc4a2c447f40156716da6576e974178c6c6c909ba97c (image=quay.io/ceph/ceph:v18, name=stoic_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:21:06 np0005480824 podman[95542]: 2025-10-11 03:21:06.182200976 +0000 UTC m=+0.185708464 container start 138b123f8f7f500975eddc4a2c447f40156716da6576e974178c6c6c909ba97c (image=quay.io/ceph/ceph:v18, name=stoic_cerf, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 10 23:21:06 np0005480824 podman[95542]: 2025-10-11 03:21:06.186807814 +0000 UTC m=+0.190315352 container attach 138b123f8f7f500975eddc4a2c447f40156716da6576e974178c6c6c909ba97c (image=quay.io/ceph/ceph:v18, name=stoic_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:21:06 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct 10 23:21:06 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1651110120' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 10 23:21:06 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Oct 10 23:21:06 np0005480824 ceph-mon[74326]: from='client.? 192.168.122.100:0/1651110120' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 10 23:21:06 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1651110120' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 10 23:21:06 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e18 e18: 3 total, 3 up, 3 in
Oct 10 23:21:06 np0005480824 stoic_cerf[95557]: pool 'vms' created
Oct 10 23:21:06 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 3 up, 3 in
Oct 10 23:21:06 np0005480824 systemd[1]: libpod-138b123f8f7f500975eddc4a2c447f40156716da6576e974178c6c6c909ba97c.scope: Deactivated successfully.
Oct 10 23:21:06 np0005480824 podman[95542]: 2025-10-11 03:21:06.967676413 +0000 UTC m=+0.971183911 container died 138b123f8f7f500975eddc4a2c447f40156716da6576e974178c6c6c909ba97c (image=quay.io/ceph/ceph:v18, name=stoic_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:21:06 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 18 pg[2.0( empty local-lis/les=0/0 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [2] r=0 lpr=18 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:07 np0005480824 systemd[1]: var-lib-containers-storage-overlay-1d18176c611113de35ed2bb4fa03297ed7a86486e5271170319316e70c87849b-merged.mount: Deactivated successfully.
Oct 10 23:21:07 np0005480824 podman[95542]: 2025-10-11 03:21:07.033487441 +0000 UTC m=+1.036994909 container remove 138b123f8f7f500975eddc4a2c447f40156716da6576e974178c6c6c909ba97c (image=quay.io/ceph/ceph:v18, name=stoic_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:21:07 np0005480824 systemd[1]: libpod-conmon-138b123f8f7f500975eddc4a2c447f40156716da6576e974178c6c6c909ba97c.scope: Deactivated successfully.
Oct 10 23:21:07 np0005480824 python3[95620]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 92cfe4d4-4917-5be1-9d00-73758793a62b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:21:07 np0005480824 podman[95621]: 2025-10-11 03:21:07.482833088 +0000 UTC m=+0.065046632 container create 41a14ce0151dfd45b166e2601da19f98444a060d5cc0f07047a176ae0c95ff57 (image=quay.io/ceph/ceph:v18, name=interesting_lamport, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:21:07 np0005480824 systemd[1]: Started libpod-conmon-41a14ce0151dfd45b166e2601da19f98444a060d5cc0f07047a176ae0c95ff57.scope.
Oct 10 23:21:07 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:21:07 np0005480824 podman[95621]: 2025-10-11 03:21:07.461689684 +0000 UTC m=+0.043903258 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:21:07 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/491bd251398d7969e48e0668ec6f17316bb5d889b245811cbc8aece8e1792465/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:07 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/491bd251398d7969e48e0668ec6f17316bb5d889b245811cbc8aece8e1792465/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:07 np0005480824 podman[95621]: 2025-10-11 03:21:07.58001185 +0000 UTC m=+0.162225484 container init 41a14ce0151dfd45b166e2601da19f98444a060d5cc0f07047a176ae0c95ff57 (image=quay.io/ceph/ceph:v18, name=interesting_lamport, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:21:07 np0005480824 podman[95621]: 2025-10-11 03:21:07.589545533 +0000 UTC m=+0.171759107 container start 41a14ce0151dfd45b166e2601da19f98444a060d5cc0f07047a176ae0c95ff57 (image=quay.io/ceph/ceph:v18, name=interesting_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 10 23:21:07 np0005480824 podman[95621]: 2025-10-11 03:21:07.59494371 +0000 UTC m=+0.177157254 container attach 41a14ce0151dfd45b166e2601da19f98444a060d5cc0f07047a176ae0c95ff57 (image=quay.io/ceph/ceph:v18, name=interesting_lamport, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:21:07 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v59: 2 pgs: 1 creating+peering, 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:21:07 np0005480824 ceph-mon[74326]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 10 23:21:07 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Oct 10 23:21:07 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e19 e19: 3 total, 3 up, 3 in
Oct 10 23:21:07 np0005480824 ceph-mon[74326]: from='client.? 192.168.122.100:0/1651110120' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 10 23:21:07 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 3 up, 3 in
Oct 10 23:21:07 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 19 pg[2.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [2] r=0 lpr=18 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:08 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct 10 23:21:08 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2422149761' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 10 23:21:08 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e19 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:21:08 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Oct 10 23:21:08 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2422149761' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 10 23:21:08 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e20 e20: 3 total, 3 up, 3 in
Oct 10 23:21:08 np0005480824 interesting_lamport[95636]: pool 'volumes' created
Oct 10 23:21:08 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 3 up, 3 in
Oct 10 23:21:08 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 20 pg[3.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [1] r=0 lpr=20 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:08 np0005480824 ceph-mon[74326]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 10 23:21:08 np0005480824 ceph-mon[74326]: from='client.? 192.168.122.100:0/2422149761' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 10 23:21:08 np0005480824 systemd[1]: libpod-41a14ce0151dfd45b166e2601da19f98444a060d5cc0f07047a176ae0c95ff57.scope: Deactivated successfully.
Oct 10 23:21:08 np0005480824 podman[95621]: 2025-10-11 03:21:08.990781538 +0000 UTC m=+1.572995112 container died 41a14ce0151dfd45b166e2601da19f98444a060d5cc0f07047a176ae0c95ff57 (image=quay.io/ceph/ceph:v18, name=interesting_lamport, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 10 23:21:09 np0005480824 systemd[1]: var-lib-containers-storage-overlay-491bd251398d7969e48e0668ec6f17316bb5d889b245811cbc8aece8e1792465-merged.mount: Deactivated successfully.
Oct 10 23:21:09 np0005480824 podman[95621]: 2025-10-11 03:21:09.046799098 +0000 UTC m=+1.629012642 container remove 41a14ce0151dfd45b166e2601da19f98444a060d5cc0f07047a176ae0c95ff57 (image=quay.io/ceph/ceph:v18, name=interesting_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 10 23:21:09 np0005480824 systemd[1]: libpod-conmon-41a14ce0151dfd45b166e2601da19f98444a060d5cc0f07047a176ae0c95ff57.scope: Deactivated successfully.
Oct 10 23:21:09 np0005480824 python3[95699]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 92cfe4d4-4917-5be1-9d00-73758793a62b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:21:09 np0005480824 podman[95700]: 2025-10-11 03:21:09.412532489 +0000 UTC m=+0.051043525 container create 69221dbad465131b08e294b2e235719a21f800e1beea1ac4806602d9bc17dfa2 (image=quay.io/ceph/ceph:v18, name=beautiful_cray, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 10 23:21:09 np0005480824 systemd[1]: Started libpod-conmon-69221dbad465131b08e294b2e235719a21f800e1beea1ac4806602d9bc17dfa2.scope.
Oct 10 23:21:09 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:21:09 np0005480824 podman[95700]: 2025-10-11 03:21:09.397236521 +0000 UTC m=+0.035747587 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:21:09 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53eff9689b0ab216bad4c943a65bacfcb1bf9d917a22431105c2b4c5daa38952/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:09 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53eff9689b0ab216bad4c943a65bacfcb1bf9d917a22431105c2b4c5daa38952/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:09 np0005480824 podman[95700]: 2025-10-11 03:21:09.505154325 +0000 UTC m=+0.143665371 container init 69221dbad465131b08e294b2e235719a21f800e1beea1ac4806602d9bc17dfa2 (image=quay.io/ceph/ceph:v18, name=beautiful_cray, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 10 23:21:09 np0005480824 podman[95700]: 2025-10-11 03:21:09.510571922 +0000 UTC m=+0.149083008 container start 69221dbad465131b08e294b2e235719a21f800e1beea1ac4806602d9bc17dfa2 (image=quay.io/ceph/ceph:v18, name=beautiful_cray, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 10 23:21:09 np0005480824 podman[95700]: 2025-10-11 03:21:09.514952014 +0000 UTC m=+0.153463080 container attach 69221dbad465131b08e294b2e235719a21f800e1beea1ac4806602d9bc17dfa2 (image=quay.io/ceph/ceph:v18, name=beautiful_cray, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 10 23:21:09 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v62: 3 pgs: 1 unknown, 1 creating+peering, 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:21:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Oct 10 23:21:09 np0005480824 ceph-mon[74326]: from='client.? 192.168.122.100:0/2422149761' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 10 23:21:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e21 e21: 3 total, 3 up, 3 in
Oct 10 23:21:09 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 3 up, 3 in
Oct 10 23:21:09 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 21 pg[3.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [1] r=0 lpr=20 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:10 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct 10 23:21:10 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/868653790' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 10 23:21:10 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Oct 10 23:21:10 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/868653790' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 10 23:21:10 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e22 e22: 3 total, 3 up, 3 in
Oct 10 23:21:10 np0005480824 beautiful_cray[95716]: pool 'backups' created
Oct 10 23:21:10 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 3 up, 3 in
Oct 10 23:21:11 np0005480824 ceph-mon[74326]: from='client.? 192.168.122.100:0/868653790' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 10 23:21:11 np0005480824 systemd[1]: libpod-69221dbad465131b08e294b2e235719a21f800e1beea1ac4806602d9bc17dfa2.scope: Deactivated successfully.
Oct 10 23:21:11 np0005480824 podman[95700]: 2025-10-11 03:21:11.019551046 +0000 UTC m=+1.658062102 container died 69221dbad465131b08e294b2e235719a21f800e1beea1ac4806602d9bc17dfa2 (image=quay.io/ceph/ceph:v18, name=beautiful_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:21:11 np0005480824 systemd[1]: var-lib-containers-storage-overlay-53eff9689b0ab216bad4c943a65bacfcb1bf9d917a22431105c2b4c5daa38952-merged.mount: Deactivated successfully.
Oct 10 23:21:11 np0005480824 podman[95700]: 2025-10-11 03:21:11.075388771 +0000 UTC m=+1.713899827 container remove 69221dbad465131b08e294b2e235719a21f800e1beea1ac4806602d9bc17dfa2 (image=quay.io/ceph/ceph:v18, name=beautiful_cray, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:21:11 np0005480824 systemd[1]: libpod-conmon-69221dbad465131b08e294b2e235719a21f800e1beea1ac4806602d9bc17dfa2.scope: Deactivated successfully.
Oct 10 23:21:11 np0005480824 python3[95781]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 92cfe4d4-4917-5be1-9d00-73758793a62b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:21:11 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 22 pg[4.0( empty local-lis/les=0/0 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [0] r=0 lpr=22 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:11 np0005480824 podman[95782]: 2025-10-11 03:21:11.489873473 +0000 UTC m=+0.029084291 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:21:11 np0005480824 podman[95782]: 2025-10-11 03:21:11.646390662 +0000 UTC m=+0.185601480 container create a0b739b05954748dbf5c56032cd35b836f5c815ad8f7b2f169721782bb6db307 (image=quay.io/ceph/ceph:v18, name=adoring_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:21:11 np0005480824 systemd[1]: Started libpod-conmon-a0b739b05954748dbf5c56032cd35b836f5c815ad8f7b2f169721782bb6db307.scope.
Oct 10 23:21:11 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v65: 4 pgs: 1 unknown, 2 creating+peering, 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:21:11 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:21:11 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/313a40216fc1a6f443dd7de5e6aa1a6efd2af5a26f46dc708f620809974c21a7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:11 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/313a40216fc1a6f443dd7de5e6aa1a6efd2af5a26f46dc708f620809974c21a7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:11 np0005480824 podman[95782]: 2025-10-11 03:21:11.877998838 +0000 UTC m=+0.417209746 container init a0b739b05954748dbf5c56032cd35b836f5c815ad8f7b2f169721782bb6db307 (image=quay.io/ceph/ceph:v18, name=adoring_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:21:11 np0005480824 podman[95782]: 2025-10-11 03:21:11.885188176 +0000 UTC m=+0.424399004 container start a0b739b05954748dbf5c56032cd35b836f5c815ad8f7b2f169721782bb6db307 (image=quay.io/ceph/ceph:v18, name=adoring_grothendieck, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 10 23:21:11 np0005480824 podman[95782]: 2025-10-11 03:21:11.996757585 +0000 UTC m=+0.535968423 container attach a0b739b05954748dbf5c56032cd35b836f5c815ad8f7b2f169721782bb6db307 (image=quay.io/ceph/ceph:v18, name=adoring_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 10 23:21:12 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Oct 10 23:21:12 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e23 e23: 3 total, 3 up, 3 in
Oct 10 23:21:12 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 3 up, 3 in
Oct 10 23:21:12 np0005480824 ceph-mon[74326]: from='client.? 192.168.122.100:0/868653790' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 10 23:21:12 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 23 pg[4.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [0] r=0 lpr=22 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:12 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct 10 23:21:12 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1089538030' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 10 23:21:13 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Oct 10 23:21:13 np0005480824 ceph-mon[74326]: from='client.? 192.168.122.100:0/1089538030' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 10 23:21:13 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1089538030' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 10 23:21:13 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e24 e24: 3 total, 3 up, 3 in
Oct 10 23:21:13 np0005480824 adoring_grothendieck[95797]: pool 'images' created
Oct 10 23:21:13 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 3 up, 3 in
Oct 10 23:21:13 np0005480824 systemd[1]: libpod-a0b739b05954748dbf5c56032cd35b836f5c815ad8f7b2f169721782bb6db307.scope: Deactivated successfully.
Oct 10 23:21:13 np0005480824 podman[95782]: 2025-10-11 03:21:13.462250753 +0000 UTC m=+2.001461611 container died a0b739b05954748dbf5c56032cd35b836f5c815ad8f7b2f169721782bb6db307 (image=quay.io/ceph/ceph:v18, name=adoring_grothendieck, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 10 23:21:13 np0005480824 systemd[1]: var-lib-containers-storage-overlay-313a40216fc1a6f443dd7de5e6aa1a6efd2af5a26f46dc708f620809974c21a7-merged.mount: Deactivated successfully.
Oct 10 23:21:13 np0005480824 podman[95782]: 2025-10-11 03:21:13.508871593 +0000 UTC m=+2.048082401 container remove a0b739b05954748dbf5c56032cd35b836f5c815ad8f7b2f169721782bb6db307 (image=quay.io/ceph/ceph:v18, name=adoring_grothendieck, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:21:13 np0005480824 systemd[1]: libpod-conmon-a0b739b05954748dbf5c56032cd35b836f5c815ad8f7b2f169721782bb6db307.scope: Deactivated successfully.
Oct 10 23:21:13 np0005480824 ceph-mon[74326]: log_channel(cluster) log [WRN] : Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 10 23:21:13 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e24 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:21:13 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v68: 5 pgs: 1 unknown, 3 active+clean, 1 creating+peering; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:21:13 np0005480824 python3[95862]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 92cfe4d4-4917-5be1-9d00-73758793a62b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:21:13 np0005480824 podman[95863]: 2025-10-11 03:21:13.928556464 +0000 UTC m=+0.049416936 container create a4a72ebde15ae05b15ce8691054b6d1899da43557f078aee52c98227d32e3f9f (image=quay.io/ceph/ceph:v18, name=frosty_bohr, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 10 23:21:13 np0005480824 systemd[1]: Started libpod-conmon-a4a72ebde15ae05b15ce8691054b6d1899da43557f078aee52c98227d32e3f9f.scope.
Oct 10 23:21:13 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:21:13 np0005480824 podman[95863]: 2025-10-11 03:21:13.903275813 +0000 UTC m=+0.024136275 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:21:14 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ca740a68ff0a4562a3e0045a632dd3a6dae8d2ab2a1cea74bc9415cc4ab4382/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:14 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ca740a68ff0a4562a3e0045a632dd3a6dae8d2ab2a1cea74bc9415cc4ab4382/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:14 np0005480824 podman[95863]: 2025-10-11 03:21:14.017775101 +0000 UTC m=+0.138635593 container init a4a72ebde15ae05b15ce8691054b6d1899da43557f078aee52c98227d32e3f9f (image=quay.io/ceph/ceph:v18, name=frosty_bohr, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 10 23:21:14 np0005480824 podman[95863]: 2025-10-11 03:21:14.029573186 +0000 UTC m=+0.150433638 container start a4a72ebde15ae05b15ce8691054b6d1899da43557f078aee52c98227d32e3f9f (image=quay.io/ceph/ceph:v18, name=frosty_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 10 23:21:14 np0005480824 podman[95863]: 2025-10-11 03:21:14.033910699 +0000 UTC m=+0.154771201 container attach a4a72ebde15ae05b15ce8691054b6d1899da43557f078aee52c98227d32e3f9f (image=quay.io/ceph/ceph:v18, name=frosty_bohr, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 10 23:21:14 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 24 pg[5.0( empty local-lis/les=0/0 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [2] r=0 lpr=24 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:14 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Oct 10 23:21:14 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e25 e25: 3 total, 3 up, 3 in
Oct 10 23:21:14 np0005480824 ceph-mon[74326]: from='client.? 192.168.122.100:0/1089538030' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 10 23:21:14 np0005480824 ceph-mon[74326]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 10 23:21:14 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
Oct 10 23:21:14 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 25 pg[5.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [2] r=0 lpr=24 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:14 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct 10 23:21:14 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1997249421' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 10 23:21:15 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Oct 10 23:21:15 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1997249421' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 10 23:21:15 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Oct 10 23:21:15 np0005480824 frosty_bohr[95878]: pool 'cephfs.cephfs.meta' created
Oct 10 23:21:15 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Oct 10 23:21:15 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 26 pg[6.0( empty local-lis/les=0/0 n=0 ec=26/26 lis/c=0/0 les/c/f=0/0/0 sis=26) [0] r=0 lpr=26 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:15 np0005480824 ceph-mon[74326]: from='client.? 192.168.122.100:0/1997249421' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 10 23:21:15 np0005480824 systemd[1]: libpod-a4a72ebde15ae05b15ce8691054b6d1899da43557f078aee52c98227d32e3f9f.scope: Deactivated successfully.
Oct 10 23:21:15 np0005480824 podman[95863]: 2025-10-11 03:21:15.498490563 +0000 UTC m=+1.619351045 container died a4a72ebde15ae05b15ce8691054b6d1899da43557f078aee52c98227d32e3f9f (image=quay.io/ceph/ceph:v18, name=frosty_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 10 23:21:15 np0005480824 systemd[1]: var-lib-containers-storage-overlay-1ca740a68ff0a4562a3e0045a632dd3a6dae8d2ab2a1cea74bc9415cc4ab4382-merged.mount: Deactivated successfully.
Oct 10 23:21:15 np0005480824 podman[95863]: 2025-10-11 03:21:15.557503203 +0000 UTC m=+1.678363655 container remove a4a72ebde15ae05b15ce8691054b6d1899da43557f078aee52c98227d32e3f9f (image=quay.io/ceph/ceph:v18, name=frosty_bohr, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:21:15 np0005480824 systemd[1]: libpod-conmon-a4a72ebde15ae05b15ce8691054b6d1899da43557f078aee52c98227d32e3f9f.scope: Deactivated successfully.
Oct 10 23:21:15 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v71: 6 pgs: 2 unknown, 3 active+clean, 1 creating+peering; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:21:15 np0005480824 python3[95943]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 92cfe4d4-4917-5be1-9d00-73758793a62b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:21:15 np0005480824 podman[95944]: 2025-10-11 03:21:15.970676344 +0000 UTC m=+0.052121729 container create e41103d31fd5c537dc24ed17b68d3500736fa819fa3fd5733ae6634ad4a636fc (image=quay.io/ceph/ceph:v18, name=sad_yalow, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 10 23:21:16 np0005480824 systemd[1]: Started libpod-conmon-e41103d31fd5c537dc24ed17b68d3500736fa819fa3fd5733ae6634ad4a636fc.scope.
Oct 10 23:21:16 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:21:16 np0005480824 podman[95944]: 2025-10-11 03:21:15.943306074 +0000 UTC m=+0.024751519 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:21:16 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca001efe43269cbea958cd05f484f68af1ff5e386cf368a4c74fe82bfd90cd8f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:16 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca001efe43269cbea958cd05f484f68af1ff5e386cf368a4c74fe82bfd90cd8f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:16 np0005480824 podman[95944]: 2025-10-11 03:21:16.05390577 +0000 UTC m=+0.135351175 container init e41103d31fd5c537dc24ed17b68d3500736fa819fa3fd5733ae6634ad4a636fc (image=quay.io/ceph/ceph:v18, name=sad_yalow, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 10 23:21:16 np0005480824 podman[95944]: 2025-10-11 03:21:16.064767824 +0000 UTC m=+0.146213209 container start e41103d31fd5c537dc24ed17b68d3500736fa819fa3fd5733ae6634ad4a636fc (image=quay.io/ceph/ceph:v18, name=sad_yalow, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:21:16 np0005480824 podman[95944]: 2025-10-11 03:21:16.069089545 +0000 UTC m=+0.150534980 container attach e41103d31fd5c537dc24ed17b68d3500736fa819fa3fd5733ae6634ad4a636fc (image=quay.io/ceph/ceph:v18, name=sad_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 10 23:21:16 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Oct 10 23:21:16 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Oct 10 23:21:16 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Oct 10 23:21:16 np0005480824 ceph-mon[74326]: from='client.? 192.168.122.100:0/1997249421' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 10 23:21:16 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 27 pg[6.0( empty local-lis/les=26/27 n=0 ec=26/26 lis/c=0/0 les/c/f=0/0/0 sis=26) [0] r=0 lpr=26 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:16 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct 10 23:21:16 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3783808586' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 10 23:21:17 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Oct 10 23:21:17 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3783808586' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 10 23:21:17 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Oct 10 23:21:17 np0005480824 sad_yalow[95959]: pool 'cephfs.cephfs.data' created
Oct 10 23:21:17 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Oct 10 23:21:17 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 28 pg[7.0( empty local-lis/les=0/0 n=0 ec=28/28 lis/c=0/0 les/c/f=0/0/0 sis=28) [1] r=0 lpr=28 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:17 np0005480824 ceph-mon[74326]: from='client.? 192.168.122.100:0/3783808586' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 10 23:21:17 np0005480824 ceph-mon[74326]: from='client.? 192.168.122.100:0/3783808586' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 10 23:21:17 np0005480824 systemd[1]: libpod-e41103d31fd5c537dc24ed17b68d3500736fa819fa3fd5733ae6634ad4a636fc.scope: Deactivated successfully.
Oct 10 23:21:17 np0005480824 podman[95944]: 2025-10-11 03:21:17.53108543 +0000 UTC m=+1.612530825 container died e41103d31fd5c537dc24ed17b68d3500736fa819fa3fd5733ae6634ad4a636fc (image=quay.io/ceph/ceph:v18, name=sad_yalow, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:21:17 np0005480824 systemd[1]: var-lib-containers-storage-overlay-ca001efe43269cbea958cd05f484f68af1ff5e386cf368a4c74fe82bfd90cd8f-merged.mount: Deactivated successfully.
Oct 10 23:21:17 np0005480824 podman[95944]: 2025-10-11 03:21:17.588113464 +0000 UTC m=+1.669558859 container remove e41103d31fd5c537dc24ed17b68d3500736fa819fa3fd5733ae6634ad4a636fc (image=quay.io/ceph/ceph:v18, name=sad_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 10 23:21:17 np0005480824 systemd[1]: libpod-conmon-e41103d31fd5c537dc24ed17b68d3500736fa819fa3fd5733ae6634ad4a636fc.scope: Deactivated successfully.
Oct 10 23:21:17 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v74: 7 pgs: 1 unknown, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:21:17 np0005480824 python3[96022]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 92cfe4d4-4917-5be1-9d00-73758793a62b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:21:17 np0005480824 podman[96023]: 2025-10-11 03:21:17.996438672 +0000 UTC m=+0.050811460 container create fc268b8d3444fadf0c52eb4646699aebdba9b6bb263281e763658efa52950e4b (image=quay.io/ceph/ceph:v18, name=eloquent_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:21:18 np0005480824 systemd[1]: Started libpod-conmon-fc268b8d3444fadf0c52eb4646699aebdba9b6bb263281e763658efa52950e4b.scope.
Oct 10 23:21:18 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:21:18 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/580e7f2d3b962b9f7e0320912635acdb87d78f0464d05fb3eea3fab8e5e7f1c2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:18 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/580e7f2d3b962b9f7e0320912635acdb87d78f0464d05fb3eea3fab8e5e7f1c2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:18 np0005480824 podman[96023]: 2025-10-11 03:21:17.969936842 +0000 UTC m=+0.024309680 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:21:18 np0005480824 podman[96023]: 2025-10-11 03:21:18.082711298 +0000 UTC m=+0.137084066 container init fc268b8d3444fadf0c52eb4646699aebdba9b6bb263281e763658efa52950e4b (image=quay.io/ceph/ceph:v18, name=eloquent_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 10 23:21:18 np0005480824 podman[96023]: 2025-10-11 03:21:18.088115505 +0000 UTC m=+0.142488293 container start fc268b8d3444fadf0c52eb4646699aebdba9b6bb263281e763658efa52950e4b (image=quay.io/ceph/ceph:v18, name=eloquent_poincare, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:21:18 np0005480824 podman[96023]: 2025-10-11 03:21:18.092058557 +0000 UTC m=+0.146431325 container attach fc268b8d3444fadf0c52eb4646699aebdba9b6bb263281e763658efa52950e4b (image=quay.io/ceph/ceph:v18, name=eloquent_poincare, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 10 23:21:18 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Oct 10 23:21:18 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Oct 10 23:21:18 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Oct 10 23:21:18 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 29 pg[7.0( empty local-lis/les=28/29 n=0 ec=28/28 lis/c=0/0 les/c/f=0/0/0 sis=28) [1] r=0 lpr=28 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:18 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0) v1
Oct 10 23:21:18 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3383072004' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Oct 10 23:21:18 np0005480824 ceph-mon[74326]: log_channel(cluster) log [WRN] : Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 10 23:21:18 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:21:19 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Oct 10 23:21:19 np0005480824 ceph-mon[74326]: from='client.? 192.168.122.100:0/3383072004' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Oct 10 23:21:19 np0005480824 ceph-mon[74326]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 10 23:21:19 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3383072004' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Oct 10 23:21:19 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Oct 10 23:21:19 np0005480824 eloquent_poincare[96038]: enabled application 'rbd' on pool 'vms'
Oct 10 23:21:19 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Oct 10 23:21:19 np0005480824 systemd[1]: libpod-fc268b8d3444fadf0c52eb4646699aebdba9b6bb263281e763658efa52950e4b.scope: Deactivated successfully.
Oct 10 23:21:19 np0005480824 podman[96023]: 2025-10-11 03:21:19.544657542 +0000 UTC m=+1.599030300 container died fc268b8d3444fadf0c52eb4646699aebdba9b6bb263281e763658efa52950e4b (image=quay.io/ceph/ceph:v18, name=eloquent_poincare, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 10 23:21:19 np0005480824 systemd[1]: var-lib-containers-storage-overlay-580e7f2d3b962b9f7e0320912635acdb87d78f0464d05fb3eea3fab8e5e7f1c2-merged.mount: Deactivated successfully.
Oct 10 23:21:19 np0005480824 podman[96023]: 2025-10-11 03:21:19.593580596 +0000 UTC m=+1.647953354 container remove fc268b8d3444fadf0c52eb4646699aebdba9b6bb263281e763658efa52950e4b (image=quay.io/ceph/ceph:v18, name=eloquent_poincare, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 10 23:21:19 np0005480824 systemd[1]: libpod-conmon-fc268b8d3444fadf0c52eb4646699aebdba9b6bb263281e763658efa52950e4b.scope: Deactivated successfully.
Oct 10 23:21:19 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v77: 7 pgs: 1 unknown, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:21:19 np0005480824 python3[96102]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 92cfe4d4-4917-5be1-9d00-73758793a62b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:21:19 np0005480824 podman[96103]: 2025-10-11 03:21:19.984731833 +0000 UTC m=+0.056765169 container create 3ca214b509cdff40687e674d2c6a9ebb71a737509680fad8b6c15f7b5fe8612d (image=quay.io/ceph/ceph:v18, name=suspicious_ardinghelli, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 10 23:21:20 np0005480824 systemd[1]: Started libpod-conmon-3ca214b509cdff40687e674d2c6a9ebb71a737509680fad8b6c15f7b5fe8612d.scope.
Oct 10 23:21:20 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:21:20 np0005480824 podman[96103]: 2025-10-11 03:21:19.967853618 +0000 UTC m=+0.039886974 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:21:20 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64f5fec19c9bb9758489d258a62bfb4a44b51cd9b6d1358c87ac8798fad9548c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:20 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64f5fec19c9bb9758489d258a62bfb4a44b51cd9b6d1358c87ac8798fad9548c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:20 np0005480824 podman[96103]: 2025-10-11 03:21:20.080737447 +0000 UTC m=+0.152770873 container init 3ca214b509cdff40687e674d2c6a9ebb71a737509680fad8b6c15f7b5fe8612d (image=quay.io/ceph/ceph:v18, name=suspicious_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 10 23:21:20 np0005480824 podman[96103]: 2025-10-11 03:21:20.086284347 +0000 UTC m=+0.158317693 container start 3ca214b509cdff40687e674d2c6a9ebb71a737509680fad8b6c15f7b5fe8612d (image=quay.io/ceph/ceph:v18, name=suspicious_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3)
Oct 10 23:21:20 np0005480824 podman[96103]: 2025-10-11 03:21:20.089958043 +0000 UTC m=+0.161991409 container attach 3ca214b509cdff40687e674d2c6a9ebb71a737509680fad8b6c15f7b5fe8612d (image=quay.io/ceph/ceph:v18, name=suspicious_ardinghelli, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:21:20 np0005480824 ceph-mon[74326]: from='client.? 192.168.122.100:0/3383072004' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Oct 10 23:21:20 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0) v1
Oct 10 23:21:20 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1753955698' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Oct 10 23:21:21 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Oct 10 23:21:21 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1753955698' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Oct 10 23:21:21 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Oct 10 23:21:21 np0005480824 suspicious_ardinghelli[96118]: enabled application 'rbd' on pool 'volumes'
Oct 10 23:21:21 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Oct 10 23:21:21 np0005480824 ceph-mon[74326]: from='client.? 192.168.122.100:0/1753955698' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Oct 10 23:21:21 np0005480824 systemd[1]: libpod-3ca214b509cdff40687e674d2c6a9ebb71a737509680fad8b6c15f7b5fe8612d.scope: Deactivated successfully.
Oct 10 23:21:21 np0005480824 podman[96103]: 2025-10-11 03:21:21.576049941 +0000 UTC m=+1.648083277 container died 3ca214b509cdff40687e674d2c6a9ebb71a737509680fad8b6c15f7b5fe8612d (image=quay.io/ceph/ceph:v18, name=suspicious_ardinghelli, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:21:21 np0005480824 systemd[1]: var-lib-containers-storage-overlay-64f5fec19c9bb9758489d258a62bfb4a44b51cd9b6d1358c87ac8798fad9548c-merged.mount: Deactivated successfully.
Oct 10 23:21:21 np0005480824 podman[96103]: 2025-10-11 03:21:21.623249105 +0000 UTC m=+1.695282441 container remove 3ca214b509cdff40687e674d2c6a9ebb71a737509680fad8b6c15f7b5fe8612d (image=quay.io/ceph/ceph:v18, name=suspicious_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:21:21 np0005480824 systemd[1]: libpod-conmon-3ca214b509cdff40687e674d2c6a9ebb71a737509680fad8b6c15f7b5fe8612d.scope: Deactivated successfully.
Oct 10 23:21:21 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v79: 7 pgs: 1 unknown, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:21:21 np0005480824 python3[96179]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 92cfe4d4-4917-5be1-9d00-73758793a62b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:21:21 np0005480824 podman[96180]: 2025-10-11 03:21:21.961252268 +0000 UTC m=+0.043527498 container create 0db06cee8c4f4032a0ec155aaef9dc580a1760ca8e6e749516b9b57b4204bb2e (image=quay.io/ceph/ceph:v18, name=competent_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 10 23:21:21 np0005480824 systemd[1]: Started libpod-conmon-0db06cee8c4f4032a0ec155aaef9dc580a1760ca8e6e749516b9b57b4204bb2e.scope.
Oct 10 23:21:22 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:21:22 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b2bcde433aefcbc574ef003ba6f0898518de2ac1280b8c35c1da91719c8e2ba/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:22 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b2bcde433aefcbc574ef003ba6f0898518de2ac1280b8c35c1da91719c8e2ba/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:22 np0005480824 podman[96180]: 2025-10-11 03:21:21.937569804 +0000 UTC m=+0.019845034 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:21:22 np0005480824 podman[96180]: 2025-10-11 03:21:22.045416117 +0000 UTC m=+0.127691327 container init 0db06cee8c4f4032a0ec155aaef9dc580a1760ca8e6e749516b9b57b4204bb2e (image=quay.io/ceph/ceph:v18, name=competent_noether, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:21:22 np0005480824 podman[96180]: 2025-10-11 03:21:22.052080492 +0000 UTC m=+0.134355732 container start 0db06cee8c4f4032a0ec155aaef9dc580a1760ca8e6e749516b9b57b4204bb2e (image=quay.io/ceph/ceph:v18, name=competent_noether, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:21:22 np0005480824 podman[96180]: 2025-10-11 03:21:22.056743282 +0000 UTC m=+0.139018512 container attach 0db06cee8c4f4032a0ec155aaef9dc580a1760ca8e6e749516b9b57b4204bb2e (image=quay.io/ceph/ceph:v18, name=competent_noether, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:21:22 np0005480824 ceph-mon[74326]: from='client.? 192.168.122.100:0/1753955698' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Oct 10 23:21:22 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0) v1
Oct 10 23:21:22 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2559142210' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Oct 10 23:21:23 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Oct 10 23:21:23 np0005480824 ceph-mon[74326]: from='client.? 192.168.122.100:0/2559142210' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Oct 10 23:21:23 np0005480824 ceph-mon[74326]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 10 23:21:23 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2559142210' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Oct 10 23:21:23 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Oct 10 23:21:23 np0005480824 competent_noether[96195]: enabled application 'rbd' on pool 'backups'
Oct 10 23:21:23 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Oct 10 23:21:23 np0005480824 systemd[1]: libpod-0db06cee8c4f4032a0ec155aaef9dc580a1760ca8e6e749516b9b57b4204bb2e.scope: Deactivated successfully.
Oct 10 23:21:23 np0005480824 podman[96180]: 2025-10-11 03:21:23.610780729 +0000 UTC m=+1.693055949 container died 0db06cee8c4f4032a0ec155aaef9dc580a1760ca8e6e749516b9b57b4204bb2e (image=quay.io/ceph/ceph:v18, name=competent_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:21:23 np0005480824 systemd[1]: var-lib-containers-storage-overlay-0b2bcde433aefcbc574ef003ba6f0898518de2ac1280b8c35c1da91719c8e2ba-merged.mount: Deactivated successfully.
Oct 10 23:21:23 np0005480824 podman[96180]: 2025-10-11 03:21:23.661677699 +0000 UTC m=+1.743952919 container remove 0db06cee8c4f4032a0ec155aaef9dc580a1760ca8e6e749516b9b57b4204bb2e (image=quay.io/ceph/ceph:v18, name=competent_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:21:23 np0005480824 systemd[1]: libpod-conmon-0db06cee8c4f4032a0ec155aaef9dc580a1760ca8e6e749516b9b57b4204bb2e.scope: Deactivated successfully.
Oct 10 23:21:23 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v81: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:21:23 np0005480824 python3[96259]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 92cfe4d4-4917-5be1-9d00-73758793a62b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:21:24 np0005480824 podman[96260]: 2025-10-11 03:21:24.068929652 +0000 UTC m=+0.063580509 container create b107f32d7cb8018a363c7b849eab8a770632af2469ea9fa174c7b531235cc204 (image=quay.io/ceph/ceph:v18, name=stoic_wright, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Oct 10 23:21:24 np0005480824 systemd[1]: Started libpod-conmon-b107f32d7cb8018a363c7b849eab8a770632af2469ea9fa174c7b531235cc204.scope.
Oct 10 23:21:24 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:21:24 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be6fd82e91f0329a7a6b8e418d6d40875597aa837b33fa09cad0e8f0be7b4ca3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:24 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be6fd82e91f0329a7a6b8e418d6d40875597aa837b33fa09cad0e8f0be7b4ca3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:24 np0005480824 podman[96260]: 2025-10-11 03:21:24.048825481 +0000 UTC m=+0.043476328 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:21:24 np0005480824 podman[96260]: 2025-10-11 03:21:24.16168679 +0000 UTC m=+0.156337697 container init b107f32d7cb8018a363c7b849eab8a770632af2469ea9fa174c7b531235cc204 (image=quay.io/ceph/ceph:v18, name=stoic_wright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:21:24 np0005480824 podman[96260]: 2025-10-11 03:21:24.166735559 +0000 UTC m=+0.161386426 container start b107f32d7cb8018a363c7b849eab8a770632af2469ea9fa174c7b531235cc204 (image=quay.io/ceph/ceph:v18, name=stoic_wright, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:21:24 np0005480824 podman[96260]: 2025-10-11 03:21:24.17068918 +0000 UTC m=+0.165340067 container attach b107f32d7cb8018a363c7b849eab8a770632af2469ea9fa174c7b531235cc204 (image=quay.io/ceph/ceph:v18, name=stoic_wright, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef)
Oct 10 23:21:24 np0005480824 ceph-mon[74326]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 10 23:21:24 np0005480824 ceph-mon[74326]: from='client.? 192.168.122.100:0/2559142210' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Oct 10 23:21:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0) v1
Oct 10 23:21:24 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2289343200' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Oct 10 23:21:25 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Oct 10 23:21:25 np0005480824 ceph-mon[74326]: from='client.? 192.168.122.100:0/2289343200' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Oct 10 23:21:25 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2289343200' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Oct 10 23:21:25 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Oct 10 23:21:25 np0005480824 stoic_wright[96276]: enabled application 'rbd' on pool 'images'
Oct 10 23:21:25 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Oct 10 23:21:25 np0005480824 systemd[1]: libpod-b107f32d7cb8018a363c7b849eab8a770632af2469ea9fa174c7b531235cc204.scope: Deactivated successfully.
Oct 10 23:21:25 np0005480824 conmon[96276]: conmon b107f32d7cb8018a363c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b107f32d7cb8018a363c7b849eab8a770632af2469ea9fa174c7b531235cc204.scope/container/memory.events
Oct 10 23:21:25 np0005480824 podman[96260]: 2025-10-11 03:21:25.636238058 +0000 UTC m=+1.630888915 container died b107f32d7cb8018a363c7b849eab8a770632af2469ea9fa174c7b531235cc204 (image=quay.io/ceph/ceph:v18, name=stoic_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 10 23:21:25 np0005480824 systemd[1]: var-lib-containers-storage-overlay-be6fd82e91f0329a7a6b8e418d6d40875597aa837b33fa09cad0e8f0be7b4ca3-merged.mount: Deactivated successfully.
Oct 10 23:21:25 np0005480824 podman[96260]: 2025-10-11 03:21:25.696427716 +0000 UTC m=+1.691078553 container remove b107f32d7cb8018a363c7b849eab8a770632af2469ea9fa174c7b531235cc204 (image=quay.io/ceph/ceph:v18, name=stoic_wright, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 10 23:21:25 np0005480824 systemd[1]: libpod-conmon-b107f32d7cb8018a363c7b849eab8a770632af2469ea9fa174c7b531235cc204.scope: Deactivated successfully.
Oct 10 23:21:25 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v83: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:21:26 np0005480824 python3[96338]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 92cfe4d4-4917-5be1-9d00-73758793a62b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:21:26 np0005480824 podman[96339]: 2025-10-11 03:21:26.132778589 +0000 UTC m=+0.065417291 container create eebef64989d49107f77b2cafbb7625817fb7d1422b15086977aae9f115b55388 (image=quay.io/ceph/ceph:v18, name=friendly_euclid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:21:26 np0005480824 systemd[1]: Started libpod-conmon-eebef64989d49107f77b2cafbb7625817fb7d1422b15086977aae9f115b55388.scope.
Oct 10 23:21:26 np0005480824 podman[96339]: 2025-10-11 03:21:26.104381244 +0000 UTC m=+0.037019956 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:21:26 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:21:26 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffab39357500017c8d303d06b130a99e503baddcda58ec553e84a849d836dc90/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:26 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffab39357500017c8d303d06b130a99e503baddcda58ec553e84a849d836dc90/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:26 np0005480824 podman[96339]: 2025-10-11 03:21:26.233520294 +0000 UTC m=+0.166158976 container init eebef64989d49107f77b2cafbb7625817fb7d1422b15086977aae9f115b55388 (image=quay.io/ceph/ceph:v18, name=friendly_euclid, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 10 23:21:26 np0005480824 podman[96339]: 2025-10-11 03:21:26.242614457 +0000 UTC m=+0.175253149 container start eebef64989d49107f77b2cafbb7625817fb7d1422b15086977aae9f115b55388 (image=quay.io/ceph/ceph:v18, name=friendly_euclid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 10 23:21:26 np0005480824 podman[96339]: 2025-10-11 03:21:26.246348944 +0000 UTC m=+0.178987606 container attach eebef64989d49107f77b2cafbb7625817fb7d1422b15086977aae9f115b55388 (image=quay.io/ceph/ceph:v18, name=friendly_euclid, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:21:26 np0005480824 ceph-mon[74326]: from='client.? 192.168.122.100:0/2289343200' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Oct 10 23:21:26 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0) v1
Oct 10 23:21:26 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/930825042' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Oct 10 23:21:27 np0005480824 ceph-mon[74326]: from='client.? 192.168.122.100:0/930825042' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Oct 10 23:21:27 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Oct 10 23:21:27 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/930825042' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Oct 10 23:21:27 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Oct 10 23:21:27 np0005480824 friendly_euclid[96355]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Oct 10 23:21:27 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Oct 10 23:21:27 np0005480824 systemd[1]: libpod-eebef64989d49107f77b2cafbb7625817fb7d1422b15086977aae9f115b55388.scope: Deactivated successfully.
Oct 10 23:21:27 np0005480824 podman[96380]: 2025-10-11 03:21:27.709321202 +0000 UTC m=+0.028683842 container died eebef64989d49107f77b2cafbb7625817fb7d1422b15086977aae9f115b55388 (image=quay.io/ceph/ceph:v18, name=friendly_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 10 23:21:27 np0005480824 systemd[1]: var-lib-containers-storage-overlay-ffab39357500017c8d303d06b130a99e503baddcda58ec553e84a849d836dc90-merged.mount: Deactivated successfully.
Oct 10 23:21:27 np0005480824 podman[96380]: 2025-10-11 03:21:27.773303408 +0000 UTC m=+0.092665988 container remove eebef64989d49107f77b2cafbb7625817fb7d1422b15086977aae9f115b55388 (image=quay.io/ceph/ceph:v18, name=friendly_euclid, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:21:27 np0005480824 systemd[1]: libpod-conmon-eebef64989d49107f77b2cafbb7625817fb7d1422b15086977aae9f115b55388.scope: Deactivated successfully.
Oct 10 23:21:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Optimize plan auto_2025-10-11_03:21:27
Oct 10 23:21:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 23:21:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] do_upmap
Oct 10 23:21:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] pools ['backups', 'volumes', 'vms', 'images', '.mgr', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Oct 10 23:21:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] prepared 0/10 changes
Oct 10 23:21:27 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v85: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:21:27 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 23:21:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:21:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:21:27 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:21:27 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 23:21:27 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:21:27 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 10 23:21:27 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:21:27 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 10 23:21:27 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:21:27 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 10 23:21:27 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:21:27 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 10 23:21:27 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:21:27 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 10 23:21:27 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:21:27 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 10 23:21:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 23:21:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 23:21:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:21:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:21:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:21:27 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0) v1
Oct 10 23:21:27 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Oct 10 23:21:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:21:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:21:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:21:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:21:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:21:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:21:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:21:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:21:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:21:28 np0005480824 python3[96420]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 92cfe4d4-4917-5be1-9d00-73758793a62b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:21:28 np0005480824 podman[96421]: 2025-10-11 03:21:28.200298482 +0000 UTC m=+0.061264223 container create 286b3067ab63f7d93db7fa2f894159c5e9237b9149117631f9d2c9c6f47d704a (image=quay.io/ceph/ceph:v18, name=inspiring_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:21:28 np0005480824 systemd[1]: Started libpod-conmon-286b3067ab63f7d93db7fa2f894159c5e9237b9149117631f9d2c9c6f47d704a.scope.
Oct 10 23:21:28 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:21:28 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74065d47a538f82ebe127a82ddb5d15cf87c6c75dee8802a1928c6e5335df568/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:28 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74065d47a538f82ebe127a82ddb5d15cf87c6c75dee8802a1928c6e5335df568/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:28 np0005480824 podman[96421]: 2025-10-11 03:21:28.178101524 +0000 UTC m=+0.039067285 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:21:28 np0005480824 podman[96421]: 2025-10-11 03:21:28.271786503 +0000 UTC m=+0.132752234 container init 286b3067ab63f7d93db7fa2f894159c5e9237b9149117631f9d2c9c6f47d704a (image=quay.io/ceph/ceph:v18, name=inspiring_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:21:28 np0005480824 podman[96421]: 2025-10-11 03:21:28.27849652 +0000 UTC m=+0.139462261 container start 286b3067ab63f7d93db7fa2f894159c5e9237b9149117631f9d2c9c6f47d704a (image=quay.io/ceph/ceph:v18, name=inspiring_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 10 23:21:28 np0005480824 podman[96421]: 2025-10-11 03:21:28.282857002 +0000 UTC m=+0.143822753 container attach 286b3067ab63f7d93db7fa2f894159c5e9237b9149117631f9d2c9c6f47d704a (image=quay.io/ceph/ceph:v18, name=inspiring_mcclintock, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 10 23:21:28 np0005480824 ceph-mon[74326]: log_channel(cluster) log [WRN] : Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 10 23:21:28 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e34 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:21:28 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Oct 10 23:21:28 np0005480824 ceph-mon[74326]: from='client.? 192.168.122.100:0/930825042' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Oct 10 23:21:28 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Oct 10 23:21:28 np0005480824 ceph-mon[74326]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 10 23:21:28 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Oct 10 23:21:28 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Oct 10 23:21:28 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Oct 10 23:21:28 np0005480824 ceph-mgr[74617]: [progress INFO root] update: starting ev 928c4d61-da8c-48ea-9dd6-d74f9a0f1d74 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Oct 10 23:21:28 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0) v1
Oct 10 23:21:28 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Oct 10 23:21:28 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0) v1
Oct 10 23:21:28 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4250262385' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Oct 10 23:21:29 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Oct 10 23:21:29 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Oct 10 23:21:29 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4250262385' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Oct 10 23:21:29 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Oct 10 23:21:29 np0005480824 inspiring_mcclintock[96436]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Oct 10 23:21:29 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Oct 10 23:21:29 np0005480824 ceph-mgr[74617]: [progress INFO root] update: starting ev 293490f0-a14e-499b-bd23-3ed2306c9800 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Oct 10 23:21:29 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0) v1
Oct 10 23:21:29 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Oct 10 23:21:29 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Oct 10 23:21:29 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Oct 10 23:21:29 np0005480824 ceph-mon[74326]: from='client.? 192.168.122.100:0/4250262385' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Oct 10 23:21:29 np0005480824 systemd[1]: libpod-286b3067ab63f7d93db7fa2f894159c5e9237b9149117631f9d2c9c6f47d704a.scope: Deactivated successfully.
Oct 10 23:21:29 np0005480824 podman[96421]: 2025-10-11 03:21:29.693817504 +0000 UTC m=+1.554783275 container died 286b3067ab63f7d93db7fa2f894159c5e9237b9149117631f9d2c9c6f47d704a (image=quay.io/ceph/ceph:v18, name=inspiring_mcclintock, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 10 23:21:29 np0005480824 systemd[1]: var-lib-containers-storage-overlay-74065d47a538f82ebe127a82ddb5d15cf87c6c75dee8802a1928c6e5335df568-merged.mount: Deactivated successfully.
Oct 10 23:21:29 np0005480824 podman[96421]: 2025-10-11 03:21:29.74538185 +0000 UTC m=+1.606347591 container remove 286b3067ab63f7d93db7fa2f894159c5e9237b9149117631f9d2c9c6f47d704a (image=quay.io/ceph/ceph:v18, name=inspiring_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:21:29 np0005480824 systemd[1]: libpod-conmon-286b3067ab63f7d93db7fa2f894159c5e9237b9149117631f9d2c9c6f47d704a.scope: Deactivated successfully.
Oct 10 23:21:29 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v88: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:21:29 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 10 23:21:29 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 10 23:21:29 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 10 23:21:29 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 10 23:21:30 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Oct 10 23:21:30 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Oct 10 23:21:30 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Oct 10 23:21:30 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Oct 10 23:21:30 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Oct 10 23:21:30 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Oct 10 23:21:30 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 37 pg[3.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=37 pruub=11.325587273s) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active pruub 66.561172485s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:30 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 37 pg[3.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=37 pruub=11.325587273s) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown pruub 66.561172485s@ mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:30 np0005480824 ceph-mgr[74617]: [progress INFO root] update: starting ev 56958acc-8312-480a-a74b-6b8049e3ce7e (PG autoscaler increasing pool 4 PGs from 1 to 32)
Oct 10 23:21:30 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0) v1
Oct 10 23:21:30 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Oct 10 23:21:30 np0005480824 ceph-mon[74326]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct 10 23:21:30 np0005480824 ceph-mon[74326]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct 10 23:21:30 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Oct 10 23:21:30 np0005480824 ceph-mon[74326]: from='client.? 192.168.122.100:0/4250262385' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Oct 10 23:21:30 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Oct 10 23:21:30 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 10 23:21:30 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 10 23:21:30 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Oct 10 23:21:30 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Oct 10 23:21:30 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Oct 10 23:21:30 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Oct 10 23:21:30 np0005480824 python3[96547]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 23:21:30 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 37 pg[2.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=37 pruub=9.003601074s) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active pruub 59.458511353s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:30 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 37 pg[2.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=37 pruub=9.003601074s) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown pruub 59.458511353s@ mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:31 np0005480824 python3[96618]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760152890.3884876-33152-252349381958547/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=0a1ea65aada399f80274d3cc2047646f2797712b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:21:31 np0005480824 python3[96720]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 23:21:31 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Oct 10 23:21:31 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Oct 10 23:21:31 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Oct 10 23:21:31 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Oct 10 23:21:31 np0005480824 ceph-mgr[74617]: [progress INFO root] update: starting ev 7e4b3803-169a-4a1c-acb6-e2bcd2af3fc2 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Oct 10 23:21:31 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"} v 0) v1
Oct 10 23:21:31 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct 10 23:21:31 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 38 pg[3.1f( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:31 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 38 pg[3.1d( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:31 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 38 pg[3.1e( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:31 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 38 pg[3.1c( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:31 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 38 pg[3.a( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:31 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 38 pg[3.9( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:31 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 38 pg[3.8( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:31 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 38 pg[3.1b( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:31 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 38 pg[3.7( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:31 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 38 pg[3.6( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:31 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 38 pg[3.5( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:31 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 38 pg[3.1( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:31 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 38 pg[3.4( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:31 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 38 pg[3.2( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:31 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 38 pg[3.b( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:31 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 38 pg[3.c( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:31 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 38 pg[3.d( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:31 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 38 pg[3.e( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:31 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 38 pg[3.f( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:31 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 38 pg[3.10( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:31 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 38 pg[3.11( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:31 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 38 pg[3.12( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:31 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 38 pg[3.13( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:31 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 38 pg[3.14( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:31 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 38 pg[3.15( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:31 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 38 pg[3.16( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:31 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 38 pg[3.17( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:31 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 38 pg[3.18( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:31 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 38 pg[3.19( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:31 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 38 pg[3.1a( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:31 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 38 pg[3.3( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:31 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 38 pg[2.1f( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:31 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 38 pg[2.1e( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:31 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 38 pg[2.1c( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:31 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 38 pg[2.b( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:31 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 38 pg[2.a( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:31 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 38 pg[2.9( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:31 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 38 pg[2.1d( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:31 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 38 pg[2.8( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:31 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 38 pg[2.6( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:31 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 38 pg[2.5( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:31 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 38 pg[2.4( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:31 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 38 pg[2.2( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:31 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 38 pg[2.1( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:31 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 38 pg[2.7( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:31 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 38 pg[2.3( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:31 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 38 pg[2.d( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:31 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 38 pg[2.c( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:31 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 38 pg[2.f( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:31 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 38 pg[2.10( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:31 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 38 pg[2.11( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:31 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 38 pg[2.12( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:31 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 38 pg[2.13( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:31 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 38 pg[2.14( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:31 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 38 pg[2.16( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:31 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 38 pg[2.15( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:31 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 38 pg[2.17( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:31 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 38 pg[2.18( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:31 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 38 pg[2.19( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:31 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 38 pg[2.e( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:31 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 38 pg[2.1a( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:31 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 38 pg[2.1b( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:31 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 38 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:31 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 38 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:31 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 38 pg[2.1f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:31 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 38 pg[2.1e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:31 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 38 pg[2.b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:31 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 38 pg[2.1c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:31 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 38 pg[2.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:31 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 38 pg[2.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:31 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 38 pg[2.1d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:31 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 38 pg[2.6( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:31 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 38 pg[3.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:31 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 38 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:31 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 38 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:31 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 38 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:31 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 38 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:31 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 38 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:31 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 38 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:31 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 38 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:31 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 38 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:31 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 38 pg[3.0( empty local-lis/les=37/38 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:31 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 38 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:31 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 38 pg[3.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:31 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 38 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:31 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 38 pg[3.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:31 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 38 pg[3.b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:31 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 38 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:31 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 38 pg[3.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:31 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 38 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:31 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 38 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:31 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 38 pg[3.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:31 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 38 pg[3.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:31 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 38 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:31 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 38 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:31 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 38 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:31 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 38 pg[3.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:31 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 38 pg[3.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:31 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 38 pg[3.19( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:31 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 38 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:31 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 38 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:31 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 38 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:31 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 38 pg[2.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:31 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 38 pg[2.4( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:31 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 38 pg[2.2( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:31 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 38 pg[2.0( empty local-lis/les=37/38 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:31 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 38 pg[2.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:31 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 38 pg[2.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:31 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 38 pg[2.3( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:31 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 38 pg[2.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:31 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 38 pg[2.d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:31 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 38 pg[2.f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:31 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 38 pg[2.c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:31 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 38 pg[2.11( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:31 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 38 pg[2.10( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:31 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 38 pg[2.12( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:31 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 38 pg[2.13( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:31 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 38 pg[2.14( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:31 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 38 pg[2.16( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:31 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 38 pg[2.18( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:31 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 38 pg[2.17( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:31 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 38 pg[2.15( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:31 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 38 pg[2.19( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:31 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 38 pg[2.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:31 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 38 pg[2.1a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:31 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 38 pg[2.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [2] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:31 np0005480824 ceph-mon[74326]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct 10 23:21:31 np0005480824 ceph-mon[74326]: Cluster is now healthy
Oct 10 23:21:31 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Oct 10 23:21:31 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct 10 23:21:31 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v91: 69 pgs: 62 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:21:31 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 10 23:21:31 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 10 23:21:31 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 10 23:21:31 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 10 23:21:32 np0005480824 python3[96795]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760152891.344837-33166-128168589345585/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=51a22d869f0cbf1bec69424d058f219c9f17ffbc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:21:32 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Oct 10 23:21:32 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Oct 10 23:21:32 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 2.1 deep-scrub starts
Oct 10 23:21:32 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 2.1 deep-scrub ok
Oct 10 23:21:32 np0005480824 python3[96845]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 92cfe4d4-4917-5be1-9d00-73758793a62b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:21:32 np0005480824 podman[96846]: 2025-10-11 03:21:32.492813441 +0000 UTC m=+0.055976190 container create c2d6935257aa2c89cfee32485f807f5e00c8791a4ef2616ceb34ca9832b23add (image=quay.io/ceph/ceph:v18, name=suspicious_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:21:32 np0005480824 systemd[1]: Started libpod-conmon-c2d6935257aa2c89cfee32485f807f5e00c8791a4ef2616ceb34ca9832b23add.scope.
Oct 10 23:21:32 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:21:32 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15ecf19239cd23735e4a09d8a1c0b67c55a1a4414e00b444d58f6ff74d17870b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:32 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15ecf19239cd23735e4a09d8a1c0b67c55a1a4414e00b444d58f6ff74d17870b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:32 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15ecf19239cd23735e4a09d8a1c0b67c55a1a4414e00b444d58f6ff74d17870b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:32 np0005480824 podman[96846]: 2025-10-11 03:21:32.466208189 +0000 UTC m=+0.029371028 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:21:32 np0005480824 podman[96846]: 2025-10-11 03:21:32.567997079 +0000 UTC m=+0.131159918 container init c2d6935257aa2c89cfee32485f807f5e00c8791a4ef2616ceb34ca9832b23add (image=quay.io/ceph/ceph:v18, name=suspicious_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 10 23:21:32 np0005480824 podman[96846]: 2025-10-11 03:21:32.578731451 +0000 UTC m=+0.141894200 container start c2d6935257aa2c89cfee32485f807f5e00c8791a4ef2616ceb34ca9832b23add (image=quay.io/ceph/ceph:v18, name=suspicious_vaughan, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 10 23:21:32 np0005480824 podman[96846]: 2025-10-11 03:21:32.592489271 +0000 UTC m=+0.155652000 container attach c2d6935257aa2c89cfee32485f807f5e00c8791a4ef2616ceb34ca9832b23add (image=quay.io/ceph/ceph:v18, name=suspicious_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 10 23:21:32 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Oct 10 23:21:32 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Oct 10 23:21:32 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Oct 10 23:21:32 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Oct 10 23:21:32 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Oct 10 23:21:32 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Oct 10 23:21:32 np0005480824 ceph-mgr[74617]: [progress INFO root] update: starting ev 51d8cd6f-24ba-4da8-8f59-9fc081f9fc28 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Oct 10 23:21:32 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0) v1
Oct 10 23:21:32 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Oct 10 23:21:32 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 39 pg[5.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=39 pruub=13.788617134s) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active pruub 65.971778870s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:32 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 39 pg[5.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=39 pruub=13.788617134s) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown pruub 65.971778870s@ mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:32 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 10 23:21:32 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 10 23:21:32 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Oct 10 23:21:32 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Oct 10 23:21:32 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Oct 10 23:21:32 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Oct 10 23:21:32 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 39 pg[4.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=39 pruub=11.699681282s) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active pruub 74.495391846s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:32 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 39 pg[4.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=39 pruub=11.699681282s) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown pruub 74.495391846s@ mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:32 np0005480824 ceph-mgr[74617]: [progress WARNING root] Starting Global Recovery Event,62 pgs not in active + clean state
Oct 10 23:21:33 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Oct 10 23:21:33 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/994326972' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 10 23:21:33 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/994326972' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct 10 23:21:33 np0005480824 suspicious_vaughan[96861]: 
Oct 10 23:21:33 np0005480824 suspicious_vaughan[96861]: [global]
Oct 10 23:21:33 np0005480824 suspicious_vaughan[96861]: #011fsid = 92cfe4d4-4917-5be1-9d00-73758793a62b
Oct 10 23:21:33 np0005480824 suspicious_vaughan[96861]: #011mon_host = 192.168.122.100
Oct 10 23:21:33 np0005480824 systemd[1]: libpod-c2d6935257aa2c89cfee32485f807f5e00c8791a4ef2616ceb34ca9832b23add.scope: Deactivated successfully.
Oct 10 23:21:33 np0005480824 podman[96887]: 2025-10-11 03:21:33.117897987 +0000 UTC m=+0.019463916 container died c2d6935257aa2c89cfee32485f807f5e00c8791a4ef2616ceb34ca9832b23add (image=quay.io/ceph/ceph:v18, name=suspicious_vaughan, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:21:33 np0005480824 systemd[1]: var-lib-containers-storage-overlay-15ecf19239cd23735e4a09d8a1c0b67c55a1a4414e00b444d58f6ff74d17870b-merged.mount: Deactivated successfully.
Oct 10 23:21:33 np0005480824 podman[96887]: 2025-10-11 03:21:33.159162802 +0000 UTC m=+0.060728721 container remove c2d6935257aa2c89cfee32485f807f5e00c8791a4ef2616ceb34ca9832b23add (image=quay.io/ceph/ceph:v18, name=suspicious_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:21:33 np0005480824 systemd[1]: libpod-conmon-c2d6935257aa2c89cfee32485f807f5e00c8791a4ef2616ceb34ca9832b23add.scope: Deactivated successfully.
Oct 10 23:21:33 np0005480824 python3[97023]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 92cfe4d4-4917-5be1-9d00-73758793a62b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:21:33 np0005480824 podman[97032]: 2025-10-11 03:21:33.521259089 +0000 UTC m=+0.036683619 container create 3f85b2284f09bbb3e8d00f058ef46d80b33891bd6364acca45765559ba7ae5cb (image=quay.io/ceph/ceph:v18, name=vigorous_panini, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:21:33 np0005480824 systemd[1]: Started libpod-conmon-3f85b2284f09bbb3e8d00f058ef46d80b33891bd6364acca45765559ba7ae5cb.scope.
Oct 10 23:21:33 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:21:33 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:21:33 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/107c94dcdf1650ea989a6f4d7a858613ce9163042f7181e086cd407c6b6c28b5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:33 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/107c94dcdf1650ea989a6f4d7a858613ce9163042f7181e086cd407c6b6c28b5/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:33 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/107c94dcdf1650ea989a6f4d7a858613ce9163042f7181e086cd407c6b6c28b5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:33 np0005480824 podman[97032]: 2025-10-11 03:21:33.50463457 +0000 UTC m=+0.020059140 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:21:33 np0005480824 podman[97032]: 2025-10-11 03:21:33.615108693 +0000 UTC m=+0.130533233 container init 3f85b2284f09bbb3e8d00f058ef46d80b33891bd6364acca45765559ba7ae5cb (image=quay.io/ceph/ceph:v18, name=vigorous_panini, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:21:33 np0005480824 podman[97032]: 2025-10-11 03:21:33.625654749 +0000 UTC m=+0.141079269 container start 3f85b2284f09bbb3e8d00f058ef46d80b33891bd6364acca45765559ba7ae5cb (image=quay.io/ceph/ceph:v18, name=vigorous_panini, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 10 23:21:33 np0005480824 podman[97032]: 2025-10-11 03:21:33.628452815 +0000 UTC m=+0.143877355 container attach 3f85b2284f09bbb3e8d00f058ef46d80b33891bd6364acca45765559ba7ae5cb (image=quay.io/ceph/ceph:v18, name=vigorous_panini, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 10 23:21:33 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Oct 10 23:21:33 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Oct 10 23:21:33 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Oct 10 23:21:33 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Oct 10 23:21:33 np0005480824 ceph-mgr[74617]: [progress INFO root] update: starting ev 30029c3a-72b6-456c-a6ad-6eb9f72ab0c2 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Oct 10 23:21:33 np0005480824 ceph-mgr[74617]: [progress INFO root] complete: finished ev 928c4d61-da8c-48ea-9dd6-d74f9a0f1d74 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Oct 10 23:21:33 np0005480824 ceph-mgr[74617]: [progress INFO root] Completed event 928c4d61-da8c-48ea-9dd6-d74f9a0f1d74 (PG autoscaler increasing pool 2 PGs from 1 to 32) in 5 seconds
Oct 10 23:21:33 np0005480824 ceph-mgr[74617]: [progress INFO root] complete: finished ev 293490f0-a14e-499b-bd23-3ed2306c9800 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Oct 10 23:21:33 np0005480824 ceph-mgr[74617]: [progress INFO root] Completed event 293490f0-a14e-499b-bd23-3ed2306c9800 (PG autoscaler increasing pool 3 PGs from 1 to 32) in 4 seconds
Oct 10 23:21:33 np0005480824 ceph-mgr[74617]: [progress INFO root] complete: finished ev 56958acc-8312-480a-a74b-6b8049e3ce7e (PG autoscaler increasing pool 4 PGs from 1 to 32)
Oct 10 23:21:33 np0005480824 ceph-mgr[74617]: [progress INFO root] Completed event 56958acc-8312-480a-a74b-6b8049e3ce7e (PG autoscaler increasing pool 4 PGs from 1 to 32) in 3 seconds
Oct 10 23:21:33 np0005480824 ceph-mgr[74617]: [progress INFO root] complete: finished ev 7e4b3803-169a-4a1c-acb6-e2bcd2af3fc2 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Oct 10 23:21:33 np0005480824 ceph-mgr[74617]: [progress INFO root] Completed event 7e4b3803-169a-4a1c-acb6-e2bcd2af3fc2 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 2 seconds
Oct 10 23:21:33 np0005480824 ceph-mgr[74617]: [progress INFO root] complete: finished ev 51d8cd6f-24ba-4da8-8f59-9fc081f9fc28 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Oct 10 23:21:33 np0005480824 ceph-mgr[74617]: [progress INFO root] Completed event 51d8cd6f-24ba-4da8-8f59-9fc081f9fc28 (PG autoscaler increasing pool 6 PGs from 1 to 32) in 1 seconds
Oct 10 23:21:33 np0005480824 ceph-mgr[74617]: [progress INFO root] complete: finished ev 30029c3a-72b6-456c-a6ad-6eb9f72ab0c2 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Oct 10 23:21:33 np0005480824 ceph-mgr[74617]: [progress INFO root] Completed event 30029c3a-72b6-456c-a6ad-6eb9f72ab0c2 (PG autoscaler increasing pool 7 PGs from 1 to 32) in 0 seconds
Oct 10 23:21:33 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 40 pg[5.1c( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:33 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 40 pg[5.1f( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:33 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 40 pg[5.1d( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:33 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:33 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 40 pg[5.13( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:33 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:33 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:33 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 40 pg[5.15( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:33 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 40 pg[5.16( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:33 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 40 pg[5.17( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:33 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:33 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:33 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 40 pg[5.a( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:33 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 40 pg[4.1f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:33 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 40 pg[4.1e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:33 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 40 pg[4.7( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:33 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 40 pg[4.1d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:33 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 40 pg[4.b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:33 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 40 pg[5.10( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:33 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 40 pg[4.1c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:33 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 40 pg[4.6( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:33 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 40 pg[4.1b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:33 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 40 pg[4.a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:33 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 40 pg[5.b( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:33 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 40 pg[4.5( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:33 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 40 pg[4.8( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:33 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:33 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 40 pg[4.9( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:33 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 40 pg[4.4( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:33 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 40 pg[4.19( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:33 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 40 pg[4.3( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:33 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 40 pg[4.1( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:33 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:33 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 40 pg[4.2( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:33 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 40 pg[4.1a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:33 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 40 pg[4.c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:33 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 40 pg[4.d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:33 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 40 pg[4.e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:33 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 40 pg[4.10( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:33 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 40 pg[4.11( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:33 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 40 pg[5.6( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:33 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 40 pg[4.f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:33 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 40 pg[4.12( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:33 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 40 pg[4.13( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:33 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 40 pg[4.14( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:33 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 40 pg[4.15( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:33 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 40 pg[5.5( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:33 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 40 pg[4.16( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:33 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 40 pg[4.17( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:33 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 40 pg[5.4( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:33 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 40 pg[4.18( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:33 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 40 pg[5.2( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:33 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:33 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 40 pg[5.3( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:33 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:33 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:33 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:33 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:33 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:33 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 40 pg[5.1b( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:33 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:33 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 40 pg[5.18( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:33 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 40 pg[4.1f( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:33 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 40 pg[4.1e( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:33 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 40 pg[5.1c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:33 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 40 pg[5.1f( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:33 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 40 pg[5.1d( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:33 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 40 pg[4.1d( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:33 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 40 pg[4.1c( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:33 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 40 pg[4.a( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:33 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 40 pg[4.1b( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:33 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 40 pg[4.7( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:33 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 40 pg[4.5( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:33 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 40 pg[4.8( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:33 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 40 pg[4.6( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:33 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 40 pg[4.9( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:33 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 40 pg[4.4( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:33 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 40 pg[4.19( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:33 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 40 pg[4.3( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:33 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 40 pg[4.1( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:33 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 40 pg[4.b( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:33 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 40 pg[4.2( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:33 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 40 pg[4.c( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:33 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 40 pg[4.1a( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:33 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 40 pg[4.e( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:33 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 40 pg[4.d( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:33 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 40 pg[4.11( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:33 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 40 pg[4.f( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:33 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 40 pg[4.12( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:33 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 40 pg[4.10( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:33 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 40 pg[4.15( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:33 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 40 pg[4.14( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:33 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 40 pg[4.13( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:33 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 40 pg[4.0( empty local-lis/les=39/40 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:33 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 40 pg[4.17( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:33 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 40 pg[4.18( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:33 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 40 pg[4.16( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:33 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:33 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 40 pg[5.13( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:33 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:33 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 40 pg[5.15( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:33 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 40 pg[5.16( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:33 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 40 pg[5.17( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:33 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:33 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:33 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 40 pg[5.a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:33 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 40 pg[5.10( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:33 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 40 pg[5.0( empty local-lis/les=39/40 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:33 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 40 pg[5.6( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:33 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:33 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 40 pg[5.b( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:33 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 40 pg[5.5( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:33 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:33 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:33 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 40 pg[5.4( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:33 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:33 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 40 pg[5.3( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:33 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 40 pg[5.2( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:33 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:33 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:33 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:33 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:33 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:33 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:33 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 40 pg[5.18( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:33 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 40 pg[5.1b( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:33 np0005480824 ceph-mon[74326]: from='client.? 192.168.122.100:0/994326972' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 10 23:21:33 np0005480824 ceph-mon[74326]: from='client.? 192.168.122.100:0/994326972' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct 10 23:21:33 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Oct 10 23:21:33 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v94: 131 pgs: 62 unknown, 69 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:21:33 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 10 23:21:33 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 10 23:21:33 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 10 23:21:33 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 10 23:21:33 np0005480824 podman[97116]: 2025-10-11 03:21:33.895734135 +0000 UTC m=+0.061157271 container exec a848fe58749db588a5a4b8471e0c9916b9e4a1ccc899f04343e6491a43c45c05 (image=quay.io/ceph/ceph:v18, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 10 23:21:34 np0005480824 podman[97116]: 2025-10-11 03:21:34.012882214 +0000 UTC m=+0.178305310 container exec_died a848fe58749db588a5a4b8471e0c9916b9e4a1ccc899f04343e6491a43c45c05 (image=quay.io/ceph/ceph:v18, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 10 23:21:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0) v1
Oct 10 23:21:34 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3525794381' entity='client.admin' 
Oct 10 23:21:34 np0005480824 vigorous_panini[97066]: set ssl_option
Oct 10 23:21:34 np0005480824 systemd[1]: libpod-3f85b2284f09bbb3e8d00f058ef46d80b33891bd6364acca45765559ba7ae5cb.scope: Deactivated successfully.
Oct 10 23:21:34 np0005480824 podman[97032]: 2025-10-11 03:21:34.294490349 +0000 UTC m=+0.809914869 container died 3f85b2284f09bbb3e8d00f058ef46d80b33891bd6364acca45765559ba7ae5cb (image=quay.io/ceph/ceph:v18, name=vigorous_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 10 23:21:34 np0005480824 systemd[1]: var-lib-containers-storage-overlay-107c94dcdf1650ea989a6f4d7a858613ce9163042f7181e086cd407c6b6c28b5-merged.mount: Deactivated successfully.
Oct 10 23:21:34 np0005480824 systemd[75890]: Starting Mark boot as successful...
Oct 10 23:21:34 np0005480824 systemd[75890]: Finished Mark boot as successful.
Oct 10 23:21:34 np0005480824 podman[97032]: 2025-10-11 03:21:34.339573653 +0000 UTC m=+0.854998173 container remove 3f85b2284f09bbb3e8d00f058ef46d80b33891bd6364acca45765559ba7ae5cb (image=quay.io/ceph/ceph:v18, name=vigorous_panini, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:21:34 np0005480824 systemd[1]: libpod-conmon-3f85b2284f09bbb3e8d00f058ef46d80b33891bd6364acca45765559ba7ae5cb.scope: Deactivated successfully.
Oct 10 23:21:34 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Oct 10 23:21:34 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Oct 10 23:21:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 10 23:21:34 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:21:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 10 23:21:34 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:21:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:21:34 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:21:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 10 23:21:34 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:21:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 10 23:21:34 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:21:34 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 31e28573-0365-4f75-a5b4-db46a497a4c9 does not exist
Oct 10 23:21:34 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev d6cc001a-a79d-4cce-b695-8470198ac398 does not exist
Oct 10 23:21:34 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev cb8626e0-be8c-45de-b995-e809d60eb96e does not exist
Oct 10 23:21:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 10 23:21:34 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 23:21:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 10 23:21:34 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:21:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:21:34 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:21:34 np0005480824 python3[97299]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 92cfe4d4-4917-5be1-9d00-73758793a62b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:21:34 np0005480824 podman[97368]: 2025-10-11 03:21:34.665982905 +0000 UTC m=+0.038758757 container create f2ad548370140572b906de4b6ecc70ef3b2448768597239152063751a6a766a2 (image=quay.io/ceph/ceph:v18, name=festive_yonath, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 10 23:21:34 np0005480824 systemd[1]: Started libpod-conmon-f2ad548370140572b906de4b6ecc70ef3b2448768597239152063751a6a766a2.scope.
Oct 10 23:21:34 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:21:34 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3afd78716af74998e952d00457282d5fe952167d109db7d659dbe3b541935b84/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:34 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3afd78716af74998e952d00457282d5fe952167d109db7d659dbe3b541935b84/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:34 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3afd78716af74998e952d00457282d5fe952167d109db7d659dbe3b541935b84/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Oct 10 23:21:34 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 10 23:21:34 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 10 23:21:34 np0005480824 ceph-mon[74326]: from='client.? 192.168.122.100:0/3525794381' entity='client.admin' 
Oct 10 23:21:34 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:21:34 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:21:34 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:21:34 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:21:34 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:21:34 np0005480824 podman[97368]: 2025-10-11 03:21:34.649031158 +0000 UTC m=+0.021807010 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:21:34 np0005480824 podman[97368]: 2025-10-11 03:21:34.744388718 +0000 UTC m=+0.117164570 container init f2ad548370140572b906de4b6ecc70ef3b2448768597239152063751a6a766a2 (image=quay.io/ceph/ceph:v18, name=festive_yonath, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 10 23:21:34 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Oct 10 23:21:34 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct 10 23:21:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Oct 10 23:21:34 np0005480824 podman[97368]: 2025-10-11 03:21:34.751902064 +0000 UTC m=+0.124677916 container start f2ad548370140572b906de4b6ecc70ef3b2448768597239152063751a6a766a2 (image=quay.io/ceph/ceph:v18, name=festive_yonath, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 10 23:21:34 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Oct 10 23:21:34 np0005480824 podman[97368]: 2025-10-11 03:21:34.756258236 +0000 UTC m=+0.129034088 container attach f2ad548370140572b906de4b6ecc70ef3b2448768597239152063751a6a766a2 (image=quay.io/ceph/ceph:v18, name=festive_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 10 23:21:34 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 41 pg[6.0( empty local-lis/les=26/27 n=0 ec=26/26 lis/c=26/26 les/c/f=27/27/0 sis=41 pruub=13.741539955s) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active pruub 78.561820984s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:34 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 41 pg[6.0( empty local-lis/les=26/27 n=0 ec=26/26 lis/c=26/26 les/c/f=27/27/0 sis=41 pruub=13.741539955s) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown pruub 78.561820984s@ mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:35 np0005480824 podman[97460]: 2025-10-11 03:21:35.020271549 +0000 UTC m=+0.048098355 container create 86183c8e46adb457a7687b59d7115979c290469c132068e49d59599d8eb24a11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_mahavira, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 10 23:21:35 np0005480824 systemd[1]: Started libpod-conmon-86183c8e46adb457a7687b59d7115979c290469c132068e49d59599d8eb24a11.scope.
Oct 10 23:21:35 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:21:35 np0005480824 podman[97460]: 2025-10-11 03:21:35.082132426 +0000 UTC m=+0.109959262 container init 86183c8e46adb457a7687b59d7115979c290469c132068e49d59599d8eb24a11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:21:35 np0005480824 podman[97460]: 2025-10-11 03:21:34.995885549 +0000 UTC m=+0.023712365 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:21:35 np0005480824 podman[97460]: 2025-10-11 03:21:35.092005447 +0000 UTC m=+0.119832253 container start 86183c8e46adb457a7687b59d7115979c290469c132068e49d59599d8eb24a11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 10 23:21:35 np0005480824 awesome_mahavira[97477]: 167 167
Oct 10 23:21:35 np0005480824 systemd[1]: libpod-86183c8e46adb457a7687b59d7115979c290469c132068e49d59599d8eb24a11.scope: Deactivated successfully.
Oct 10 23:21:35 np0005480824 conmon[97477]: conmon 86183c8e46adb457a768 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-86183c8e46adb457a7687b59d7115979c290469c132068e49d59599d8eb24a11.scope/container/memory.events
Oct 10 23:21:35 np0005480824 podman[97460]: 2025-10-11 03:21:35.096180745 +0000 UTC m=+0.124007561 container attach 86183c8e46adb457a7687b59d7115979c290469c132068e49d59599d8eb24a11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_mahavira, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 10 23:21:35 np0005480824 podman[97460]: 2025-10-11 03:21:35.096650535 +0000 UTC m=+0.124477351 container died 86183c8e46adb457a7687b59d7115979c290469c132068e49d59599d8eb24a11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 10 23:21:35 np0005480824 systemd[1]: var-lib-containers-storage-overlay-2c72dedcbcd8bb8690e9f701b0f5b6120f6e8f12441008423bc779806d954f3f-merged.mount: Deactivated successfully.
Oct 10 23:21:35 np0005480824 podman[97460]: 2025-10-11 03:21:35.144173126 +0000 UTC m=+0.171999942 container remove 86183c8e46adb457a7687b59d7115979c290469c132068e49d59599d8eb24a11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_mahavira, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Oct 10 23:21:35 np0005480824 systemd[1]: libpod-conmon-86183c8e46adb457a7687b59d7115979c290469c132068e49d59599d8eb24a11.scope: Deactivated successfully.
Oct 10 23:21:35 np0005480824 ceph-mgr[74617]: log_channel(audit) log [DBG] : from='client.14244 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 23:21:35 np0005480824 ceph-mgr[74617]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0
Oct 10 23:21:35 np0005480824 ceph-mgr[74617]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Oct 10 23:21:35 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Oct 10 23:21:35 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:21:35 np0005480824 festive_yonath[97413]: Scheduled rgw.rgw update...
Oct 10 23:21:35 np0005480824 systemd[1]: libpod-f2ad548370140572b906de4b6ecc70ef3b2448768597239152063751a6a766a2.scope: Deactivated successfully.
Oct 10 23:21:35 np0005480824 podman[97368]: 2025-10-11 03:21:35.347076401 +0000 UTC m=+0.719852263 container died f2ad548370140572b906de4b6ecc70ef3b2448768597239152063751a6a766a2 (image=quay.io/ceph/ceph:v18, name=festive_yonath, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 10 23:21:35 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Oct 10 23:21:35 np0005480824 podman[97520]: 2025-10-11 03:21:35.366755191 +0000 UTC m=+0.066355243 container create c368809c7279ebb3b5f7aa41bb44e677d4b6d78906ee56d6439c4c9dae66493d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_taussig, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 10 23:21:35 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Oct 10 23:21:35 np0005480824 systemd[1]: var-lib-containers-storage-overlay-3afd78716af74998e952d00457282d5fe952167d109db7d659dbe3b541935b84-merged.mount: Deactivated successfully.
Oct 10 23:21:35 np0005480824 podman[97368]: 2025-10-11 03:21:35.423456036 +0000 UTC m=+0.796231898 container remove f2ad548370140572b906de4b6ecc70ef3b2448768597239152063751a6a766a2 (image=quay.io/ceph/ceph:v18, name=festive_yonath, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 10 23:21:35 np0005480824 podman[97520]: 2025-10-11 03:21:35.33509696 +0000 UTC m=+0.034697082 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:21:35 np0005480824 systemd[1]: Started libpod-conmon-c368809c7279ebb3b5f7aa41bb44e677d4b6d78906ee56d6439c4c9dae66493d.scope.
Oct 10 23:21:35 np0005480824 systemd[1]: libpod-conmon-f2ad548370140572b906de4b6ecc70ef3b2448768597239152063751a6a766a2.scope: Deactivated successfully.
Oct 10 23:21:35 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:21:35 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7675d3dc724ef5bdcb58262aa46512f3a027174c8a7a349c7357f418004bedee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:35 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7675d3dc724ef5bdcb58262aa46512f3a027174c8a7a349c7357f418004bedee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:35 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7675d3dc724ef5bdcb58262aa46512f3a027174c8a7a349c7357f418004bedee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:35 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7675d3dc724ef5bdcb58262aa46512f3a027174c8a7a349c7357f418004bedee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:35 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7675d3dc724ef5bdcb58262aa46512f3a027174c8a7a349c7357f418004bedee/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:35 np0005480824 podman[97520]: 2025-10-11 03:21:35.488134769 +0000 UTC m=+0.187734851 container init c368809c7279ebb3b5f7aa41bb44e677d4b6d78906ee56d6439c4c9dae66493d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_taussig, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 10 23:21:35 np0005480824 podman[97520]: 2025-10-11 03:21:35.496685779 +0000 UTC m=+0.196285821 container start c368809c7279ebb3b5f7aa41bb44e677d4b6d78906ee56d6439c4c9dae66493d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:21:35 np0005480824 podman[97520]: 2025-10-11 03:21:35.500892318 +0000 UTC m=+0.200492370 container attach c368809c7279ebb3b5f7aa41bb44e677d4b6d78906ee56d6439c4c9dae66493d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 10 23:21:35 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 41 pg[7.0( empty local-lis/les=28/29 n=0 ec=28/28 lis/c=28/28 les/c/f=29/29/0 sis=41 pruub=14.859174728s) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active pruub 75.083763123s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:35 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 41 pg[7.0( empty local-lis/les=28/29 n=0 ec=28/28 lis/c=28/28 les/c/f=29/29/0 sis=41 pruub=14.859174728s) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown pruub 75.083763123s@ mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:35 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Oct 10 23:21:35 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Oct 10 23:21:35 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct 10 23:21:35 np0005480824 ceph-mon[74326]: Saving service rgw.rgw spec with placement compute-0
Oct 10 23:21:35 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:21:35 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Oct 10 23:21:35 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 42 pg[6.15( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:35 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 42 pg[6.17( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:35 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 42 pg[6.1a( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:35 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 42 pg[6.14( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:35 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 42 pg[6.11( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:35 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 42 pg[6.10( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:35 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 42 pg[6.13( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:35 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 42 pg[6.16( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:35 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 42 pg[6.d( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:35 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 42 pg[6.12( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:35 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 42 pg[6.c( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:35 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 42 pg[6.f( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:35 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 42 pg[6.2( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:35 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 42 pg[6.e( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:35 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 42 pg[6.3( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:35 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 42 pg[6.1( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:35 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 42 pg[6.1b( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:35 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 42 pg[6.6( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:35 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 42 pg[6.b( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:35 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 42 pg[6.18( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:35 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 42 pg[6.7( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:35 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 42 pg[6.8( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:35 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 42 pg[6.19( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:35 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 42 pg[6.4( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:35 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 42 pg[6.9( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:35 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 42 pg[6.5( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:35 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 42 pg[6.a( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:35 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 42 pg[6.1f( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:35 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 42 pg[6.1e( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:35 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 42 pg[6.1d( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:35 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 42 pg[6.1c( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:35 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Oct 10 23:21:35 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 42 pg[6.1a( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:35 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 42 pg[6.11( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:35 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 42 pg[6.10( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:35 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 42 pg[6.14( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:35 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 42 pg[6.13( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:35 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 42 pg[6.d( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:35 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 42 pg[6.c( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:35 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 42 pg[6.16( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:35 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 42 pg[7.1e( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:35 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 42 pg[6.12( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:35 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 42 pg[6.f( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:35 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 42 pg[7.1d( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:35 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 42 pg[6.2( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:35 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 42 pg[6.0( empty local-lis/les=41/42 n=0 ec=26/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:35 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 42 pg[7.1c( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:35 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 42 pg[6.17( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:35 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 42 pg[7.13( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:35 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 42 pg[7.12( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:35 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 42 pg[7.10( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:35 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 42 pg[7.11( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:35 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 42 pg[7.17( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:35 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 42 pg[7.16( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:35 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 42 pg[7.15( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:35 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 42 pg[7.14( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:35 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 42 pg[7.b( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:35 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 42 pg[7.a( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:35 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 42 pg[7.9( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:35 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 42 pg[7.8( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:35 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 42 pg[7.f( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:35 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 42 pg[7.4( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:35 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 42 pg[7.6( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:35 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 42 pg[7.5( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:35 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 42 pg[7.7( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:35 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 42 pg[7.1( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:35 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 42 pg[7.2( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:35 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 42 pg[7.3( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:35 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 42 pg[7.c( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:35 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 42 pg[7.d( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:35 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 42 pg[7.e( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:35 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 42 pg[7.1f( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:35 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 42 pg[7.18( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:35 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 42 pg[7.19( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:35 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 42 pg[7.1a( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:35 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 42 pg[7.1b( empty local-lis/les=28/29 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:35 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 42 pg[6.3( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:35 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 42 pg[6.1( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:35 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 42 pg[6.e( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:35 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 42 pg[6.6( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:35 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 42 pg[6.18( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:35 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 42 pg[6.1b( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:35 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 42 pg[6.7( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:35 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 42 pg[6.b( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:35 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 42 pg[6.8( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:35 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 42 pg[6.4( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:35 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 42 pg[6.5( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:35 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 42 pg[6.19( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:35 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 42 pg[6.9( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:35 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 42 pg[6.a( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:35 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 42 pg[6.1f( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:35 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 42 pg[6.1d( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:35 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 42 pg[6.1e( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:35 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 42 pg[6.1c( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:35 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 42 pg[6.15( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [0] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:35 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 42 pg[7.13( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:35 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 42 pg[7.1e( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:35 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 42 pg[7.1c( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:35 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 42 pg[7.1d( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:35 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 42 pg[7.10( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:35 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 42 pg[7.12( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:35 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 42 pg[7.17( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:35 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 42 pg[7.11( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:35 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 42 pg[7.16( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:35 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 42 pg[7.14( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:35 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 42 pg[7.15( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:35 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 42 pg[7.a( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:35 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 42 pg[7.b( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:35 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 42 pg[7.8( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:35 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 42 pg[7.9( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:35 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 42 pg[7.f( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:35 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 42 pg[7.0( empty local-lis/les=41/42 n=0 ec=28/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:35 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 42 pg[7.6( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:35 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 42 pg[7.4( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:35 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 42 pg[7.5( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:35 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 42 pg[7.1( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:35 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 42 pg[7.7( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:35 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 42 pg[7.d( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:35 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 42 pg[7.1f( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:35 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 42 pg[7.c( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:35 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 42 pg[7.2( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:35 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 42 pg[7.e( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:35 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 42 pg[7.3( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:35 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 42 pg[7.18( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:35 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 42 pg[7.1a( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:35 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 42 pg[7.19( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:35 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 42 pg[7.1b( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=28/28 les/c/f=29/29/0 sis=41) [1] r=0 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:35 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v97: 193 pgs: 124 unknown, 69 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:21:36 np0005480824 python3[97639]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 23:21:36 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Oct 10 23:21:36 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Oct 10 23:21:36 np0005480824 inspiring_taussig[97551]: --> passed data devices: 0 physical, 3 LVM
Oct 10 23:21:36 np0005480824 inspiring_taussig[97551]: --> relative data size: 1.0
Oct 10 23:21:36 np0005480824 inspiring_taussig[97551]: --> All data devices are unavailable
Oct 10 23:21:36 np0005480824 systemd[1]: libpod-c368809c7279ebb3b5f7aa41bb44e677d4b6d78906ee56d6439c4c9dae66493d.scope: Deactivated successfully.
Oct 10 23:21:36 np0005480824 podman[97520]: 2025-10-11 03:21:36.711815511 +0000 UTC m=+1.411415533 container died c368809c7279ebb3b5f7aa41bb44e677d4b6d78906ee56d6439c4c9dae66493d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_taussig, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:21:36 np0005480824 systemd[1]: libpod-c368809c7279ebb3b5f7aa41bb44e677d4b6d78906ee56d6439c4c9dae66493d.scope: Consumed 1.107s CPU time.
Oct 10 23:21:36 np0005480824 systemd[1]: var-lib-containers-storage-overlay-7675d3dc724ef5bdcb58262aa46512f3a027174c8a7a349c7357f418004bedee-merged.mount: Deactivated successfully.
Oct 10 23:21:36 np0005480824 python3[97722]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760152896.0833523-33207-41785969859009/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=e359e26d9e42bc107a0de03375144cf8590b6f68 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:21:36 np0005480824 podman[97520]: 2025-10-11 03:21:36.786084919 +0000 UTC m=+1.485684961 container remove c368809c7279ebb3b5f7aa41bb44e677d4b6d78906ee56d6439c4c9dae66493d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_taussig, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 10 23:21:36 np0005480824 systemd[1]: libpod-conmon-c368809c7279ebb3b5f7aa41bb44e677d4b6d78906ee56d6439c4c9dae66493d.scope: Deactivated successfully.
Oct 10 23:21:37 np0005480824 python3[97888]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 92cfe4d4-4917-5be1-9d00-73758793a62b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 '#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:21:37 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Oct 10 23:21:37 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Oct 10 23:21:37 np0005480824 podman[97904]: 2025-10-11 03:21:37.406501395 +0000 UTC m=+0.074395601 container create 88f9e08603b62430f2ffa547f1ae693d70fdb45b220ed4e3b75a52766a04a973 (image=quay.io/ceph/ceph:v18, name=loving_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 10 23:21:37 np0005480824 systemd[1]: Started libpod-conmon-88f9e08603b62430f2ffa547f1ae693d70fdb45b220ed4e3b75a52766a04a973.scope.
Oct 10 23:21:37 np0005480824 podman[97904]: 2025-10-11 03:21:37.378025979 +0000 UTC m=+0.045920245 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:21:37 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:21:37 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a2d146c1aa723d0c9dc4e553ee9a8d13c28de0f1734789b2511e01e858d862c/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:37 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a2d146c1aa723d0c9dc4e553ee9a8d13c28de0f1734789b2511e01e858d862c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:37 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a2d146c1aa723d0c9dc4e553ee9a8d13c28de0f1734789b2511e01e858d862c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:37 np0005480824 podman[97904]: 2025-10-11 03:21:37.494095633 +0000 UTC m=+0.161989879 container init 88f9e08603b62430f2ffa547f1ae693d70fdb45b220ed4e3b75a52766a04a973 (image=quay.io/ceph/ceph:v18, name=loving_montalcini, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 10 23:21:37 np0005480824 podman[97904]: 2025-10-11 03:21:37.504363384 +0000 UTC m=+0.172257590 container start 88f9e08603b62430f2ffa547f1ae693d70fdb45b220ed4e3b75a52766a04a973 (image=quay.io/ceph/ceph:v18, name=loving_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:21:37 np0005480824 podman[97904]: 2025-10-11 03:21:37.5076414 +0000 UTC m=+0.175535606 container attach 88f9e08603b62430f2ffa547f1ae693d70fdb45b220ed4e3b75a52766a04a973 (image=quay.io/ceph/ceph:v18, name=loving_montalcini, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 10 23:21:37 np0005480824 podman[97947]: 2025-10-11 03:21:37.610471264 +0000 UTC m=+0.069253860 container create c4aabe28723d4b788be8f36aa59400e963a1f3a3d5b6a9e3c545392e70ae2623 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_herschel, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:21:37 np0005480824 systemd[1]: Started libpod-conmon-c4aabe28723d4b788be8f36aa59400e963a1f3a3d5b6a9e3c545392e70ae2623.scope.
Oct 10 23:21:37 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:21:37 np0005480824 podman[97947]: 2025-10-11 03:21:37.581460636 +0000 UTC m=+0.040243292 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:21:37 np0005480824 podman[97947]: 2025-10-11 03:21:37.693399684 +0000 UTC m=+0.152182310 container init c4aabe28723d4b788be8f36aa59400e963a1f3a3d5b6a9e3c545392e70ae2623 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:21:37 np0005480824 podman[97947]: 2025-10-11 03:21:37.703665044 +0000 UTC m=+0.162447640 container start c4aabe28723d4b788be8f36aa59400e963a1f3a3d5b6a9e3c545392e70ae2623 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:21:37 np0005480824 magical_herschel[97963]: 167 167
Oct 10 23:21:37 np0005480824 systemd[1]: libpod-c4aabe28723d4b788be8f36aa59400e963a1f3a3d5b6a9e3c545392e70ae2623.scope: Deactivated successfully.
Oct 10 23:21:37 np0005480824 podman[97947]: 2025-10-11 03:21:37.71077454 +0000 UTC m=+0.169557146 container attach c4aabe28723d4b788be8f36aa59400e963a1f3a3d5b6a9e3c545392e70ae2623 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 10 23:21:37 np0005480824 podman[97947]: 2025-10-11 03:21:37.711285211 +0000 UTC m=+0.170067797 container died c4aabe28723d4b788be8f36aa59400e963a1f3a3d5b6a9e3c545392e70ae2623 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_herschel, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:21:37 np0005480824 systemd[1]: var-lib-containers-storage-overlay-60ee536955c4192afd3df58954d8ffc5337f371ecd2e7fc45e0ea0b34aa55126-merged.mount: Deactivated successfully.
Oct 10 23:21:37 np0005480824 podman[97947]: 2025-10-11 03:21:37.765569141 +0000 UTC m=+0.224351707 container remove c4aabe28723d4b788be8f36aa59400e963a1f3a3d5b6a9e3c545392e70ae2623 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_herschel, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:21:37 np0005480824 systemd[1]: libpod-conmon-c4aabe28723d4b788be8f36aa59400e963a1f3a3d5b6a9e3c545392e70ae2623.scope: Deactivated successfully.
Oct 10 23:21:37 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v98: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:21:37 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 10 23:21:37 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 10 23:21:37 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 10 23:21:37 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 10 23:21:37 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 10 23:21:37 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 10 23:21:37 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 10 23:21:37 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 10 23:21:37 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 10 23:21:37 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 10 23:21:37 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 10 23:21:37 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 10 23:21:37 np0005480824 ceph-mgr[74617]: [progress INFO root] Writing back 9 completed events
Oct 10 23:21:37 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 10 23:21:37 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:21:37 np0005480824 ceph-mgr[74617]: [progress INFO root] Completed event 1f2c682a-dd7f-4b17-b0ca-69c6d4ec14f2 (Global Recovery Event) in 5 seconds
Oct 10 23:21:37 np0005480824 podman[98007]: 2025-10-11 03:21:37.9968919 +0000 UTC m=+0.066000955 container create 0aa72973236a96c75c2ec7714a2c95f25c37d62d58c966e50aa8a1f65d7ede92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_perlman, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True)
Oct 10 23:21:38 np0005480824 systemd[1]: Started libpod-conmon-0aa72973236a96c75c2ec7714a2c95f25c37d62d58c966e50aa8a1f65d7ede92.scope.
Oct 10 23:21:38 np0005480824 podman[98007]: 2025-10-11 03:21:37.970001271 +0000 UTC m=+0.039110416 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:21:38 np0005480824 ceph-mgr[74617]: log_channel(audit) log [DBG] : from='client.14246 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 23:21:38 np0005480824 ceph-mgr[74617]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Oct 10 23:21:38 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0) v1
Oct 10 23:21:38 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Oct 10 23:21:38 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0) v1
Oct 10 23:21:38 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Oct 10 23:21:38 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0) v1
Oct 10 23:21:38 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Oct 10 23:21:38 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Oct 10 23:21:38 np0005480824 ceph-mon[74326]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Oct 10 23:21:38 np0005480824 ceph-mon[74326]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Oct 10 23:21:38 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mon-compute-0[74322]: 2025-10-11T03:21:38.069+0000 7f75a5164640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Oct 10 23:21:38 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Oct 10 23:21:38 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).mds e2 new map
Oct 10 23:21:38 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).mds e2 print_map#012e2#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-11T03:21:38.069496+0000#012modified#0112025-10-11T03:21:38.069530+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012 #012 
Oct 10 23:21:38 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 10 23:21:38 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 10 23:21:38 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 10 23:21:38 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 10 23:21:38 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 10 23:21:38 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 10 23:21:38 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Oct 10 23:21:38 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Oct 10 23:21:38 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Oct 10 23:21:38 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:21:38 np0005480824 ceph-mgr[74617]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Oct 10 23:21:38 np0005480824 ceph-mgr[74617]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Oct 10 23:21:38 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Oct 10 23:21:38 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c86845a5e832326146278ce7160765ed26c4a91f6f4ac37d83f4ea14906b5173/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:38 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c86845a5e832326146278ce7160765ed26c4a91f6f4ac37d83f4ea14906b5173/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:38 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c86845a5e832326146278ce7160765ed26c4a91f6f4ac37d83f4ea14906b5173/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:38 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c86845a5e832326146278ce7160765ed26c4a91f6f4ac37d83f4ea14906b5173/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[2.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.615374565s) [1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 67.196159363s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[2.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.615329742s) [1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.196159363s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[5.1d( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.619762421s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 69.200622559s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[5.1e( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.634817123s) [0] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 69.215721130s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[5.1d( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.619686127s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.200622559s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[2.19( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.615118027s) [0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 67.196113586s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[5.1e( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.634754181s) [0] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.215721130s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[2.19( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.615103722s) [0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.196113586s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[2.18( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.614924431s) [0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 67.196060181s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[2.18( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.614909172s) [0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.196060181s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[2.17( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.614861488s) [1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 67.196083069s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[5.11( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.634467125s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 69.215721130s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[5.11( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.634451866s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.215721130s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[2.16( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.614774704s) [0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 67.196060181s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[2.17( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.614828110s) [1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.196083069s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[2.16( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.614697456s) [0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.196060181s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[2.15( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.614683151s) [1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 67.196105957s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[2.15( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.614667892s) [1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.196105957s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[5.12( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.633798599s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 69.215431213s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[5.13( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.633791924s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 69.215446472s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[5.13( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.633773804s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.215446472s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[5.12( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.633759499s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.215431213s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[2.13( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.614321709s) [0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 67.196022034s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[2.13( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.614290237s) [0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.196022034s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[5.14( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.633539200s) [0] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 69.215492249s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[5.15( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.633535385s) [0] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 69.215515137s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[5.15( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.633518219s) [0] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.215515137s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[5.14( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.633506775s) [0] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.215492249s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[2.11( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.613873482s) [0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 67.195960999s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[2.11( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.613860130s) [0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.195960999s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[5.16( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.633400917s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 69.215538025s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[5.16( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.633364677s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.215538025s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[5.9( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.633387566s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 69.215591431s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[2.f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.613725662s) [0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 67.195938110s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[5.9( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.633373260s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.215591431s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[2.f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.613695145s) [0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.195938110s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[5.7( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.633309364s) [0] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 69.215690613s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[2.d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.613478661s) [1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 67.195854187s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[5.7( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.633293152s) [0] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.215690613s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[2.d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.613438606s) [1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.195854187s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[2.2( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.613205910s) [0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 67.195709229s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[2.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.613379478s) [1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 67.195854187s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[2.2( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.613158226s) [0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.195709229s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[2.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.613311768s) [1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.195854187s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[2.3( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.613191605s) [1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 67.195831299s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[5.5( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.633053780s) [0] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 69.215713501s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[2.3( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.613173485s) [1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.195831299s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[5.4( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.633010864s) [0] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 69.215736389s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[2.4( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.612923622s) [1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 67.195671082s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[5.5( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.633019447s) [0] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.215713501s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[2.4( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.612907410s) [1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.195671082s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[5.4( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.632984161s) [0] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.215736389s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[5.3( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.632921219s) [0] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 69.215766907s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[5.3( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.632909775s) [0] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.215766907s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[5.2( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.632845879s) [0] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 69.215766907s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[5.2( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.632833481s) [0] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.215766907s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[2.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.612687111s) [1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 67.195640564s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[2.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.612657547s) [1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.195640564s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[5.1( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.632855415s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 69.215896606s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[5.1( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.632841110s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.215896606s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[2.6( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.608706474s) [1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 67.191780090s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[5.f( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.632618904s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 69.215751648s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[5.f( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.632606506s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.215751648s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[2.6( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.608670235s) [1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.191780090s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[2.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.608590126s) [0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 67.191757202s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[2.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.612564087s) [1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 67.195808411s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[2.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.608551025s) [0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.191757202s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[2.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.612546921s) [1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.195808411s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[2.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.608389854s) [1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 67.191772461s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[2.b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.608290672s) [0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 67.191696167s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[2.b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.608245850s) [0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.191696167s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[2.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.608345032s) [1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.191772461s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[5.c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.632308006s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 69.215812683s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[2.1c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.608123779s) [0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 67.191703796s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[2.1c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.608087540s) [0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.191703796s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[5.c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.632266045s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.215812683s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[2.1d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.608062744s) [0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 67.191780090s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[5.1a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.632083893s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 69.215820312s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[5.1a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.632062912s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.215820312s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[2.1d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.608027458s) [0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.191780090s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[2.1f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.607619286s) [0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 67.191505432s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[2.1f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.607603073s) [0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.191505432s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[5.19( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.631954193s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 69.215919495s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[5.19( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.631882668s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.215919495s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[5.18( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.631808281s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 69.215927124s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[5.18( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.631772995s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.215927124s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:21:38 np0005480824 ceph-mgr[74617]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[2.19( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43) [0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[5.1e( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [0] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[2.18( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43) [0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[5.7( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [0] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 podman[98007]: 2025-10-11 03:21:38.112096114 +0000 UTC m=+0.181205189 container init 0aa72973236a96c75c2ec7714a2c95f25c37d62d58c966e50aa8a1f65d7ede92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[2.1d( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43) [0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[5.19( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[5.18( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[5.1a( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[5.1d( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[5.c( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[5.f( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[2.9( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[5.1( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[2.6( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[2.7( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[2.4( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[2.5( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[2.3( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[2.a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[2.d( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[5.9( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[5.16( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[2.15( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[5.12( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[5.13( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[2.17( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[5.11( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[2.1b( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[7.1c( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.675602913s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 76.357475281s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[7.1c( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.675574303s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.357475281s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 podman[98007]: 2025-10-11 03:21:38.120307925 +0000 UTC m=+0.189416980 container start 0aa72973236a96c75c2ec7714a2c95f25c37d62d58c966e50aa8a1f65d7ede92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[5.4( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [0] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[2.1c( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43) [0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[2.f( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43) [0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[2.2( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43) [0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[5.5( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [0] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[2.1f( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43) [0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[5.2( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [0] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[5.3( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [0] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[2.b( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43) [0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[2.8( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43) [0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[2.16( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43) [0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[5.15( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [0] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[2.13( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43) [0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[5.14( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [0] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[2.11( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43) [0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[4.18( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.599524498s) [2] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 79.780014038s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[4.18( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.599483490s) [2] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.780014038s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[6.15( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.673264503s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 81.853988647s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[6.15( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.673239708s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.853988647s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[6.14( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.668630600s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 81.849540710s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[6.14( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.668607712s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.849540710s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[6.17( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.669470787s) [1] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 81.850624084s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[6.17( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.669445038s) [1] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.850624084s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[4.14( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.598632812s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 79.779960632s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[4.14( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.598606110s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.779960632s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[4.13( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.598352432s) [2] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 79.779899597s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[4.13( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.598325729s) [2] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.779899597s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[4.12( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.598170280s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 79.779869080s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[6.11( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.667115211s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 81.848831177s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[4.12( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.598146439s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.779869080s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[6.11( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.667079926s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.848831177s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[4.11( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.597974777s) [2] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 79.779846191s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[4.11( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.597955704s) [2] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.779846191s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[6.13( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.667649269s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 81.849700928s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[4.f( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.597777367s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 79.779869080s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[4.f( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.597757339s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.779869080s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[6.13( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.667617798s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.849700928s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[6.d( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.667481422s) [1] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 81.849739075s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[4.e( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.597556114s) [2] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 79.779830933s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[6.d( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.667460442s) [1] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.849739075s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[4.e( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.597531319s) [2] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.779830933s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[4.18( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43) [2] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[6.c( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.667690277s) [1] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 81.850067139s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[6.c( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.667673111s) [1] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.850067139s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[4.d( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.597392082s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 79.779838562s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[6.f( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.667927742s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 81.850425720s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[4.d( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.597369194s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.779838562s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[6.f( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.667911530s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.850425720s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[6.2( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.667894363s) [1] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 81.850540161s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[6.e( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.671070099s) [1] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 81.853713989s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[6.e( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.671044350s) [1] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.853713989s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[6.2( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.667877197s) [1] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.850540161s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[4.2( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.596932411s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 79.779800415s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[4.2( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.596904755s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.779800415s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[6.1( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.670798302s) [1] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 81.853721619s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[6.1( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.670737267s) [1] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.853721619s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[4.4( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.596660614s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 79.779754639s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[4.4( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.596641541s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.779754639s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[6.6( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.670618057s) [1] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 81.853767395s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[6.6( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.670591354s) [1] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.853767395s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[4.9( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.596465111s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 79.779739380s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[4.9( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.596442223s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.779739380s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[6.b( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.670441628s) [1] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 81.853836060s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[4.1a( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.596385002s) [2] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 79.779823303s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[6.b( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.670414925s) [1] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.853836060s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[4.1a( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.596365929s) [2] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.779823303s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[4.5( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.596142769s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 79.779708862s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.578369141s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 72.264091492s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.578315735s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.264091492s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.577074051s) [0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 72.264038086s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.576991081s) [0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.264038086s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[7.13( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.668838501s) [0] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 76.357452393s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[7.13( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.668801308s) [0] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.357452393s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.575293541s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 72.264022827s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.575278282s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.264022827s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.575189590s) [0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 72.264015198s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.575177193s) [0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.264015198s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[7.11( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.668744087s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 76.357650757s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[7.11( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.668728828s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.357650757s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.574977875s) [0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 72.263992310s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.574966431s) [0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.263992310s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.574693680s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 72.263961792s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[4.5( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.596127510s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.779708862s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[4.a( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.596003532s) [2] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 79.779647827s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[6.8( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.670160294s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 81.853851318s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[6.8( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.670144081s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.853851318s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[4.a( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.595969200s) [2] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.779647827s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[4.1b( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.595884323s) [2] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 79.779685974s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[4.1b( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.595869064s) [2] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.779685974s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[4.7( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.595625877s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 79.779586792s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[6.4( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.669927597s) [1] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 81.853858948s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[4.7( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.595606804s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.779586792s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[6.4( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.669864655s) [1] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.853858948s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[4.8( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.595607758s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 79.779708862s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[4.8( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.595591545s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.779708862s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[6.1e( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.669777870s) [1] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 81.853965759s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[4.1c( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.595392227s) [2] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 79.779624939s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[6.1e( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.669752121s) [1] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.853965759s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[4.1c( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.595373154s) [2] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.779624939s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[6.1f( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.669539452s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 81.853904724s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[6.1c( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.669570923s) [1] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 81.853965759s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[6.1f( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.669516563s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.853904724s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[6.1c( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.669551849s) [1] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.853965759s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[6.1d( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.669373512s) [1] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 81.853919983s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[6.1d( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.669353485s) [1] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.853919983s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[4.10( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.595279694s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 79.779869080s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[4.10( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.595243454s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.779869080s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[4.1( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.594975471s) [2] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 79.779785156s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[4.1( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=11.594938278s) [2] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.779785156s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.574664116s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.263961792s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[7.15( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.668307304s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 76.357688904s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[7.15( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.668292046s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.357688904s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[3.17( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[7.13( empty local-lis/les=0/0 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[3.15( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[3.12( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[3.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[7.9( empty local-lis/les=0/0 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[3.c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[7.f( empty local-lis/les=0/0 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[7.6( empty local-lis/les=0/0 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[7.4( empty local-lis/les=0/0 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[3.1( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[3.3( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[6.15( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[6.14( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[4.13( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43) [2] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[6.11( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[4.11( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43) [2] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[6.13( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[4.e( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43) [2] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[6.f( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[4.1a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43) [2] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[6.8( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[4.a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43) [2] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[4.1b( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43) [2] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[4.1c( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43) [2] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[6.1f( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[4.1( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43) [2] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[7.1c( empty local-lis/les=0/0 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[3.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[3.16( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[7.11( empty local-lis/les=0/0 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[3.11( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[7.15( empty local-lis/les=0/0 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[3.e( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[7.a( empty local-lis/les=0/0 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[7.8( empty local-lis/les=0/0 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[7.5( empty local-lis/les=0/0 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[3.5( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.574384689s) [0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 72.263885498s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.574370384s) [0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.263885498s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.574227333s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 72.263832092s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.574211121s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.263832092s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[7.a( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.667987823s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 76.357696533s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[7.a( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.667973518s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.357696533s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[7.9( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.667943001s) [0] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 76.357742310s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[7.9( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.667931557s) [0] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.357742310s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.574183464s) [0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 72.264076233s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.574171066s) [0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.264076233s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[7.8( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.667744637s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 76.357719421s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[7.8( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.667733192s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.357719421s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[7.f( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.667676926s) [0] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 76.357749939s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[7.f( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.667664528s) [0] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.357749939s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[7.6( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.676156998s) [0] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 76.366325378s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[7.6( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.676143646s) [0] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.366325378s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[7.4( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.676047325s) [0] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 76.366325378s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[7.4( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.676033974s) [0] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.366325378s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.573372841s) [0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 72.263801575s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.573357582s) [0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.263801575s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[7.1( empty local-lis/les=0/0 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[7.5( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.675830841s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 76.366355896s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[7.5( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.675819397s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.366355896s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.573472023s) [0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 72.264083862s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.573459625s) [0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.264083862s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.572286606s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 72.263572693s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[7.1( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.675160408s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 76.366470337s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.572248459s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.263572693s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[7.1( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.675030708s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.366470337s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.572052956s) [0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 72.263519287s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[7.2( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.675066948s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 76.366539001s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.572057724s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 72.263557434s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[7.3( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.674992561s) [0] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 76.366531372s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.572035789s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.263557434s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.571991920s) [0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.263519287s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[7.3( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.674978256s) [0] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.366531372s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[7.2( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.674989700s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.366539001s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.571862221s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 72.263488770s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.571847916s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.263488770s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[7.c( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.674872398s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 76.366523743s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[7.c( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.674849510s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.366523743s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.571741104s) [0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 72.263442993s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.571691513s) [0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 72.263465881s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 podman[98007]: 2025-10-11 03:21:38.134719163 +0000 UTC m=+0.203828238 container attach 0aa72973236a96c75c2ec7714a2c95f25c37d62d58c966e50aa8a1f65d7ede92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_perlman, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[7.1f( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.674754143s) [0] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 76.366531372s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[7.e( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.674734116s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 76.366539001s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.571710587s) [0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.263442993s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[3.7( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.571659088s) [0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.263465881s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[7.1f( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.674698830s) [0] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.366531372s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[7.e( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.674601555s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.366539001s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.565161705s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 72.257133484s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[7.18( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.674630165s) [0] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 76.366600037s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.565142632s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.257133484s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[7.1a( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.674606323s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 76.366600037s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[7.18( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.674600601s) [0] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.366600037s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[7.1a( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.674575806s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.366600037s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.571716309s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 72.263771057s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.571702003s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.263771057s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[7.1b( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.674525261s) [0] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 76.366630554s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[7.2( empty local-lis/les=0/0 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[7.1b( empty local-lis/les=41/42 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.674493790s) [0] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.366630554s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.564942360s) [0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 72.257118225s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.564922333s) [0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.257118225s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[6.17( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43) [1] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[4.14( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.570980072s) [0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 72.263526917s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=9.570951462s) [0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.263526917s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[4.12( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[4.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[6.d( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43) [1] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[6.c( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43) [1] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[4.d( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[6.e( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43) [1] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[6.2( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43) [1] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[3.8( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[4.2( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[6.1( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43) [1] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[4.4( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[6.6( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43) [1] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[4.9( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[6.b( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43) [1] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[4.5( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[4.7( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[7.c( empty local-lis/les=0/0 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[7.e( empty local-lis/les=0/0 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[3.1d( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[6.4( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43) [1] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[4.8( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[6.1e( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43) [1] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[6.1c( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43) [1] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[7.1a( empty local-lis/les=0/0 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[6.1d( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43) [1] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 43 pg[3.1e( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 43 pg[4.10( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[7.3( empty local-lis/les=0/0 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[3.6( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[3.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[3.a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[7.1f( empty local-lis/les=0/0 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[7.18( empty local-lis/les=0/0 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 systemd[1]: libpod-88f9e08603b62430f2ffa547f1ae693d70fdb45b220ed4e3b75a52766a04a973.scope: Deactivated successfully.
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[3.1f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 podman[97904]: 2025-10-11 03:21:38.144111442 +0000 UTC m=+0.812005668 container died 88f9e08603b62430f2ffa547f1ae693d70fdb45b220ed4e3b75a52766a04a973 (image=quay.io/ceph/ceph:v18, name=loving_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[3.1b( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 43 pg[7.1b( empty local-lis/les=0/0 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:38 np0005480824 systemd[1]: var-lib-containers-storage-overlay-5a2d146c1aa723d0c9dc4e553ee9a8d13c28de0f1734789b2511e01e858d862c-merged.mount: Deactivated successfully.
Oct 10 23:21:38 np0005480824 podman[97904]: 2025-10-11 03:21:38.236328238 +0000 UTC m=+0.904222444 container remove 88f9e08603b62430f2ffa547f1ae693d70fdb45b220ed4e3b75a52766a04a973 (image=quay.io/ceph/ceph:v18, name=loving_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:21:38 np0005480824 systemd[1]: libpod-conmon-88f9e08603b62430f2ffa547f1ae693d70fdb45b220ed4e3b75a52766a04a973.scope: Deactivated successfully.
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 2.c deep-scrub starts
Oct 10 23:21:38 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 2.c deep-scrub ok
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Oct 10 23:21:38 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:21:38 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Oct 10 23:21:38 np0005480824 python3[98068]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 92cfe4d4-4917-5be1-9d00-73758793a62b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:21:38 np0005480824 podman[98069]: 2025-10-11 03:21:38.694897691 +0000 UTC m=+0.060369492 container create ac7516141e77f96b15d56056dccf64191d9610523ddd0a797ab4ff51f024124b (image=quay.io/ceph/ceph:v18, name=heuristic_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:21:38 np0005480824 systemd[1]: Started libpod-conmon-ac7516141e77f96b15d56056dccf64191d9610523ddd0a797ab4ff51f024124b.scope.
Oct 10 23:21:38 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:21:38 np0005480824 podman[98069]: 2025-10-11 03:21:38.670473819 +0000 UTC m=+0.035945610 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:21:38 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3f0690ab30d46b27dcfd75bb1aaa291d21e449338da058f2195ed8570cdf33e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:38 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3f0690ab30d46b27dcfd75bb1aaa291d21e449338da058f2195ed8570cdf33e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:38 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3f0690ab30d46b27dcfd75bb1aaa291d21e449338da058f2195ed8570cdf33e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:38 np0005480824 podman[98069]: 2025-10-11 03:21:38.780768189 +0000 UTC m=+0.146240040 container init ac7516141e77f96b15d56056dccf64191d9610523ddd0a797ab4ff51f024124b (image=quay.io/ceph/ceph:v18, name=heuristic_ardinghelli, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 10 23:21:38 np0005480824 podman[98069]: 2025-10-11 03:21:38.793765422 +0000 UTC m=+0.159237183 container start ac7516141e77f96b15d56056dccf64191d9610523ddd0a797ab4ff51f024124b (image=quay.io/ceph/ceph:v18, name=heuristic_ardinghelli, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 10 23:21:38 np0005480824 podman[98069]: 2025-10-11 03:21:38.79961772 +0000 UTC m=+0.165089521 container attach ac7516141e77f96b15d56056dccf64191d9610523ddd0a797ab4ff51f024124b (image=quay.io/ceph/ceph:v18, name=heuristic_ardinghelli, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:21:38 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 10 23:21:38 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 10 23:21:38 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 10 23:21:38 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 10 23:21:38 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 10 23:21:38 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 10 23:21:38 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:21:38 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Oct 10 23:21:38 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Oct 10 23:21:38 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Oct 10 23:21:38 np0005480824 ceph-mon[74326]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Oct 10 23:21:38 np0005480824 ceph-mon[74326]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Oct 10 23:21:38 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Oct 10 23:21:38 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 10 23:21:38 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 10 23:21:38 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 10 23:21:38 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 10 23:21:38 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 10 23:21:38 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 10 23:21:38 np0005480824 ceph-mon[74326]: Saving service mds.cephfs spec with placement compute-0
Oct 10 23:21:38 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:21:38 np0005480824 charming_perlman[98024]: {
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:    "0": [
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:        {
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:            "devices": [
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:                "/dev/loop3"
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:            ],
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:            "lv_name": "ceph_lv0",
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:            "lv_size": "21470642176",
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0d82ce-20ea-470d-959e-f67202028a60,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:            "lv_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:            "name": "ceph_lv0",
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:            "tags": {
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:                "ceph.block_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:                "ceph.cluster_name": "ceph",
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:                "ceph.crush_device_class": "",
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:                "ceph.encrypted": "0",
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:                "ceph.osd_fsid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:                "ceph.osd_id": "0",
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:                "ceph.type": "block",
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:                "ceph.vdo": "0"
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:            },
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:            "type": "block",
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:            "vg_name": "ceph_vg0"
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:        }
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:    ],
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:    "1": [
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:        {
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:            "devices": [
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:                "/dev/loop4"
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:            ],
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:            "lv_name": "ceph_lv1",
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:            "lv_size": "21470642176",
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6875119e-c210-4ad1-aca9-6a8084a5ecc8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:            "lv_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:            "name": "ceph_lv1",
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:            "tags": {
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:                "ceph.block_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:                "ceph.cluster_name": "ceph",
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:                "ceph.crush_device_class": "",
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:                "ceph.encrypted": "0",
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:                "ceph.osd_fsid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:                "ceph.osd_id": "1",
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:                "ceph.type": "block",
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:                "ceph.vdo": "0"
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:            },
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:            "type": "block",
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:            "vg_name": "ceph_vg1"
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:        }
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:    ],
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:    "2": [
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:        {
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:            "devices": [
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:                "/dev/loop5"
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:            ],
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:            "lv_name": "ceph_lv2",
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:            "lv_size": "21470642176",
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e86945e8-6909-4584-9098-cee0dfe9add4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:            "lv_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:            "name": "ceph_lv2",
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:            "tags": {
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:                "ceph.block_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:                "ceph.cluster_name": "ceph",
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:                "ceph.crush_device_class": "",
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:                "ceph.encrypted": "0",
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:                "ceph.osd_fsid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:                "ceph.osd_id": "2",
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:                "ceph.type": "block",
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:                "ceph.vdo": "0"
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:            },
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:            "type": "block",
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:            "vg_name": "ceph_vg2"
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:        }
Oct 10 23:21:38 np0005480824 charming_perlman[98024]:    ]
Oct 10 23:21:38 np0005480824 charming_perlman[98024]: }
Oct 10 23:21:38 np0005480824 systemd[1]: libpod-0aa72973236a96c75c2ec7714a2c95f25c37d62d58c966e50aa8a1f65d7ede92.scope: Deactivated successfully.
Oct 10 23:21:39 np0005480824 podman[98093]: 2025-10-11 03:21:39.022933921 +0000 UTC m=+0.038783928 container died 0aa72973236a96c75c2ec7714a2c95f25c37d62d58c966e50aa8a1f65d7ede92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_perlman, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:21:39 np0005480824 systemd[1]: var-lib-containers-storage-overlay-c86845a5e832326146278ce7160765ed26c4a91f6f4ac37d83f4ea14906b5173-merged.mount: Deactivated successfully.
Oct 10 23:21:39 np0005480824 podman[98093]: 2025-10-11 03:21:39.086545339 +0000 UTC m=+0.102395336 container remove 0aa72973236a96c75c2ec7714a2c95f25c37d62d58c966e50aa8a1f65d7ede92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_perlman, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 10 23:21:39 np0005480824 systemd[1]: libpod-conmon-0aa72973236a96c75c2ec7714a2c95f25c37d62d58c966e50aa8a1f65d7ede92.scope: Deactivated successfully.
Oct 10 23:21:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Oct 10 23:21:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Oct 10 23:21:39 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Oct 10 23:21:39 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 44 pg[6.1f( empty local-lis/les=43/44 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 44 pg[5.7( empty local-lis/les=43/44 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [0] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 44 pg[2.18( empty local-lis/les=43/44 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43) [0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 44 pg[5.1e( empty local-lis/les=43/44 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [0] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 44 pg[7.1c( empty local-lis/les=43/44 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 44 pg[3.18( empty local-lis/les=43/44 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 44 pg[6.13( empty local-lis/les=43/44 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 44 pg[3.16( empty local-lis/les=43/44 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 44 pg[4.11( empty local-lis/les=43/44 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43) [2] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 44 pg[6.11( empty local-lis/les=43/44 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 44 pg[4.13( empty local-lis/les=43/44 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43) [2] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 44 pg[7.11( empty local-lis/les=43/44 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 44 pg[7.15( empty local-lis/les=43/44 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 44 pg[6.15( empty local-lis/les=43/44 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 44 pg[6.14( empty local-lis/les=43/44 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 44 pg[7.a( empty local-lis/les=43/44 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 44 pg[4.1c( empty local-lis/les=43/44 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43) [2] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 44 pg[7.8( empty local-lis/les=43/44 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 44 pg[4.a( empty local-lis/les=43/44 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43) [2] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 44 pg[6.8( empty local-lis/les=43/44 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 44 pg[7.5( empty local-lis/les=43/44 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 44 pg[3.11( empty local-lis/les=43/44 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 44 pg[4.1( empty local-lis/les=43/44 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43) [2] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 44 pg[7.2( empty local-lis/les=43/44 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 44 pg[7.1( empty local-lis/les=43/44 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 44 pg[3.5( empty local-lis/les=43/44 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 44 pg[3.7( empty local-lis/les=43/44 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 44 pg[4.e( empty local-lis/les=43/44 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43) [2] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 44 pg[7.c( empty local-lis/les=43/44 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 44 pg[3.8( empty local-lis/les=43/44 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 44 pg[6.f( empty local-lis/les=43/44 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 44 pg[7.e( empty local-lis/les=43/44 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 44 pg[4.1a( empty local-lis/les=43/44 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43) [2] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 44 pg[3.1d( empty local-lis/les=43/44 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 44 pg[4.1b( empty local-lis/les=43/44 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43) [2] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 44 pg[4.18( empty local-lis/les=43/44 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43) [2] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 44 pg[7.1a( empty local-lis/les=43/44 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 44 pg[3.1e( empty local-lis/les=43/44 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 44 pg[3.e( empty local-lis/les=43/44 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 44 pg[5.1a( empty local-lis/les=43/44 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 44 pg[5.18( empty local-lis/les=43/44 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 44 pg[5.4( empty local-lis/les=43/44 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [0] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 44 pg[2.f( empty local-lis/les=43/44 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43) [0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 44 pg[2.1d( empty local-lis/les=43/44 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43) [0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 44 pg[2.1c( empty local-lis/les=43/44 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43) [0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 44 pg[5.5( empty local-lis/les=43/44 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [0] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 44 pg[5.2( empty local-lis/les=43/44 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [0] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 44 pg[2.1f( empty local-lis/les=43/44 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43) [0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 44 pg[2.2( empty local-lis/les=43/44 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43) [0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 44 pg[5.3( empty local-lis/les=43/44 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [0] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 44 pg[2.8( empty local-lis/les=43/44 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43) [0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 44 pg[2.b( empty local-lis/les=43/44 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43) [0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 44 pg[2.16( empty local-lis/les=43/44 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43) [0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 44 pg[2.13( empty local-lis/les=43/44 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43) [0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 44 pg[3.1f( empty local-lis/les=43/44 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 44 pg[5.14( empty local-lis/les=43/44 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [0] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 44 pg[2.11( empty local-lis/les=43/44 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43) [0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 44 pg[7.1b( empty local-lis/les=43/44 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 44 pg[3.12( empty local-lis/les=43/44 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 44 pg[5.15( empty local-lis/les=43/44 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [0] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 44 pg[2.19( empty local-lis/les=43/44 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43) [0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 44 pg[3.15( empty local-lis/les=43/44 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 44 pg[7.13( empty local-lis/les=43/44 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 44 pg[3.17( empty local-lis/les=43/44 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 44 pg[3.9( empty local-lis/les=43/44 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 44 pg[3.a( empty local-lis/les=43/44 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 44 pg[7.f( empty local-lis/les=43/44 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 44 pg[7.3( empty local-lis/les=43/44 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 44 pg[3.6( empty local-lis/les=43/44 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 44 pg[3.3( empty local-lis/les=43/44 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 44 pg[7.9( empty local-lis/les=43/44 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 44 pg[7.6( empty local-lis/les=43/44 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 44 pg[7.18( empty local-lis/les=43/44 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 44 pg[3.f( empty local-lis/les=43/44 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 44 pg[3.1( empty local-lis/les=43/44 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 44 pg[3.c( empty local-lis/les=43/44 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 44 pg[7.4( empty local-lis/les=43/44 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 44 pg[7.1f( empty local-lis/les=43/44 n=0 ec=41/28 lis/c=41/41 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 44 pg[3.1b( empty local-lis/les=43/44 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 44 pg[5.1d( empty local-lis/les=43/44 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 44 pg[5.19( empty local-lis/les=43/44 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 44 pg[5.f( empty local-lis/les=43/44 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 44 pg[5.1( empty local-lis/les=43/44 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 44 pg[2.6( empty local-lis/les=43/44 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 44 pg[2.9( empty local-lis/les=43/44 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 44 pg[2.4( empty local-lis/les=43/44 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 44 pg[2.7( empty local-lis/les=43/44 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 44 pg[2.5( empty local-lis/les=43/44 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 44 pg[2.a( empty local-lis/les=43/44 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 44 pg[2.d( empty local-lis/les=43/44 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 44 pg[5.9( empty local-lis/les=43/44 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 44 pg[2.15( empty local-lis/les=43/44 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 44 pg[5.16( empty local-lis/les=43/44 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 44 pg[2.3( empty local-lis/les=43/44 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 44 pg[5.12( empty local-lis/les=43/44 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 44 pg[2.17( empty local-lis/les=43/44 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 44 pg[5.13( empty local-lis/les=43/44 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 44 pg[2.1b( empty local-lis/les=43/44 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 44 pg[6.17( empty local-lis/les=43/44 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43) [1] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 44 pg[6.1c( empty local-lis/les=43/44 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43) [1] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 44 pg[4.14( empty local-lis/les=43/44 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 44 pg[6.1d( empty local-lis/les=43/44 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43) [1] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 44 pg[5.11( empty local-lis/les=43/44 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 44 pg[4.10( empty local-lis/les=43/44 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 44 pg[5.c( empty local-lis/les=43/44 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 44 pg[4.12( empty local-lis/les=43/44 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 44 pg[6.b( empty local-lis/les=43/44 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43) [1] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 44 pg[6.e( empty local-lis/les=43/44 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43) [1] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 44 pg[4.9( empty local-lis/les=43/44 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 44 pg[4.5( empty local-lis/les=43/44 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 44 pg[6.1( empty local-lis/les=43/44 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43) [1] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 44 pg[6.4( empty local-lis/les=43/44 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43) [1] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 44 pg[4.7( empty local-lis/les=43/44 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 44 pg[6.6( empty local-lis/les=43/44 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43) [1] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 44 pg[4.4( empty local-lis/les=43/44 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 44 pg[4.2( empty local-lis/les=43/44 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 44 pg[4.8( empty local-lis/les=43/44 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 44 pg[6.2( empty local-lis/les=43/44 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43) [1] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 44 pg[6.d( empty local-lis/les=43/44 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43) [1] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 44 pg[4.f( empty local-lis/les=43/44 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 44 pg[4.d( empty local-lis/les=43/44 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 44 pg[6.1e( empty local-lis/les=43/44 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43) [1] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 44 pg[6.c( empty local-lis/les=43/44 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=43) [1] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:39 np0005480824 ceph-mgr[74617]: log_channel(audit) log [DBG] : from='client.14248 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 23:21:39 np0005480824 ceph-mgr[74617]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Oct 10 23:21:39 np0005480824 ceph-mgr[74617]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Oct 10 23:21:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Oct 10 23:21:39 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:21:39 np0005480824 heuristic_ardinghelli[98085]: Scheduled mds.cephfs update...
Oct 10 23:21:39 np0005480824 systemd[1]: libpod-ac7516141e77f96b15d56056dccf64191d9610523ddd0a797ab4ff51f024124b.scope: Deactivated successfully.
Oct 10 23:21:39 np0005480824 podman[98069]: 2025-10-11 03:21:39.430982613 +0000 UTC m=+0.796454384 container died ac7516141e77f96b15d56056dccf64191d9610523ddd0a797ab4ff51f024124b (image=quay.io/ceph/ceph:v18, name=heuristic_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 10 23:21:39 np0005480824 systemd[1]: var-lib-containers-storage-overlay-d3f0690ab30d46b27dcfd75bb1aaa291d21e449338da058f2195ed8570cdf33e-merged.mount: Deactivated successfully.
Oct 10 23:21:39 np0005480824 podman[98069]: 2025-10-11 03:21:39.483276905 +0000 UTC m=+0.848748686 container remove ac7516141e77f96b15d56056dccf64191d9610523ddd0a797ab4ff51f024124b (image=quay.io/ceph/ceph:v18, name=heuristic_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 10 23:21:39 np0005480824 systemd[1]: libpod-conmon-ac7516141e77f96b15d56056dccf64191d9610523ddd0a797ab4ff51f024124b.scope: Deactivated successfully.
Oct 10 23:21:39 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Oct 10 23:21:39 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Oct 10 23:21:39 np0005480824 podman[98283]: 2025-10-11 03:21:39.673262798 +0000 UTC m=+0.033098795 container create 9451fff9b54d27dd32f151105593d891670df199076c29c3a903d9d1a6d244a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wilbur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:21:39 np0005480824 systemd[1]: Started libpod-conmon-9451fff9b54d27dd32f151105593d891670df199076c29c3a903d9d1a6d244a4.scope.
Oct 10 23:21:39 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:21:39 np0005480824 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 10 23:21:39 np0005480824 podman[98283]: 2025-10-11 03:21:39.747533044 +0000 UTC m=+0.107369051 container init 9451fff9b54d27dd32f151105593d891670df199076c29c3a903d9d1a6d244a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:21:39 np0005480824 podman[98283]: 2025-10-11 03:21:39.75463039 +0000 UTC m=+0.114466387 container start 9451fff9b54d27dd32f151105593d891670df199076c29c3a903d9d1a6d244a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wilbur, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:21:39 np0005480824 podman[98283]: 2025-10-11 03:21:39.65966914 +0000 UTC m=+0.019505157 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:21:39 np0005480824 pedantic_wilbur[98299]: 167 167
Oct 10 23:21:39 np0005480824 podman[98283]: 2025-10-11 03:21:39.758665755 +0000 UTC m=+0.118501782 container attach 9451fff9b54d27dd32f151105593d891670df199076c29c3a903d9d1a6d244a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wilbur, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Oct 10 23:21:39 np0005480824 systemd[1]: libpod-9451fff9b54d27dd32f151105593d891670df199076c29c3a903d9d1a6d244a4.scope: Deactivated successfully.
Oct 10 23:21:39 np0005480824 podman[98283]: 2025-10-11 03:21:39.759535225 +0000 UTC m=+0.119371232 container died 9451fff9b54d27dd32f151105593d891670df199076c29c3a903d9d1a6d244a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wilbur, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 10 23:21:39 np0005480824 systemd[1]: var-lib-containers-storage-overlay-b14eb36129d370eb8068cb735ded57c8d005c26e45276200f729afeff201b14d-merged.mount: Deactivated successfully.
Oct 10 23:21:39 np0005480824 podman[98283]: 2025-10-11 03:21:39.801490156 +0000 UTC m=+0.161326163 container remove 9451fff9b54d27dd32f151105593d891670df199076c29c3a903d9d1a6d244a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wilbur, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:21:39 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v101: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:21:39 np0005480824 systemd[1]: libpod-conmon-9451fff9b54d27dd32f151105593d891670df199076c29c3a903d9d1a6d244a4.scope: Deactivated successfully.
Oct 10 23:21:39 np0005480824 ceph-mon[74326]: Saving service mds.cephfs spec with placement compute-0
Oct 10 23:21:39 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:21:39 np0005480824 podman[98374]: 2025-10-11 03:21:39.95777293 +0000 UTC m=+0.035513622 container create c0fe18e12728dcbb4fa11734e9a40a7ab6765814b0769455047964535067ae78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_lehmann, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:21:39 np0005480824 systemd[1]: Started libpod-conmon-c0fe18e12728dcbb4fa11734e9a40a7ab6765814b0769455047964535067ae78.scope.
Oct 10 23:21:40 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:21:40 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d57766399c3d0b47066b859f915b81f5b52226c5756368e08e95a24309dea330/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:40 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d57766399c3d0b47066b859f915b81f5b52226c5756368e08e95a24309dea330/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:40 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d57766399c3d0b47066b859f915b81f5b52226c5756368e08e95a24309dea330/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:40 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d57766399c3d0b47066b859f915b81f5b52226c5756368e08e95a24309dea330/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:40 np0005480824 podman[98374]: 2025-10-11 03:21:39.943032005 +0000 UTC m=+0.020772717 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:21:40 np0005480824 podman[98374]: 2025-10-11 03:21:40.050213642 +0000 UTC m=+0.127954384 container init c0fe18e12728dcbb4fa11734e9a40a7ab6765814b0769455047964535067ae78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_lehmann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 10 23:21:40 np0005480824 podman[98374]: 2025-10-11 03:21:40.05615213 +0000 UTC m=+0.133892832 container start c0fe18e12728dcbb4fa11734e9a40a7ab6765814b0769455047964535067ae78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 10 23:21:40 np0005480824 podman[98374]: 2025-10-11 03:21:40.059164841 +0000 UTC m=+0.136905533 container attach c0fe18e12728dcbb4fa11734e9a40a7ab6765814b0769455047964535067ae78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_lehmann, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 10 23:21:40 np0005480824 python3[98418]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 23:21:40 np0005480824 python3[98494]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760152899.844412-33237-61221751167515/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=3cf916d5f4489610cb8b254ce9c8bcc669faf03d backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:21:41 np0005480824 python3[98558]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 92cfe4d4-4917-5be1-9d00-73758793a62b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:21:41 np0005480824 jovial_lehmann[98416]: {
Oct 10 23:21:41 np0005480824 jovial_lehmann[98416]:    "1d0d82ce-20ea-470d-959e-f67202028a60": {
Oct 10 23:21:41 np0005480824 jovial_lehmann[98416]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:21:41 np0005480824 jovial_lehmann[98416]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 10 23:21:41 np0005480824 jovial_lehmann[98416]:        "osd_id": 0,
Oct 10 23:21:41 np0005480824 jovial_lehmann[98416]:        "osd_uuid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:21:41 np0005480824 jovial_lehmann[98416]:        "type": "bluestore"
Oct 10 23:21:41 np0005480824 jovial_lehmann[98416]:    },
Oct 10 23:21:41 np0005480824 jovial_lehmann[98416]:    "6875119e-c210-4ad1-aca9-6a8084a5ecc8": {
Oct 10 23:21:41 np0005480824 jovial_lehmann[98416]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:21:41 np0005480824 jovial_lehmann[98416]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 10 23:21:41 np0005480824 jovial_lehmann[98416]:        "osd_id": 1,
Oct 10 23:21:41 np0005480824 jovial_lehmann[98416]:        "osd_uuid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:21:41 np0005480824 jovial_lehmann[98416]:        "type": "bluestore"
Oct 10 23:21:41 np0005480824 jovial_lehmann[98416]:    },
Oct 10 23:21:41 np0005480824 jovial_lehmann[98416]:    "e86945e8-6909-4584-9098-cee0dfe9add4": {
Oct 10 23:21:41 np0005480824 jovial_lehmann[98416]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:21:41 np0005480824 jovial_lehmann[98416]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 10 23:21:41 np0005480824 jovial_lehmann[98416]:        "osd_id": 2,
Oct 10 23:21:41 np0005480824 jovial_lehmann[98416]:        "osd_uuid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:21:41 np0005480824 jovial_lehmann[98416]:        "type": "bluestore"
Oct 10 23:21:41 np0005480824 jovial_lehmann[98416]:    }
Oct 10 23:21:41 np0005480824 jovial_lehmann[98416]: }
Oct 10 23:21:41 np0005480824 systemd[1]: libpod-c0fe18e12728dcbb4fa11734e9a40a7ab6765814b0769455047964535067ae78.scope: Deactivated successfully.
Oct 10 23:21:41 np0005480824 podman[98374]: 2025-10-11 03:21:41.150236082 +0000 UTC m=+1.227976814 container died c0fe18e12728dcbb4fa11734e9a40a7ab6765814b0769455047964535067ae78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_lehmann, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 10 23:21:41 np0005480824 systemd[1]: libpod-c0fe18e12728dcbb4fa11734e9a40a7ab6765814b0769455047964535067ae78.scope: Consumed 1.100s CPU time.
Oct 10 23:21:41 np0005480824 podman[98568]: 2025-10-11 03:21:41.164064805 +0000 UTC m=+0.077693387 container create ce3347daf92e5ffbe6cf81bed662fc2a56727cdfc5e9963313f369e2e90ec180 (image=quay.io/ceph/ceph:v18, name=thirsty_beaver, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 10 23:21:41 np0005480824 systemd[1]: var-lib-containers-storage-overlay-d57766399c3d0b47066b859f915b81f5b52226c5756368e08e95a24309dea330-merged.mount: Deactivated successfully.
Oct 10 23:21:41 np0005480824 systemd[1]: Started libpod-conmon-ce3347daf92e5ffbe6cf81bed662fc2a56727cdfc5e9963313f369e2e90ec180.scope.
Oct 10 23:21:41 np0005480824 podman[98568]: 2025-10-11 03:21:41.127284266 +0000 UTC m=+0.040912888 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:21:41 np0005480824 podman[98374]: 2025-10-11 03:21:41.222348789 +0000 UTC m=+1.300089491 container remove c0fe18e12728dcbb4fa11734e9a40a7ab6765814b0769455047964535067ae78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:21:41 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:21:41 np0005480824 systemd[1]: libpod-conmon-c0fe18e12728dcbb4fa11734e9a40a7ab6765814b0769455047964535067ae78.scope: Deactivated successfully.
Oct 10 23:21:41 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c5cdecb2dee10cbbe9e4e691125c1218dc334ca70bfc854f150ee70b7bd2af9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:41 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c5cdecb2dee10cbbe9e4e691125c1218dc334ca70bfc854f150ee70b7bd2af9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:41 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 10 23:21:41 np0005480824 podman[98568]: 2025-10-11 03:21:41.270353821 +0000 UTC m=+0.183982483 container init ce3347daf92e5ffbe6cf81bed662fc2a56727cdfc5e9963313f369e2e90ec180 (image=quay.io/ceph/ceph:v18, name=thirsty_beaver, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:21:41 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:21:41 np0005480824 podman[98568]: 2025-10-11 03:21:41.277722994 +0000 UTC m=+0.191351586 container start ce3347daf92e5ffbe6cf81bed662fc2a56727cdfc5e9963313f369e2e90ec180 (image=quay.io/ceph/ceph:v18, name=thirsty_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 10 23:21:41 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 10 23:21:41 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:21:41 np0005480824 podman[98568]: 2025-10-11 03:21:41.285702 +0000 UTC m=+0.199330612 container attach ce3347daf92e5ffbe6cf81bed662fc2a56727cdfc5e9963313f369e2e90ec180 (image=quay.io/ceph/ceph:v18, name=thirsty_beaver, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:21:41 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v102: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:21:41 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0) v1
Oct 10 23:21:41 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/785161395' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Oct 10 23:21:41 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/785161395' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Oct 10 23:21:42 np0005480824 systemd[1]: libpod-ce3347daf92e5ffbe6cf81bed662fc2a56727cdfc5e9963313f369e2e90ec180.scope: Deactivated successfully.
Oct 10 23:21:42 np0005480824 podman[98568]: 2025-10-11 03:21:42.012104296 +0000 UTC m=+0.925732958 container died ce3347daf92e5ffbe6cf81bed662fc2a56727cdfc5e9963313f369e2e90ec180 (image=quay.io/ceph/ceph:v18, name=thirsty_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 10 23:21:42 np0005480824 systemd[1]: var-lib-containers-storage-overlay-1c5cdecb2dee10cbbe9e4e691125c1218dc334ca70bfc854f150ee70b7bd2af9-merged.mount: Deactivated successfully.
Oct 10 23:21:42 np0005480824 podman[98568]: 2025-10-11 03:21:42.063442187 +0000 UTC m=+0.977070769 container remove ce3347daf92e5ffbe6cf81bed662fc2a56727cdfc5e9963313f369e2e90ec180 (image=quay.io/ceph/ceph:v18, name=thirsty_beaver, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 10 23:21:42 np0005480824 systemd[1]: libpod-conmon-ce3347daf92e5ffbe6cf81bed662fc2a56727cdfc5e9963313f369e2e90ec180.scope: Deactivated successfully.
Oct 10 23:21:42 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:21:42 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:21:42 np0005480824 ceph-mon[74326]: from='client.? 192.168.122.100:0/785161395' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Oct 10 23:21:42 np0005480824 ceph-mon[74326]: from='client.? 192.168.122.100:0/785161395' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Oct 10 23:21:42 np0005480824 podman[98856]: 2025-10-11 03:21:42.299094736 +0000 UTC m=+0.078396254 container exec a848fe58749db588a5a4b8471e0c9916b9e4a1ccc899f04343e6491a43c45c05 (image=quay.io/ceph/ceph:v18, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mon-compute-0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 10 23:21:42 np0005480824 podman[98856]: 2025-10-11 03:21:42.40701925 +0000 UTC m=+0.186320688 container exec_died a848fe58749db588a5a4b8471e0c9916b9e4a1ccc899f04343e6491a43c45c05 (image=quay.io/ceph/ceph:v18, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mon-compute-0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 10 23:21:42 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 4.b deep-scrub starts
Oct 10 23:21:42 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 4.b deep-scrub ok
Oct 10 23:21:42 np0005480824 python3[98966]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 92cfe4d4-4917-5be1-9d00-73758793a62b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:21:42 np0005480824 podman[99006]: 2025-10-11 03:21:42.870300053 +0000 UTC m=+0.042124296 container create 744759db3e0d3600fb7fce7c48bf4d34d006342cdce24f876bad8c06b67108d9 (image=quay.io/ceph/ceph:v18, name=elated_shaw, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:21:42 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 10 23:21:42 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:21:42 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 10 23:21:42 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:21:42 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:21:42 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:21:42 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 10 23:21:42 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:21:42 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 10 23:21:42 np0005480824 ceph-mgr[74617]: [progress INFO root] Writing back 10 completed events
Oct 10 23:21:42 np0005480824 systemd[1]: Started libpod-conmon-744759db3e0d3600fb7fce7c48bf4d34d006342cdce24f876bad8c06b67108d9.scope.
Oct 10 23:21:42 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 10 23:21:42 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:21:42 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 3f2f282d-c3e4-4af9-a8d2-4c3a81c43277 does not exist
Oct 10 23:21:42 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 9fcd8b26-6046-4e51-a38f-224539271485 does not exist
Oct 10 23:21:42 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 2984e22d-0d79-4d69-978a-4358e74b1a9a does not exist
Oct 10 23:21:42 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 10 23:21:42 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 23:21:42 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:21:42 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 10 23:21:42 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:21:42 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:21:42 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:21:42 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:21:42 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a66a90f70dfe6c1ff9b0a7710e99c544915b68feae111301860a33de6488ac2c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:42 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a66a90f70dfe6c1ff9b0a7710e99c544915b68feae111301860a33de6488ac2c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:42 np0005480824 podman[99006]: 2025-10-11 03:21:42.849462485 +0000 UTC m=+0.021286768 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:21:42 np0005480824 podman[99006]: 2025-10-11 03:21:42.955173528 +0000 UTC m=+0.126997781 container init 744759db3e0d3600fb7fce7c48bf4d34d006342cdce24f876bad8c06b67108d9 (image=quay.io/ceph/ceph:v18, name=elated_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 10 23:21:42 np0005480824 podman[99006]: 2025-10-11 03:21:42.96169557 +0000 UTC m=+0.133519813 container start 744759db3e0d3600fb7fce7c48bf4d34d006342cdce24f876bad8c06b67108d9 (image=quay.io/ceph/ceph:v18, name=elated_shaw, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 10 23:21:42 np0005480824 podman[99006]: 2025-10-11 03:21:42.965299134 +0000 UTC m=+0.137123387 container attach 744759db3e0d3600fb7fce7c48bf4d34d006342cdce24f876bad8c06b67108d9 (image=quay.io/ceph/ceph:v18, name=elated_shaw, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 10 23:21:43 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:21:43 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:21:43 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:21:43 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:21:43 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:21:43 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:21:43 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 2.e scrub starts
Oct 10 23:21:43 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 2.e scrub ok
Oct 10 23:21:43 np0005480824 podman[99185]: 2025-10-11 03:21:43.467456325 +0000 UTC m=+0.041265255 container create 17fb4a21ef3db3104ecde2c8121317ab7c645a4c07cef188b81835d4af4b9629 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hugle, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 10 23:21:43 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 4.c scrub starts
Oct 10 23:21:43 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 4.c scrub ok
Oct 10 23:21:43 np0005480824 systemd[1]: Started libpod-conmon-17fb4a21ef3db3104ecde2c8121317ab7c645a4c07cef188b81835d4af4b9629.scope.
Oct 10 23:21:43 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:21:43 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Oct 10 23:21:43 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1978521491' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 10 23:21:43 np0005480824 elated_shaw[99024]: 
Oct 10 23:21:43 np0005480824 elated_shaw[99024]: {"fsid":"92cfe4d4-4917-5be1-9d00-73758793a62b","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":185,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":44,"num_osds":3,"num_up_osds":3,"osd_up_since":1760152846,"num_in_osds":3,"osd_in_since":1760152815,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":193}],"num_pgs":193,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":84180992,"bytes_avail":64327745536,"bytes_total":64411926528},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":3,"modified":"2025-10-11T03:21:37.806706+0000","services":{"osd":{"daemons":{"summary":"","1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Oct 10 23:21:43 np0005480824 podman[99185]: 2025-10-11 03:21:43.539018649 +0000 UTC m=+0.112827669 container init 17fb4a21ef3db3104ecde2c8121317ab7c645a4c07cef188b81835d4af4b9629 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hugle, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:21:43 np0005480824 podman[99185]: 2025-10-11 03:21:43.448423661 +0000 UTC m=+0.022232591 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:21:43 np0005480824 systemd[1]: libpod-744759db3e0d3600fb7fce7c48bf4d34d006342cdce24f876bad8c06b67108d9.scope: Deactivated successfully.
Oct 10 23:21:43 np0005480824 podman[99006]: 2025-10-11 03:21:43.545346337 +0000 UTC m=+0.717170610 container died 744759db3e0d3600fb7fce7c48bf4d34d006342cdce24f876bad8c06b67108d9 (image=quay.io/ceph/ceph:v18, name=elated_shaw, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Oct 10 23:21:43 np0005480824 podman[99185]: 2025-10-11 03:21:43.5531733 +0000 UTC m=+0.126982260 container start 17fb4a21ef3db3104ecde2c8121317ab7c645a4c07cef188b81835d4af4b9629 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 10 23:21:43 np0005480824 wonderful_hugle[99202]: 167 167
Oct 10 23:21:43 np0005480824 systemd[1]: libpod-17fb4a21ef3db3104ecde2c8121317ab7c645a4c07cef188b81835d4af4b9629.scope: Deactivated successfully.
Oct 10 23:21:43 np0005480824 podman[99185]: 2025-10-11 03:21:43.558943085 +0000 UTC m=+0.132752095 container attach 17fb4a21ef3db3104ecde2c8121317ab7c645a4c07cef188b81835d4af4b9629 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 10 23:21:43 np0005480824 podman[99185]: 2025-10-11 03:21:43.559681012 +0000 UTC m=+0.133489972 container died 17fb4a21ef3db3104ecde2c8121317ab7c645a4c07cef188b81835d4af4b9629 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 10 23:21:43 np0005480824 systemd[1]: var-lib-containers-storage-overlay-a66a90f70dfe6c1ff9b0a7710e99c544915b68feae111301860a33de6488ac2c-merged.mount: Deactivated successfully.
Oct 10 23:21:43 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:21:43 np0005480824 podman[99006]: 2025-10-11 03:21:43.593658847 +0000 UTC m=+0.765483100 container remove 744759db3e0d3600fb7fce7c48bf4d34d006342cdce24f876bad8c06b67108d9 (image=quay.io/ceph/ceph:v18, name=elated_shaw, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 10 23:21:43 np0005480824 systemd[1]: var-lib-containers-storage-overlay-19a44464f364d442253a2c51add97399f0b94121cb3cd91ee349033fd79fddbe-merged.mount: Deactivated successfully.
Oct 10 23:21:43 np0005480824 podman[99185]: 2025-10-11 03:21:43.616105152 +0000 UTC m=+0.189914082 container remove 17fb4a21ef3db3104ecde2c8121317ab7c645a4c07cef188b81835d4af4b9629 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hugle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 10 23:21:43 np0005480824 systemd[1]: libpod-conmon-17fb4a21ef3db3104ecde2c8121317ab7c645a4c07cef188b81835d4af4b9629.scope: Deactivated successfully.
Oct 10 23:21:43 np0005480824 systemd[1]: libpod-conmon-744759db3e0d3600fb7fce7c48bf4d34d006342cdce24f876bad8c06b67108d9.scope: Deactivated successfully.
Oct 10 23:21:43 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v103: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:21:43 np0005480824 podman[99239]: 2025-10-11 03:21:43.830127085 +0000 UTC m=+0.058549389 container create 58584c7d5132f7556a518633a7fc976acaf27f09c97add6d02fbe7843992c117 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_proskuriakova, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 10 23:21:43 np0005480824 systemd[1]: Started libpod-conmon-58584c7d5132f7556a518633a7fc976acaf27f09c97add6d02fbe7843992c117.scope.
Oct 10 23:21:43 np0005480824 podman[99239]: 2025-10-11 03:21:43.809932623 +0000 UTC m=+0.038354977 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:21:43 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:21:43 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9182df7d55d524c8e782538259048fdcfc39fa30f655b632418796dfb8688e2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:43 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9182df7d55d524c8e782538259048fdcfc39fa30f655b632418796dfb8688e2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:43 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9182df7d55d524c8e782538259048fdcfc39fa30f655b632418796dfb8688e2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:43 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9182df7d55d524c8e782538259048fdcfc39fa30f655b632418796dfb8688e2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:43 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9182df7d55d524c8e782538259048fdcfc39fa30f655b632418796dfb8688e2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:43 np0005480824 podman[99239]: 2025-10-11 03:21:43.964563859 +0000 UTC m=+0.192986203 container init 58584c7d5132f7556a518633a7fc976acaf27f09c97add6d02fbe7843992c117 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 10 23:21:43 np0005480824 podman[99239]: 2025-10-11 03:21:43.973179701 +0000 UTC m=+0.201602015 container start 58584c7d5132f7556a518633a7fc976acaf27f09c97add6d02fbe7843992c117 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_proskuriakova, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:21:43 np0005480824 podman[99239]: 2025-10-11 03:21:43.977684016 +0000 UTC m=+0.206106370 container attach 58584c7d5132f7556a518633a7fc976acaf27f09c97add6d02fbe7843992c117 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:21:44 np0005480824 python3[99278]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 92cfe4d4-4917-5be1-9d00-73758793a62b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:21:44 np0005480824 podman[99286]: 2025-10-11 03:21:44.159702692 +0000 UTC m=+0.070939950 container create 65861b60bab4c2fc3c3813bdfd5c99b112ed532434e8a5ab9299064cefad5028 (image=quay.io/ceph/ceph:v18, name=stupefied_banach, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:21:44 np0005480824 systemd[1]: Started libpod-conmon-65861b60bab4c2fc3c3813bdfd5c99b112ed532434e8a5ab9299064cefad5028.scope.
Oct 10 23:21:44 np0005480824 podman[99286]: 2025-10-11 03:21:44.139487679 +0000 UTC m=+0.050724967 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:21:44 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:21:44 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0119073c2ff87da0dd1d5afb579c4c047ad0d353981baebf5f317d6f505c2011/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:44 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0119073c2ff87da0dd1d5afb579c4c047ad0d353981baebf5f317d6f505c2011/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:44 np0005480824 podman[99286]: 2025-10-11 03:21:44.267646196 +0000 UTC m=+0.178883524 container init 65861b60bab4c2fc3c3813bdfd5c99b112ed532434e8a5ab9299064cefad5028 (image=quay.io/ceph/ceph:v18, name=stupefied_banach, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:21:44 np0005480824 podman[99286]: 2025-10-11 03:21:44.277746262 +0000 UTC m=+0.188983560 container start 65861b60bab4c2fc3c3813bdfd5c99b112ed532434e8a5ab9299064cefad5028 (image=quay.io/ceph/ceph:v18, name=stupefied_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:21:44 np0005480824 podman[99286]: 2025-10-11 03:21:44.2819479 +0000 UTC m=+0.193185198 container attach 65861b60bab4c2fc3c3813bdfd5c99b112ed532434e8a5ab9299064cefad5028 (image=quay.io/ceph/ceph:v18, name=stupefied_banach, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:21:44 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Oct 10 23:21:44 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Oct 10 23:21:44 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Oct 10 23:21:44 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Oct 10 23:21:44 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:21:44 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1692391682' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:21:44 np0005480824 stupefied_banach[99302]: 
Oct 10 23:21:44 np0005480824 stupefied_banach[99302]: {"epoch":1,"fsid":"92cfe4d4-4917-5be1-9d00-73758793a62b","modified":"2025-10-11T03:18:32.741542Z","created":"2025-10-11T03:18:32.741542Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
Oct 10 23:21:44 np0005480824 stupefied_banach[99302]: dumped monmap epoch 1
Oct 10 23:21:44 np0005480824 systemd[1]: libpod-65861b60bab4c2fc3c3813bdfd5c99b112ed532434e8a5ab9299064cefad5028.scope: Deactivated successfully.
Oct 10 23:21:44 np0005480824 podman[99286]: 2025-10-11 03:21:44.94823403 +0000 UTC m=+0.859471328 container died 65861b60bab4c2fc3c3813bdfd5c99b112ed532434e8a5ab9299064cefad5028 (image=quay.io/ceph/ceph:v18, name=stupefied_banach, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 10 23:21:44 np0005480824 systemd[1]: var-lib-containers-storage-overlay-0119073c2ff87da0dd1d5afb579c4c047ad0d353981baebf5f317d6f505c2011-merged.mount: Deactivated successfully.
Oct 10 23:21:44 np0005480824 podman[99286]: 2025-10-11 03:21:44.994836089 +0000 UTC m=+0.906073347 container remove 65861b60bab4c2fc3c3813bdfd5c99b112ed532434e8a5ab9299064cefad5028 (image=quay.io/ceph/ceph:v18, name=stupefied_banach, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 10 23:21:45 np0005480824 systemd[1]: libpod-conmon-65861b60bab4c2fc3c3813bdfd5c99b112ed532434e8a5ab9299064cefad5028.scope: Deactivated successfully.
Oct 10 23:21:45 np0005480824 xenodochial_proskuriakova[99281]: --> passed data devices: 0 physical, 3 LVM
Oct 10 23:21:45 np0005480824 xenodochial_proskuriakova[99281]: --> relative data size: 1.0
Oct 10 23:21:45 np0005480824 xenodochial_proskuriakova[99281]: --> All data devices are unavailable
Oct 10 23:21:45 np0005480824 systemd[1]: libpod-58584c7d5132f7556a518633a7fc976acaf27f09c97add6d02fbe7843992c117.scope: Deactivated successfully.
Oct 10 23:21:45 np0005480824 systemd[1]: libpod-58584c7d5132f7556a518633a7fc976acaf27f09c97add6d02fbe7843992c117.scope: Consumed 1.027s CPU time.
Oct 10 23:21:45 np0005480824 podman[99239]: 2025-10-11 03:21:45.062119852 +0000 UTC m=+1.290542156 container died 58584c7d5132f7556a518633a7fc976acaf27f09c97add6d02fbe7843992c117 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 10 23:21:45 np0005480824 systemd[1]: var-lib-containers-storage-overlay-d9182df7d55d524c8e782538259048fdcfc39fa30f655b632418796dfb8688e2-merged.mount: Deactivated successfully.
Oct 10 23:21:45 np0005480824 podman[99239]: 2025-10-11 03:21:45.111300433 +0000 UTC m=+1.339722747 container remove 58584c7d5132f7556a518633a7fc976acaf27f09c97add6d02fbe7843992c117 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 10 23:21:45 np0005480824 systemd[1]: libpod-conmon-58584c7d5132f7556a518633a7fc976acaf27f09c97add6d02fbe7843992c117.scope: Deactivated successfully.
Oct 10 23:21:45 np0005480824 python3[99496]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 92cfe4d4-4917-5be1-9d00-73758793a62b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:21:45 np0005480824 podman[99501]: 2025-10-11 03:21:45.614005798 +0000 UTC m=+0.057552238 container create 928819b302383f0658cda04d0d8187004b63229b91806442e2f4c8ce65c0086b (image=quay.io/ceph/ceph:v18, name=strange_matsumoto, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:21:45 np0005480824 systemd[1]: Started libpod-conmon-928819b302383f0658cda04d0d8187004b63229b91806442e2f4c8ce65c0086b.scope.
Oct 10 23:21:45 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:21:45 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3efc978381571460be0f9180117997b0cc0b10140595670c38229489bd0eae8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:45 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3efc978381571460be0f9180117997b0cc0b10140595670c38229489bd0eae8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:45 np0005480824 podman[99501]: 2025-10-11 03:21:45.59530522 +0000 UTC m=+0.038851630 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:21:45 np0005480824 podman[99501]: 2025-10-11 03:21:45.697133871 +0000 UTC m=+0.140680291 container init 928819b302383f0658cda04d0d8187004b63229b91806442e2f4c8ce65c0086b (image=quay.io/ceph/ceph:v18, name=strange_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:21:45 np0005480824 podman[99501]: 2025-10-11 03:21:45.702926486 +0000 UTC m=+0.146472886 container start 928819b302383f0658cda04d0d8187004b63229b91806442e2f4c8ce65c0086b (image=quay.io/ceph/ceph:v18, name=strange_matsumoto, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:21:45 np0005480824 podman[99501]: 2025-10-11 03:21:45.70695117 +0000 UTC m=+0.150497600 container attach 928819b302383f0658cda04d0d8187004b63229b91806442e2f4c8ce65c0086b (image=quay.io/ceph/ceph:v18, name=strange_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:21:45 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v104: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:21:45 np0005480824 podman[99556]: 2025-10-11 03:21:45.856001175 +0000 UTC m=+0.063129327 container create 2be2fe95d53f8ec12ef7ff751ef017039b8a89063cbb82bc11fcb750019c0e52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_villani, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 10 23:21:45 np0005480824 systemd[1]: Started libpod-conmon-2be2fe95d53f8ec12ef7ff751ef017039b8a89063cbb82bc11fcb750019c0e52.scope.
Oct 10 23:21:45 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:21:45 np0005480824 podman[99556]: 2025-10-11 03:21:45.831748629 +0000 UTC m=+0.038876841 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:21:45 np0005480824 podman[99556]: 2025-10-11 03:21:45.93528084 +0000 UTC m=+0.142409062 container init 2be2fe95d53f8ec12ef7ff751ef017039b8a89063cbb82bc11fcb750019c0e52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_villani, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:21:45 np0005480824 podman[99556]: 2025-10-11 03:21:45.945921839 +0000 UTC m=+0.153050001 container start 2be2fe95d53f8ec12ef7ff751ef017039b8a89063cbb82bc11fcb750019c0e52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:21:45 np0005480824 compassionate_villani[99573]: 167 167
Oct 10 23:21:45 np0005480824 systemd[1]: libpod-2be2fe95d53f8ec12ef7ff751ef017039b8a89063cbb82bc11fcb750019c0e52.scope: Deactivated successfully.
Oct 10 23:21:45 np0005480824 podman[99556]: 2025-10-11 03:21:45.951474149 +0000 UTC m=+0.158602321 container attach 2be2fe95d53f8ec12ef7ff751ef017039b8a89063cbb82bc11fcb750019c0e52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_villani, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0)
Oct 10 23:21:45 np0005480824 podman[99556]: 2025-10-11 03:21:45.952501582 +0000 UTC m=+0.159629784 container died 2be2fe95d53f8ec12ef7ff751ef017039b8a89063cbb82bc11fcb750019c0e52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_villani, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:21:45 np0005480824 systemd[1]: var-lib-containers-storage-overlay-6e7d5ccd5582e369550d2b2221dfb30f009977f6beb3387532bfdc92912565e4-merged.mount: Deactivated successfully.
Oct 10 23:21:46 np0005480824 podman[99556]: 2025-10-11 03:21:46.002550772 +0000 UTC m=+0.209678934 container remove 2be2fe95d53f8ec12ef7ff751ef017039b8a89063cbb82bc11fcb750019c0e52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_villani, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:21:46 np0005480824 systemd[1]: libpod-conmon-2be2fe95d53f8ec12ef7ff751ef017039b8a89063cbb82bc11fcb750019c0e52.scope: Deactivated successfully.
Oct 10 23:21:46 np0005480824 podman[99616]: 2025-10-11 03:21:46.182266304 +0000 UTC m=+0.047094091 container create 0b8945e37bd2a16ce4bbbb5e219004c9ec5e657626b385e34b4d288c45fa6e90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 10 23:21:46 np0005480824 systemd[1]: Started libpod-conmon-0b8945e37bd2a16ce4bbbb5e219004c9ec5e657626b385e34b4d288c45fa6e90.scope.
Oct 10 23:21:46 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:21:46 np0005480824 podman[99616]: 2025-10-11 03:21:46.15639167 +0000 UTC m=+0.021219457 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:21:46 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08d760dd8b1bf04ea82a0b7e0b112df70f79413f576b8547938aa6655def8f6d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:46 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08d760dd8b1bf04ea82a0b7e0b112df70f79413f576b8547938aa6655def8f6d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:46 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08d760dd8b1bf04ea82a0b7e0b112df70f79413f576b8547938aa6655def8f6d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:46 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08d760dd8b1bf04ea82a0b7e0b112df70f79413f576b8547938aa6655def8f6d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:46 np0005480824 podman[99616]: 2025-10-11 03:21:46.283998433 +0000 UTC m=+0.148826190 container init 0b8945e37bd2a16ce4bbbb5e219004c9ec5e657626b385e34b4d288c45fa6e90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 10 23:21:46 np0005480824 podman[99616]: 2025-10-11 03:21:46.297072689 +0000 UTC m=+0.161900446 container start 0b8945e37bd2a16ce4bbbb5e219004c9ec5e657626b385e34b4d288c45fa6e90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:21:46 np0005480824 podman[99616]: 2025-10-11 03:21:46.300866918 +0000 UTC m=+0.165694775 container attach 0b8945e37bd2a16ce4bbbb5e219004c9ec5e657626b385e34b4d288c45fa6e90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:21:46 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0) v1
Oct 10 23:21:46 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3926216987' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Oct 10 23:21:46 np0005480824 strange_matsumoto[99538]: [client.openstack]
Oct 10 23:21:46 np0005480824 strange_matsumoto[99538]: #011key = AQBrzOloAAAAABAA1OldBZKsL7IaXlw8/JwZqQ==
Oct 10 23:21:46 np0005480824 strange_matsumoto[99538]: #011caps mgr = "allow *"
Oct 10 23:21:46 np0005480824 strange_matsumoto[99538]: #011caps mon = "profile rbd"
Oct 10 23:21:46 np0005480824 strange_matsumoto[99538]: #011caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Oct 10 23:21:46 np0005480824 ceph-mon[74326]: from='client.? 192.168.122.100:0/3926216987' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Oct 10 23:21:46 np0005480824 systemd[1]: libpod-928819b302383f0658cda04d0d8187004b63229b91806442e2f4c8ce65c0086b.scope: Deactivated successfully.
Oct 10 23:21:46 np0005480824 podman[99501]: 2025-10-11 03:21:46.33129408 +0000 UTC m=+0.774840490 container died 928819b302383f0658cda04d0d8187004b63229b91806442e2f4c8ce65c0086b (image=quay.io/ceph/ceph:v18, name=strange_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 10 23:21:46 np0005480824 systemd[1]: var-lib-containers-storage-overlay-b3efc978381571460be0f9180117997b0cc0b10140595670c38229489bd0eae8-merged.mount: Deactivated successfully.
Oct 10 23:21:46 np0005480824 podman[99501]: 2025-10-11 03:21:46.390001502 +0000 UTC m=+0.833547942 container remove 928819b302383f0658cda04d0d8187004b63229b91806442e2f4c8ce65c0086b (image=quay.io/ceph/ceph:v18, name=strange_matsumoto, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:21:46 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Oct 10 23:21:46 np0005480824 systemd[1]: libpod-conmon-928819b302383f0658cda04d0d8187004b63229b91806442e2f4c8ce65c0086b.scope: Deactivated successfully.
Oct 10 23:21:46 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Oct 10 23:21:46 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 3.b scrub starts
Oct 10 23:21:46 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 3.b scrub ok
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]: {
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:    "0": [
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:        {
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:            "devices": [
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:                "/dev/loop3"
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:            ],
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:            "lv_name": "ceph_lv0",
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:            "lv_size": "21470642176",
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0d82ce-20ea-470d-959e-f67202028a60,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:            "lv_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:            "name": "ceph_lv0",
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:            "tags": {
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:                "ceph.block_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:                "ceph.cluster_name": "ceph",
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:                "ceph.crush_device_class": "",
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:                "ceph.encrypted": "0",
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:                "ceph.osd_fsid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:                "ceph.osd_id": "0",
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:                "ceph.type": "block",
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:                "ceph.vdo": "0"
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:            },
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:            "type": "block",
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:            "vg_name": "ceph_vg0"
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:        }
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:    ],
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:    "1": [
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:        {
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:            "devices": [
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:                "/dev/loop4"
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:            ],
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:            "lv_name": "ceph_lv1",
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:            "lv_size": "21470642176",
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6875119e-c210-4ad1-aca9-6a8084a5ecc8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:            "lv_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:            "name": "ceph_lv1",
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:            "tags": {
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:                "ceph.block_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:                "ceph.cluster_name": "ceph",
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:                "ceph.crush_device_class": "",
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:                "ceph.encrypted": "0",
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:                "ceph.osd_fsid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:                "ceph.osd_id": "1",
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:                "ceph.type": "block",
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:                "ceph.vdo": "0"
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:            },
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:            "type": "block",
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:            "vg_name": "ceph_vg1"
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:        }
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:    ],
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:    "2": [
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:        {
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:            "devices": [
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:                "/dev/loop5"
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:            ],
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:            "lv_name": "ceph_lv2",
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:            "lv_size": "21470642176",
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e86945e8-6909-4584-9098-cee0dfe9add4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:            "lv_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:            "name": "ceph_lv2",
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:            "tags": {
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:                "ceph.block_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:                "ceph.cluster_name": "ceph",
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:                "ceph.crush_device_class": "",
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:                "ceph.encrypted": "0",
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:                "ceph.osd_fsid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:                "ceph.osd_id": "2",
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:                "ceph.type": "block",
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:                "ceph.vdo": "0"
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:            },
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:            "type": "block",
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:            "vg_name": "ceph_vg2"
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:        }
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]:    ]
Oct 10 23:21:47 np0005480824 angry_wozniak[99633]: }
Oct 10 23:21:47 np0005480824 systemd[1]: libpod-0b8945e37bd2a16ce4bbbb5e219004c9ec5e657626b385e34b4d288c45fa6e90.scope: Deactivated successfully.
Oct 10 23:21:47 np0005480824 podman[99616]: 2025-10-11 03:21:47.11779526 +0000 UTC m=+0.982623047 container died 0b8945e37bd2a16ce4bbbb5e219004c9ec5e657626b385e34b4d288c45fa6e90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:21:47 np0005480824 systemd[1]: var-lib-containers-storage-overlay-08d760dd8b1bf04ea82a0b7e0b112df70f79413f576b8547938aa6655def8f6d-merged.mount: Deactivated successfully.
Oct 10 23:21:47 np0005480824 podman[99616]: 2025-10-11 03:21:47.177901324 +0000 UTC m=+1.042729071 container remove 0b8945e37bd2a16ce4bbbb5e219004c9ec5e657626b385e34b4d288c45fa6e90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:21:47 np0005480824 systemd[1]: libpod-conmon-0b8945e37bd2a16ce4bbbb5e219004c9ec5e657626b385e34b4d288c45fa6e90.scope: Deactivated successfully.
Oct 10 23:21:47 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 2.12 deep-scrub starts
Oct 10 23:21:47 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 2.12 deep-scrub ok
Oct 10 23:21:47 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Oct 10 23:21:47 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Oct 10 23:21:47 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v105: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:21:47 np0005480824 podman[99946]: 2025-10-11 03:21:47.934341603 +0000 UTC m=+0.068292668 container create ad1931a44d65cdb8ecced3b0eb83892caf7c38feb0ddff0a35b8e5245fec4869 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_ganguly, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:21:47 np0005480824 systemd[1]: Started libpod-conmon-ad1931a44d65cdb8ecced3b0eb83892caf7c38feb0ddff0a35b8e5245fec4869.scope.
Oct 10 23:21:47 np0005480824 podman[99946]: 2025-10-11 03:21:47.907251429 +0000 UTC m=+0.041202554 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:21:48 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:21:48 np0005480824 podman[99946]: 2025-10-11 03:21:48.023464366 +0000 UTC m=+0.157415411 container init ad1931a44d65cdb8ecced3b0eb83892caf7c38feb0ddff0a35b8e5245fec4869 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 10 23:21:48 np0005480824 podman[99946]: 2025-10-11 03:21:48.036155173 +0000 UTC m=+0.170106218 container start ad1931a44d65cdb8ecced3b0eb83892caf7c38feb0ddff0a35b8e5245fec4869 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_ganguly, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:21:48 np0005480824 ansible-async_wrapper.py[99970]: Invoked with j732264196732 30 /home/zuul/.ansible/tmp/ansible-tmp-1760152907.525395-33309-59200473178867/AnsiballZ_command.py _
Oct 10 23:21:48 np0005480824 podman[99946]: 2025-10-11 03:21:48.040668429 +0000 UTC m=+0.174619504 container attach ad1931a44d65cdb8ecced3b0eb83892caf7c38feb0ddff0a35b8e5245fec4869 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_ganguly, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:21:48 np0005480824 relaxed_ganguly[99976]: 167 167
Oct 10 23:21:48 np0005480824 systemd[1]: libpod-ad1931a44d65cdb8ecced3b0eb83892caf7c38feb0ddff0a35b8e5245fec4869.scope: Deactivated successfully.
Oct 10 23:21:48 np0005480824 podman[99946]: 2025-10-11 03:21:48.044235002 +0000 UTC m=+0.178186057 container died ad1931a44d65cdb8ecced3b0eb83892caf7c38feb0ddff0a35b8e5245fec4869 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_ganguly, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:21:48 np0005480824 ansible-async_wrapper.py[99985]: Starting module and watcher
Oct 10 23:21:48 np0005480824 ansible-async_wrapper.py[99985]: Start watching 99986 (30)
Oct 10 23:21:48 np0005480824 ansible-async_wrapper.py[99986]: Start module (99986)
Oct 10 23:21:48 np0005480824 ansible-async_wrapper.py[99970]: Return async_wrapper task started.
Oct 10 23:21:48 np0005480824 systemd[1]: var-lib-containers-storage-overlay-dcc0e69a5d16d9d45bc20d62451c6c869443a0ba79b5f3b3e11403707cf26f34-merged.mount: Deactivated successfully.
Oct 10 23:21:48 np0005480824 podman[99946]: 2025-10-11 03:21:48.085874506 +0000 UTC m=+0.219825581 container remove ad1931a44d65cdb8ecced3b0eb83892caf7c38feb0ddff0a35b8e5245fec4869 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_ganguly, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 10 23:21:48 np0005480824 systemd[1]: libpod-conmon-ad1931a44d65cdb8ecced3b0eb83892caf7c38feb0ddff0a35b8e5245fec4869.scope: Deactivated successfully.
Oct 10 23:21:48 np0005480824 python3[99988]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 92cfe4d4-4917-5be1-9d00-73758793a62b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:21:48 np0005480824 podman[100005]: 2025-10-11 03:21:48.25584717 +0000 UTC m=+0.049047178 container create a6b79a82111842937707e41f2f57743f50649286a7dbd61468e944f0ce116273 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 10 23:21:48 np0005480824 podman[100007]: 2025-10-11 03:21:48.27294066 +0000 UTC m=+0.053762029 container create 1302a944930eb3cf76edce78714e5e9e4048f61d0fc9616d0b93c03061b45048 (image=quay.io/ceph/ceph:v18, name=relaxed_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:21:48 np0005480824 systemd[1]: Started libpod-conmon-a6b79a82111842937707e41f2f57743f50649286a7dbd61468e944f0ce116273.scope.
Oct 10 23:21:48 np0005480824 systemd[1]: Started libpod-conmon-1302a944930eb3cf76edce78714e5e9e4048f61d0fc9616d0b93c03061b45048.scope.
Oct 10 23:21:48 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:21:48 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:21:48 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b4d87ccb57350b17b38b30419df8463e785200c0f4b9f41c4dcfed004a83a39/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:48 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b4d87ccb57350b17b38b30419df8463e785200c0f4b9f41c4dcfed004a83a39/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:48 np0005480824 podman[100005]: 2025-10-11 03:21:48.237187754 +0000 UTC m=+0.030387822 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:21:48 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/820deaacd2ba0f6963fc1aa6d9fc3a64ac201832e280c644e267ae68f3269418/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:48 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/820deaacd2ba0f6963fc1aa6d9fc3a64ac201832e280c644e267ae68f3269418/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:48 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/820deaacd2ba0f6963fc1aa6d9fc3a64ac201832e280c644e267ae68f3269418/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:48 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/820deaacd2ba0f6963fc1aa6d9fc3a64ac201832e280c644e267ae68f3269418/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:48 np0005480824 podman[100007]: 2025-10-11 03:21:48.244409252 +0000 UTC m=+0.025230641 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:21:48 np0005480824 podman[100005]: 2025-10-11 03:21:48.346286335 +0000 UTC m=+0.139486333 container init a6b79a82111842937707e41f2f57743f50649286a7dbd61468e944f0ce116273 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_chandrasekhar, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:21:48 np0005480824 podman[100007]: 2025-10-11 03:21:48.351289082 +0000 UTC m=+0.132110481 container init 1302a944930eb3cf76edce78714e5e9e4048f61d0fc9616d0b93c03061b45048 (image=quay.io/ceph/ceph:v18, name=relaxed_rosalind, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:21:48 np0005480824 podman[100005]: 2025-10-11 03:21:48.353999515 +0000 UTC m=+0.147199483 container start a6b79a82111842937707e41f2f57743f50649286a7dbd61468e944f0ce116273 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 10 23:21:48 np0005480824 podman[100005]: 2025-10-11 03:21:48.357063087 +0000 UTC m=+0.150263095 container attach a6b79a82111842937707e41f2f57743f50649286a7dbd61468e944f0ce116273 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_chandrasekhar, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 10 23:21:48 np0005480824 podman[100007]: 2025-10-11 03:21:48.359898143 +0000 UTC m=+0.140719512 container start 1302a944930eb3cf76edce78714e5e9e4048f61d0fc9616d0b93c03061b45048 (image=quay.io/ceph/ceph:v18, name=relaxed_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 10 23:21:48 np0005480824 podman[100007]: 2025-10-11 03:21:48.364821128 +0000 UTC m=+0.145642527 container attach 1302a944930eb3cf76edce78714e5e9e4048f61d0fc9616d0b93c03061b45048 (image=quay.io/ceph/ceph:v18, name=relaxed_rosalind, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:21:48 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 3.d scrub starts
Oct 10 23:21:48 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 3.d scrub ok
Oct 10 23:21:48 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:21:48 np0005480824 ceph-mgr[74617]: log_channel(audit) log [DBG] : from='client.14258 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 10 23:21:48 np0005480824 relaxed_rosalind[100038]: 
Oct 10 23:21:48 np0005480824 relaxed_rosalind[100038]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct 10 23:21:48 np0005480824 systemd[1]: libpod-1302a944930eb3cf76edce78714e5e9e4048f61d0fc9616d0b93c03061b45048.scope: Deactivated successfully.
Oct 10 23:21:48 np0005480824 podman[100007]: 2025-10-11 03:21:48.885280088 +0000 UTC m=+0.666101467 container died 1302a944930eb3cf76edce78714e5e9e4048f61d0fc9616d0b93c03061b45048 (image=quay.io/ceph/ceph:v18, name=relaxed_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:21:48 np0005480824 systemd[1]: var-lib-containers-storage-overlay-5b4d87ccb57350b17b38b30419df8463e785200c0f4b9f41c4dcfed004a83a39-merged.mount: Deactivated successfully.
Oct 10 23:21:48 np0005480824 podman[100007]: 2025-10-11 03:21:48.936111686 +0000 UTC m=+0.716933055 container remove 1302a944930eb3cf76edce78714e5e9e4048f61d0fc9616d0b93c03061b45048 (image=quay.io/ceph/ceph:v18, name=relaxed_rosalind, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:21:48 np0005480824 systemd[1]: libpod-conmon-1302a944930eb3cf76edce78714e5e9e4048f61d0fc9616d0b93c03061b45048.scope: Deactivated successfully.
Oct 10 23:21:48 np0005480824 ansible-async_wrapper.py[99986]: Module complete (99986)
Oct 10 23:21:49 np0005480824 inspiring_chandrasekhar[100036]: {
Oct 10 23:21:49 np0005480824 inspiring_chandrasekhar[100036]:    "1d0d82ce-20ea-470d-959e-f67202028a60": {
Oct 10 23:21:49 np0005480824 inspiring_chandrasekhar[100036]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:21:49 np0005480824 inspiring_chandrasekhar[100036]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 10 23:21:49 np0005480824 inspiring_chandrasekhar[100036]:        "osd_id": 0,
Oct 10 23:21:49 np0005480824 inspiring_chandrasekhar[100036]:        "osd_uuid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:21:49 np0005480824 inspiring_chandrasekhar[100036]:        "type": "bluestore"
Oct 10 23:21:49 np0005480824 inspiring_chandrasekhar[100036]:    },
Oct 10 23:21:49 np0005480824 inspiring_chandrasekhar[100036]:    "6875119e-c210-4ad1-aca9-6a8084a5ecc8": {
Oct 10 23:21:49 np0005480824 inspiring_chandrasekhar[100036]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:21:49 np0005480824 inspiring_chandrasekhar[100036]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 10 23:21:49 np0005480824 inspiring_chandrasekhar[100036]:        "osd_id": 1,
Oct 10 23:21:49 np0005480824 inspiring_chandrasekhar[100036]:        "osd_uuid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:21:49 np0005480824 inspiring_chandrasekhar[100036]:        "type": "bluestore"
Oct 10 23:21:49 np0005480824 inspiring_chandrasekhar[100036]:    },
Oct 10 23:21:49 np0005480824 inspiring_chandrasekhar[100036]:    "e86945e8-6909-4584-9098-cee0dfe9add4": {
Oct 10 23:21:49 np0005480824 inspiring_chandrasekhar[100036]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:21:49 np0005480824 inspiring_chandrasekhar[100036]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 10 23:21:49 np0005480824 inspiring_chandrasekhar[100036]:        "osd_id": 2,
Oct 10 23:21:49 np0005480824 inspiring_chandrasekhar[100036]:        "osd_uuid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:21:49 np0005480824 inspiring_chandrasekhar[100036]:        "type": "bluestore"
Oct 10 23:21:49 np0005480824 inspiring_chandrasekhar[100036]:    }
Oct 10 23:21:49 np0005480824 inspiring_chandrasekhar[100036]: }
Oct 10 23:21:49 np0005480824 systemd[1]: libpod-a6b79a82111842937707e41f2f57743f50649286a7dbd61468e944f0ce116273.scope: Deactivated successfully.
Oct 10 23:21:49 np0005480824 podman[100005]: 2025-10-11 03:21:49.263307127 +0000 UTC m=+1.056507095 container died a6b79a82111842937707e41f2f57743f50649286a7dbd61468e944f0ce116273 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_chandrasekhar, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 10 23:21:49 np0005480824 systemd[1]: var-lib-containers-storage-overlay-820deaacd2ba0f6963fc1aa6d9fc3a64ac201832e280c644e267ae68f3269418-merged.mount: Deactivated successfully.
Oct 10 23:21:49 np0005480824 podman[100005]: 2025-10-11 03:21:49.313142613 +0000 UTC m=+1.106342581 container remove a6b79a82111842937707e41f2f57743f50649286a7dbd61468e944f0ce116273 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_chandrasekhar, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:21:49 np0005480824 systemd[1]: libpod-conmon-a6b79a82111842937707e41f2f57743f50649286a7dbd61468e944f0ce116273.scope: Deactivated successfully.
Oct 10 23:21:49 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 10 23:21:49 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:21:49 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 10 23:21:49 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:21:49 np0005480824 ceph-mgr[74617]: [progress INFO root] update: starting ev 1c04c891-2600-45e2-b2c6-05cf194f35cc (Updating rgw.rgw deployment (+1 -> 1))
Oct 10 23:21:49 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.bqunnq", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Oct 10 23:21:49 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.bqunnq", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 10 23:21:49 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.bqunnq", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 10 23:21:49 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Oct 10 23:21:49 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:21:49 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:21:49 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:21:49 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Oct 10 23:21:49 np0005480824 ceph-mgr[74617]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.bqunnq on compute-0
Oct 10 23:21:49 np0005480824 ceph-mgr[74617]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.bqunnq on compute-0
Oct 10 23:21:49 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Oct 10 23:21:49 np0005480824 python3[100157]: ansible-ansible.legacy.async_status Invoked with jid=j732264196732.99970 mode=status _async_dir=/root/.ansible_async
Oct 10 23:21:49 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Oct 10 23:21:49 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Oct 10 23:21:49 np0005480824 python3[100291]: ansible-ansible.legacy.async_status Invoked with jid=j732264196732.99970 mode=cleanup _async_dir=/root/.ansible_async
Oct 10 23:21:49 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v106: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:21:49 np0005480824 podman[100361]: 2025-10-11 03:21:49.915236131 +0000 UTC m=+0.040110309 container create f23ce18c78475d0ab47e0322437780120da3907c7ae0da1e6d659a578840c141 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_boyd, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 10 23:21:49 np0005480824 systemd[1]: Started libpod-conmon-f23ce18c78475d0ab47e0322437780120da3907c7ae0da1e6d659a578840c141.scope.
Oct 10 23:21:49 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:21:49 np0005480824 podman[100361]: 2025-10-11 03:21:49.984349987 +0000 UTC m=+0.109224145 container init f23ce18c78475d0ab47e0322437780120da3907c7ae0da1e6d659a578840c141 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_boyd, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:21:49 np0005480824 podman[100361]: 2025-10-11 03:21:49.99051304 +0000 UTC m=+0.115387188 container start f23ce18c78475d0ab47e0322437780120da3907c7ae0da1e6d659a578840c141 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Oct 10 23:21:49 np0005480824 podman[100361]: 2025-10-11 03:21:49.89553777 +0000 UTC m=+0.020411948 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:21:49 np0005480824 objective_boyd[100377]: 167 167
Oct 10 23:21:49 np0005480824 systemd[1]: libpod-f23ce18c78475d0ab47e0322437780120da3907c7ae0da1e6d659a578840c141.scope: Deactivated successfully.
Oct 10 23:21:49 np0005480824 podman[100361]: 2025-10-11 03:21:49.994695548 +0000 UTC m=+0.119569716 container attach f23ce18c78475d0ab47e0322437780120da3907c7ae0da1e6d659a578840c141 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_boyd, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:21:49 np0005480824 podman[100361]: 2025-10-11 03:21:49.995814005 +0000 UTC m=+0.120688183 container died f23ce18c78475d0ab47e0322437780120da3907c7ae0da1e6d659a578840c141 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_boyd, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:21:50 np0005480824 systemd[1]: var-lib-containers-storage-overlay-5ec1657d436676cdc086e10539d232b47513b7bda12084ed910c4c27f757cc8d-merged.mount: Deactivated successfully.
Oct 10 23:21:50 np0005480824 podman[100361]: 2025-10-11 03:21:50.033272191 +0000 UTC m=+0.158146359 container remove f23ce18c78475d0ab47e0322437780120da3907c7ae0da1e6d659a578840c141 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 10 23:21:50 np0005480824 systemd[1]: libpod-conmon-f23ce18c78475d0ab47e0322437780120da3907c7ae0da1e6d659a578840c141.scope: Deactivated successfully.
Oct 10 23:21:50 np0005480824 systemd[1]: Reloading.
Oct 10 23:21:50 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:21:50 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:21:50 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:21:50 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:21:50 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.bqunnq", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 10 23:21:50 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.bqunnq", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 10 23:21:50 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:21:50 np0005480824 ceph-mon[74326]: Deploying daemon rgw.rgw.compute-0.bqunnq on compute-0
Oct 10 23:21:50 np0005480824 systemd[1]: Reloading.
Oct 10 23:21:50 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:21:50 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:21:50 np0005480824 python3[100460]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 92cfe4d4-4917-5be1-9d00-73758793a62b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:21:50 np0005480824 podman[100499]: 2025-10-11 03:21:50.567387389 +0000 UTC m=+0.047394329 container create afaae4fabcc2c7f364b32119ce395762dc06a7b03d5d1d22789f0dd2289d87ee (image=quay.io/ceph/ceph:v18, name=infallible_shtern, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:21:50 np0005480824 podman[100499]: 2025-10-11 03:21:50.547999266 +0000 UTC m=+0.028006256 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:21:50 np0005480824 systemd[1]: Started libpod-conmon-afaae4fabcc2c7f364b32119ce395762dc06a7b03d5d1d22789f0dd2289d87ee.scope.
Oct 10 23:21:50 np0005480824 systemd[1]: Starting Ceph rgw.rgw.compute-0.bqunnq for 92cfe4d4-4917-5be1-9d00-73758793a62b...
Oct 10 23:21:50 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:21:50 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d57a33ec41205868d8e4361147400c6c2afe26e35717fa304c0b47e692ad2bd1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:50 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d57a33ec41205868d8e4361147400c6c2afe26e35717fa304c0b47e692ad2bd1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:50 np0005480824 podman[100499]: 2025-10-11 03:21:50.69443172 +0000 UTC m=+0.174438740 container init afaae4fabcc2c7f364b32119ce395762dc06a7b03d5d1d22789f0dd2289d87ee (image=quay.io/ceph/ceph:v18, name=infallible_shtern, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 10 23:21:50 np0005480824 podman[100499]: 2025-10-11 03:21:50.701524536 +0000 UTC m=+0.181531466 container start afaae4fabcc2c7f364b32119ce395762dc06a7b03d5d1d22789f0dd2289d87ee (image=quay.io/ceph/ceph:v18, name=infallible_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:21:50 np0005480824 podman[100499]: 2025-10-11 03:21:50.704568807 +0000 UTC m=+0.184575797 container attach afaae4fabcc2c7f364b32119ce395762dc06a7b03d5d1d22789f0dd2289d87ee (image=quay.io/ceph/ceph:v18, name=infallible_shtern, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 10 23:21:50 np0005480824 podman[100569]: 2025-10-11 03:21:50.919531834 +0000 UTC m=+0.045578328 container create 25ff1f079ca6d5d68281bf95dc2311e742798de4e5895710535ed7a1e9682f60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-rgw-rgw-compute-0-bqunnq, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:21:50 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eaba459bb001adaae8069b1b44a0c2a1c06efc482179810a627870a4dc78e63e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:50 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eaba459bb001adaae8069b1b44a0c2a1c06efc482179810a627870a4dc78e63e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:50 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eaba459bb001adaae8069b1b44a0c2a1c06efc482179810a627870a4dc78e63e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:50 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eaba459bb001adaae8069b1b44a0c2a1c06efc482179810a627870a4dc78e63e/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.bqunnq supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:50 np0005480824 podman[100569]: 2025-10-11 03:21:50.97327334 +0000 UTC m=+0.099319854 container init 25ff1f079ca6d5d68281bf95dc2311e742798de4e5895710535ed7a1e9682f60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-rgw-rgw-compute-0-bqunnq, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:21:50 np0005480824 podman[100569]: 2025-10-11 03:21:50.98268895 +0000 UTC m=+0.108735444 container start 25ff1f079ca6d5d68281bf95dc2311e742798de4e5895710535ed7a1e9682f60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-rgw-rgw-compute-0-bqunnq, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True)
Oct 10 23:21:50 np0005480824 bash[100569]: 25ff1f079ca6d5d68281bf95dc2311e742798de4e5895710535ed7a1e9682f60
Oct 10 23:21:50 np0005480824 podman[100569]: 2025-10-11 03:21:50.903212742 +0000 UTC m=+0.029259256 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:21:50 np0005480824 systemd[1]: Started Ceph rgw.rgw.compute-0.bqunnq for 92cfe4d4-4917-5be1-9d00-73758793a62b.
Oct 10 23:21:51 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 10 23:21:51 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:21:51 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 10 23:21:51 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:21:51 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Oct 10 23:21:51 np0005480824 radosgw[100589]: deferred set uid:gid to 167:167 (ceph:ceph)
Oct 10 23:21:51 np0005480824 radosgw[100589]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process radosgw, pid 2
Oct 10 23:21:51 np0005480824 radosgw[100589]: framework: beast
Oct 10 23:21:51 np0005480824 radosgw[100589]: framework conf key: endpoint, val: 192.168.122.100:8082
Oct 10 23:21:51 np0005480824 radosgw[100589]: init_numa not setting numa affinity
Oct 10 23:21:51 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:21:51 np0005480824 ceph-mgr[74617]: [progress INFO root] complete: finished ev 1c04c891-2600-45e2-b2c6-05cf194f35cc (Updating rgw.rgw deployment (+1 -> 1))
Oct 10 23:21:51 np0005480824 ceph-mgr[74617]: [progress INFO root] Completed event 1c04c891-2600-45e2-b2c6-05cf194f35cc (Updating rgw.rgw deployment (+1 -> 1)) in 2 seconds
Oct 10 23:21:51 np0005480824 ceph-mgr[74617]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0
Oct 10 23:21:51 np0005480824 ceph-mgr[74617]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Oct 10 23:21:51 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Oct 10 23:21:51 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:21:51 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Oct 10 23:21:51 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:21:51 np0005480824 ceph-mgr[74617]: [progress INFO root] update: starting ev 1ef6f802-5a8d-466b-8df5-37ae778888f0 (Updating mds.cephfs deployment (+1 -> 1))
Oct 10 23:21:51 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.uxaxgb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Oct 10 23:21:51 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.uxaxgb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct 10 23:21:51 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.uxaxgb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct 10 23:21:51 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:21:51 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:21:51 np0005480824 ceph-mgr[74617]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.uxaxgb on compute-0
Oct 10 23:21:51 np0005480824 ceph-mgr[74617]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.uxaxgb on compute-0
Oct 10 23:21:51 np0005480824 ceph-mgr[74617]: log_channel(audit) log [DBG] : from='client.14260 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 10 23:21:51 np0005480824 infallible_shtern[100517]: 
Oct 10 23:21:51 np0005480824 infallible_shtern[100517]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct 10 23:21:51 np0005480824 systemd[1]: libpod-afaae4fabcc2c7f364b32119ce395762dc06a7b03d5d1d22789f0dd2289d87ee.scope: Deactivated successfully.
Oct 10 23:21:51 np0005480824 podman[100499]: 2025-10-11 03:21:51.249220857 +0000 UTC m=+0.729227797 container died afaae4fabcc2c7f364b32119ce395762dc06a7b03d5d1d22789f0dd2289d87ee (image=quay.io/ceph/ceph:v18, name=infallible_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:21:51 np0005480824 systemd[1]: var-lib-containers-storage-overlay-d57a33ec41205868d8e4361147400c6c2afe26e35717fa304c0b47e692ad2bd1-merged.mount: Deactivated successfully.
Oct 10 23:21:51 np0005480824 podman[100499]: 2025-10-11 03:21:51.292268572 +0000 UTC m=+0.772275542 container remove afaae4fabcc2c7f364b32119ce395762dc06a7b03d5d1d22789f0dd2289d87ee (image=quay.io/ceph/ceph:v18, name=infallible_shtern, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 10 23:21:51 np0005480824 systemd[1]: libpod-conmon-afaae4fabcc2c7f364b32119ce395762dc06a7b03d5d1d22789f0dd2289d87ee.scope: Deactivated successfully.
Oct 10 23:21:51 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:21:51 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:21:51 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:21:51 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:21:51 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:21:51 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.uxaxgb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct 10 23:21:51 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.uxaxgb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct 10 23:21:51 np0005480824 podman[100825]: 2025-10-11 03:21:51.793708428 +0000 UTC m=+0.053458749 container create 8b3a68f5fbb4f951c0412b1c0c9ddcbf063c0353030f1efbedb3d003d440d207 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_cohen, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:21:51 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v107: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:21:51 np0005480824 systemd[1]: Started libpod-conmon-8b3a68f5fbb4f951c0412b1c0c9ddcbf063c0353030f1efbedb3d003d440d207.scope.
Oct 10 23:21:51 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:21:51 np0005480824 podman[100825]: 2025-10-11 03:21:51.769038569 +0000 UTC m=+0.028788950 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:21:51 np0005480824 podman[100825]: 2025-10-11 03:21:51.877319968 +0000 UTC m=+0.137070269 container init 8b3a68f5fbb4f951c0412b1c0c9ddcbf063c0353030f1efbedb3d003d440d207 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_cohen, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:21:51 np0005480824 podman[100825]: 2025-10-11 03:21:51.883545069 +0000 UTC m=+0.143295360 container start 8b3a68f5fbb4f951c0412b1c0c9ddcbf063c0353030f1efbedb3d003d440d207 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 10 23:21:51 np0005480824 podman[100825]: 2025-10-11 03:21:51.887130176 +0000 UTC m=+0.146880557 container attach 8b3a68f5fbb4f951c0412b1c0c9ddcbf063c0353030f1efbedb3d003d440d207 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:21:51 np0005480824 eager_cohen[100841]: 167 167
Oct 10 23:21:51 np0005480824 systemd[1]: libpod-8b3a68f5fbb4f951c0412b1c0c9ddcbf063c0353030f1efbedb3d003d440d207.scope: Deactivated successfully.
Oct 10 23:21:51 np0005480824 podman[100825]: 2025-10-11 03:21:51.891517593 +0000 UTC m=+0.151267884 container died 8b3a68f5fbb4f951c0412b1c0c9ddcbf063c0353030f1efbedb3d003d440d207 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_cohen, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:21:51 np0005480824 systemd[1]: var-lib-containers-storage-overlay-29ae649c10e879be3df5d1786e80c93a2be0271c72247aee93915c8a0021f2c4-merged.mount: Deactivated successfully.
Oct 10 23:21:51 np0005480824 podman[100825]: 2025-10-11 03:21:51.923250173 +0000 UTC m=+0.183000484 container remove 8b3a68f5fbb4f951c0412b1c0c9ddcbf063c0353030f1efbedb3d003d440d207 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_cohen, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 10 23:21:51 np0005480824 systemd[1]: libpod-conmon-8b3a68f5fbb4f951c0412b1c0c9ddcbf063c0353030f1efbedb3d003d440d207.scope: Deactivated successfully.
Oct 10 23:21:51 np0005480824 systemd[1]: Reloading.
Oct 10 23:21:52 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:21:52 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:21:52 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Oct 10 23:21:52 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Oct 10 23:21:52 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Oct 10 23:21:52 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0) v1
Oct 10 23:21:52 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3299867884' entity='client.rgw.rgw.compute-0.bqunnq' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct 10 23:21:52 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 45 pg[8.0( empty local-lis/les=0/0 n=0 ec=45/45 lis/c=0/0 les/c/f=0/0/0 sis=45) [1] r=0 lpr=45 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:52 np0005480824 systemd[1]: Reloading.
Oct 10 23:21:52 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:21:52 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:21:52 np0005480824 python3[100922]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 92cfe4d4-4917-5be1-9d00-73758793a62b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:21:52 np0005480824 ceph-mon[74326]: Saving service rgw.rgw spec with placement compute-0
Oct 10 23:21:52 np0005480824 ceph-mon[74326]: Deploying daemon mds.cephfs.compute-0.uxaxgb on compute-0
Oct 10 23:21:52 np0005480824 ceph-mon[74326]: from='client.? 192.168.122.100:0/3299867884' entity='client.rgw.rgw.compute-0.bqunnq' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct 10 23:21:52 np0005480824 podman[100960]: 2025-10-11 03:21:52.436111916 +0000 UTC m=+0.036786204 container create 9b9db525b6dcb8a081da935c7c4d5eb1b7c4f0217b2cd0ba7bca03c76ac40951 (image=quay.io/ceph/ceph:v18, name=relaxed_proskuriakova, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:21:52 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Oct 10 23:21:52 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Oct 10 23:21:52 np0005480824 podman[100960]: 2025-10-11 03:21:52.419765578 +0000 UTC m=+0.020439896 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:21:52 np0005480824 systemd[1]: Started libpod-conmon-9b9db525b6dcb8a081da935c7c4d5eb1b7c4f0217b2cd0ba7bca03c76ac40951.scope.
Oct 10 23:21:52 np0005480824 systemd[1]: Starting Ceph mds.cephfs.compute-0.uxaxgb for 92cfe4d4-4917-5be1-9d00-73758793a62b...
Oct 10 23:21:52 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:21:52 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2edd9d1acb24102f212d26d7e0444bb68d6af84c9748278f4680d7553cff895/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:52 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2edd9d1acb24102f212d26d7e0444bb68d6af84c9748278f4680d7553cff895/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:52 np0005480824 podman[100960]: 2025-10-11 03:21:52.609102956 +0000 UTC m=+0.209777304 container init 9b9db525b6dcb8a081da935c7c4d5eb1b7c4f0217b2cd0ba7bca03c76ac40951 (image=quay.io/ceph/ceph:v18, name=relaxed_proskuriakova, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:21:52 np0005480824 podman[100960]: 2025-10-11 03:21:52.618673378 +0000 UTC m=+0.219347706 container start 9b9db525b6dcb8a081da935c7c4d5eb1b7c4f0217b2cd0ba7bca03c76ac40951 (image=quay.io/ceph/ceph:v18, name=relaxed_proskuriakova, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 10 23:21:52 np0005480824 podman[100960]: 2025-10-11 03:21:52.623405013 +0000 UTC m=+0.224079331 container attach 9b9db525b6dcb8a081da935c7c4d5eb1b7c4f0217b2cd0ba7bca03c76ac40951 (image=quay.io/ceph/ceph:v18, name=relaxed_proskuriakova, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:21:52 np0005480824 podman[101028]: 2025-10-11 03:21:52.824764423 +0000 UTC m=+0.051895251 container create 53426c3eb317616c2b70d50730a6965f375d00d0ca023da2460bbd9c696b45a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mds-cephfs-compute-0-uxaxgb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Oct 10 23:21:52 np0005480824 podman[101028]: 2025-10-11 03:21:52.798428713 +0000 UTC m=+0.025559521 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:21:52 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/839842863fe9685fced44157e6b32dd25244cef8d08ff70db7eadf72355d8e35/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:52 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/839842863fe9685fced44157e6b32dd25244cef8d08ff70db7eadf72355d8e35/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:52 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/839842863fe9685fced44157e6b32dd25244cef8d08ff70db7eadf72355d8e35/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:52 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/839842863fe9685fced44157e6b32dd25244cef8d08ff70db7eadf72355d8e35/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.uxaxgb supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:52 np0005480824 ceph-mgr[74617]: [progress INFO root] Writing back 11 completed events
Oct 10 23:21:52 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 10 23:21:52 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:21:52 np0005480824 ceph-mgr[74617]: [progress WARNING root] Starting Global Recovery Event,1 pgs not in active + clean state
Oct 10 23:21:52 np0005480824 podman[101028]: 2025-10-11 03:21:52.931411682 +0000 UTC m=+0.158542490 container init 53426c3eb317616c2b70d50730a6965f375d00d0ca023da2460bbd9c696b45a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mds-cephfs-compute-0-uxaxgb, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:21:52 np0005480824 podman[101028]: 2025-10-11 03:21:52.936849164 +0000 UTC m=+0.163979952 container start 53426c3eb317616c2b70d50730a6965f375d00d0ca023da2460bbd9c696b45a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mds-cephfs-compute-0-uxaxgb, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:21:52 np0005480824 bash[101028]: 53426c3eb317616c2b70d50730a6965f375d00d0ca023da2460bbd9c696b45a6
Oct 10 23:21:52 np0005480824 systemd[1]: Started Ceph mds.cephfs.compute-0.uxaxgb for 92cfe4d4-4917-5be1-9d00-73758793a62b.
Oct 10 23:21:53 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 10 23:21:53 np0005480824 ceph-mds[101067]: set uid:gid to 167:167 (ceph:ceph)
Oct 10 23:21:53 np0005480824 ceph-mds[101067]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Oct 10 23:21:53 np0005480824 ceph-mds[101067]: main not setting numa affinity
Oct 10 23:21:53 np0005480824 ceph-mds[101067]: pidfile_write: ignore empty --pid-file
Oct 10 23:21:53 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mds-cephfs-compute-0-uxaxgb[101044]: starting mds.cephfs.compute-0.uxaxgb at 
Oct 10 23:21:53 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:21:53 np0005480824 ceph-mds[101067]: mds.cephfs.compute-0.uxaxgb Updating MDS map to version 2 from mon.0
Oct 10 23:21:53 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 10 23:21:53 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:21:53 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Oct 10 23:21:53 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:21:53 np0005480824 ceph-mgr[74617]: [progress INFO root] complete: finished ev 1ef6f802-5a8d-466b-8df5-37ae778888f0 (Updating mds.cephfs deployment (+1 -> 1))
Oct 10 23:21:53 np0005480824 ceph-mgr[74617]: [progress INFO root] Completed event 1ef6f802-5a8d-466b-8df5-37ae778888f0 (Updating mds.cephfs deployment (+1 -> 1)) in 2 seconds
Oct 10 23:21:53 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0) v1
Oct 10 23:21:53 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:21:53 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Oct 10 23:21:53 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:21:53 np0005480824 ansible-async_wrapper.py[99985]: Done in kid B.
Oct 10 23:21:53 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Oct 10 23:21:53 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3299867884' entity='client.rgw.rgw.compute-0.bqunnq' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Oct 10 23:21:53 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Oct 10 23:21:53 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Oct 10 23:21:53 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 46 pg[8.0( empty local-lis/les=45/46 n=0 ec=45/45 lis/c=0/0 les/c/f=0/0/0 sis=45) [1] r=0 lpr=45 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:53 np0005480824 ceph-mgr[74617]: log_channel(audit) log [DBG] : from='client.14265 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 10 23:21:53 np0005480824 relaxed_proskuriakova[100977]: 
Oct 10 23:21:53 np0005480824 relaxed_proskuriakova[100977]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
Oct 10 23:21:53 np0005480824 systemd[1]: libpod-9b9db525b6dcb8a081da935c7c4d5eb1b7c4f0217b2cd0ba7bca03c76ac40951.scope: Deactivated successfully.
Oct 10 23:21:53 np0005480824 podman[100960]: 2025-10-11 03:21:53.179983797 +0000 UTC m=+0.780658085 container died 9b9db525b6dcb8a081da935c7c4d5eb1b7c4f0217b2cd0ba7bca03c76ac40951 (image=quay.io/ceph/ceph:v18, name=relaxed_proskuriakova, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 10 23:21:53 np0005480824 systemd[1]: var-lib-containers-storage-overlay-c2edd9d1acb24102f212d26d7e0444bb68d6af84c9748278f4680d7553cff895-merged.mount: Deactivated successfully.
Oct 10 23:21:53 np0005480824 podman[100960]: 2025-10-11 03:21:53.230077013 +0000 UTC m=+0.830751311 container remove 9b9db525b6dcb8a081da935c7c4d5eb1b7c4f0217b2cd0ba7bca03c76ac40951 (image=quay.io/ceph/ceph:v18, name=relaxed_proskuriakova, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 10 23:21:53 np0005480824 systemd[1]: libpod-conmon-9b9db525b6dcb8a081da935c7c4d5eb1b7c4f0217b2cd0ba7bca03c76ac40951.scope: Deactivated successfully.
Oct 10 23:21:53 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:21:53 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:21:53 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:21:53 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:21:53 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:21:53 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:21:53 np0005480824 ceph-mon[74326]: from='client.? 192.168.122.100:0/3299867884' entity='client.rgw.rgw.compute-0.bqunnq' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Oct 10 23:21:53 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).mds e2 assigned standby [v2:192.168.122.100:6814/19541639,v1:192.168.122.100:6815/19541639] as mds.0
Oct 10 23:21:53 np0005480824 ceph-mon[74326]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.uxaxgb assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Oct 10 23:21:53 np0005480824 ceph-mon[74326]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Oct 10 23:21:53 np0005480824 ceph-mon[74326]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Oct 10 23:21:53 np0005480824 ceph-mon[74326]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct 10 23:21:53 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:21:53 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).mds e3 new map
Oct 10 23:21:53 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).mds e3 print_map#012e3#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0113#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-11T03:21:38.069496+0000#012modified#0112025-10-11T03:21:53.590653+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=14267}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-0.uxaxgb{0:14267} state up:creating seq 1 addr [v2:192.168.122.100:6814/19541639,v1:192.168.122.100:6815/19541639] compat {c=[1],r=[1],i=[7ff]}]#012 #012 
Oct 10 23:21:53 np0005480824 ceph-mds[101067]: mds.cephfs.compute-0.uxaxgb Updating MDS map to version 3 from mon.0
Oct 10 23:21:53 np0005480824 ceph-mds[101067]: mds.0.3 handle_mds_map i am now mds.0.3
Oct 10 23:21:53 np0005480824 ceph-mds[101067]: mds.0.3 handle_mds_map state change up:standby --> up:creating
Oct 10 23:21:53 np0005480824 ceph-mds[101067]: mds.0.cache creating system inode with ino:0x1
Oct 10 23:21:53 np0005480824 ceph-mds[101067]: mds.0.cache creating system inode with ino:0x100
Oct 10 23:21:53 np0005480824 ceph-mds[101067]: mds.0.cache creating system inode with ino:0x600
Oct 10 23:21:53 np0005480824 ceph-mds[101067]: mds.0.cache creating system inode with ino:0x601
Oct 10 23:21:53 np0005480824 ceph-mds[101067]: mds.0.cache creating system inode with ino:0x602
Oct 10 23:21:53 np0005480824 ceph-mds[101067]: mds.0.cache creating system inode with ino:0x603
Oct 10 23:21:53 np0005480824 ceph-mds[101067]: mds.0.cache creating system inode with ino:0x604
Oct 10 23:21:53 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/19541639,v1:192.168.122.100:6815/19541639] up:boot
Oct 10 23:21:53 np0005480824 ceph-mds[101067]: mds.0.cache creating system inode with ino:0x605
Oct 10 23:21:53 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.uxaxgb=up:creating}
Oct 10 23:21:53 np0005480824 ceph-mds[101067]: mds.0.cache creating system inode with ino:0x606
Oct 10 23:21:53 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.uxaxgb"} v 0) v1
Oct 10 23:21:53 np0005480824 ceph-mds[101067]: mds.0.cache creating system inode with ino:0x607
Oct 10 23:21:53 np0005480824 ceph-mds[101067]: mds.0.cache creating system inode with ino:0x608
Oct 10 23:21:53 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.uxaxgb"}]: dispatch
Oct 10 23:21:53 np0005480824 ceph-mds[101067]: mds.0.cache creating system inode with ino:0x609
Oct 10 23:21:53 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).mds e3 all = 0
Oct 10 23:21:53 np0005480824 ceph-mds[101067]: mds.0.3 creating_done
Oct 10 23:21:53 np0005480824 ceph-mon[74326]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.uxaxgb is now active in filesystem cephfs as rank 0
Oct 10 23:21:53 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v110: 194 pgs: 1 creating+peering, 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:21:54 np0005480824 podman[101351]: 2025-10-11 03:21:54.068787258 +0000 UTC m=+0.096547826 container exec a848fe58749db588a5a4b8471e0c9916b9e4a1ccc899f04343e6491a43c45c05 (image=quay.io/ceph/ceph:v18, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 10 23:21:54 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Oct 10 23:21:54 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Oct 10 23:21:54 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Oct 10 23:21:54 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 47 pg[9.0( empty local-lis/les=0/0 n=0 ec=47/47 lis/c=0/0 les/c/f=0/0/0 sis=47) [1] r=0 lpr=47 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:54 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Oct 10 23:21:54 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3299867884' entity='client.rgw.rgw.compute-0.bqunnq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 10 23:21:54 np0005480824 python3[101357]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 92cfe4d4-4917-5be1-9d00-73758793a62b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:21:54 np0005480824 podman[101351]: 2025-10-11 03:21:54.164639245 +0000 UTC m=+0.192399753 container exec_died a848fe58749db588a5a4b8471e0c9916b9e4a1ccc899f04343e6491a43c45c05 (image=quay.io/ceph/ceph:v18, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:21:54 np0005480824 podman[101373]: 2025-10-11 03:21:54.21139687 +0000 UTC m=+0.053625893 container create 455bd4f928c4060219b6de7334330fb26f17e9b38171288b14935ab0009d8961 (image=quay.io/ceph/ceph:v18, name=brave_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 10 23:21:54 np0005480824 systemd[1]: Started libpod-conmon-455bd4f928c4060219b6de7334330fb26f17e9b38171288b14935ab0009d8961.scope.
Oct 10 23:21:54 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:21:54 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/786c4948f50111995c55be37c6404323c4004d38b76a6beb64270ce5de50ed58/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:54 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/786c4948f50111995c55be37c6404323c4004d38b76a6beb64270ce5de50ed58/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:54 np0005480824 podman[101373]: 2025-10-11 03:21:54.28552799 +0000 UTC m=+0.127757043 container init 455bd4f928c4060219b6de7334330fb26f17e9b38171288b14935ab0009d8961 (image=quay.io/ceph/ceph:v18, name=brave_goldberg, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:21:54 np0005480824 podman[101373]: 2025-10-11 03:21:54.193550317 +0000 UTC m=+0.035779360 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:21:54 np0005480824 podman[101373]: 2025-10-11 03:21:54.29293051 +0000 UTC m=+0.135159533 container start 455bd4f928c4060219b6de7334330fb26f17e9b38171288b14935ab0009d8961 (image=quay.io/ceph/ceph:v18, name=brave_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 10 23:21:54 np0005480824 podman[101373]: 2025-10-11 03:21:54.296069906 +0000 UTC m=+0.138298929 container attach 455bd4f928c4060219b6de7334330fb26f17e9b38171288b14935ab0009d8961 (image=quay.io/ceph/ceph:v18, name=brave_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True)
Oct 10 23:21:54 np0005480824 ceph-mon[74326]: daemon mds.cephfs.compute-0.uxaxgb assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Oct 10 23:21:54 np0005480824 ceph-mon[74326]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Oct 10 23:21:54 np0005480824 ceph-mon[74326]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Oct 10 23:21:54 np0005480824 ceph-mon[74326]: Cluster is now healthy
Oct 10 23:21:54 np0005480824 ceph-mon[74326]: daemon mds.cephfs.compute-0.uxaxgb is now active in filesystem cephfs as rank 0
Oct 10 23:21:54 np0005480824 ceph-mon[74326]: from='client.? 192.168.122.100:0/3299867884' entity='client.rgw.rgw.compute-0.bqunnq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 10 23:21:54 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 4.19 deep-scrub starts
Oct 10 23:21:54 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 4.19 deep-scrub ok
Oct 10 23:21:54 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).mds e4 new map
Oct 10 23:21:54 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).mds e4 print_map#012e4#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-11T03:21:38.069496+0000#012modified#0112025-10-11T03:21:54.596179+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=14267}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-0.uxaxgb{0:14267} state up:active seq 2 join_fscid=1 addr [v2:192.168.122.100:6814/19541639,v1:192.168.122.100:6815/19541639] compat {c=[1],r=[1],i=[7ff]}]#012 #012 
Oct 10 23:21:54 np0005480824 ceph-mds[101067]: mds.cephfs.compute-0.uxaxgb Updating MDS map to version 4 from mon.0
Oct 10 23:21:54 np0005480824 ceph-mds[101067]: mds.0.3 handle_mds_map i am now mds.0.3
Oct 10 23:21:54 np0005480824 ceph-mds[101067]: mds.0.3 handle_mds_map state change up:creating --> up:active
Oct 10 23:21:54 np0005480824 ceph-mds[101067]: mds.0.3 recovery_done -- successful recovery!
Oct 10 23:21:54 np0005480824 ceph-mds[101067]: mds.0.3 active_start
Oct 10 23:21:54 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/19541639,v1:192.168.122.100:6815/19541639] up:active
Oct 10 23:21:54 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.uxaxgb=up:active}
Oct 10 23:21:54 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 10 23:21:54 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:21:54 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 10 23:21:54 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:21:54 np0005480824 ceph-mgr[74617]: log_channel(audit) log [DBG] : from='client.14269 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 10 23:21:54 np0005480824 brave_goldberg[101408]: 
Oct 10 23:21:54 np0005480824 brave_goldberg[101408]: [{"container_id": "90b5b5a03190", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "0.39%", "created": "2025-10-11T03:19:55.580192Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2025-10-11T03:19:55.773101Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-11T03:21:54.764478Z", "memory_usage": 11639193, "ports": [], "service_name": "crash", "started": "2025-10-11T03:19:55.363878Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-92cfe4d4-4917-5be1-9d00-73758793a62b@crash.compute-0", "version": "18.2.7"}, {"container_id": "53426c3eb317", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "12.85%", "created": "2025-10-11T03:21:52.949890Z", "daemon_id": "cephfs.compute-0.uxaxgb", "daemon_name": "mds.cephfs.compute-0.uxaxgb", "daemon_type": "mds", "events": ["2025-10-11T03:21:53.023184Z daemon:mds.cephfs.compute-0.uxaxgb [INFO] \"Deployed mds.cephfs.compute-0.uxaxgb on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-11T03:21:54.764927Z", "memory_usage": 15938355, "ports": [], "service_name": "mds.cephfs", "started": "2025-10-11T03:21:52.803862Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-92cfe4d4-4917-5be1-9d00-73758793a62b@mds.cephfs.compute-0.uxaxgb", "version": "18.2.7"}, {"container_id": "5396d33f03d7", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "24.03%", "created": "2025-10-11T03:18:40.727295Z", "daemon_id": "compute-0.pdyrua", "daemon_name": "mgr.compute-0.pdyrua", "daemon_type": "mgr", "events": ["2025-10-11T03:20:01.427878Z daemon:mgr.compute-0.pdyrua [INFO] \"Reconfigured mgr.compute-0.pdyrua on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-11T03:21:54.764393Z", "memory_usage": 552704409, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-10-11T03:18:40.598366Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-92cfe4d4-4917-5be1-9d00-73758793a62b@mgr.compute-0.pdyrua", "version": "18.2.7"}, {"container_id": "a848fe58749d", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "2.13%", "created": "2025-10-11T03:18:35.533167Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2025-10-11T03:20:00.581488Z daemon:mon.compute-0 [INFO] \"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-11T03:21:54.764288Z", "memory_request": 2147483648, "memory_usage": 41135636, "ports": [], "service_name": "mon", "started": "2025-10-11T03:18:38.283701Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-92cfe4d4-4917-5be1-9d00-73758793a62b@mon.compute-0", "version": "18.2.7"}, {"container_id": "47f64e87e587", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.46%", "created": "2025-10-11T03:20:28.483961Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2025-10-11T03:20:28.857202Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-11T03:21:54.764562Z", "memory_request": 4294967296, "memory_usage": 66815262, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-10-11T03:20:28.284372Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-92cfe4d4-4917-5be1-9d00-73758793a62b@osd.0", "version": "18.2.7"}, {"container_id": "159562a3a150", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.62%", "created": "2025-10-11T03:20:33.908088Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2025-10-11T03:20:34.028707Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-11T03:21:54.764678Z", "memory_request": 4294967296, "memory_usage": 66930606, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-10-11T03:20:33.727557Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-92cfe4d4-4917-5be1-9d00-73758793a62b@osd.1", "version": "18.2.7"}, {"container_id": "1ea030e74696", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.76%", "created": "2025-10-11T03:20:39.035208Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2025-10-11T03:20:39.169191Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-11T03:21:54.764761Z", "memory_request": 4294967296, "memory_usage": 66332917, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-10-11T03:20:38.872245Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-92cfe4d4-4917-5be1-9d00-73758793a62b@osd.2", "version": "18.2.7"}, {"container_id": "25ff1f079ca6", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "4.06%", "created": "2025-10-11T03:21:50.994844Z", "daemon_id": "rgw.compute-0.bqunnq", "daemon_name": "rgw.rgw.compute-0.bqunnq", "daemon_type": "rgw", "events": ["2025-10-11
Oct 10 23:21:54 np0005480824 systemd[1]: libpod-455bd4f928c4060219b6de7334330fb26f17e9b38171288b14935ab0009d8961.scope: Deactivated successfully.
Oct 10 23:21:54 np0005480824 podman[101373]: 2025-10-11 03:21:54.824900746 +0000 UTC m=+0.667129779 container died 455bd4f928c4060219b6de7334330fb26f17e9b38171288b14935ab0009d8961 (image=quay.io/ceph/ceph:v18, name=brave_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:21:54 np0005480824 systemd[1]: var-lib-containers-storage-overlay-786c4948f50111995c55be37c6404323c4004d38b76a6beb64270ce5de50ed58-merged.mount: Deactivated successfully.
Oct 10 23:21:54 np0005480824 podman[101373]: 2025-10-11 03:21:54.877854842 +0000 UTC m=+0.720083875 container remove 455bd4f928c4060219b6de7334330fb26f17e9b38171288b14935ab0009d8961 (image=quay.io/ceph/ceph:v18, name=brave_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:21:54 np0005480824 rsyslogd[1004]: message too long (8589) with configured size 8096, begin of message is: [{"container_id": "90b5b5a03190", "container_image_digests": ["quay.io/ceph/ceph [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Oct 10 23:21:54 np0005480824 systemd[1]: libpod-conmon-455bd4f928c4060219b6de7334330fb26f17e9b38171288b14935ab0009d8961.scope: Deactivated successfully.
Oct 10 23:21:55 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Oct 10 23:21:55 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3299867884' entity='client.rgw.rgw.compute-0.bqunnq' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct 10 23:21:55 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Oct 10 23:21:55 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Oct 10 23:21:55 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 48 pg[9.0( empty local-lis/les=47/48 n=0 ec=47/47 lis/c=0/0 les/c/f=0/0/0 sis=47) [1] r=0 lpr=47 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:55 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Oct 10 23:21:55 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Oct 10 23:21:55 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Oct 10 23:21:55 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Oct 10 23:21:55 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:21:55 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:21:55 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 10 23:21:55 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:21:55 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 10 23:21:55 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:21:55 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 2efe2953-eefd-43fa-ad54-5447b048e7a3 does not exist
Oct 10 23:21:55 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev db5a9c81-402d-4db5-9f22-6b048768457e does not exist
Oct 10 23:21:55 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 78530e2b-6aa7-4326-a0e4-8161724fc74e does not exist
Oct 10 23:21:55 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 10 23:21:55 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 23:21:55 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 10 23:21:55 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:21:55 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:21:55 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:21:55 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:21:55 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:21:55 np0005480824 ceph-mon[74326]: from='client.? 192.168.122.100:0/3299867884' entity='client.rgw.rgw.compute-0.bqunnq' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct 10 23:21:55 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:21:55 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:21:55 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:21:55 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v113: 195 pgs: 1 unknown, 1 creating+peering, 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:21:55 np0005480824 python3[101773]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 92cfe4d4-4917-5be1-9d00-73758793a62b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:21:55 np0005480824 podman[101819]: 2025-10-11 03:21:55.91153846 +0000 UTC m=+0.040886433 container create bb504363a3c3cd762174743d3d3f641e18a0f0bc64f755a138a7c1bf3b2b3b84 (image=quay.io/ceph/ceph:v18, name=nifty_hertz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 10 23:21:55 np0005480824 systemd[1]: Started libpod-conmon-bb504363a3c3cd762174743d3d3f641e18a0f0bc64f755a138a7c1bf3b2b3b84.scope.
Oct 10 23:21:55 np0005480824 podman[101819]: 2025-10-11 03:21:55.893325578 +0000 UTC m=+0.022673651 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:21:55 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:21:55 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eecb226f11cfd57f131624f3ccee37154f5966b96199c49a8ab080cbe3947833/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:55 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eecb226f11cfd57f131624f3ccee37154f5966b96199c49a8ab080cbe3947833/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:56 np0005480824 podman[101819]: 2025-10-11 03:21:56.007442369 +0000 UTC m=+0.136790362 container init bb504363a3c3cd762174743d3d3f641e18a0f0bc64f755a138a7c1bf3b2b3b84 (image=quay.io/ceph/ceph:v18, name=nifty_hertz, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 10 23:21:56 np0005480824 podman[101819]: 2025-10-11 03:21:56.019366068 +0000 UTC m=+0.148714051 container start bb504363a3c3cd762174743d3d3f641e18a0f0bc64f755a138a7c1bf3b2b3b84 (image=quay.io/ceph/ceph:v18, name=nifty_hertz, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Oct 10 23:21:56 np0005480824 podman[101819]: 2025-10-11 03:21:56.022490014 +0000 UTC m=+0.151837977 container attach bb504363a3c3cd762174743d3d3f641e18a0f0bc64f755a138a7c1bf3b2b3b84 (image=quay.io/ceph/ceph:v18, name=nifty_hertz, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 10 23:21:56 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Oct 10 23:21:56 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Oct 10 23:21:56 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Oct 10 23:21:56 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Oct 10 23:21:56 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3299867884' entity='client.rgw.rgw.compute-0.bqunnq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 10 23:21:56 np0005480824 podman[101877]: 2025-10-11 03:21:56.218538534 +0000 UTC m=+0.054579886 container create f948cca77795efbccee92912c784bebac554e3bb218b0ac7394c649164f2fc13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lamarr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:21:56 np0005480824 systemd[1]: Started libpod-conmon-f948cca77795efbccee92912c784bebac554e3bb218b0ac7394c649164f2fc13.scope.
Oct 10 23:21:56 np0005480824 podman[101877]: 2025-10-11 03:21:56.192292707 +0000 UTC m=+0.028334069 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:21:56 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:21:56 np0005480824 podman[101877]: 2025-10-11 03:21:56.303722182 +0000 UTC m=+0.139763564 container init f948cca77795efbccee92912c784bebac554e3bb218b0ac7394c649164f2fc13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lamarr, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 10 23:21:56 np0005480824 podman[101877]: 2025-10-11 03:21:56.316637737 +0000 UTC m=+0.152679089 container start f948cca77795efbccee92912c784bebac554e3bb218b0ac7394c649164f2fc13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 10 23:21:56 np0005480824 naughty_lamarr[101894]: 167 167
Oct 10 23:21:56 np0005480824 systemd[1]: libpod-f948cca77795efbccee92912c784bebac554e3bb218b0ac7394c649164f2fc13.scope: Deactivated successfully.
Oct 10 23:21:56 np0005480824 podman[101877]: 2025-10-11 03:21:56.321099755 +0000 UTC m=+0.157141117 container attach f948cca77795efbccee92912c784bebac554e3bb218b0ac7394c649164f2fc13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 10 23:21:56 np0005480824 podman[101877]: 2025-10-11 03:21:56.321654818 +0000 UTC m=+0.157696230 container died f948cca77795efbccee92912c784bebac554e3bb218b0ac7394c649164f2fc13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 10 23:21:56 np0005480824 systemd[1]: var-lib-containers-storage-overlay-8d053449b9ac0ad07101a142d9dc1496f2bf271c13dae445891c4901f53d2050-merged.mount: Deactivated successfully.
Oct 10 23:21:56 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 49 pg[10.0( empty local-lis/les=0/0 n=0 ec=49/49 lis/c=0/0 les/c/f=0/0/0 sis=49) [2] r=0 lpr=49 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:56 np0005480824 podman[101877]: 2025-10-11 03:21:56.378821306 +0000 UTC m=+0.214862638 container remove f948cca77795efbccee92912c784bebac554e3bb218b0ac7394c649164f2fc13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:21:56 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Oct 10 23:21:56 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Oct 10 23:21:56 np0005480824 systemd[1]: libpod-conmon-f948cca77795efbccee92912c784bebac554e3bb218b0ac7394c649164f2fc13.scope: Deactivated successfully.
Oct 10 23:21:56 np0005480824 podman[101936]: 2025-10-11 03:21:56.557453744 +0000 UTC m=+0.058425670 container create 5a8894e16e2294f383feb507b5bb2b4a25f65613b8e52de3989d611626c75c10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_jepsen, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:21:56 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Oct 10 23:21:56 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4010413653' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 10 23:21:56 np0005480824 nifty_hertz[101834]: 
Oct 10 23:21:56 np0005480824 nifty_hertz[101834]: {"fsid":"92cfe4d4-4917-5be1-9d00-73758793a62b","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":198,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":49,"num_osds":3,"num_up_osds":3,"osd_up_since":1760152846,"num_in_osds":3,"osd_in_since":1760152815,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":193},{"state_name":"creating+peering","count":1}],"num_pgs":194,"num_pools":8,"num_objects":2,"data_bytes":459280,"bytes_used":84230144,"bytes_avail":64327696384,"bytes_total":64411926528,"inactive_pgs_ratio":0.0051546390168368816},"fsmap":{"epoch":4,"id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-0.uxaxgb","status":"up:active","gid":14267}],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":5,"modified":"2025-10-11T03:21:47.810283+0000","services":{}},"progress_events":{"8cade903-4414-4c04-9e54-e0c630f97913":{"message":"Global Recovery Event (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Oct 10 23:21:56 np0005480824 systemd[1]: Started libpod-conmon-5a8894e16e2294f383feb507b5bb2b4a25f65613b8e52de3989d611626c75c10.scope.
Oct 10 23:21:56 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Oct 10 23:21:56 np0005480824 systemd[1]: libpod-bb504363a3c3cd762174743d3d3f641e18a0f0bc64f755a138a7c1bf3b2b3b84.scope: Deactivated successfully.
Oct 10 23:21:56 np0005480824 conmon[101834]: conmon bb504363a3c3cd762174 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bb504363a3c3cd762174743d3d3f641e18a0f0bc64f755a138a7c1bf3b2b3b84.scope/container/memory.events
Oct 10 23:21:56 np0005480824 podman[101819]: 2025-10-11 03:21:56.613764141 +0000 UTC m=+0.743112144 container died bb504363a3c3cd762174743d3d3f641e18a0f0bc64f755a138a7c1bf3b2b3b84 (image=quay.io/ceph/ceph:v18, name=nifty_hertz, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 10 23:21:56 np0005480824 ceph-mon[74326]: from='client.? 192.168.122.100:0/3299867884' entity='client.rgw.rgw.compute-0.bqunnq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 10 23:21:56 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Oct 10 23:21:56 np0005480824 podman[101936]: 2025-10-11 03:21:56.537047058 +0000 UTC m=+0.038018994 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:21:56 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:21:56 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/714d34514b912e5ae26356724799dedd88be5c2d8fe6869ac004d3beefbc0278/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:56 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/714d34514b912e5ae26356724799dedd88be5c2d8fe6869ac004d3beefbc0278/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:56 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/714d34514b912e5ae26356724799dedd88be5c2d8fe6869ac004d3beefbc0278/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:56 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/714d34514b912e5ae26356724799dedd88be5c2d8fe6869ac004d3beefbc0278/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:56 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/714d34514b912e5ae26356724799dedd88be5c2d8fe6869ac004d3beefbc0278/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:56 np0005480824 systemd[1]: var-lib-containers-storage-overlay-eecb226f11cfd57f131624f3ccee37154f5966b96199c49a8ab080cbe3947833-merged.mount: Deactivated successfully.
Oct 10 23:21:56 np0005480824 podman[101936]: 2025-10-11 03:21:56.670028897 +0000 UTC m=+0.171000873 container init 5a8894e16e2294f383feb507b5bb2b4a25f65613b8e52de3989d611626c75c10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_jepsen, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:21:56 np0005480824 podman[101936]: 2025-10-11 03:21:56.679073936 +0000 UTC m=+0.180045832 container start 5a8894e16e2294f383feb507b5bb2b4a25f65613b8e52de3989d611626c75c10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_jepsen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 10 23:21:56 np0005480824 podman[101936]: 2025-10-11 03:21:56.683683739 +0000 UTC m=+0.184655715 container attach 5a8894e16e2294f383feb507b5bb2b4a25f65613b8e52de3989d611626c75c10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_jepsen, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:21:56 np0005480824 podman[101819]: 2025-10-11 03:21:56.690710509 +0000 UTC m=+0.820058492 container remove bb504363a3c3cd762174743d3d3f641e18a0f0bc64f755a138a7c1bf3b2b3b84 (image=quay.io/ceph/ceph:v18, name=nifty_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 10 23:21:56 np0005480824 systemd[1]: libpod-conmon-bb504363a3c3cd762174743d3d3f641e18a0f0bc64f755a138a7c1bf3b2b3b84.scope: Deactivated successfully.
Oct 10 23:21:57 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Oct 10 23:21:57 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3299867884' entity='client.rgw.rgw.compute-0.bqunnq' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct 10 23:21:57 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Oct 10 23:21:57 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Oct 10 23:21:57 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 50 pg[10.0( empty local-lis/les=49/50 n=0 ec=49/49 lis/c=0/0 les/c/f=0/0/0 sis=49) [2] r=0 lpr=49 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:57 np0005480824 ceph-mon[74326]: from='client.? 192.168.122.100:0/3299867884' entity='client.rgw.rgw.compute-0.bqunnq' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct 10 23:21:57 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Oct 10 23:21:57 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Oct 10 23:21:57 np0005480824 python3[102026]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 92cfe4d4-4917-5be1-9d00-73758793a62b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:21:57 np0005480824 amazing_jepsen[101954]: --> passed data devices: 0 physical, 3 LVM
Oct 10 23:21:57 np0005480824 amazing_jepsen[101954]: --> relative data size: 1.0
Oct 10 23:21:57 np0005480824 amazing_jepsen[101954]: --> All data devices are unavailable
Oct 10 23:21:57 np0005480824 systemd[1]: libpod-5a8894e16e2294f383feb507b5bb2b4a25f65613b8e52de3989d611626c75c10.scope: Deactivated successfully.
Oct 10 23:21:57 np0005480824 systemd[1]: libpod-5a8894e16e2294f383feb507b5bb2b4a25f65613b8e52de3989d611626c75c10.scope: Consumed 1.052s CPU time.
Oct 10 23:21:57 np0005480824 podman[101936]: 2025-10-11 03:21:57.789232871 +0000 UTC m=+1.290204767 container died 5a8894e16e2294f383feb507b5bb2b4a25f65613b8e52de3989d611626c75c10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 10 23:21:57 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v116: 196 pgs: 196 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 4.5 KiB/s wr, 12 op/s
Oct 10 23:21:57 np0005480824 systemd[1]: var-lib-containers-storage-overlay-714d34514b912e5ae26356724799dedd88be5c2d8fe6869ac004d3beefbc0278-merged.mount: Deactivated successfully.
Oct 10 23:21:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:21:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:21:57 np0005480824 podman[101936]: 2025-10-11 03:21:57.853548433 +0000 UTC m=+1.354520319 container remove 5a8894e16e2294f383feb507b5bb2b4a25f65613b8e52de3989d611626c75c10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 10 23:21:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:21:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:21:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:21:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:21:57 np0005480824 podman[102034]: 2025-10-11 03:21:57.863823823 +0000 UTC m=+0.104520809 container create e3667e12c8dea539ba8907665651a48526b1cb2c5972f24aec6344de1dd46d1c (image=quay.io/ceph/ceph:v18, name=magical_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:21:57 np0005480824 systemd[1]: libpod-conmon-5a8894e16e2294f383feb507b5bb2b4a25f65613b8e52de3989d611626c75c10.scope: Deactivated successfully.
Oct 10 23:21:57 np0005480824 podman[102034]: 2025-10-11 03:21:57.81222475 +0000 UTC m=+0.052921806 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:21:57 np0005480824 systemd[1]: Started libpod-conmon-e3667e12c8dea539ba8907665651a48526b1cb2c5972f24aec6344de1dd46d1c.scope.
Oct 10 23:21:57 np0005480824 ceph-mgr[74617]: [progress INFO root] Writing back 12 completed events
Oct 10 23:21:57 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:21:57 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 10 23:21:57 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9e843ae24d057dc1b1620f50ae30147d188c559bfa3fab9c4a098f9b13d4945/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:57 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9e843ae24d057dc1b1620f50ae30147d188c559bfa3fab9c4a098f9b13d4945/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:57 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:21:57 np0005480824 ceph-mgr[74617]: [progress INFO root] Completed event 8cade903-4414-4c04-9e54-e0c630f97913 (Global Recovery Event) in 5 seconds
Oct 10 23:21:57 np0005480824 podman[102034]: 2025-10-11 03:21:57.96257633 +0000 UTC m=+0.203273346 container init e3667e12c8dea539ba8907665651a48526b1cb2c5972f24aec6344de1dd46d1c (image=quay.io/ceph/ceph:v18, name=magical_ritchie, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:21:57 np0005480824 podman[102034]: 2025-10-11 03:21:57.972192774 +0000 UTC m=+0.212889750 container start e3667e12c8dea539ba8907665651a48526b1cb2c5972f24aec6344de1dd46d1c (image=quay.io/ceph/ceph:v18, name=magical_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 10 23:21:57 np0005480824 podman[102034]: 2025-10-11 03:21:57.975216447 +0000 UTC m=+0.215913463 container attach e3667e12c8dea539ba8907665651a48526b1cb2c5972f24aec6344de1dd46d1c (image=quay.io/ceph/ceph:v18, name=magical_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:21:58 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Oct 10 23:21:58 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Oct 10 23:21:58 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Oct 10 23:21:58 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Oct 10 23:21:58 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1259390569' entity='client.rgw.rgw.compute-0.bqunnq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 10 23:21:58 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Oct 10 23:21:58 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Oct 10 23:21:58 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct 10 23:21:58 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3890743328' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 10 23:21:58 np0005480824 magical_ritchie[102065]: 
Oct 10 23:21:58 np0005480824 systemd[1]: libpod-e3667e12c8dea539ba8907665651a48526b1cb2c5972f24aec6344de1dd46d1c.scope: Deactivated successfully.
Oct 10 23:21:58 np0005480824 magical_ritchie[102065]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, 
admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","
can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"6","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr_standby_modules","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.bqunnq","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Oct 10 23:21:58 np0005480824 podman[102034]: 2025-10-11 03:21:58.554241457 +0000 UTC m=+0.794938423 container died e3667e12c8dea539ba8907665651a48526b1cb2c5972f24aec6344de1dd46d1c (image=quay.io/ceph/ceph:v18, name=magical_ritchie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 10 23:21:58 np0005480824 systemd[1]: var-lib-containers-storage-overlay-e9e843ae24d057dc1b1620f50ae30147d188c559bfa3fab9c4a098f9b13d4945-merged.mount: Deactivated successfully.
Oct 10 23:21:58 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:21:58 np0005480824 podman[102034]: 2025-10-11 03:21:58.608671468 +0000 UTC m=+0.849368444 container remove e3667e12c8dea539ba8907665651a48526b1cb2c5972f24aec6344de1dd46d1c (image=quay.io/ceph/ceph:v18, name=magical_ritchie, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:21:58 np0005480824 systemd[1]: libpod-conmon-e3667e12c8dea539ba8907665651a48526b1cb2c5972f24aec6344de1dd46d1c.scope: Deactivated successfully.
Oct 10 23:21:58 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 51 pg[11.0( empty local-lis/les=0/0 n=0 ec=51/51 lis/c=0/0 les/c/f=0/0/0 sis=51) [1] r=0 lpr=51 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:21:58 np0005480824 podman[102242]: 2025-10-11 03:21:58.671444982 +0000 UTC m=+0.035657537 container create 950237c8c97d0782610c21aadc6c9d1955185ef7de4a96958a0d52fa7170fd22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_vaughan, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 10 23:21:58 np0005480824 systemd[1]: Started libpod-conmon-950237c8c97d0782610c21aadc6c9d1955185ef7de4a96958a0d52fa7170fd22.scope.
Oct 10 23:21:58 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:21:58 np0005480824 podman[102242]: 2025-10-11 03:21:58.732828262 +0000 UTC m=+0.097040867 container init 950237c8c97d0782610c21aadc6c9d1955185ef7de4a96958a0d52fa7170fd22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_vaughan, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:21:58 np0005480824 podman[102242]: 2025-10-11 03:21:58.743642445 +0000 UTC m=+0.107855000 container start 950237c8c97d0782610c21aadc6c9d1955185ef7de4a96958a0d52fa7170fd22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 10 23:21:58 np0005480824 confident_vaughan[102258]: 167 167
Oct 10 23:21:58 np0005480824 systemd[1]: libpod-950237c8c97d0782610c21aadc6c9d1955185ef7de4a96958a0d52fa7170fd22.scope: Deactivated successfully.
Oct 10 23:21:58 np0005480824 podman[102242]: 2025-10-11 03:21:58.748676728 +0000 UTC m=+0.112889303 container attach 950237c8c97d0782610c21aadc6c9d1955185ef7de4a96958a0d52fa7170fd22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_vaughan, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:21:58 np0005480824 podman[102242]: 2025-10-11 03:21:58.749086977 +0000 UTC m=+0.113299522 container died 950237c8c97d0782610c21aadc6c9d1955185ef7de4a96958a0d52fa7170fd22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_vaughan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 10 23:21:58 np0005480824 podman[102242]: 2025-10-11 03:21:58.656884198 +0000 UTC m=+0.021096783 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:21:58 np0005480824 systemd[1]: var-lib-containers-storage-overlay-200254f8a4cb3af73e0dc680eed4971833e5c27ae1ae69bc5f95535a19c5afc5-merged.mount: Deactivated successfully.
Oct 10 23:21:58 np0005480824 podman[102242]: 2025-10-11 03:21:58.782371816 +0000 UTC m=+0.146584371 container remove 950237c8c97d0782610c21aadc6c9d1955185ef7de4a96958a0d52fa7170fd22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 10 23:21:58 np0005480824 systemd[1]: libpod-conmon-950237c8c97d0782610c21aadc6c9d1955185ef7de4a96958a0d52fa7170fd22.scope: Deactivated successfully.
Oct 10 23:21:58 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:21:58 np0005480824 ceph-mon[74326]: from='client.? 192.168.122.100:0/1259390569' entity='client.rgw.rgw.compute-0.bqunnq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 10 23:21:58 np0005480824 podman[102283]: 2025-10-11 03:21:58.974780527 +0000 UTC m=+0.060705325 container create 0d860af79c8fd3d881bfc425751ac67f88a7b103302fb9271ddeec9c46d8bcbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_dubinsky, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 10 23:21:59 np0005480824 systemd[1]: Started libpod-conmon-0d860af79c8fd3d881bfc425751ac67f88a7b103302fb9271ddeec9c46d8bcbc.scope.
Oct 10 23:21:59 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:21:59 np0005480824 podman[102283]: 2025-10-11 03:21:58.956875272 +0000 UTC m=+0.042800110 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:21:59 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69d3080137f496b6fa1b8880b82b43a5bf1d4ec966bd805cb7ee83a5d7148156/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:59 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69d3080137f496b6fa1b8880b82b43a5bf1d4ec966bd805cb7ee83a5d7148156/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:59 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69d3080137f496b6fa1b8880b82b43a5bf1d4ec966bd805cb7ee83a5d7148156/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:59 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69d3080137f496b6fa1b8880b82b43a5bf1d4ec966bd805cb7ee83a5d7148156/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:59 np0005480824 podman[102283]: 2025-10-11 03:21:59.071414463 +0000 UTC m=+0.157339321 container init 0d860af79c8fd3d881bfc425751ac67f88a7b103302fb9271ddeec9c46d8bcbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_dubinsky, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:21:59 np0005480824 podman[102283]: 2025-10-11 03:21:59.083573069 +0000 UTC m=+0.169497877 container start 0d860af79c8fd3d881bfc425751ac67f88a7b103302fb9271ddeec9c46d8bcbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_dubinsky, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 10 23:21:59 np0005480824 podman[102283]: 2025-10-11 03:21:59.087440292 +0000 UTC m=+0.173365160 container attach 0d860af79c8fd3d881bfc425751ac67f88a7b103302fb9271ddeec9c46d8bcbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_dubinsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:21:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Oct 10 23:21:59 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1259390569' entity='client.rgw.rgw.compute-0.bqunnq' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct 10 23:21:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Oct 10 23:21:59 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Oct 10 23:21:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Oct 10 23:21:59 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1259390569' entity='client.rgw.rgw.compute-0.bqunnq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 10 23:21:59 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 52 pg[11.0( empty local-lis/les=51/52 n=0 ec=51/51 lis/c=0/0 les/c/f=0/0/0 sis=51) [1] r=0 lpr=51 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:21:59 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Oct 10 23:21:59 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Oct 10 23:21:59 np0005480824 python3[102330]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 92cfe4d4-4917-5be1-9d00-73758793a62b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:21:59 np0005480824 podman[102331]: 2025-10-11 03:21:59.741552205 +0000 UTC m=+0.052798113 container create 12a1b32753beaa546d6831ce3896adfa278b7a7b2f41c7f728d29d228bff4891 (image=quay.io/ceph/ceph:v18, name=keen_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:21:59 np0005480824 systemd[1]: Started libpod-conmon-12a1b32753beaa546d6831ce3896adfa278b7a7b2f41c7f728d29d228bff4891.scope.
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]: {
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:    "0": [
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:        {
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:            "devices": [
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:                "/dev/loop3"
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:            ],
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:            "lv_name": "ceph_lv0",
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:            "lv_size": "21470642176",
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0d82ce-20ea-470d-959e-f67202028a60,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:            "lv_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:            "name": "ceph_lv0",
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:            "tags": {
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:                "ceph.block_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:                "ceph.cluster_name": "ceph",
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:                "ceph.crush_device_class": "",
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:                "ceph.encrypted": "0",
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:                "ceph.osd_fsid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:                "ceph.osd_id": "0",
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:                "ceph.type": "block",
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:                "ceph.vdo": "0"
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:            },
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:            "type": "block",
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:            "vg_name": "ceph_vg0"
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:        }
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:    ],
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:    "1": [
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:        {
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:            "devices": [
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:                "/dev/loop4"
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:            ],
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:            "lv_name": "ceph_lv1",
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:            "lv_size": "21470642176",
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6875119e-c210-4ad1-aca9-6a8084a5ecc8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:            "lv_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:            "name": "ceph_lv1",
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:            "tags": {
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:                "ceph.block_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:                "ceph.cluster_name": "ceph",
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:                "ceph.crush_device_class": "",
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:                "ceph.encrypted": "0",
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:                "ceph.osd_fsid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:                "ceph.osd_id": "1",
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:                "ceph.type": "block",
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:                "ceph.vdo": "0"
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:            },
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:            "type": "block",
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:            "vg_name": "ceph_vg1"
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:        }
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:    ],
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:    "2": [
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:        {
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:            "devices": [
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:                "/dev/loop5"
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:            ],
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:            "lv_name": "ceph_lv2",
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:            "lv_size": "21470642176",
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e86945e8-6909-4584-9098-cee0dfe9add4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:            "lv_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:            "name": "ceph_lv2",
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:            "tags": {
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:                "ceph.block_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:                "ceph.cluster_name": "ceph",
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:                "ceph.crush_device_class": "",
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:                "ceph.encrypted": "0",
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:                "ceph.osd_fsid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:                "ceph.osd_id": "2",
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:                "ceph.type": "block",
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:                "ceph.vdo": "0"
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:            },
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:            "type": "block",
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:            "vg_name": "ceph_vg2"
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:        }
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]:    ]
Oct 10 23:21:59 np0005480824 crazy_dubinsky[102300]: }
Oct 10 23:21:59 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v119: 197 pgs: 1 unknown, 196 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1022 B/s rd, 4.5 KiB/s wr, 12 op/s
Oct 10 23:21:59 np0005480824 podman[102283]: 2025-10-11 03:21:59.819916618 +0000 UTC m=+0.905841456 container died 0d860af79c8fd3d881bfc425751ac67f88a7b103302fb9271ddeec9c46d8bcbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_dubinsky, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:21:59 np0005480824 podman[102331]: 2025-10-11 03:21:59.726561051 +0000 UTC m=+0.037806979 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:21:59 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:21:59 np0005480824 systemd[1]: libpod-0d860af79c8fd3d881bfc425751ac67f88a7b103302fb9271ddeec9c46d8bcbc.scope: Deactivated successfully.
Oct 10 23:21:59 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a46bcce72e9a396f3247f8bf1b57fb830dbb6acca35ba4254f8a948b814f6674/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:59 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a46bcce72e9a396f3247f8bf1b57fb830dbb6acca35ba4254f8a948b814f6674/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:21:59 np0005480824 podman[102331]: 2025-10-11 03:21:59.842757862 +0000 UTC m=+0.154003770 container init 12a1b32753beaa546d6831ce3896adfa278b7a7b2f41c7f728d29d228bff4891 (image=quay.io/ceph/ceph:v18, name=keen_wilson, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:21:59 np0005480824 podman[102331]: 2025-10-11 03:21:59.855244815 +0000 UTC m=+0.166490723 container start 12a1b32753beaa546d6831ce3896adfa278b7a7b2f41c7f728d29d228bff4891 (image=quay.io/ceph/ceph:v18, name=keen_wilson, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 10 23:21:59 np0005480824 systemd[1]: var-lib-containers-storage-overlay-69d3080137f496b6fa1b8880b82b43a5bf1d4ec966bd805cb7ee83a5d7148156-merged.mount: Deactivated successfully.
Oct 10 23:21:59 np0005480824 podman[102331]: 2025-10-11 03:21:59.861438865 +0000 UTC m=+0.172684813 container attach 12a1b32753beaa546d6831ce3896adfa278b7a7b2f41c7f728d29d228bff4891 (image=quay.io/ceph/ceph:v18, name=keen_wilson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 10 23:21:59 np0005480824 podman[102283]: 2025-10-11 03:21:59.890855799 +0000 UTC m=+0.976780597 container remove 0d860af79c8fd3d881bfc425751ac67f88a7b103302fb9271ddeec9c46d8bcbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_dubinsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 10 23:21:59 np0005480824 systemd[1]: libpod-conmon-0d860af79c8fd3d881bfc425751ac67f88a7b103302fb9271ddeec9c46d8bcbc.scope: Deactivated successfully.
Oct 10 23:22:00 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Oct 10 23:22:00 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1259390569' entity='client.rgw.rgw.compute-0.bqunnq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct 10 23:22:00 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Oct 10 23:22:00 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Oct 10 23:22:00 np0005480824 ceph-mon[74326]: from='client.? 192.168.122.100:0/1259390569' entity='client.rgw.rgw.compute-0.bqunnq' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct 10 23:22:00 np0005480824 ceph-mon[74326]: from='client.? 192.168.122.100:0/1259390569' entity='client.rgw.rgw.compute-0.bqunnq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 10 23:22:00 np0005480824 radosgw[100589]: LDAP not started since no server URIs were provided in the configuration.
Oct 10 23:22:00 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-rgw-rgw-compute-0-bqunnq[100585]: 2025-10-11T03:22:00.279+0000 7f1d9498f940 -1 LDAP not started since no server URIs were provided in the configuration.
Oct 10 23:22:00 np0005480824 radosgw[100589]: framework: beast
Oct 10 23:22:00 np0005480824 radosgw[100589]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Oct 10 23:22:00 np0005480824 radosgw[100589]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Oct 10 23:22:00 np0005480824 radosgw[100589]: starting handler: beast
Oct 10 23:22:00 np0005480824 radosgw[100589]: set uid:gid to 167:167 (ceph:ceph)
Oct 10 23:22:00 np0005480824 radosgw[100589]: mgrc service_daemon_register rgw.14273 metadata {arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.bqunnq,kernel_description=#1 SMP PREEMPT_DYNAMIC Tue Sep 30 07:37:35 UTC 2025,kernel_version=5.14.0-621.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864356,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=d1b20eea-863a-4642-a5b9-414ad62d01d2,zone_name=default,zonegroup_id=2b23d063-6bef-4b29-8e72-4dac4ad3045b,zonegroup_name=default}
Oct 10 23:22:00 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0) v1
Oct 10 23:22:00 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2099811995' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Oct 10 23:22:00 np0005480824 keen_wilson[102350]: mimic
Oct 10 23:22:00 np0005480824 podman[102331]: 2025-10-11 03:22:00.433126407 +0000 UTC m=+0.744372315 container died 12a1b32753beaa546d6831ce3896adfa278b7a7b2f41c7f728d29d228bff4891 (image=quay.io/ceph/ceph:v18, name=keen_wilson, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 10 23:22:00 np0005480824 systemd[1]: libpod-12a1b32753beaa546d6831ce3896adfa278b7a7b2f41c7f728d29d228bff4891.scope: Deactivated successfully.
Oct 10 23:22:00 np0005480824 systemd[1]: var-lib-containers-storage-overlay-a46bcce72e9a396f3247f8bf1b57fb830dbb6acca35ba4254f8a948b814f6674-merged.mount: Deactivated successfully.
Oct 10 23:22:00 np0005480824 podman[102331]: 2025-10-11 03:22:00.515535807 +0000 UTC m=+0.826781715 container remove 12a1b32753beaa546d6831ce3896adfa278b7a7b2f41c7f728d29d228bff4891 (image=quay.io/ceph/ceph:v18, name=keen_wilson, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:22:00 np0005480824 systemd[1]: libpod-conmon-12a1b32753beaa546d6831ce3896adfa278b7a7b2f41c7f728d29d228bff4891.scope: Deactivated successfully.
Oct 10 23:22:00 np0005480824 podman[103077]: 2025-10-11 03:22:00.63717363 +0000 UTC m=+0.038623838 container create 6cae86fdd68ecd858c1e9444a1648cc3d76f65e3c018d056990536c3add2028b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_golick, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:22:00 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Oct 10 23:22:00 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Oct 10 23:22:00 np0005480824 systemd[1]: Started libpod-conmon-6cae86fdd68ecd858c1e9444a1648cc3d76f65e3c018d056990536c3add2028b.scope.
Oct 10 23:22:00 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:22:00 np0005480824 podman[103077]: 2025-10-11 03:22:00.620076845 +0000 UTC m=+0.021527063 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:22:00 np0005480824 podman[103077]: 2025-10-11 03:22:00.722243556 +0000 UTC m=+0.123693774 container init 6cae86fdd68ecd858c1e9444a1648cc3d76f65e3c018d056990536c3add2028b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_golick, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 10 23:22:00 np0005480824 podman[103077]: 2025-10-11 03:22:00.727849012 +0000 UTC m=+0.129299210 container start 6cae86fdd68ecd858c1e9444a1648cc3d76f65e3c018d056990536c3add2028b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_golick, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:22:00 np0005480824 priceless_golick[103094]: 167 167
Oct 10 23:22:00 np0005480824 systemd[1]: libpod-6cae86fdd68ecd858c1e9444a1648cc3d76f65e3c018d056990536c3add2028b.scope: Deactivated successfully.
Oct 10 23:22:00 np0005480824 podman[103077]: 2025-10-11 03:22:00.73188477 +0000 UTC m=+0.133335008 container attach 6cae86fdd68ecd858c1e9444a1648cc3d76f65e3c018d056990536c3add2028b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_golick, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 10 23:22:00 np0005480824 podman[103077]: 2025-10-11 03:22:00.732723931 +0000 UTC m=+0.134174199 container died 6cae86fdd68ecd858c1e9444a1648cc3d76f65e3c018d056990536c3add2028b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_golick, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 10 23:22:00 np0005480824 systemd[1]: var-lib-containers-storage-overlay-3adb0bff32cd95c5c72d65a76a8fa157d817d6cf28cf6ab5e59a09ac6a125df6-merged.mount: Deactivated successfully.
Oct 10 23:22:00 np0005480824 podman[103077]: 2025-10-11 03:22:00.78212843 +0000 UTC m=+0.183578628 container remove 6cae86fdd68ecd858c1e9444a1648cc3d76f65e3c018d056990536c3add2028b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_golick, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:22:00 np0005480824 systemd[1]: libpod-conmon-6cae86fdd68ecd858c1e9444a1648cc3d76f65e3c018d056990536c3add2028b.scope: Deactivated successfully.
Oct 10 23:22:00 np0005480824 podman[103120]: 2025-10-11 03:22:00.923457772 +0000 UTC m=+0.041073589 container create b812738070c108eb877bca61965a5d56ffc17ecfb88d9ed630bd500d05034e6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_thompson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 10 23:22:00 np0005480824 systemd[1]: Started libpod-conmon-b812738070c108eb877bca61965a5d56ffc17ecfb88d9ed630bd500d05034e6b.scope.
Oct 10 23:22:00 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:22:00 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e40d03651c30698f6f271a4723e0a9e5096ea44e366b1ee6a2cb9fdeecfcf70/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:22:00 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e40d03651c30698f6f271a4723e0a9e5096ea44e366b1ee6a2cb9fdeecfcf70/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:22:00 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e40d03651c30698f6f271a4723e0a9e5096ea44e366b1ee6a2cb9fdeecfcf70/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:22:00 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e40d03651c30698f6f271a4723e0a9e5096ea44e366b1ee6a2cb9fdeecfcf70/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:22:00 np0005480824 podman[103120]: 2025-10-11 03:22:00.903357383 +0000 UTC m=+0.020973220 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:22:01 np0005480824 podman[103120]: 2025-10-11 03:22:01.005177336 +0000 UTC m=+0.122793233 container init b812738070c108eb877bca61965a5d56ffc17ecfb88d9ed630bd500d05034e6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:22:01 np0005480824 podman[103120]: 2025-10-11 03:22:01.016726506 +0000 UTC m=+0.134342323 container start b812738070c108eb877bca61965a5d56ffc17ecfb88d9ed630bd500d05034e6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 10 23:22:01 np0005480824 podman[103120]: 2025-10-11 03:22:01.021071221 +0000 UTC m=+0.138687128 container attach b812738070c108eb877bca61965a5d56ffc17ecfb88d9ed630bd500d05034e6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_thompson, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 10 23:22:01 np0005480824 ceph-mon[74326]: from='client.? 192.168.122.100:0/1259390569' entity='client.rgw.rgw.compute-0.bqunnq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct 10 23:22:01 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Oct 10 23:22:01 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Oct 10 23:22:01 np0005480824 python3[103167]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 92cfe4d4-4917-5be1-9d00-73758793a62b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:22:01 np0005480824 podman[103168]: 2025-10-11 03:22:01.538116346 +0000 UTC m=+0.068608827 container create 5a3a73e11b379b3a6f70dee4f5c2503ea057edf2ef86d85de9786735b8b5fa45 (image=quay.io/ceph/ceph:v18, name=silly_goldberg, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507)
Oct 10 23:22:01 np0005480824 systemd[1]: Started libpod-conmon-5a3a73e11b379b3a6f70dee4f5c2503ea057edf2ef86d85de9786735b8b5fa45.scope.
Oct 10 23:22:01 np0005480824 podman[103168]: 2025-10-11 03:22:01.507669197 +0000 UTC m=+0.038161668 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:22:01 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:22:01 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33eaad702d1a519c18e0bc31996cbe4c1cc67c4b3c1746ccd42e93072b7c6768/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:22:01 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33eaad702d1a519c18e0bc31996cbe4c1cc67c4b3c1746ccd42e93072b7c6768/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:22:01 np0005480824 podman[103168]: 2025-10-11 03:22:01.634352853 +0000 UTC m=+0.164845304 container init 5a3a73e11b379b3a6f70dee4f5c2503ea057edf2ef86d85de9786735b8b5fa45 (image=quay.io/ceph/ceph:v18, name=silly_goldberg, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 10 23:22:01 np0005480824 podman[103168]: 2025-10-11 03:22:01.641973908 +0000 UTC m=+0.172466359 container start 5a3a73e11b379b3a6f70dee4f5c2503ea057edf2ef86d85de9786735b8b5fa45 (image=quay.io/ceph/ceph:v18, name=silly_goldberg, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 10 23:22:01 np0005480824 podman[103168]: 2025-10-11 03:22:01.645241957 +0000 UTC m=+0.175734408 container attach 5a3a73e11b379b3a6f70dee4f5c2503ea057edf2ef86d85de9786735b8b5fa45 (image=quay.io/ceph/ceph:v18, name=silly_goldberg, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:22:01 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Oct 10 23:22:01 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Oct 10 23:22:01 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v121: 197 pgs: 1 unknown, 196 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s wr, 3 op/s
Oct 10 23:22:02 np0005480824 confident_thompson[103137]: {
Oct 10 23:22:02 np0005480824 confident_thompson[103137]:    "1d0d82ce-20ea-470d-959e-f67202028a60": {
Oct 10 23:22:02 np0005480824 confident_thompson[103137]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:22:02 np0005480824 confident_thompson[103137]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 10 23:22:02 np0005480824 confident_thompson[103137]:        "osd_id": 0,
Oct 10 23:22:02 np0005480824 confident_thompson[103137]:        "osd_uuid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:22:02 np0005480824 confident_thompson[103137]:        "type": "bluestore"
Oct 10 23:22:02 np0005480824 confident_thompson[103137]:    },
Oct 10 23:22:02 np0005480824 confident_thompson[103137]:    "6875119e-c210-4ad1-aca9-6a8084a5ecc8": {
Oct 10 23:22:02 np0005480824 confident_thompson[103137]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:22:02 np0005480824 confident_thompson[103137]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 10 23:22:02 np0005480824 confident_thompson[103137]:        "osd_id": 1,
Oct 10 23:22:02 np0005480824 confident_thompson[103137]:        "osd_uuid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:22:02 np0005480824 confident_thompson[103137]:        "type": "bluestore"
Oct 10 23:22:02 np0005480824 confident_thompson[103137]:    },
Oct 10 23:22:02 np0005480824 confident_thompson[103137]:    "e86945e8-6909-4584-9098-cee0dfe9add4": {
Oct 10 23:22:02 np0005480824 confident_thompson[103137]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:22:02 np0005480824 confident_thompson[103137]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 10 23:22:02 np0005480824 confident_thompson[103137]:        "osd_id": 2,
Oct 10 23:22:02 np0005480824 confident_thompson[103137]:        "osd_uuid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:22:02 np0005480824 confident_thompson[103137]:        "type": "bluestore"
Oct 10 23:22:02 np0005480824 confident_thompson[103137]:    }
Oct 10 23:22:02 np0005480824 confident_thompson[103137]: }
Oct 10 23:22:02 np0005480824 systemd[1]: libpod-b812738070c108eb877bca61965a5d56ffc17ecfb88d9ed630bd500d05034e6b.scope: Deactivated successfully.
Oct 10 23:22:02 np0005480824 systemd[1]: libpod-b812738070c108eb877bca61965a5d56ffc17ecfb88d9ed630bd500d05034e6b.scope: Consumed 1.028s CPU time.
Oct 10 23:22:02 np0005480824 podman[103235]: 2025-10-11 03:22:02.079982573 +0000 UTC m=+0.023286237 container died b812738070c108eb877bca61965a5d56ffc17ecfb88d9ed630bd500d05034e6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:22:02 np0005480824 systemd[1]: var-lib-containers-storage-overlay-7e40d03651c30698f6f271a4723e0a9e5096ea44e366b1ee6a2cb9fdeecfcf70-merged.mount: Deactivated successfully.
Oct 10 23:22:02 np0005480824 podman[103235]: 2025-10-11 03:22:02.126577684 +0000 UTC m=+0.069881308 container remove b812738070c108eb877bca61965a5d56ffc17ecfb88d9ed630bd500d05034e6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_thompson, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:22:02 np0005480824 systemd[1]: libpod-conmon-b812738070c108eb877bca61965a5d56ffc17ecfb88d9ed630bd500d05034e6b.scope: Deactivated successfully.
Oct 10 23:22:02 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 10 23:22:02 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:22:02 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 10 23:22:02 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:22:02 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 99f971c8-a8dc-48fc-9253-4eff8b5b2fab does not exist
Oct 10 23:22:02 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev ad891f72-d1cb-4d63-b865-170a6daac715 does not exist
Oct 10 23:22:02 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions", "format": "json"} v 0) v1
Oct 10 23:22:02 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1643022464' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Oct 10 23:22:02 np0005480824 silly_goldberg[103184]: 
Oct 10 23:22:02 np0005480824 systemd[1]: libpod-5a3a73e11b379b3a6f70dee4f5c2503ea057edf2ef86d85de9786735b8b5fa45.scope: Deactivated successfully.
Oct 10 23:22:02 np0005480824 silly_goldberg[103184]: {"mon":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"mgr":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"osd":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"mds":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"rgw":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"overall":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":7}}
Oct 10 23:22:02 np0005480824 podman[103168]: 2025-10-11 03:22:02.341489072 +0000 UTC m=+0.871981593 container died 5a3a73e11b379b3a6f70dee4f5c2503ea057edf2ef86d85de9786735b8b5fa45 (image=quay.io/ceph/ceph:v18, name=silly_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:22:02 np0005480824 systemd[1]: var-lib-containers-storage-overlay-33eaad702d1a519c18e0bc31996cbe4c1cc67c4b3c1746ccd42e93072b7c6768-merged.mount: Deactivated successfully.
Oct 10 23:22:02 np0005480824 podman[103168]: 2025-10-11 03:22:02.397310767 +0000 UTC m=+0.927803238 container remove 5a3a73e11b379b3a6f70dee4f5c2503ea057edf2ef86d85de9786735b8b5fa45 (image=quay.io/ceph/ceph:v18, name=silly_goldberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 10 23:22:02 np0005480824 systemd[1]: libpod-conmon-5a3a73e11b379b3a6f70dee4f5c2503ea057edf2ef86d85de9786735b8b5fa45.scope: Deactivated successfully.
Oct 10 23:22:02 np0005480824 ceph-mgr[74617]: [progress INFO root] Writing back 13 completed events
Oct 10 23:22:02 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 10 23:22:02 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:22:03 np0005480824 podman[103483]: 2025-10-11 03:22:03.152920034 +0000 UTC m=+0.062674273 container exec a848fe58749db588a5a4b8471e0c9916b9e4a1ccc899f04343e6491a43c45c05 (image=quay.io/ceph/ceph:v18, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mon-compute-0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 10 23:22:03 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:22:03 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:22:03 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:22:03 np0005480824 podman[103483]: 2025-10-11 03:22:03.245944123 +0000 UTC m=+0.155698362 container exec_died a848fe58749db588a5a4b8471e0c9916b9e4a1ccc899f04343e6491a43c45c05 (image=quay.io/ceph/ceph:v18, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mon-compute-0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:22:03 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 5.a scrub starts
Oct 10 23:22:03 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 5.a scrub ok
Oct 10 23:22:03 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:22:03 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v122: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 117 KiB/s rd, 5.7 KiB/s wr, 264 op/s
Oct 10 23:22:03 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 10 23:22:03 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:22:03 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 10 23:22:03 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:22:03 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:22:03 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:22:03 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 10 23:22:03 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:22:03 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 10 23:22:03 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:22:03 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 91075990-a57a-46fa-bb64-0224e4314824 does not exist
Oct 10 23:22:03 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 53dd7e79-f73c-4bf7-9984-8f587b4e4c53 does not exist
Oct 10 23:22:03 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev f5ee2b66-aa1c-4d10-95f2-c71783358da6 does not exist
Oct 10 23:22:03 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 10 23:22:03 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 23:22:03 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 10 23:22:03 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:22:03 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:22:03 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:22:04 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:22:04 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:22:04 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:22:04 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:22:04 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:22:04 np0005480824 podman[103782]: 2025-10-11 03:22:04.501288683 +0000 UTC m=+0.039816708 container create ba333f6e5fe78bbea3318b47980218f32a817d96748600a788fc68dd7afafb28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_noether, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:22:04 np0005480824 systemd[1]: Started libpod-conmon-ba333f6e5fe78bbea3318b47980218f32a817d96748600a788fc68dd7afafb28.scope.
Oct 10 23:22:04 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:22:04 np0005480824 podman[103782]: 2025-10-11 03:22:04.485237143 +0000 UTC m=+0.023765168 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:22:04 np0005480824 podman[103782]: 2025-10-11 03:22:04.582685469 +0000 UTC m=+0.121213514 container init ba333f6e5fe78bbea3318b47980218f32a817d96748600a788fc68dd7afafb28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_noether, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 10 23:22:04 np0005480824 podman[103782]: 2025-10-11 03:22:04.588685835 +0000 UTC m=+0.127213840 container start ba333f6e5fe78bbea3318b47980218f32a817d96748600a788fc68dd7afafb28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 10 23:22:04 np0005480824 podman[103782]: 2025-10-11 03:22:04.591468543 +0000 UTC m=+0.129996558 container attach ba333f6e5fe78bbea3318b47980218f32a817d96748600a788fc68dd7afafb28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 10 23:22:04 np0005480824 funny_noether[103799]: 167 167
Oct 10 23:22:04 np0005480824 systemd[1]: libpod-ba333f6e5fe78bbea3318b47980218f32a817d96748600a788fc68dd7afafb28.scope: Deactivated successfully.
Oct 10 23:22:04 np0005480824 podman[103782]: 2025-10-11 03:22:04.595238174 +0000 UTC m=+0.133766179 container died ba333f6e5fe78bbea3318b47980218f32a817d96748600a788fc68dd7afafb28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 10 23:22:04 np0005480824 systemd[1]: var-lib-containers-storage-overlay-012d63df49bd9277dab93dc273042e14871509502f0d3483a147f5e2fe974e9f-merged.mount: Deactivated successfully.
Oct 10 23:22:04 np0005480824 podman[103782]: 2025-10-11 03:22:04.64076359 +0000 UTC m=+0.179291605 container remove ba333f6e5fe78bbea3318b47980218f32a817d96748600a788fc68dd7afafb28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 10 23:22:04 np0005480824 systemd[1]: libpod-conmon-ba333f6e5fe78bbea3318b47980218f32a817d96748600a788fc68dd7afafb28.scope: Deactivated successfully.
Oct 10 23:22:04 np0005480824 podman[103823]: 2025-10-11 03:22:04.815947913 +0000 UTC m=+0.065199384 container create 21fd711fcb90d5a03c0b4b81e53962ac19f482b5efa93ad723348e789ac9807b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_kilby, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:22:04 np0005480824 systemd[1]: Started libpod-conmon-21fd711fcb90d5a03c0b4b81e53962ac19f482b5efa93ad723348e789ac9807b.scope.
Oct 10 23:22:04 np0005480824 podman[103823]: 2025-10-11 03:22:04.787978384 +0000 UTC m=+0.037229925 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:22:04 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:22:04 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72b2be21adc5e06fb93762d12732e835cac7862798a23e7fe87147603ac7af9d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:22:04 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72b2be21adc5e06fb93762d12732e835cac7862798a23e7fe87147603ac7af9d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:22:04 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72b2be21adc5e06fb93762d12732e835cac7862798a23e7fe87147603ac7af9d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:22:04 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72b2be21adc5e06fb93762d12732e835cac7862798a23e7fe87147603ac7af9d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:22:04 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72b2be21adc5e06fb93762d12732e835cac7862798a23e7fe87147603ac7af9d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 23:22:04 np0005480824 podman[103823]: 2025-10-11 03:22:04.908408638 +0000 UTC m=+0.157660109 container init 21fd711fcb90d5a03c0b4b81e53962ac19f482b5efa93ad723348e789ac9807b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_kilby, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 10 23:22:04 np0005480824 podman[103823]: 2025-10-11 03:22:04.92167339 +0000 UTC m=+0.170924831 container start 21fd711fcb90d5a03c0b4b81e53962ac19f482b5efa93ad723348e789ac9807b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_kilby, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:22:04 np0005480824 podman[103823]: 2025-10-11 03:22:04.927762018 +0000 UTC m=+0.177013479 container attach 21fd711fcb90d5a03c0b4b81e53962ac19f482b5efa93ad723348e789ac9807b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_kilby, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 10 23:22:05 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 3.1c deep-scrub starts
Oct 10 23:22:05 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 3.1c deep-scrub ok
Oct 10 23:22:05 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v123: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 91 KiB/s rd, 4.4 KiB/s wr, 206 op/s
Oct 10 23:22:06 np0005480824 quirky_kilby[103840]: --> passed data devices: 0 physical, 3 LVM
Oct 10 23:22:06 np0005480824 quirky_kilby[103840]: --> relative data size: 1.0
Oct 10 23:22:06 np0005480824 quirky_kilby[103840]: --> All data devices are unavailable
Oct 10 23:22:06 np0005480824 systemd[1]: libpod-21fd711fcb90d5a03c0b4b81e53962ac19f482b5efa93ad723348e789ac9807b.scope: Deactivated successfully.
Oct 10 23:22:06 np0005480824 systemd[1]: libpod-21fd711fcb90d5a03c0b4b81e53962ac19f482b5efa93ad723348e789ac9807b.scope: Consumed 1.149s CPU time.
Oct 10 23:22:06 np0005480824 podman[103823]: 2025-10-11 03:22:06.122393594 +0000 UTC m=+1.371645095 container died 21fd711fcb90d5a03c0b4b81e53962ac19f482b5efa93ad723348e789ac9807b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_kilby, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:22:06 np0005480824 systemd[1]: var-lib-containers-storage-overlay-72b2be21adc5e06fb93762d12732e835cac7862798a23e7fe87147603ac7af9d-merged.mount: Deactivated successfully.
Oct 10 23:22:06 np0005480824 podman[103823]: 2025-10-11 03:22:06.328126489 +0000 UTC m=+1.577377960 container remove 21fd711fcb90d5a03c0b4b81e53962ac19f482b5efa93ad723348e789ac9807b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_kilby, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Oct 10 23:22:06 np0005480824 systemd[1]: libpod-conmon-21fd711fcb90d5a03c0b4b81e53962ac19f482b5efa93ad723348e789ac9807b.scope: Deactivated successfully.
Oct 10 23:22:07 np0005480824 podman[104022]: 2025-10-11 03:22:07.117833204 +0000 UTC m=+0.072284097 container create 9a0af962fa5d1f22d2461e1bdcb9e30c2f745cf6b17d632b0a71e1e5a4c181d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_chandrasekhar, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 10 23:22:07 np0005480824 systemd[1]: Started libpod-conmon-9a0af962fa5d1f22d2461e1bdcb9e30c2f745cf6b17d632b0a71e1e5a4c181d2.scope.
Oct 10 23:22:07 np0005480824 podman[104022]: 2025-10-11 03:22:07.091478544 +0000 UTC m=+0.045929487 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:22:07 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:22:07 np0005480824 podman[104022]: 2025-10-11 03:22:07.206772213 +0000 UTC m=+0.161223156 container init 9a0af962fa5d1f22d2461e1bdcb9e30c2f745cf6b17d632b0a71e1e5a4c181d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 10 23:22:07 np0005480824 podman[104022]: 2025-10-11 03:22:07.214417859 +0000 UTC m=+0.168868742 container start 9a0af962fa5d1f22d2461e1bdcb9e30c2f745cf6b17d632b0a71e1e5a4c181d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_chandrasekhar, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:22:07 np0005480824 podman[104022]: 2025-10-11 03:22:07.220546397 +0000 UTC m=+0.174997340 container attach 9a0af962fa5d1f22d2461e1bdcb9e30c2f745cf6b17d632b0a71e1e5a4c181d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_chandrasekhar, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 10 23:22:07 np0005480824 strange_chandrasekhar[104039]: 167 167
Oct 10 23:22:07 np0005480824 systemd[1]: libpod-9a0af962fa5d1f22d2461e1bdcb9e30c2f745cf6b17d632b0a71e1e5a4c181d2.scope: Deactivated successfully.
Oct 10 23:22:07 np0005480824 podman[104022]: 2025-10-11 03:22:07.22478779 +0000 UTC m=+0.179238683 container died 9a0af962fa5d1f22d2461e1bdcb9e30c2f745cf6b17d632b0a71e1e5a4c181d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 10 23:22:07 np0005480824 systemd[1]: var-lib-containers-storage-overlay-5696d2f5d83bbe02d5fdbe0ba991628fc0a68681d284dcd1f8858b5a148adc08-merged.mount: Deactivated successfully.
Oct 10 23:22:07 np0005480824 podman[104022]: 2025-10-11 03:22:07.267882127 +0000 UTC m=+0.222332990 container remove 9a0af962fa5d1f22d2461e1bdcb9e30c2f745cf6b17d632b0a71e1e5a4c181d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_chandrasekhar, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 10 23:22:07 np0005480824 systemd[1]: libpod-conmon-9a0af962fa5d1f22d2461e1bdcb9e30c2f745cf6b17d632b0a71e1e5a4c181d2.scope: Deactivated successfully.
Oct 10 23:22:07 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 5.b deep-scrub starts
Oct 10 23:22:07 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 5.b deep-scrub ok
Oct 10 23:22:07 np0005480824 podman[104063]: 2025-10-11 03:22:07.45826888 +0000 UTC m=+0.048099499 container create 6a8942363896f4f23ac44203054e4061b266eb0c4e017d502a8fc82c9139f908 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_ritchie, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 10 23:22:07 np0005480824 systemd[1]: Started libpod-conmon-6a8942363896f4f23ac44203054e4061b266eb0c4e017d502a8fc82c9139f908.scope.
Oct 10 23:22:07 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:22:07 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce5303a42617cb06731822fc2d83f342183755d2c1f37bae7c930823ff54872b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:22:07 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce5303a42617cb06731822fc2d83f342183755d2c1f37bae7c930823ff54872b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:22:07 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce5303a42617cb06731822fc2d83f342183755d2c1f37bae7c930823ff54872b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:22:07 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce5303a42617cb06731822fc2d83f342183755d2c1f37bae7c930823ff54872b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:22:07 np0005480824 podman[104063]: 2025-10-11 03:22:07.441622375 +0000 UTC m=+0.031452974 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:22:07 np0005480824 podman[104063]: 2025-10-11 03:22:07.544397891 +0000 UTC m=+0.134228560 container init 6a8942363896f4f23ac44203054e4061b266eb0c4e017d502a8fc82c9139f908 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_ritchie, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:22:07 np0005480824 podman[104063]: 2025-10-11 03:22:07.555410758 +0000 UTC m=+0.145241357 container start 6a8942363896f4f23ac44203054e4061b266eb0c4e017d502a8fc82c9139f908 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_ritchie, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:22:07 np0005480824 podman[104063]: 2025-10-11 03:22:07.559739703 +0000 UTC m=+0.149570292 container attach 6a8942363896f4f23ac44203054e4061b266eb0c4e017d502a8fc82c9139f908 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 10 23:22:07 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Oct 10 23:22:07 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Oct 10 23:22:07 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v124: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 3.9 KiB/s wr, 182 op/s
Oct 10 23:22:08 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Oct 10 23:22:08 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]: {
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:    "0": [
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:        {
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:            "devices": [
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:                "/dev/loop3"
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:            ],
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:            "lv_name": "ceph_lv0",
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:            "lv_size": "21470642176",
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0d82ce-20ea-470d-959e-f67202028a60,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:            "lv_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:            "name": "ceph_lv0",
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:            "tags": {
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:                "ceph.block_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:                "ceph.cluster_name": "ceph",
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:                "ceph.crush_device_class": "",
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:                "ceph.encrypted": "0",
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:                "ceph.osd_fsid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:                "ceph.osd_id": "0",
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:                "ceph.type": "block",
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:                "ceph.vdo": "0"
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:            },
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:            "type": "block",
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:            "vg_name": "ceph_vg0"
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:        }
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:    ],
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:    "1": [
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:        {
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:            "devices": [
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:                "/dev/loop4"
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:            ],
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:            "lv_name": "ceph_lv1",
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:            "lv_size": "21470642176",
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6875119e-c210-4ad1-aca9-6a8084a5ecc8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:            "lv_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:            "name": "ceph_lv1",
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:            "tags": {
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:                "ceph.block_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:                "ceph.cluster_name": "ceph",
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:                "ceph.crush_device_class": "",
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:                "ceph.encrypted": "0",
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:                "ceph.osd_fsid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:                "ceph.osd_id": "1",
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:                "ceph.type": "block",
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:                "ceph.vdo": "0"
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:            },
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:            "type": "block",
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:            "vg_name": "ceph_vg1"
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:        }
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:    ],
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:    "2": [
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:        {
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:            "devices": [
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:                "/dev/loop5"
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:            ],
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:            "lv_name": "ceph_lv2",
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:            "lv_size": "21470642176",
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e86945e8-6909-4584-9098-cee0dfe9add4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:            "lv_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:            "name": "ceph_lv2",
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:            "tags": {
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:                "ceph.block_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:                "ceph.cluster_name": "ceph",
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:                "ceph.crush_device_class": "",
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:                "ceph.encrypted": "0",
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:                "ceph.osd_fsid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:                "ceph.osd_id": "2",
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:                "ceph.type": "block",
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:                "ceph.vdo": "0"
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:            },
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:            "type": "block",
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:            "vg_name": "ceph_vg2"
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:        }
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]:    ]
Oct 10 23:22:08 np0005480824 dazzling_ritchie[104079]: }
Oct 10 23:22:08 np0005480824 systemd[1]: libpod-6a8942363896f4f23ac44203054e4061b266eb0c4e017d502a8fc82c9139f908.scope: Deactivated successfully.
Oct 10 23:22:08 np0005480824 podman[104063]: 2025-10-11 03:22:08.320229419 +0000 UTC m=+0.910059998 container died 6a8942363896f4f23ac44203054e4061b266eb0c4e017d502a8fc82c9139f908 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_ritchie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:22:08 np0005480824 systemd[1]: var-lib-containers-storage-overlay-ce5303a42617cb06731822fc2d83f342183755d2c1f37bae7c930823ff54872b-merged.mount: Deactivated successfully.
Oct 10 23:22:08 np0005480824 podman[104063]: 2025-10-11 03:22:08.382581332 +0000 UTC m=+0.972411931 container remove 6a8942363896f4f23ac44203054e4061b266eb0c4e017d502a8fc82c9139f908 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:22:08 np0005480824 systemd[1]: libpod-conmon-6a8942363896f4f23ac44203054e4061b266eb0c4e017d502a8fc82c9139f908.scope: Deactivated successfully.
Oct 10 23:22:08 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:22:08 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 7.b scrub starts
Oct 10 23:22:08 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 7.b scrub ok
Oct 10 23:22:09 np0005480824 podman[104239]: 2025-10-11 03:22:09.117769333 +0000 UTC m=+0.050745223 container create 662b5666af57f6ce134aa9f2e30f018b8bbd801921dfb72c03bfdac526333aa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 10 23:22:09 np0005480824 systemd[1]: Started libpod-conmon-662b5666af57f6ce134aa9f2e30f018b8bbd801921dfb72c03bfdac526333aa4.scope.
Oct 10 23:22:09 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:22:09 np0005480824 podman[104239]: 2025-10-11 03:22:09.094248771 +0000 UTC m=+0.027224671 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:22:09 np0005480824 podman[104239]: 2025-10-11 03:22:09.196047623 +0000 UTC m=+0.129023493 container init 662b5666af57f6ce134aa9f2e30f018b8bbd801921dfb72c03bfdac526333aa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 10 23:22:09 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Oct 10 23:22:09 np0005480824 podman[104239]: 2025-10-11 03:22:09.207541102 +0000 UTC m=+0.140516972 container start 662b5666af57f6ce134aa9f2e30f018b8bbd801921dfb72c03bfdac526333aa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_volhard, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:22:09 np0005480824 interesting_volhard[104256]: 167 167
Oct 10 23:22:09 np0005480824 systemd[1]: libpod-662b5666af57f6ce134aa9f2e30f018b8bbd801921dfb72c03bfdac526333aa4.scope: Deactivated successfully.
Oct 10 23:22:09 np0005480824 podman[104239]: 2025-10-11 03:22:09.215627869 +0000 UTC m=+0.148603719 container attach 662b5666af57f6ce134aa9f2e30f018b8bbd801921dfb72c03bfdac526333aa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_volhard, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:22:09 np0005480824 podman[104239]: 2025-10-11 03:22:09.216081689 +0000 UTC m=+0.149057549 container died 662b5666af57f6ce134aa9f2e30f018b8bbd801921dfb72c03bfdac526333aa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:22:09 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Oct 10 23:22:09 np0005480824 systemd[1]: var-lib-containers-storage-overlay-a0c0e8e49e1777514b0635155ffc14572f9a1452657d8079d0960e53cdec3589-merged.mount: Deactivated successfully.
Oct 10 23:22:09 np0005480824 podman[104239]: 2025-10-11 03:22:09.266851483 +0000 UTC m=+0.199827353 container remove 662b5666af57f6ce134aa9f2e30f018b8bbd801921dfb72c03bfdac526333aa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_volhard, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 10 23:22:09 np0005480824 systemd[1]: libpod-conmon-662b5666af57f6ce134aa9f2e30f018b8bbd801921dfb72c03bfdac526333aa4.scope: Deactivated successfully.
Oct 10 23:22:09 np0005480824 podman[104281]: 2025-10-11 03:22:09.475476938 +0000 UTC m=+0.080002223 container create 7e95e25ec4075e8fdf634d934f171c7cb4cab7aa2b69a720db80ad26d80f64e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_chandrasekhar, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:22:09 np0005480824 systemd[1]: Started libpod-conmon-7e95e25ec4075e8fdf634d934f171c7cb4cab7aa2b69a720db80ad26d80f64e6.scope.
Oct 10 23:22:09 np0005480824 podman[104281]: 2025-10-11 03:22:09.436715257 +0000 UTC m=+0.041240552 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:22:09 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:22:09 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a6872bf8fdc4f2b86e61dd6f6391878da0ca67bedf0c7f60e84e4753fd4b945/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:22:09 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a6872bf8fdc4f2b86e61dd6f6391878da0ca67bedf0c7f60e84e4753fd4b945/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:22:09 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a6872bf8fdc4f2b86e61dd6f6391878da0ca67bedf0c7f60e84e4753fd4b945/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:22:09 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a6872bf8fdc4f2b86e61dd6f6391878da0ca67bedf0c7f60e84e4753fd4b945/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:22:09 np0005480824 podman[104281]: 2025-10-11 03:22:09.586726649 +0000 UTC m=+0.191251914 container init 7e95e25ec4075e8fdf634d934f171c7cb4cab7aa2b69a720db80ad26d80f64e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 10 23:22:09 np0005480824 podman[104281]: 2025-10-11 03:22:09.593885343 +0000 UTC m=+0.198410588 container start 7e95e25ec4075e8fdf634d934f171c7cb4cab7aa2b69a720db80ad26d80f64e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_chandrasekhar, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 10 23:22:09 np0005480824 podman[104281]: 2025-10-11 03:22:09.597715616 +0000 UTC m=+0.202240881 container attach 7e95e25ec4075e8fdf634d934f171c7cb4cab7aa2b69a720db80ad26d80f64e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 10 23:22:09 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 7.d scrub starts
Oct 10 23:22:09 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 7.d scrub ok
Oct 10 23:22:09 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v125: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 3.4 KiB/s wr, 158 op/s
Oct 10 23:22:10 np0005480824 interesting_chandrasekhar[104298]: {
Oct 10 23:22:10 np0005480824 interesting_chandrasekhar[104298]:    "1d0d82ce-20ea-470d-959e-f67202028a60": {
Oct 10 23:22:10 np0005480824 interesting_chandrasekhar[104298]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:22:10 np0005480824 interesting_chandrasekhar[104298]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 10 23:22:10 np0005480824 interesting_chandrasekhar[104298]:        "osd_id": 0,
Oct 10 23:22:10 np0005480824 interesting_chandrasekhar[104298]:        "osd_uuid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:22:10 np0005480824 interesting_chandrasekhar[104298]:        "type": "bluestore"
Oct 10 23:22:10 np0005480824 interesting_chandrasekhar[104298]:    },
Oct 10 23:22:10 np0005480824 interesting_chandrasekhar[104298]:    "6875119e-c210-4ad1-aca9-6a8084a5ecc8": {
Oct 10 23:22:10 np0005480824 interesting_chandrasekhar[104298]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:22:10 np0005480824 interesting_chandrasekhar[104298]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 10 23:22:10 np0005480824 interesting_chandrasekhar[104298]:        "osd_id": 1,
Oct 10 23:22:10 np0005480824 interesting_chandrasekhar[104298]:        "osd_uuid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:22:10 np0005480824 interesting_chandrasekhar[104298]:        "type": "bluestore"
Oct 10 23:22:10 np0005480824 interesting_chandrasekhar[104298]:    },
Oct 10 23:22:10 np0005480824 interesting_chandrasekhar[104298]:    "e86945e8-6909-4584-9098-cee0dfe9add4": {
Oct 10 23:22:10 np0005480824 interesting_chandrasekhar[104298]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:22:10 np0005480824 interesting_chandrasekhar[104298]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 10 23:22:10 np0005480824 interesting_chandrasekhar[104298]:        "osd_id": 2,
Oct 10 23:22:10 np0005480824 interesting_chandrasekhar[104298]:        "osd_uuid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:22:10 np0005480824 interesting_chandrasekhar[104298]:        "type": "bluestore"
Oct 10 23:22:10 np0005480824 interesting_chandrasekhar[104298]:    }
Oct 10 23:22:10 np0005480824 interesting_chandrasekhar[104298]: }
Oct 10 23:22:10 np0005480824 systemd[1]: libpod-7e95e25ec4075e8fdf634d934f171c7cb4cab7aa2b69a720db80ad26d80f64e6.scope: Deactivated successfully.
Oct 10 23:22:10 np0005480824 podman[104281]: 2025-10-11 03:22:10.608232232 +0000 UTC m=+1.212757487 container died 7e95e25ec4075e8fdf634d934f171c7cb4cab7aa2b69a720db80ad26d80f64e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 10 23:22:10 np0005480824 systemd[1]: libpod-7e95e25ec4075e8fdf634d934f171c7cb4cab7aa2b69a720db80ad26d80f64e6.scope: Consumed 1.024s CPU time.
Oct 10 23:22:10 np0005480824 systemd[1]: var-lib-containers-storage-overlay-0a6872bf8fdc4f2b86e61dd6f6391878da0ca67bedf0c7f60e84e4753fd4b945-merged.mount: Deactivated successfully.
Oct 10 23:22:10 np0005480824 podman[104281]: 2025-10-11 03:22:10.677454932 +0000 UTC m=+1.281980207 container remove 7e95e25ec4075e8fdf634d934f171c7cb4cab7aa2b69a720db80ad26d80f64e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_chandrasekhar, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:22:10 np0005480824 systemd[1]: libpod-conmon-7e95e25ec4075e8fdf634d934f171c7cb4cab7aa2b69a720db80ad26d80f64e6.scope: Deactivated successfully.
Oct 10 23:22:10 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 10 23:22:10 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:22:10 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 10 23:22:10 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:22:10 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 8f34617b-a10c-4dd1-a221-02bdd6a72fdc does not exist
Oct 10 23:22:10 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev c9323610-81e8-4c16-8b0a-11dae92bf96b does not exist
Oct 10 23:22:11 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 5.d scrub starts
Oct 10 23:22:11 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 5.d scrub ok
Oct 10 23:22:11 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:22:11 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:22:11 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v126: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 2.9 KiB/s wr, 135 op/s
Oct 10 23:22:13 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:22:13 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Oct 10 23:22:13 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Oct 10 23:22:13 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v127: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 2.8 KiB/s wr, 132 op/s
Oct 10 23:22:14 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Oct 10 23:22:14 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Oct 10 23:22:15 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Oct 10 23:22:15 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Oct 10 23:22:15 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Oct 10 23:22:15 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Oct 10 23:22:15 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v128: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:22:16 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Oct 10 23:22:16 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Oct 10 23:22:17 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Oct 10 23:22:17 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Oct 10 23:22:17 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v129: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:22:18 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:22:18 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 7.16 deep-scrub starts
Oct 10 23:22:18 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 7.16 deep-scrub ok
Oct 10 23:22:19 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 5.e scrub starts
Oct 10 23:22:19 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 5.e scrub ok
Oct 10 23:22:19 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Oct 10 23:22:19 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Oct 10 23:22:19 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v130: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:22:20 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 6.a scrub starts
Oct 10 23:22:20 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 6.a scrub ok
Oct 10 23:22:21 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v131: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:22:23 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 6.10 deep-scrub starts
Oct 10 23:22:23 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 6.10 deep-scrub ok
Oct 10 23:22:23 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:22:23 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v132: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:22:24 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Oct 10 23:22:24 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Oct 10 23:22:25 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Oct 10 23:22:25 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Oct 10 23:22:25 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v133: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:22:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Optimize plan auto_2025-10-11_03:22:27
Oct 10 23:22:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 23:22:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] do_upmap
Oct 10 23:22:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.control', 'backups', '.rgw.root', 'volumes', 'images', 'default.rgw.meta', 'vms']
Oct 10 23:22:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] prepared 0/10 changes
Oct 10 23:22:27 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v134: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:22:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:22:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:22:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:22:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:22:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:22:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:22:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 23:22:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 23:22:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:22:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:22:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:22:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:22:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:22:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:22:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:22:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:22:28 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:22:29 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 7.1e deep-scrub starts
Oct 10 23:22:29 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 7.1e deep-scrub ok
Oct 10 23:22:29 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v135: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:22:30 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Oct 10 23:22:30 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Oct 10 23:22:31 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v136: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:22:32 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Oct 10 23:22:32 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Oct 10 23:22:32 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Oct 10 23:22:32 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Oct 10 23:22:33 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Oct 10 23:22:33 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Oct 10 23:22:33 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:22:33 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 23:22:33 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:22:33 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 23:22:33 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:22:33 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:22:33 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:22:33 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:22:33 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:22:33 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:22:33 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:22:33 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:22:33 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:22:33 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 23:22:33 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:22:33 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:22:33 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:22:33 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 1)
Oct 10 23:22:33 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:22:33 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 1)
Oct 10 23:22:33 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:22:33 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 10 23:22:33 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:22:33 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Oct 10 23:22:33 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0) v1
Oct 10 23:22:33 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Oct 10 23:22:33 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v137: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:22:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Oct 10 23:22:34 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Oct 10 23:22:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Oct 10 23:22:34 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Oct 10 23:22:34 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Oct 10 23:22:34 np0005480824 ceph-mgr[74617]: [progress INFO root] update: starting ev 73abb65b-90e6-47c7-a97b-3b7f3e9eafd7 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Oct 10 23:22:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0) v1
Oct 10 23:22:34 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Oct 10 23:22:34 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 6.12 scrub starts
Oct 10 23:22:34 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 6.12 scrub ok
Oct 10 23:22:34 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Oct 10 23:22:34 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Oct 10 23:22:35 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Oct 10 23:22:35 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 6.16 scrub starts
Oct 10 23:22:35 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 6.16 scrub ok
Oct 10 23:22:35 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Oct 10 23:22:35 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Oct 10 23:22:35 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Oct 10 23:22:35 np0005480824 ceph-mgr[74617]: [progress INFO root] update: starting ev a1e57117-068f-4c8e-a5ee-65c3940f98e0 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Oct 10 23:22:35 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0) v1
Oct 10 23:22:35 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Oct 10 23:22:35 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Oct 10 23:22:35 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Oct 10 23:22:35 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Oct 10 23:22:35 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Oct 10 23:22:35 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v140: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:22:35 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 10 23:22:35 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 10 23:22:35 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 10 23:22:35 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 10 23:22:36 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Oct 10 23:22:36 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Oct 10 23:22:36 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Oct 10 23:22:36 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Oct 10 23:22:36 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Oct 10 23:22:36 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Oct 10 23:22:36 np0005480824 ceph-mgr[74617]: [progress INFO root] update: starting ev 83c8cc05-a3e1-4c0a-97b7-dccb45836ce0 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Oct 10 23:22:36 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0) v1
Oct 10 23:22:36 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct 10 23:22:36 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Oct 10 23:22:36 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Oct 10 23:22:36 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 10 23:22:36 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 10 23:22:36 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Oct 10 23:22:36 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Oct 10 23:22:36 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Oct 10 23:22:36 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct 10 23:22:36 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Oct 10 23:22:36 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Oct 10 23:22:36 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 56 pg[9.0( v 53'585 (0'0,53'585] local-lis/les=47/48 n=209 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=56 pruub=14.631576538s) [1] r=0 lpr=56 pi=[47,56)/1 crt=53'585 lcod 53'584 mlcod 53'584 active pruub 135.675842285s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:36 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 56 pg[8.0( v 46'4 (0'0,46'4] local-lis/les=45/46 n=4 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=56 pruub=12.622286797s) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 46'3 mlcod 46'3 active pruub 133.666778564s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:36 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 56 pg[8.0( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=56 pruub=12.622286797s) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 46'3 mlcod 0'0 unknown pruub 133.666778564s@ mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:36 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 56 pg[9.0( v 53'585 lc 0'0 (0'0,53'585] local-lis/les=47/48 n=6 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=56 pruub=14.631576538s) [1] r=0 lpr=56 pi=[47,56)/1 crt=53'585 lcod 53'584 mlcod 0'0 unknown pruub 135.675842285s@ mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:37 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 6.18 scrub starts
Oct 10 23:22:37 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 6.18 scrub ok
Oct 10 23:22:37 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Oct 10 23:22:37 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Oct 10 23:22:37 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Oct 10 23:22:37 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Oct 10 23:22:37 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Oct 10 23:22:37 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Oct 10 23:22:37 np0005480824 ceph-mgr[74617]: [progress INFO root] update: starting ev 4c928af1-6626-473d-8a42-906a8ede8754 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Oct 10 23:22:37 np0005480824 ceph-mgr[74617]: [progress INFO root] complete: finished ev 73abb65b-90e6-47c7-a97b-3b7f3e9eafd7 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Oct 10 23:22:37 np0005480824 ceph-mgr[74617]: [progress INFO root] Completed event 73abb65b-90e6-47c7-a97b-3b7f3e9eafd7 (PG autoscaler increasing pool 8 PGs from 1 to 32) in 3 seconds
Oct 10 23:22:37 np0005480824 ceph-mgr[74617]: [progress INFO root] complete: finished ev a1e57117-068f-4c8e-a5ee-65c3940f98e0 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Oct 10 23:22:37 np0005480824 ceph-mgr[74617]: [progress INFO root] Completed event a1e57117-068f-4c8e-a5ee-65c3940f98e0 (PG autoscaler increasing pool 9 PGs from 1 to 32) in 2 seconds
Oct 10 23:22:37 np0005480824 ceph-mgr[74617]: [progress INFO root] complete: finished ev 83c8cc05-a3e1-4c0a-97b7-dccb45836ce0 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Oct 10 23:22:37 np0005480824 ceph-mgr[74617]: [progress INFO root] Completed event 83c8cc05-a3e1-4c0a-97b7-dccb45836ce0 (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 seconds
Oct 10 23:22:37 np0005480824 ceph-mgr[74617]: [progress INFO root] complete: finished ev 4c928af1-6626-473d-8a42-906a8ede8754 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Oct 10 23:22:37 np0005480824 ceph-mgr[74617]: [progress INFO root] Completed event 4c928af1-6626-473d-8a42-906a8ede8754 (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[9.15( v 53'585 lc 0'0 (0'0,53'585] local-lis/les=47/48 n=6 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=53'585 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[8.14( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[9.14( v 53'585 lc 0'0 (0'0,53'585] local-lis/les=47/48 n=6 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=53'585 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[8.15( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[8.16( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[9.17( v 53'585 lc 0'0 (0'0,53'585] local-lis/les=47/48 n=6 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=53'585 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[8.17( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[9.16( v 53'585 lc 0'0 (0'0,53'585] local-lis/les=47/48 n=6 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=53'585 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[8.10( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[9.11( v 53'585 lc 0'0 (0'0,53'585] local-lis/les=47/48 n=7 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=53'585 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[8.1( v 46'4 (0'0,46'4] local-lis/les=45/46 n=1 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[8.2( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=1 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[9.3( v 53'585 lc 0'0 (0'0,53'585] local-lis/les=47/48 n=7 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=53'585 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[8.3( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=1 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[9.2( v 53'585 lc 0'0 (0'0,53'585] local-lis/les=47/48 n=7 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=53'585 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[9.d( v 53'585 lc 0'0 (0'0,53'585] local-lis/les=47/48 n=7 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=53'585 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[8.c( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[8.d( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[9.c( v 53'585 lc 0'0 (0'0,53'585] local-lis/les=47/48 n=7 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=53'585 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[8.e( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[9.f( v 53'585 lc 0'0 (0'0,53'585] local-lis/les=47/48 n=7 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=53'585 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[8.8( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[9.9( v 53'585 lc 0'0 (0'0,53'585] local-lis/les=47/48 n=7 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=53'585 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[8.a( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[9.b( v 53'585 lc 0'0 (0'0,53'585] local-lis/les=47/48 n=7 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=53'585 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[8.f( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[9.e( v 53'585 lc 0'0 (0'0,53'585] local-lis/les=47/48 n=7 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=53'585 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[8.b( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[9.a( v 53'585 lc 0'0 (0'0,53'585] local-lis/les=47/48 n=7 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=53'585 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[9.8( v 53'585 lc 0'0 (0'0,53'585] local-lis/les=47/48 n=7 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=53'585 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[8.9( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[9.1( v 53'585 lc 0'0 (0'0,53'585] local-lis/les=47/48 n=7 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=53'585 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[9.6( v 53'585 lc 0'0 (0'0,53'585] local-lis/les=47/48 n=7 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=53'585 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[8.7( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[9.7( v 53'585 lc 0'0 (0'0,53'585] local-lis/les=47/48 n=7 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=53'585 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[8.6( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[8.5( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[9.4( v 53'585 lc 0'0 (0'0,53'585] local-lis/les=47/48 n=7 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=53'585 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[8.4( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=1 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[9.5( v 53'585 lc 0'0 (0'0,53'585] local-lis/les=47/48 n=7 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=53'585 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[9.1a( v 53'585 lc 0'0 (0'0,53'585] local-lis/les=47/48 n=6 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=53'585 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[8.1b( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[9.18( v 53'585 lc 0'0 (0'0,53'585] local-lis/les=47/48 n=6 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=53'585 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[8.19( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[9.19( v 53'585 lc 0'0 (0'0,53'585] local-lis/les=47/48 n=6 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=53'585 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[9.1e( v 53'585 lc 0'0 (0'0,53'585] local-lis/les=47/48 n=6 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=53'585 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[8.18( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[8.1f( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[9.1f( v 53'585 lc 0'0 (0'0,53'585] local-lis/les=47/48 n=6 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=53'585 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[8.1e( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[9.1c( v 53'585 lc 0'0 (0'0,53'585] local-lis/les=47/48 n=6 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=53'585 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[8.1d( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[9.1d( v 53'585 lc 0'0 (0'0,53'585] local-lis/les=47/48 n=6 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=53'585 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[8.13( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[8.1c( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[9.12( v 53'585 lc 0'0 (0'0,53'585] local-lis/les=47/48 n=6 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=53'585 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[8.12( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[9.13( v 53'585 lc 0'0 (0'0,53'585] local-lis/les=47/48 n=6 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=53'585 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[8.11( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[9.1b( v 53'585 lc 0'0 (0'0,53'585] local-lis/les=47/48 n=6 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=53'585 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[9.10( v 53'585 lc 0'0 (0'0,53'585] local-lis/les=47/48 n=7 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=53'585 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[8.1a( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[8.14( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[9.17( v 53'585 (0'0,53'585] local-lis/les=56/57 n=6 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=53'585 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[8.16( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[9.16( v 53'585 (0'0,53'585] local-lis/les=56/57 n=6 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=53'585 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[8.17( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[9.0( v 53'585 (0'0,53'585] local-lis/les=56/57 n=6 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=53'585 lcod 53'584 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[9.11( v 53'585 (0'0,53'585] local-lis/les=56/57 n=7 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=53'585 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[8.10( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[8.1( v 46'4 (0'0,46'4] local-lis/les=56/57 n=1 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[9.15( v 53'585 (0'0,53'585] local-lis/les=56/57 n=6 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=53'585 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[8.2( v 46'4 (0'0,46'4] local-lis/les=56/57 n=1 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[9.3( v 53'585 (0'0,53'585] local-lis/les=56/57 n=7 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=53'585 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[8.3( v 46'4 (0'0,46'4] local-lis/les=56/57 n=1 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[9.d( v 53'585 (0'0,53'585] local-lis/les=56/57 n=7 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=53'585 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[8.d( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[9.2( v 53'585 (0'0,53'585] local-lis/les=56/57 n=7 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=53'585 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[9.c( v 53'585 (0'0,53'585] local-lis/les=56/57 n=7 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=53'585 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[8.c( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[9.14( v 53'585 (0'0,53'585] local-lis/les=56/57 n=6 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=53'585 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[8.e( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[8.a( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[8.8( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[9.b( v 53'585 (0'0,53'585] local-lis/les=56/57 n=7 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=53'585 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[8.f( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[9.f( v 53'585 (0'0,53'585] local-lis/les=56/57 n=7 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=53'585 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[9.e( v 53'585 (0'0,53'585] local-lis/les=56/57 n=7 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=53'585 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[9.a( v 53'585 (0'0,53'585] local-lis/les=56/57 n=7 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=53'585 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[8.b( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[9.8( v 53'585 (0'0,53'585] local-lis/les=56/57 n=7 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=53'585 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[8.0( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 46'3 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[9.1( v 53'585 (0'0,53'585] local-lis/les=56/57 n=7 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=53'585 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[8.7( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[9.6( v 53'585 (0'0,53'585] local-lis/les=56/57 n=7 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=53'585 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[8.9( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[9.7( v 53'585 (0'0,53'585] local-lis/les=56/57 n=7 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=53'585 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[8.6( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[9.5( v 53'585 (0'0,53'585] local-lis/les=56/57 n=7 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=53'585 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[8.4( v 46'4 (0'0,46'4] local-lis/les=56/57 n=1 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[8.5( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[9.4( v 53'585 (0'0,53'585] local-lis/les=56/57 n=7 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=53'585 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[8.1b( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[9.1a( v 53'585 (0'0,53'585] local-lis/les=56/57 n=6 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=53'585 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[9.18( v 53'585 (0'0,53'585] local-lis/les=56/57 n=6 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=53'585 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[9.19( v 53'585 (0'0,53'585] local-lis/les=56/57 n=6 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=53'585 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[9.1e( v 53'585 (0'0,53'585] local-lis/les=56/57 n=6 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=53'585 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[8.18( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[9.1f( v 53'585 (0'0,53'585] local-lis/les=56/57 n=6 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=53'585 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[8.19( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[9.1c( v 53'585 (0'0,53'585] local-lis/les=56/57 n=6 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=53'585 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[8.1f( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[8.1e( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[8.13( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[9.1d( v 53'585 (0'0,53'585] local-lis/les=56/57 n=6 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=53'585 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[8.1c( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[8.1d( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[9.12( v 53'585 (0'0,53'585] local-lis/les=56/57 n=6 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=53'585 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[8.12( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[9.1b( v 53'585 (0'0,53'585] local-lis/les=56/57 n=6 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=53'585 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[9.13( v 53'585 (0'0,53'585] local-lis/les=56/57 n=6 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=53'585 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[9.9( v 53'585 (0'0,53'585] local-lis/les=56/57 n=7 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=53'585 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[9.10( v 53'585 (0'0,53'585] local-lis/les=56/57 n=7 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=53'585 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[8.1a( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[8.11( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:37 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 57 pg[8.15( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:37 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v143: 259 pgs: 2 peering, 62 unknown, 195 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:22:37 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 10 23:22:37 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 10 23:22:37 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 10 23:22:37 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 10 23:22:37 np0005480824 ceph-mgr[74617]: [progress INFO root] Writing back 17 completed events
Oct 10 23:22:37 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 10 23:22:38 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:22:38 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 6.19 scrub starts
Oct 10 23:22:38 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 6.19 scrub ok
Oct 10 23:22:38 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 6.15 scrub starts
Oct 10 23:22:38 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 6.15 scrub ok
Oct 10 23:22:38 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Oct 10 23:22:38 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Oct 10 23:22:38 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct 10 23:22:38 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Oct 10 23:22:38 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Oct 10 23:22:38 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Oct 10 23:22:38 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 10 23:22:38 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 10 23:22:38 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:22:39 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Oct 10 23:22:39 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Oct 10 23:22:39 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v145: 321 pgs: 2 peering, 124 unknown, 195 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:22:39 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Oct 10 23:22:39 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct 10 23:22:40 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 6.1a scrub starts
Oct 10 23:22:40 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 6.1a scrub ok
Oct 10 23:22:41 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 58 pg[11.0( empty local-lis/les=51/52 n=0 ec=51/51 lis/c=51/51 les/c/f=52/52/0 sis=58 pruub=13.981419563s) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active pruub 139.705749512s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:41 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 58 pg[11.0( empty local-lis/les=51/52 n=0 ec=51/51 lis/c=51/51 les/c/f=52/52/0 sis=58 pruub=13.981419563s) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown pruub 139.705749512s@ mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:41 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 58 pg[10.0( v 50'16 (0'0,50'16] local-lis/les=49/50 n=8 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=58 pruub=11.960536957s) [2] r=0 lpr=58 pi=[49,58)/1 crt=50'16 lcod 50'15 mlcod 50'15 active pruub 132.621780396s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:41 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 58 pg[10.0( v 50'16 lc 0'0 (0'0,50'16] local-lis/les=49/50 n=0 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=58 pruub=11.960536957s) [2] r=0 lpr=58 pi=[49,58)/1 crt=50'16 lcod 50'15 mlcod 0'0 unknown pruub 132.621780396s@ mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:41 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v146: 321 pgs: 2 peering, 124 unknown, 195 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:22:41 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Oct 10 23:22:42 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Oct 10 23:22:42 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Oct 10 23:22:42 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 59 pg[10.1b( v 50'16 lc 0'0 (0'0,50'16] local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [2] r=0 lpr=58 pi=[49,58)/1 crt=50'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:42 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 59 pg[10.b( v 50'16 lc 0'0 (0'0,50'16] local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [2] r=0 lpr=58 pi=[49,58)/1 crt=50'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:42 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 59 pg[10.1e( v 50'16 lc 0'0 (0'0,50'16] local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [2] r=0 lpr=58 pi=[49,58)/1 crt=50'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:42 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 59 pg[10.a( v 50'16 lc 0'0 (0'0,50'16] local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [2] r=0 lpr=58 pi=[49,58)/1 crt=50'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:42 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 59 pg[10.d( v 50'16 lc 0'0 (0'0,50'16] local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [2] r=0 lpr=58 pi=[49,58)/1 crt=50'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:42 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 59 pg[10.13( v 50'16 lc 0'0 (0'0,50'16] local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [2] r=0 lpr=58 pi=[49,58)/1 crt=50'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:42 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 59 pg[10.12( v 50'16 lc 0'0 (0'0,50'16] local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [2] r=0 lpr=58 pi=[49,58)/1 crt=50'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:42 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 59 pg[10.11( v 50'16 lc 0'0 (0'0,50'16] local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [2] r=0 lpr=58 pi=[49,58)/1 crt=50'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:42 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 59 pg[10.1f( v 50'16 lc 0'0 (0'0,50'16] local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [2] r=0 lpr=58 pi=[49,58)/1 crt=50'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:42 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 59 pg[10.10( v 50'16 lc 0'0 (0'0,50'16] local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [2] r=0 lpr=58 pi=[49,58)/1 crt=50'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:42 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 59 pg[10.1d( v 50'16 lc 0'0 (0'0,50'16] local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [2] r=0 lpr=58 pi=[49,58)/1 crt=50'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:42 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 59 pg[10.1c( v 50'16 lc 0'0 (0'0,50'16] local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [2] r=0 lpr=58 pi=[49,58)/1 crt=50'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:42 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 59 pg[10.1a( v 50'16 lc 0'0 (0'0,50'16] local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [2] r=0 lpr=58 pi=[49,58)/1 crt=50'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:42 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 59 pg[10.19( v 50'16 lc 0'0 (0'0,50'16] local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [2] r=0 lpr=58 pi=[49,58)/1 crt=50'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:42 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 59 pg[10.18( v 50'16 lc 0'0 (0'0,50'16] local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [2] r=0 lpr=58 pi=[49,58)/1 crt=50'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:42 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 59 pg[10.7( v 50'16 lc 0'0 (0'0,50'16] local-lis/les=49/50 n=1 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [2] r=0 lpr=58 pi=[49,58)/1 crt=50'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:42 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 59 pg[10.6( v 50'16 lc 0'0 (0'0,50'16] local-lis/les=49/50 n=1 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [2] r=0 lpr=58 pi=[49,58)/1 crt=50'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:42 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 59 pg[10.5( v 50'16 lc 0'0 (0'0,50'16] local-lis/les=49/50 n=1 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [2] r=0 lpr=58 pi=[49,58)/1 crt=50'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:42 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 59 pg[10.4( v 50'16 lc 0'0 (0'0,50'16] local-lis/les=49/50 n=1 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [2] r=0 lpr=58 pi=[49,58)/1 crt=50'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:42 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 59 pg[10.8( v 50'16 lc 0'0 (0'0,50'16] local-lis/les=49/50 n=1 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [2] r=0 lpr=58 pi=[49,58)/1 crt=50'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:42 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 59 pg[10.f( v 50'16 lc 0'0 (0'0,50'16] local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [2] r=0 lpr=58 pi=[49,58)/1 crt=50'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:42 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 59 pg[10.9( v 50'16 lc 0'0 (0'0,50'16] local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [2] r=0 lpr=58 pi=[49,58)/1 crt=50'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:42 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 59 pg[10.c( v 50'16 lc 0'0 (0'0,50'16] local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [2] r=0 lpr=58 pi=[49,58)/1 crt=50'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:42 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 59 pg[10.e( v 50'16 lc 0'0 (0'0,50'16] local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [2] r=0 lpr=58 pi=[49,58)/1 crt=50'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:42 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 59 pg[10.1( v 50'16 (0'0,50'16] local-lis/les=49/50 n=1 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [2] r=0 lpr=58 pi=[49,58)/1 crt=50'16 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:42 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 59 pg[10.2( v 50'16 lc 0'0 (0'0,50'16] local-lis/les=49/50 n=1 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [2] r=0 lpr=58 pi=[49,58)/1 crt=50'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:42 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 59 pg[10.3( v 50'16 lc 0'0 (0'0,50'16] local-lis/les=49/50 n=1 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [2] r=0 lpr=58 pi=[49,58)/1 crt=50'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:42 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 59 pg[10.14( v 50'16 lc 0'0 (0'0,50'16] local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [2] r=0 lpr=58 pi=[49,58)/1 crt=50'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:42 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 59 pg[10.15( v 50'16 lc 0'0 (0'0,50'16] local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [2] r=0 lpr=58 pi=[49,58)/1 crt=50'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:42 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 59 pg[10.16( v 50'16 lc 0'0 (0'0,50'16] local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [2] r=0 lpr=58 pi=[49,58)/1 crt=50'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:42 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 59 pg[10.17( v 50'16 lc 0'0 (0'0,50'16] local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [2] r=0 lpr=58 pi=[49,58)/1 crt=50'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:42 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 6.1b scrub starts
Oct 10 23:22:42 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 6.1b scrub ok
Oct 10 23:22:42 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 59 pg[11.17( empty local-lis/les=51/52 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:42 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 59 pg[11.16( empty local-lis/les=51/52 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:42 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 59 pg[11.15( empty local-lis/les=51/52 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:42 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 59 pg[11.14( empty local-lis/les=51/52 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:42 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 59 pg[11.2( empty local-lis/les=51/52 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:42 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 59 pg[11.1( empty local-lis/les=51/52 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:42 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 59 pg[11.13( empty local-lis/les=51/52 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:42 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 59 pg[11.f( empty local-lis/les=51/52 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:42 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 59 pg[11.d( empty local-lis/les=51/52 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:42 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 59 pg[11.b( empty local-lis/les=51/52 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:42 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 59 pg[11.9( empty local-lis/les=51/52 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:42 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 59 pg[11.c( empty local-lis/les=51/52 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:42 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 59 pg[11.e( empty local-lis/les=51/52 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:42 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 59 pg[11.a( empty local-lis/les=51/52 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:42 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 59 pg[11.3( empty local-lis/les=51/52 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:42 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 59 pg[11.8( empty local-lis/les=51/52 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:42 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 59 pg[11.4( empty local-lis/les=51/52 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:42 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 59 pg[11.5( empty local-lis/les=51/52 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:42 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 59 pg[10.1b( v 50'16 (0'0,50'16] local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [2] r=0 lpr=58 pi=[49,58)/1 crt=50'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:42 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 59 pg[11.6( empty local-lis/les=51/52 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:42 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 59 pg[11.1a( empty local-lis/les=51/52 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:42 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 59 pg[11.7( empty local-lis/les=51/52 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:42 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 59 pg[11.18( empty local-lis/les=51/52 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:42 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 59 pg[11.1c( empty local-lis/les=51/52 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:42 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 59 pg[11.1d( empty local-lis/les=51/52 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:42 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 59 pg[11.1e( empty local-lis/les=51/52 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:42 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 59 pg[11.1f( empty local-lis/les=51/52 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:42 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 59 pg[11.10( empty local-lis/les=51/52 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:42 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 59 pg[11.11( empty local-lis/les=51/52 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:42 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 59 pg[11.12( empty local-lis/les=51/52 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:42 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 59 pg[11.19( empty local-lis/les=51/52 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:42 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 59 pg[11.1b( empty local-lis/les=51/52 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:42 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 59 pg[11.17( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:42 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 59 pg[10.b( v 50'16 (0'0,50'16] local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [2] r=0 lpr=58 pi=[49,58)/1 crt=50'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:42 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 59 pg[10.a( v 50'16 (0'0,50'16] local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [2] r=0 lpr=58 pi=[49,58)/1 crt=50'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:42 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 59 pg[10.11( v 50'16 (0'0,50'16] local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [2] r=0 lpr=58 pi=[49,58)/1 crt=50'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:42 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 59 pg[10.1f( v 50'16 (0'0,50'16] local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [2] r=0 lpr=58 pi=[49,58)/1 crt=50'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:42 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 59 pg[10.10( v 50'16 (0'0,50'16] local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [2] r=0 lpr=58 pi=[49,58)/1 crt=50'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:42 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 59 pg[10.1d( v 50'16 (0'0,50'16] local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [2] r=0 lpr=58 pi=[49,58)/1 crt=50'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:42 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 59 pg[10.1c( v 50'16 (0'0,50'16] local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [2] r=0 lpr=58 pi=[49,58)/1 crt=50'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:42 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 59 pg[10.d( v 50'16 (0'0,50'16] local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [2] r=0 lpr=58 pi=[49,58)/1 crt=50'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:42 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 59 pg[10.1e( v 50'16 (0'0,50'16] local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [2] r=0 lpr=58 pi=[49,58)/1 crt=50'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:42 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 59 pg[10.13( v 50'16 (0'0,50'16] local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [2] r=0 lpr=58 pi=[49,58)/1 crt=50'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:42 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 59 pg[10.1a( v 50'16 (0'0,50'16] local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [2] r=0 lpr=58 pi=[49,58)/1 crt=50'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:42 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 59 pg[10.12( v 50'16 (0'0,50'16] local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [2] r=0 lpr=58 pi=[49,58)/1 crt=50'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:42 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 59 pg[10.19( v 50'16 (0'0,50'16] local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [2] r=0 lpr=58 pi=[49,58)/1 crt=50'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:42 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 59 pg[10.18( v 50'16 (0'0,50'16] local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [2] r=0 lpr=58 pi=[49,58)/1 crt=50'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:42 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 59 pg[10.7( v 50'16 (0'0,50'16] local-lis/les=58/59 n=1 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [2] r=0 lpr=58 pi=[49,58)/1 crt=50'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:42 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 59 pg[10.6( v 50'16 (0'0,50'16] local-lis/les=58/59 n=1 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [2] r=0 lpr=58 pi=[49,58)/1 crt=50'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:42 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 59 pg[10.4( v 50'16 (0'0,50'16] local-lis/les=58/59 n=1 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [2] r=0 lpr=58 pi=[49,58)/1 crt=50'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:42 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 59 pg[10.8( v 50'16 (0'0,50'16] local-lis/les=58/59 n=1 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [2] r=0 lpr=58 pi=[49,58)/1 crt=50'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:42 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 59 pg[10.f( v 50'16 (0'0,50'16] local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [2] r=0 lpr=58 pi=[49,58)/1 crt=50'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:42 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 59 pg[10.9( v 50'16 (0'0,50'16] local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [2] r=0 lpr=58 pi=[49,58)/1 crt=50'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:42 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 59 pg[10.0( v 50'16 (0'0,50'16] local-lis/les=58/59 n=0 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [2] r=0 lpr=58 pi=[49,58)/1 crt=50'16 lcod 50'15 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:42 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 59 pg[10.e( v 50'16 (0'0,50'16] local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [2] r=0 lpr=58 pi=[49,58)/1 crt=50'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:42 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 59 pg[10.c( v 50'16 (0'0,50'16] local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [2] r=0 lpr=58 pi=[49,58)/1 crt=50'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:42 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 59 pg[10.1( v 50'16 (0'0,50'16] local-lis/les=58/59 n=1 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [2] r=0 lpr=58 pi=[49,58)/1 crt=50'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:42 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 59 pg[10.2( v 50'16 (0'0,50'16] local-lis/les=58/59 n=1 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [2] r=0 lpr=58 pi=[49,58)/1 crt=50'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:42 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 59 pg[10.14( v 50'16 (0'0,50'16] local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [2] r=0 lpr=58 pi=[49,58)/1 crt=50'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:42 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 59 pg[10.5( v 50'16 (0'0,50'16] local-lis/les=58/59 n=1 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [2] r=0 lpr=58 pi=[49,58)/1 crt=50'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:42 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 59 pg[10.16( v 50'16 (0'0,50'16] local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [2] r=0 lpr=58 pi=[49,58)/1 crt=50'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:42 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 59 pg[10.17( v 50'16 (0'0,50'16] local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [2] r=0 lpr=58 pi=[49,58)/1 crt=50'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:42 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 59 pg[10.15( v 50'16 (0'0,50'16] local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [2] r=0 lpr=58 pi=[49,58)/1 crt=50'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:42 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 59 pg[10.3( v 50'16 (0'0,50'16] local-lis/les=58/59 n=1 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [2] r=0 lpr=58 pi=[49,58)/1 crt=50'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:42 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 6.14 scrub starts
Oct 10 23:22:42 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 59 pg[11.15( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:42 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 59 pg[11.16( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:42 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 59 pg[11.14( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:42 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 59 pg[11.0( empty local-lis/les=58/59 n=0 ec=51/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:42 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 59 pg[11.1( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:42 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 59 pg[11.2( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:42 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 59 pg[11.13( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:42 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 59 pg[11.f( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:42 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 59 pg[11.d( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:42 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 59 pg[11.b( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:42 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 59 pg[11.9( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:42 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 59 pg[11.c( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:42 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 59 pg[11.e( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:42 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 59 pg[11.a( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:42 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 59 pg[11.3( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:42 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 59 pg[11.8( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:42 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 59 pg[11.4( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:42 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 59 pg[11.6( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:42 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 59 pg[11.7( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:42 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 59 pg[11.18( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:42 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 59 pg[11.1d( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:42 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 59 pg[11.1a( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:42 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 59 pg[11.1c( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:42 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 59 pg[11.1e( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:42 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 59 pg[11.1f( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:42 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 59 pg[11.10( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:42 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 59 pg[11.12( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:42 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 59 pg[11.11( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:42 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 59 pg[11.19( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:42 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 59 pg[11.1b( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:42 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 59 pg[11.5( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=51/51 les/c/f=52/52/0 sis=58) [1] r=0 lpr=58 pi=[51,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:42 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 6.14 scrub ok
Oct 10 23:22:43 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Oct 10 23:22:43 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Oct 10 23:22:43 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Oct 10 23:22:43 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Oct 10 23:22:43 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e59 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:22:43 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v148: 321 pgs: 1 peering, 32 activating, 31 unknown, 257 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:22:44 np0005480824 python3[104421]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid 92cfe4d4-4917-5be1-9d00-73758793a62b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:22:44 np0005480824 podman[104422]: 2025-10-11 03:22:44.8687812 +0000 UTC m=+0.092811044 container create 0dcd7ddeb7a523fe52c87a644a9276ec52c9b40b5390e1af9b9c66233cbe35d3 (image=quay.io/ceph/ceph:v18, name=eager_rubin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:22:44 np0005480824 podman[104422]: 2025-10-11 03:22:44.802535272 +0000 UTC m=+0.026565166 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:22:44 np0005480824 systemd[1]: Started libpod-conmon-0dcd7ddeb7a523fe52c87a644a9276ec52c9b40b5390e1af9b9c66233cbe35d3.scope.
Oct 10 23:22:44 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:22:45 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ee5cd0d812bedd84f355c58134db9b7bff3b4c065fba804c5791d94d22f1e80/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:22:45 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ee5cd0d812bedd84f355c58134db9b7bff3b4c065fba804c5791d94d22f1e80/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:22:45 np0005480824 podman[104422]: 2025-10-11 03:22:45.090771831 +0000 UTC m=+0.314801725 container init 0dcd7ddeb7a523fe52c87a644a9276ec52c9b40b5390e1af9b9c66233cbe35d3 (image=quay.io/ceph/ceph:v18, name=eager_rubin, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 10 23:22:45 np0005480824 podman[104422]: 2025-10-11 03:22:45.098501038 +0000 UTC m=+0.322530842 container start 0dcd7ddeb7a523fe52c87a644a9276ec52c9b40b5390e1af9b9c66233cbe35d3 (image=quay.io/ceph/ceph:v18, name=eager_rubin, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 10 23:22:45 np0005480824 podman[104422]: 2025-10-11 03:22:45.109779592 +0000 UTC m=+0.333809476 container attach 0dcd7ddeb7a523fe52c87a644a9276ec52c9b40b5390e1af9b9c66233cbe35d3 (image=quay.io/ceph/ceph:v18, name=eager_rubin, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:22:45 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v149: 321 pgs: 1 peering, 32 activating, 31 unknown, 257 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:22:45 np0005480824 eager_rubin[104438]: could not fetch user info: no user info saved
Oct 10 23:22:46 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Oct 10 23:22:46 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Oct 10 23:22:46 np0005480824 systemd[1]: libpod-0dcd7ddeb7a523fe52c87a644a9276ec52c9b40b5390e1af9b9c66233cbe35d3.scope: Deactivated successfully.
Oct 10 23:22:46 np0005480824 podman[104422]: 2025-10-11 03:22:46.893351167 +0000 UTC m=+2.117381011 container died 0dcd7ddeb7a523fe52c87a644a9276ec52c9b40b5390e1af9b9c66233cbe35d3 (image=quay.io/ceph/ceph:v18, name=eager_rubin, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:22:47 np0005480824 systemd[1]: var-lib-containers-storage-overlay-2ee5cd0d812bedd84f355c58134db9b7bff3b4c065fba804c5791d94d22f1e80-merged.mount: Deactivated successfully.
Oct 10 23:22:47 np0005480824 podman[104422]: 2025-10-11 03:22:47.380571647 +0000 UTC m=+2.604601461 container remove 0dcd7ddeb7a523fe52c87a644a9276ec52c9b40b5390e1af9b9c66233cbe35d3 (image=quay.io/ceph/ceph:v18, name=eager_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Oct 10 23:22:47 np0005480824 systemd[1]: libpod-conmon-0dcd7ddeb7a523fe52c87a644a9276ec52c9b40b5390e1af9b9c66233cbe35d3.scope: Deactivated successfully.
Oct 10 23:22:47 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 6.11 scrub starts
Oct 10 23:22:47 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 6.11 scrub ok
Oct 10 23:22:47 np0005480824 python3[104561]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid 92cfe4d4-4917-5be1-9d00-73758793a62b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:22:47 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v150: 321 pgs: 321 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 782 B/s rd, 0 op/s
Oct 10 23:22:47 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 10 23:22:47 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 10 23:22:47 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 10 23:22:47 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 10 23:22:47 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0) v1
Oct 10 23:22:47 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct 10 23:22:47 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 10 23:22:47 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 10 23:22:47 np0005480824 podman[104562]: 2025-10-11 03:22:47.898734388 +0000 UTC m=+0.039458268 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 10 23:22:48 np0005480824 podman[104562]: 2025-10-11 03:22:48.064294398 +0000 UTC m=+0.205018248 container create cf584a3d10030ddc7ec0d954648eed434a2bf395a961e2c93ef8fb3c0754e0a5 (image=quay.io/ceph/ceph:v18, name=frosty_nash, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 10 23:22:48 np0005480824 systemd[1]: Started libpod-conmon-cf584a3d10030ddc7ec0d954648eed434a2bf395a961e2c93ef8fb3c0754e0a5.scope.
Oct 10 23:22:48 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:22:48 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6525d8c817e2aa592366a393395f38aa6635d2e1d56bc776b2669ea70d7fa8b8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:22:48 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6525d8c817e2aa592366a393395f38aa6635d2e1d56bc776b2669ea70d7fa8b8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:22:48 np0005480824 podman[104562]: 2025-10-11 03:22:48.365964303 +0000 UTC m=+0.506688213 container init cf584a3d10030ddc7ec0d954648eed434a2bf395a961e2c93ef8fb3c0754e0a5 (image=quay.io/ceph/ceph:v18, name=frosty_nash, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 10 23:22:48 np0005480824 podman[104562]: 2025-10-11 03:22:48.376687534 +0000 UTC m=+0.517411404 container start cf584a3d10030ddc7ec0d954648eed434a2bf395a961e2c93ef8fb3c0754e0a5 (image=quay.io/ceph/ceph:v18, name=frosty_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 10 23:22:48 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Oct 10 23:22:48 np0005480824 podman[104562]: 2025-10-11 03:22:48.716989417 +0000 UTC m=+0.857713297 container attach cf584a3d10030ddc7ec0d954648eed434a2bf395a961e2c93ef8fb3c0754e0a5 (image=quay.io/ceph/ceph:v18, name=frosty_nash, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 10 23:22:48 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 10 23:22:48 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 10 23:22:48 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Oct 10 23:22:48 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 10 23:22:48 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Oct 10 23:22:48 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[11.17( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.565822601s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active pruub 143.012603760s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[9.15( v 53'585 (0'0,53'585] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=12.733839989s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=53'585 lcod 0'0 mlcod 0'0 active pruub 146.180694580s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[8.15( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=12.735992432s) [2] r=-1 lpr=60 pi=[56,60)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 146.182846069s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[11.17( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.565765381s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 143.012603760s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[11.15( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.696125984s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active pruub 143.143066406s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[8.15( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=12.735920906s) [2] r=-1 lpr=60 pi=[56,60)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.182846069s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[11.15( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.696075439s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 143.143066406s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[9.15( v 53'585 (0'0,53'585] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=12.733778000s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=53'585 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.180694580s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[8.14( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=12.704552650s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 146.151397705s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[9.17( v 53'585 (0'0,53'585] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=12.733426094s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=53'585 lcod 0'0 mlcod 0'0 active pruub 146.180572510s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[8.14( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=12.704266548s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.151397705s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[11.14( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.695943832s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active pruub 143.143142700s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[9.17( v 53'585 (0'0,53'585] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=12.733395576s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=53'585 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.180572510s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[11.14( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.695918083s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 143.143142700s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[9.11( v 53'585 (0'0,53'585] local-lis/les=56/57 n=7 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=12.733132362s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=53'585 lcod 0'0 mlcod 0'0 active pruub 146.180557251s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[9.11( v 53'585 (0'0,53'585] local-lis/les=56/57 n=7 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=12.733109474s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=53'585 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.180557251s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[11.1( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.695920944s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active pruub 143.143386841s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[11.1( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.695900917s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 143.143386841s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[11.2( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.695878983s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active pruub 143.143402100s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[8.10( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=12.733067513s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 146.180603027s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[8.2( v 46'4 (0'0,46'4] local-lis/les=56/57 n=1 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=12.733144760s) [2] r=-1 lpr=60 pi=[56,60)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 146.180694580s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[9.3( v 53'585 (0'0,53'585] local-lis/les=56/57 n=7 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=12.733189583s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=53'585 lcod 0'0 mlcod 0'0 active pruub 146.180770874s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[11.2( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.695829391s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 143.143402100s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[8.2( v 46'4 (0'0,46'4] local-lis/les=56/57 n=1 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=12.733111382s) [2] r=-1 lpr=60 pi=[56,60)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.180694580s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[9.3( v 53'585 (0'0,53'585] local-lis/les=56/57 n=7 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=12.733168602s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=53'585 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.180770874s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[11.f( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.695944786s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active pruub 143.143615723s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[8.10( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=12.732942581s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.180603027s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[11.f( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.695923805s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 143.143615723s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[9.d( v 53'585 (0'0,53'585] local-lis/les=56/57 n=7 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=12.733153343s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=53'585 lcod 0'0 mlcod 0'0 active pruub 146.180938721s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[8.c( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=12.733319283s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 146.181121826s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[11.e( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.695976257s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active pruub 143.143798828s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[8.d( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=12.733201027s) [2] r=-1 lpr=60 pi=[56,60)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 146.181030273s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[9.d( v 53'585 (0'0,53'585] local-lis/les=56/57 n=7 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=12.733126640s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=53'585 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.180938721s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[11.e( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.695948601s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 143.143798828s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[8.d( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=12.733176231s) [2] r=-1 lpr=60 pi=[56,60)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.181030273s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[8.c( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=12.733296394s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.181121826s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[11.d( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.695755005s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active pruub 143.143707275s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[8.e( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=12.733117104s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 146.181137085s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[9.f( v 53'585 (0'0,53'585] local-lis/les=56/57 n=7 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=12.733268738s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=53'585 lcod 0'0 mlcod 0'0 active pruub 146.181304932s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[11.b( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.695704460s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active pruub 143.143768311s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[8.e( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=12.733092308s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.181137085s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[11.b( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.695681572s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 143.143768311s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[9.f( v 53'585 (0'0,53'585] local-lis/les=56/57 n=7 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=12.733236313s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=53'585 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.181304932s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[9.9( v 53'585 (0'0,53'585] local-lis/les=56/57 n=7 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=12.734098434s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=53'585 lcod 0'0 mlcod 0'0 active pruub 146.182266235s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[11.d( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.695735931s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 143.143707275s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[9.9( v 53'585 (0'0,53'585] local-lis/les=56/57 n=7 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=12.734072685s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=53'585 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.182266235s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[11.9( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.695567131s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active pruub 143.143768311s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[11.9( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.695513725s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 143.143768311s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[8.f( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=12.732902527s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 146.181259155s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[9.b( v 53'585 (0'0,53'585] local-lis/les=56/57 n=7 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=12.732934952s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=53'585 lcod 0'0 mlcod 0'0 active pruub 146.181243896s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[11.8( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.695450783s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active pruub 143.143844604s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[9.b( v 53'585 (0'0,53'585] local-lis/les=56/57 n=7 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=12.732859612s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=53'585 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.181243896s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[8.f( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=12.732853889s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.181259155s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[8.b( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=12.732971191s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 146.181396484s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[11.8( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.695424080s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 143.143844604s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[8.b( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=12.732949257s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.181396484s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[8.9( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=12.732993126s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 146.181503296s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[11.3( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.695298195s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active pruub 143.143829346s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[11.3( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.695279121s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 143.143829346s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[8.9( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=12.732968330s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.181503296s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[9.1( v 53'585 (0'0,53'585] local-lis/les=56/57 n=7 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=12.732836723s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=53'585 lcod 0'0 mlcod 0'0 active pruub 146.181457520s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[11.4( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.695216179s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active pruub 143.143859863s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[9.1( v 53'585 (0'0,53'585] local-lis/les=56/57 n=7 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=12.732811928s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=53'585 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.181457520s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[11.4( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.695199013s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 143.143859863s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[8.6( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=12.732914925s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 146.181655884s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[8.6( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=12.732894897s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.181655884s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[9.7( v 53'585 (0'0,53'585] local-lis/les=56/57 n=7 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=12.732845306s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=53'585 lcod 0'0 mlcod 0'0 active pruub 146.181610107s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[9.7( v 53'585 (0'0,53'585] local-lis/les=56/57 n=7 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=12.732817650s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=53'585 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.181610107s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[11.6( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.694993019s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active pruub 143.143875122s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[11.18( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.695011139s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active pruub 143.143951416s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[11.6( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.694951057s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 143.143875122s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[9.5( v 53'585 (0'0,53'585] local-lis/les=56/57 n=7 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=12.732690811s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=53'585 lcod 0'0 mlcod 0'0 active pruub 146.181655884s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[11.18( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.694990158s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 143.143951416s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[8.4( v 46'4 (0'0,46'4] local-lis/les=56/57 n=1 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=12.732691765s) [2] r=-1 lpr=60 pi=[56,60)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 146.181671143s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[9.5( v 53'585 (0'0,53'585] local-lis/les=56/57 n=7 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=12.732666969s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=53'585 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.181655884s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[8.4( v 46'4 (0'0,46'4] local-lis/les=56/57 n=1 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=12.732670784s) [2] r=-1 lpr=60 pi=[56,60)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.181671143s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[8.1b( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=12.732645035s) [2] r=-1 lpr=60 pi=[56,60)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 146.181701660s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[8.1b( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=12.732618332s) [2] r=-1 lpr=60 pi=[56,60)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.181701660s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[11.1b( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.695014954s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active pruub 143.144134521s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[9.19( v 53'585 (0'0,53'585] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=12.732585907s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=53'585 lcod 0'0 mlcod 0'0 active pruub 146.181732178s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[11.1b( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.694997787s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 143.144134521s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[8.18( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=12.732551575s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 146.181716919s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[8.18( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=12.732524872s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.181716919s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[11.1c( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.694771767s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active pruub 143.143997192s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[11.1a( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.694739342s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active pruub 143.143981934s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[9.19( v 53'585 (0'0,53'585] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=12.732559204s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=53'585 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.181732178s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[11.1c( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.694748878s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 143.143997192s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[11.1a( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.694718361s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 143.143981934s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[8.1f( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=12.732516289s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 146.181884766s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[9.1f( v 53'585 (0'0,53'585] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=12.732331276s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=53'585 lcod 0'0 mlcod 0'0 active pruub 146.181732178s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[8.1f( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=12.732488632s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.181884766s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[9.1f( v 53'585 (0'0,53'585] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=12.732308388s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=53'585 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.181732178s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[11.1e( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.694570541s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active pruub 143.144012451s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[11.1e( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.694543839s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 143.144012451s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[8.1d( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=12.732532501s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 146.182052612s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[8.1d( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=12.732506752s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.182052612s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[11.1f( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.694247246s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active pruub 143.144042969s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[11.1f( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.694225311s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 143.144042969s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[9.1d( v 53'585 (0'0,53'585] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=12.732118607s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=53'585 lcod 0'0 mlcod 0'0 active pruub 146.181976318s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[8.1c( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=12.732118607s) [2] r=-1 lpr=60 pi=[56,60)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 146.181991577s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[9.1d( v 53'585 (0'0,53'585] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=12.732089996s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=53'585 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.181976318s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[11.10( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.694173813s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active pruub 143.144073486s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[8.1c( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=12.732090950s) [2] r=-1 lpr=60 pi=[56,60)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.181991577s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[11.10( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.694149017s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 143.144073486s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[11.11( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.694048882s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active pruub 143.144073486s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[11.11( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.694025993s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 143.144073486s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[8.12( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=12.732029915s) [2] r=-1 lpr=60 pi=[56,60)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 146.182128906s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[11.12( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.693954468s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active pruub 143.144073486s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[8.12( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=12.732009888s) [2] r=-1 lpr=60 pi=[56,60)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.182128906s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[9.13( v 53'585 (0'0,53'585] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=12.731986046s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=53'585 lcod 0'0 mlcod 0'0 active pruub 146.182144165s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[11.12( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.693918228s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 143.144073486s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[8.11( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=12.732112885s) [2] r=-1 lpr=60 pi=[56,60)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 146.182312012s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[9.13( v 53'585 (0'0,53'585] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=12.731955528s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=53'585 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.182144165s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[11.19( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.693914413s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active pruub 143.144104004s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[9.1b( v 53'585 (0'0,53'585] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=12.732025146s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=53'585 lcod 0'0 mlcod 0'0 active pruub 146.182266235s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[11.19( empty local-lis/les=58/59 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.693873405s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 143.144104004s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[8.11( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=12.732078552s) [2] r=-1 lpr=60 pi=[56,60)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.182312012s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[8.1a( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=12.731988907s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 146.182312012s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[8.1a( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=12.731948853s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.182312012s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:48 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 10 23:22:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[9.1b( v 53'585 (0'0,53'585] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=12.732007980s) [0] r=-1 lpr=60 pi=[56,60)/1 crt=53'585 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.182266235s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:48 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 10 23:22:48 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct 10 23:22:48 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 10 23:22:49 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 60 pg[8.15( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60) [2] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 60 pg[11.15( empty local-lis/les=0/0 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60) [2] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 60 pg[11.2( empty local-lis/les=0/0 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60) [2] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 60 pg[11.3( empty local-lis/les=0/0 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60) [2] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 60 pg[8.2( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60) [2] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 60 pg[11.d( empty local-lis/les=0/0 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60) [2] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 60 pg[11.8( empty local-lis/les=0/0 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60) [2] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 60 pg[8.d( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60) [2] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 60 pg[11.9( empty local-lis/les=0/0 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60) [2] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 60 pg[8.4( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60) [2] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 60 pg[11.18( empty local-lis/les=0/0 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60) [2] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 60 pg[8.1b( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60) [2] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 60 pg[11.1b( empty local-lis/les=0/0 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60) [2] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 60 pg[11.1c( empty local-lis/les=0/0 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60) [2] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 60 pg[11.1e( empty local-lis/les=0/0 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60) [2] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 60 pg[8.12( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60) [2] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 60 pg[11.11( empty local-lis/les=0/0 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60) [2] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 60 pg[11.12( empty local-lis/les=0/0 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60) [2] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 60 pg[8.11( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60) [2] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 60 pg[11.b( empty local-lis/les=0/0 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60) [2] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 60 pg[11.1a( empty local-lis/les=0/0 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60) [2] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 60 pg[11.1f( empty local-lis/les=0/0 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60) [2] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 60 pg[8.1c( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60) [2] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 60 pg[10.1e( v 50'16 (0'0,50'16] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.361805916s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=50'16 lcod 0'0 mlcod 0'0 active pruub 137.942398071s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:49 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 60 pg[10.1e( v 50'16 (0'0,50'16] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.361758232s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=50'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.942398071s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:49 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 60 pg[10.b( v 50'16 (0'0,50'16] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.360659599s) [1] r=-1 lpr=60 pi=[58,60)/1 crt=50'16 lcod 0'0 mlcod 0'0 active pruub 137.941711426s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:49 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 60 pg[10.b( v 50'16 (0'0,50'16] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.360626221s) [1] r=-1 lpr=60 pi=[58,60)/1 crt=50'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.941711426s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:49 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 60 pg[10.13( v 50'16 (0'0,50'16] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.360742569s) [1] r=-1 lpr=60 pi=[58,60)/1 crt=50'16 lcod 0'0 mlcod 0'0 active pruub 137.941848755s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:49 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 60 pg[10.13( v 50'16 (0'0,50'16] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.360684395s) [1] r=-1 lpr=60 pi=[58,60)/1 crt=50'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.941848755s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:49 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 60 pg[10.d( v 59'20 (0'0,59'20] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.360189438s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=59'20 lcod 59'19 mlcod 59'19 active pruub 137.941833496s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:49 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 60 pg[10.d( v 59'20 (0'0,59'20] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.360114098s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=59'20 lcod 59'19 mlcod 0'0 unknown NOTIFY pruub 137.941833496s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:49 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 60 pg[10.10( v 50'16 (0'0,50'16] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.360284805s) [1] r=-1 lpr=60 pi=[58,60)/1 crt=50'16 lcod 0'0 mlcod 0'0 active pruub 137.942184448s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:49 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 60 pg[10.11( v 50'16 (0'0,50'16] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.360486031s) [1] r=-1 lpr=60 pi=[58,60)/1 crt=50'16 lcod 0'0 mlcod 0'0 active pruub 137.942123413s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:49 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 60 pg[10.10( v 50'16 (0'0,50'16] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.360175133s) [1] r=-1 lpr=60 pi=[58,60)/1 crt=50'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.942184448s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:49 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 60 pg[10.19( v 50'16 (0'0,50'16] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.360065460s) [1] r=-1 lpr=60 pi=[58,60)/1 crt=50'16 lcod 0'0 mlcod 0'0 active pruub 137.942443848s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:49 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 60 pg[10.11( v 50'16 (0'0,50'16] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.360172272s) [1] r=-1 lpr=60 pi=[58,60)/1 crt=50'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.942123413s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:49 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 60 pg[10.12( v 50'16 (0'0,50'16] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.359498024s) [1] r=-1 lpr=60 pi=[58,60)/1 crt=50'16 lcod 0'0 mlcod 0'0 active pruub 137.941879272s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:49 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 60 pg[10.1a( v 50'16 (0'0,50'16] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.359905243s) [1] r=-1 lpr=60 pi=[58,60)/1 crt=50'16 lcod 0'0 mlcod 0'0 active pruub 137.942428589s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:49 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 60 pg[10.19( v 50'16 (0'0,50'16] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.359741211s) [1] r=-1 lpr=60 pi=[58,60)/1 crt=50'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.942443848s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:49 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 60 pg[10.1a( v 50'16 (0'0,50'16] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.359658241s) [1] r=-1 lpr=60 pi=[58,60)/1 crt=50'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.942428589s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:49 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 60 pg[10.12( v 50'16 (0'0,50'16] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.358980179s) [1] r=-1 lpr=60 pi=[58,60)/1 crt=50'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.941879272s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:49 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 60 pg[10.6( v 50'16 (0'0,50'16] local-lis/les=58/59 n=1 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.359445572s) [1] r=-1 lpr=60 pi=[58,60)/1 crt=50'16 lcod 0'0 mlcod 0'0 active pruub 137.942535400s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:49 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[10.13( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60) [1] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[10.b( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60) [1] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 60 pg[10.8( v 50'16 (0'0,50'16] local-lis/les=58/59 n=1 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.359159470s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=50'16 lcod 0'0 mlcod 0'0 active pruub 137.942581177s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:49 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 60 pg[10.8( v 50'16 (0'0,50'16] local-lis/les=58/59 n=1 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.359128952s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=50'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.942581177s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:49 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 60 pg[10.6( v 50'16 (0'0,50'16] local-lis/les=58/59 n=1 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.359399796s) [1] r=-1 lpr=60 pi=[58,60)/1 crt=50'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.942535400s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:49 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 60 pg[10.7( v 50'16 (0'0,50'16] local-lis/les=58/59 n=1 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.358851433s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=50'16 lcod 0'0 mlcod 0'0 active pruub 137.942504883s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:49 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 60 pg[10.f( v 50'16 (0'0,50'16] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.358809471s) [1] r=-1 lpr=60 pi=[58,60)/1 crt=50'16 lcod 0'0 mlcod 0'0 active pruub 137.942596436s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:49 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 60 pg[10.f( v 50'16 (0'0,50'16] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.358778954s) [1] r=-1 lpr=60 pi=[58,60)/1 crt=50'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.942596436s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:49 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 60 pg[10.9( v 59'20 (0'0,59'20] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.358572960s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=59'20 lcod 59'19 mlcod 59'19 active pruub 137.942626953s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:49 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 60 pg[10.9( v 59'20 (0'0,59'20] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.358523369s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=59'20 lcod 59'19 mlcod 0'0 unknown NOTIFY pruub 137.942626953s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:49 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 60 pg[10.e( v 59'20 (0'0,59'20] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.358500481s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=59'20 lcod 59'19 mlcod 59'19 active pruub 137.942657471s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:49 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 60 pg[10.e( v 59'20 (0'0,59'20] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.358454704s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=59'20 lcod 59'19 mlcod 0'0 unknown NOTIFY pruub 137.942657471s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:49 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 60 pg[10.1( v 50'16 (0'0,50'16] local-lis/les=58/59 n=1 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.358383179s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=50'16 lcod 0'0 mlcod 0'0 active pruub 137.942703247s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:49 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 60 pg[10.1( v 50'16 (0'0,50'16] local-lis/les=58/59 n=1 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.358327866s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=50'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.942703247s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:49 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 60 pg[10.7( v 50'16 (0'0,50'16] local-lis/les=58/59 n=1 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.358439445s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=50'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.942504883s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:49 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 60 pg[10.2( v 50'16 (0'0,50'16] local-lis/les=58/59 n=1 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.358058929s) [1] r=-1 lpr=60 pi=[58,60)/1 crt=50'16 lcod 0'0 mlcod 0'0 active pruub 137.942718506s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:49 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 60 pg[10.2( v 50'16 (0'0,50'16] local-lis/les=58/59 n=1 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.357996941s) [1] r=-1 lpr=60 pi=[58,60)/1 crt=50'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.942718506s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:49 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 60 pg[10.4( v 50'16 (0'0,50'16] local-lis/les=58/59 n=1 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.359351158s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=50'16 lcod 0'0 mlcod 0'0 active pruub 137.942550659s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:49 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[10.10( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60) [1] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 60 pg[10.4( v 50'16 (0'0,50'16] local-lis/les=58/59 n=1 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.357680321s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=50'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.942550659s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:49 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 60 pg[10.14( v 59'20 (0'0,59'20] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.357716560s) [1] r=-1 lpr=60 pi=[58,60)/1 crt=59'20 lcod 59'19 mlcod 59'19 active pruub 137.942749023s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:49 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 60 pg[10.14( v 59'20 (0'0,59'20] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.357677460s) [1] r=-1 lpr=60 pi=[58,60)/1 crt=59'20 lcod 59'19 mlcod 0'0 unknown NOTIFY pruub 137.942749023s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:49 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[10.11( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60) [1] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 60 pg[10.17( v 50'16 (0'0,50'16] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.357270241s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=50'16 lcod 0'0 mlcod 0'0 active pruub 137.942825317s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:49 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 60 pg[10.17( v 50'16 (0'0,50'16] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.357239723s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=50'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.942825317s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:49 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 60 pg[10.16( v 50'16 (0'0,50'16] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.356667519s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=50'16 lcod 0'0 mlcod 0'0 active pruub 137.942810059s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:49 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 60 pg[10.16( v 50'16 (0'0,50'16] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.356617928s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=50'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.942810059s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:49 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[10.19( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60) [1] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 60 pg[10.15( v 59'20 (0'0,59'20] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.356365204s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=59'20 lcod 59'19 mlcod 59'19 active pruub 137.942825317s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:49 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[10.1a( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60) [1] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 60 pg[10.15( v 59'20 (0'0,59'20] local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=9.356065750s) [0] r=-1 lpr=60 pi=[58,60)/1 crt=59'20 lcod 59'19 mlcod 0'0 unknown NOTIFY pruub 137.942825317s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:49 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[10.12( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60) [1] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[10.6( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60) [1] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[10.f( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60) [1] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[10.2( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60) [1] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 60 pg[10.14( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60) [1] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 60 pg[11.10( empty local-lis/les=0/0 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 60 pg[9.13( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 60 pg[9.11( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 60 pg[8.10( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 60 pg[9.5( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 60 pg[10.9( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 60 pg[8.b( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 60 pg[11.4( empty local-lis/les=0/0 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 60 pg[9.b( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 60 pg[10.8( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 60 pg[10.15( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 60 pg[11.14( empty local-lis/les=0/0 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 60 pg[8.6( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 60 pg[9.7( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 60 pg[10.4( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 60 pg[8.9( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 60 pg[9.17( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 60 pg[11.6( empty local-lis/les=0/0 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 60 pg[10.7( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 60 pg[9.9( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 60 pg[10.17( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 60 pg[10.d( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 60 pg[8.f( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 60 pg[11.e( empty local-lis/les=0/0 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 60 pg[8.e( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 60 pg[9.f( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 60 pg[10.e( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 60 pg[11.f( empty local-lis/les=0/0 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 60 pg[9.d( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 60 pg[8.c( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 60 pg[9.1( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 60 pg[11.1( empty local-lis/les=0/0 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 60 pg[9.3( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 60 pg[9.1d( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 60 pg[10.1e( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 60 pg[9.1f( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 60 pg[8.18( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 60 pg[9.19( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 60 pg[11.19( empty local-lis/les=0/0 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 60 pg[8.1a( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 60 pg[9.1b( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 60 pg[11.17( empty local-lis/les=0/0 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 60 pg[9.15( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 60 pg[8.14( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 60 pg[10.16( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 60 pg[8.1f( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 60 pg[8.1d( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 60 pg[10.1( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:49 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Oct 10 23:22:49 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Oct 10 23:22:49 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v152: 321 pgs: 321 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 895 B/s rd, 0 op/s
Oct 10 23:22:49 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0) v1
Oct 10 23:22:49 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct 10 23:22:49 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Oct 10 23:22:50 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Oct 10 23:22:50 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Oct 10 23:22:50 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Oct 10 23:22:50 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 61 pg[9.11( v 53'585 (0'0,53'585] local-lis/les=56/57 n=7 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=0 lpr=61 pi=[56,61)/1 crt=53'585 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:50 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 10 23:22:50 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 61 pg[9.11( v 53'585 (0'0,53'585] local-lis/les=56/57 n=7 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=0 lpr=61 pi=[56,61)/1 crt=53'585 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:50 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 61 pg[9.15( v 53'585 (0'0,53'585] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=0 lpr=61 pi=[56,61)/1 crt=53'585 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:50 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 61 pg[9.15( v 53'585 (0'0,53'585] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=0 lpr=61 pi=[56,61)/1 crt=53'585 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:50 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 61 pg[9.17( v 53'585 (0'0,53'585] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=0 lpr=61 pi=[56,61)/1 crt=53'585 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:50 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 61 pg[9.3( v 53'585 (0'0,53'585] local-lis/les=56/57 n=7 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=0 lpr=61 pi=[56,61)/1 crt=53'585 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:50 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 10 23:22:50 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Oct 10 23:22:50 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 10 23:22:50 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 61 pg[9.17( v 53'585 (0'0,53'585] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=0 lpr=61 pi=[56,61)/1 crt=53'585 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:50 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 61 pg[9.3( v 53'585 (0'0,53'585] local-lis/les=56/57 n=7 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=0 lpr=61 pi=[56,61)/1 crt=53'585 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:50 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct 10 23:22:50 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 61 pg[9.d( v 53'585 (0'0,53'585] local-lis/les=56/57 n=7 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=0 lpr=61 pi=[56,61)/1 crt=53'585 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:50 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 61 pg[9.d( v 53'585 (0'0,53'585] local-lis/les=56/57 n=7 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=0 lpr=61 pi=[56,61)/1 crt=53'585 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:50 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 61 pg[9.9( v 53'585 (0'0,53'585] local-lis/les=56/57 n=7 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=0 lpr=61 pi=[56,61)/1 crt=53'585 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:50 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 61 pg[9.9( v 53'585 (0'0,53'585] local-lis/les=56/57 n=7 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=0 lpr=61 pi=[56,61)/1 crt=53'585 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:50 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 61 pg[9.b( v 53'585 (0'0,53'585] local-lis/les=56/57 n=7 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=0 lpr=61 pi=[56,61)/1 crt=53'585 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:50 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 61 pg[9.f( v 53'585 (0'0,53'585] local-lis/les=56/57 n=7 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=0 lpr=61 pi=[56,61)/1 crt=53'585 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:50 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 61 pg[9.b( v 53'585 (0'0,53'585] local-lis/les=56/57 n=7 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=0 lpr=61 pi=[56,61)/1 crt=53'585 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:50 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 61 pg[9.f( v 53'585 (0'0,53'585] local-lis/les=56/57 n=7 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=0 lpr=61 pi=[56,61)/1 crt=53'585 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:50 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 61 pg[9.1( v 53'585 (0'0,53'585] local-lis/les=56/57 n=7 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=0 lpr=61 pi=[56,61)/1 crt=53'585 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:50 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 61 pg[9.1( v 53'585 (0'0,53'585] local-lis/les=56/57 n=7 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=0 lpr=61 pi=[56,61)/1 crt=53'585 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:50 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 61 pg[9.7( v 53'585 (0'0,53'585] local-lis/les=56/57 n=7 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=0 lpr=61 pi=[56,61)/1 crt=53'585 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:50 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 61 pg[9.7( v 53'585 (0'0,53'585] local-lis/les=56/57 n=7 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=0 lpr=61 pi=[56,61)/1 crt=53'585 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:50 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 61 pg[9.5( v 53'585 (0'0,53'585] local-lis/les=56/57 n=7 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=0 lpr=61 pi=[56,61)/1 crt=53'585 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:50 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 61 pg[9.5( v 53'585 (0'0,53'585] local-lis/les=56/57 n=7 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=0 lpr=61 pi=[56,61)/1 crt=53'585 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:50 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 61 pg[9.19( v 53'585 (0'0,53'585] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=0 lpr=61 pi=[56,61)/1 crt=53'585 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:50 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 61 pg[9.19( v 53'585 (0'0,53'585] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=0 lpr=61 pi=[56,61)/1 crt=53'585 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:50 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 61 pg[9.1f( v 53'585 (0'0,53'585] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=0 lpr=61 pi=[56,61)/1 crt=53'585 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:50 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 61 pg[9.1f( v 53'585 (0'0,53'585] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=0 lpr=61 pi=[56,61)/1 crt=53'585 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:50 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 61 pg[9.1d( v 53'585 (0'0,53'585] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=0 lpr=61 pi=[56,61)/1 crt=53'585 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:50 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 61 pg[9.1d( v 53'585 (0'0,53'585] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=0 lpr=61 pi=[56,61)/1 crt=53'585 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:50 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 61 pg[9.13( v 53'585 (0'0,53'585] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=0 lpr=61 pi=[56,61)/1 crt=53'585 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:50 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 61 pg[9.13( v 53'585 (0'0,53'585] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=0 lpr=61 pi=[56,61)/1 crt=53'585 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:50 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 61 pg[9.1b( v 53'585 (0'0,53'585] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=0 lpr=61 pi=[56,61)/1 crt=53'585 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:50 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 61 pg[9.1b( v 53'585 (0'0,53'585] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=0 lpr=61 pi=[56,61)/1 crt=53'585 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:50 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 61 pg[9.11( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=-1 lpr=61 pi=[56,61)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:50 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 61 pg[9.5( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=-1 lpr=61 pi=[56,61)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:50 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 61 pg[9.5( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=-1 lpr=61 pi=[56,61)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:50 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 61 pg[9.11( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=-1 lpr=61 pi=[56,61)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:50 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 61 pg[9.b( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=-1 lpr=61 pi=[56,61)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:50 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 61 pg[9.b( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=-1 lpr=61 pi=[56,61)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:50 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 61 pg[9.7( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=-1 lpr=61 pi=[56,61)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:50 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 61 pg[9.7( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=-1 lpr=61 pi=[56,61)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:50 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 61 pg[9.17( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=-1 lpr=61 pi=[56,61)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:50 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 61 pg[9.17( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=-1 lpr=61 pi=[56,61)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:50 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 61 pg[9.13( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=-1 lpr=61 pi=[56,61)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:50 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 61 pg[9.13( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=-1 lpr=61 pi=[56,61)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:50 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 61 pg[9.f( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=-1 lpr=61 pi=[56,61)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:50 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 61 pg[9.f( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=-1 lpr=61 pi=[56,61)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:50 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 61 pg[9.d( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=-1 lpr=61 pi=[56,61)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:50 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 61 pg[9.d( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=-1 lpr=61 pi=[56,61)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:50 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 61 pg[9.1( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=-1 lpr=61 pi=[56,61)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:50 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 61 pg[9.1( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=-1 lpr=61 pi=[56,61)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:50 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 61 pg[9.1d( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=-1 lpr=61 pi=[56,61)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:50 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 61 pg[9.3( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=-1 lpr=61 pi=[56,61)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:50 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 61 pg[9.3( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=-1 lpr=61 pi=[56,61)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:50 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 61 pg[9.1d( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=-1 lpr=61 pi=[56,61)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:50 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 61 pg[9.9( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=-1 lpr=61 pi=[56,61)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:50 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 61 pg[9.1f( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=-1 lpr=61 pi=[56,61)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:50 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 61 pg[9.9( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=-1 lpr=61 pi=[56,61)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:50 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 61 pg[9.1f( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=-1 lpr=61 pi=[56,61)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:50 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 61 pg[9.19( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=-1 lpr=61 pi=[56,61)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:50 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 61 pg[9.1b( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=-1 lpr=61 pi=[56,61)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:50 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 61 pg[9.1b( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=-1 lpr=61 pi=[56,61)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:50 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 61 pg[9.15( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=-1 lpr=61 pi=[56,61)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:50 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 61 pg[9.15( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=-1 lpr=61 pi=[56,61)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:50 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 61 pg[9.19( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] r=-1 lpr=61 pi=[56,61)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:50 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 61 pg[8.15( v 46'4 (0'0,46'4] local-lis/les=60/61 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60) [2] r=0 lpr=60 pi=[56,60)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:50 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 61 pg[11.d( empty local-lis/les=60/61 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60) [2] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:50 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 61 pg[11.15( empty local-lis/les=60/61 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60) [2] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:50 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 61 pg[11.9( empty local-lis/les=60/61 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60) [2] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:50 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 61 pg[8.d( v 46'4 (0'0,46'4] local-lis/les=60/61 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60) [2] r=0 lpr=60 pi=[56,60)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:50 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 61 pg[8.2( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=60/61 n=1 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60) [2] r=0 lpr=60 pi=[56,60)/1 crt=46'4 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:50 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 61 pg[8.10( v 46'4 (0'0,46'4] local-lis/les=60/61 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:50 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 61 pg[11.10( empty local-lis/les=60/61 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:50 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 61 pg[11.4( empty local-lis/les=60/61 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:50 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 61 pg[10.8( v 50'16 (0'0,50'16] local-lis/les=60/61 n=1 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=50'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:50 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 61 pg[11.14( empty local-lis/les=60/61 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:50 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 61 pg[10.15( v 59'20 lc 59'19 (0'0,59'20] local-lis/les=60/61 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=59'20 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:50 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 61 pg[10.4( v 50'16 (0'0,50'16] local-lis/les=60/61 n=1 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=50'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:50 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 61 pg[8.6( v 46'4 (0'0,46'4] local-lis/les=60/61 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:50 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 61 pg[11.6( empty local-lis/les=60/61 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:50 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 61 pg[8.b( v 46'4 (0'0,46'4] local-lis/les=60/61 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:50 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 61 pg[10.7( v 50'16 (0'0,50'16] local-lis/les=60/61 n=1 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=50'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:50 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 61 pg[10.9( v 59'20 lc 59'19 (0'0,59'20] local-lis/les=60/61 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=59'20 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:50 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 61 pg[10.f( v 50'16 (0'0,50'16] local-lis/les=60/61 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60) [1] r=0 lpr=60 pi=[58,60)/1 crt=50'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:50 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 61 pg[10.14( v 59'20 lc 59'19 (0'0,59'20] local-lis/les=60/61 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60) [1] r=0 lpr=60 pi=[58,60)/1 crt=59'20 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:50 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 61 pg[10.12( v 50'16 (0'0,50'16] local-lis/les=60/61 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60) [1] r=0 lpr=60 pi=[58,60)/1 crt=50'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:50 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 61 pg[10.b( v 50'16 (0'0,50'16] local-lis/les=60/61 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60) [1] r=0 lpr=60 pi=[58,60)/1 crt=50'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:50 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 61 pg[10.2( v 50'16 (0'0,50'16] local-lis/les=60/61 n=1 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60) [1] r=0 lpr=60 pi=[58,60)/1 crt=50'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:50 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 61 pg[10.6( v 50'16 (0'0,50'16] local-lis/les=60/61 n=1 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60) [1] r=0 lpr=60 pi=[58,60)/1 crt=50'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:50 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 61 pg[10.19( v 50'16 (0'0,50'16] local-lis/les=60/61 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60) [1] r=0 lpr=60 pi=[58,60)/1 crt=50'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:50 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 61 pg[10.1a( v 50'16 (0'0,50'16] local-lis/les=60/61 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60) [1] r=0 lpr=60 pi=[58,60)/1 crt=50'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:50 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 61 pg[10.11( v 50'16 (0'0,50'16] local-lis/les=60/61 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60) [1] r=0 lpr=60 pi=[58,60)/1 crt=50'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:50 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 61 pg[10.10( v 50'16 (0'0,50'16] local-lis/les=60/61 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60) [1] r=0 lpr=60 pi=[58,60)/1 crt=50'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:50 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 61 pg[10.13( v 50'16 (0'0,50'16] local-lis/les=60/61 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60) [1] r=0 lpr=60 pi=[58,60)/1 crt=50'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:50 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 61 pg[11.8( empty local-lis/les=60/61 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60) [2] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:50 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 61 pg[8.4( v 46'4 (0'0,46'4] local-lis/les=60/61 n=1 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60) [2] r=0 lpr=60 pi=[56,60)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:50 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 61 pg[11.18( empty local-lis/les=60/61 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60) [2] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:50 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 61 pg[8.1b( v 46'4 (0'0,46'4] local-lis/les=60/61 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60) [2] r=0 lpr=60 pi=[56,60)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:50 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 61 pg[11.1b( empty local-lis/les=60/61 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60) [2] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:50 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 61 pg[11.1c( empty local-lis/les=60/61 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60) [2] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:50 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 61 pg[8.12( v 46'4 (0'0,46'4] local-lis/les=60/61 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60) [2] r=0 lpr=60 pi=[56,60)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:50 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 61 pg[11.3( empty local-lis/les=60/61 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60) [2] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:50 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 61 pg[11.1e( empty local-lis/les=60/61 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60) [2] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:50 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 61 pg[8.11( v 46'4 (0'0,46'4] local-lis/les=60/61 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60) [2] r=0 lpr=60 pi=[56,60)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:50 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 61 pg[11.12( empty local-lis/les=60/61 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60) [2] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:50 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 61 pg[11.2( empty local-lis/les=60/61 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60) [2] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:50 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 61 pg[11.b( empty local-lis/les=60/61 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60) [2] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:50 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 61 pg[11.1a( empty local-lis/les=60/61 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60) [2] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:50 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 61 pg[11.1f( empty local-lis/les=60/61 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60) [2] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:50 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 61 pg[11.11( empty local-lis/les=60/61 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60) [2] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:50 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 61 pg[8.1c( v 46'4 (0'0,46'4] local-lis/les=60/61 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60) [2] r=0 lpr=60 pi=[56,60)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:50 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 61 pg[11.e( empty local-lis/les=60/61 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:50 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 61 pg[10.17( v 50'16 (0'0,50'16] local-lis/les=60/61 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=50'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:50 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 61 pg[8.f( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=60/61 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=46'4 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:50 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 61 pg[10.d( v 59'20 lc 59'19 (0'0,59'20] local-lis/les=60/61 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=59'20 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:50 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 61 pg[10.e( v 59'20 lc 59'19 (0'0,59'20] local-lis/les=60/61 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=59'20 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:50 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 61 pg[11.f( empty local-lis/les=60/61 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:50 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 61 pg[8.c( v 46'4 (0'0,46'4] local-lis/les=60/61 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:50 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 61 pg[8.e( v 46'4 (0'0,46'4] local-lis/les=60/61 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:50 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 61 pg[8.9( v 46'4 (0'0,46'4] local-lis/les=60/61 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:50 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 61 pg[11.1( empty local-lis/les=60/61 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:50 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 61 pg[11.19( empty local-lis/les=60/61 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:50 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 61 pg[8.18( v 46'4 (0'0,46'4] local-lis/les=60/61 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:50 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 61 pg[8.1a( v 46'4 (0'0,46'4] local-lis/les=60/61 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:50 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 61 pg[11.17( empty local-lis/les=60/61 n=0 ec=58/51 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:50 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 61 pg[8.14( v 46'4 (0'0,46'4] local-lis/les=60/61 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:50 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 61 pg[10.1e( v 50'16 (0'0,50'16] local-lis/les=60/61 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=50'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:50 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 61 pg[10.16( v 50'16 (0'0,50'16] local-lis/les=60/61 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=50'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:50 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 61 pg[8.1d( v 46'4 (0'0,46'4] local-lis/les=60/61 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:50 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 61 pg[8.1f( v 46'4 (0'0,46'4] local-lis/les=60/61 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:50 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 61 pg[10.1( v 50'16 (0'0,50'16] local-lis/les=60/61 n=1 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=50'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:50 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Oct 10 23:22:50 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Oct 10 23:22:51 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Oct 10 23:22:51 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Oct 10 23:22:51 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Oct 10 23:22:51 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 5.c scrub starts
Oct 10 23:22:51 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Oct 10 23:22:51 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 5.c scrub ok
Oct 10 23:22:51 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v155: 321 pgs: 321 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s; 15 B/s, 0 objects/s recovering
Oct 10 23:22:51 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0) v1
Oct 10 23:22:51 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct 10 23:22:52 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Oct 10 23:22:52 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 62 pg[9.13( v 53'585 (0'0,53'585] local-lis/les=61/62 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] async=[0] r=0 lpr=61 pi=[56,61)/1 crt=53'585 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:52 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Oct 10 23:22:52 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Oct 10 23:22:52 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 62 pg[9.17( v 53'585 (0'0,53'585] local-lis/les=61/62 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] async=[0] r=0 lpr=61 pi=[56,61)/1 crt=53'585 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:52 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 62 pg[9.7( v 53'585 (0'0,53'585] local-lis/les=61/62 n=7 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] async=[0] r=0 lpr=61 pi=[56,61)/1 crt=53'585 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:52 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 62 pg[9.5( v 53'585 (0'0,53'585] local-lis/les=61/62 n=7 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] async=[0] r=0 lpr=61 pi=[56,61)/1 crt=53'585 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:52 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 62 pg[9.9( v 53'585 (0'0,53'585] local-lis/les=61/62 n=7 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] async=[0] r=0 lpr=61 pi=[56,61)/1 crt=53'585 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:52 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 62 pg[9.b( v 53'585 (0'0,53'585] local-lis/les=61/62 n=7 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] async=[0] r=0 lpr=61 pi=[56,61)/1 crt=53'585 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:52 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 62 pg[9.11( v 53'585 (0'0,53'585] local-lis/les=61/62 n=7 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] async=[0] r=0 lpr=61 pi=[56,61)/1 crt=53'585 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:52 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 62 pg[9.f( v 53'585 (0'0,53'585] local-lis/les=61/62 n=7 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] async=[0] r=0 lpr=61 pi=[56,61)/1 crt=53'585 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:52 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 62 pg[9.d( v 53'585 (0'0,53'585] local-lis/les=61/62 n=7 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] async=[0] r=0 lpr=61 pi=[56,61)/1 crt=53'585 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:52 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 62 pg[9.1f( v 53'585 (0'0,53'585] local-lis/les=61/62 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] async=[0] r=0 lpr=61 pi=[56,61)/1 crt=53'585 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:52 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 62 pg[9.1( v 53'585 (0'0,53'585] local-lis/les=61/62 n=7 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] async=[0] r=0 lpr=61 pi=[56,61)/1 crt=53'585 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:52 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 62 pg[9.1d( v 53'585 (0'0,53'585] local-lis/les=61/62 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] async=[0] r=0 lpr=61 pi=[56,61)/1 crt=53'585 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:52 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 62 pg[9.1b( v 53'585 (0'0,53'585] local-lis/les=61/62 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] async=[0] r=0 lpr=61 pi=[56,61)/1 crt=53'585 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:52 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 62 pg[9.3( v 53'585 (0'0,53'585] local-lis/les=61/62 n=7 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] async=[0] r=0 lpr=61 pi=[56,61)/1 crt=53'585 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:52 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 62 pg[9.19( v 53'585 (0'0,53'585] local-lis/les=61/62 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] async=[0] r=0 lpr=61 pi=[56,61)/1 crt=53'585 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=11}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:52 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Oct 10 23:22:52 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 62 pg[9.15( v 53'585 (0'0,53'585] local-lis/les=61/62 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=61) [0]/[1] async=[0] r=0 lpr=61 pi=[56,61)/1 crt=53'585 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:52 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct 10 23:22:53 np0005480824 frosty_nash[104577]: {
Oct 10 23:22:53 np0005480824 frosty_nash[104577]:    "user_id": "openstack",
Oct 10 23:22:53 np0005480824 frosty_nash[104577]:    "display_name": "openstack",
Oct 10 23:22:53 np0005480824 frosty_nash[104577]:    "email": "",
Oct 10 23:22:53 np0005480824 frosty_nash[104577]:    "suspended": 0,
Oct 10 23:22:53 np0005480824 frosty_nash[104577]:    "max_buckets": 1000,
Oct 10 23:22:53 np0005480824 frosty_nash[104577]:    "subusers": [],
Oct 10 23:22:53 np0005480824 frosty_nash[104577]:    "keys": [
Oct 10 23:22:53 np0005480824 frosty_nash[104577]:        {
Oct 10 23:22:53 np0005480824 frosty_nash[104577]:            "user": "openstack",
Oct 10 23:22:53 np0005480824 frosty_nash[104577]:            "access_key": "GUFFEHFPUA1FR4NGMB3O",
Oct 10 23:22:53 np0005480824 frosty_nash[104577]:            "secret_key": "jgJY5ngz9ZLB5edk2BiEZCOmJpIU40koq5lO3frc"
Oct 10 23:22:53 np0005480824 frosty_nash[104577]:        }
Oct 10 23:22:53 np0005480824 frosty_nash[104577]:    ],
Oct 10 23:22:53 np0005480824 frosty_nash[104577]:    "swift_keys": [],
Oct 10 23:22:53 np0005480824 frosty_nash[104577]:    "caps": [],
Oct 10 23:22:53 np0005480824 frosty_nash[104577]:    "op_mask": "read, write, delete",
Oct 10 23:22:53 np0005480824 frosty_nash[104577]:    "default_placement": "",
Oct 10 23:22:53 np0005480824 frosty_nash[104577]:    "default_storage_class": "",
Oct 10 23:22:53 np0005480824 frosty_nash[104577]:    "placement_tags": [],
Oct 10 23:22:53 np0005480824 frosty_nash[104577]:    "bucket_quota": {
Oct 10 23:22:53 np0005480824 frosty_nash[104577]:        "enabled": false,
Oct 10 23:22:53 np0005480824 frosty_nash[104577]:        "check_on_raw": false,
Oct 10 23:22:53 np0005480824 frosty_nash[104577]:        "max_size": -1,
Oct 10 23:22:53 np0005480824 frosty_nash[104577]:        "max_size_kb": 0,
Oct 10 23:22:53 np0005480824 frosty_nash[104577]:        "max_objects": -1
Oct 10 23:22:53 np0005480824 frosty_nash[104577]:    },
Oct 10 23:22:53 np0005480824 frosty_nash[104577]:    "user_quota": {
Oct 10 23:22:53 np0005480824 frosty_nash[104577]:        "enabled": false,
Oct 10 23:22:53 np0005480824 frosty_nash[104577]:        "check_on_raw": false,
Oct 10 23:22:53 np0005480824 frosty_nash[104577]:        "max_size": -1,
Oct 10 23:22:53 np0005480824 frosty_nash[104577]:        "max_size_kb": 0,
Oct 10 23:22:53 np0005480824 frosty_nash[104577]:        "max_objects": -1
Oct 10 23:22:53 np0005480824 frosty_nash[104577]:    },
Oct 10 23:22:53 np0005480824 frosty_nash[104577]:    "temp_url_keys": [],
Oct 10 23:22:53 np0005480824 frosty_nash[104577]:    "type": "rgw",
Oct 10 23:22:53 np0005480824 frosty_nash[104577]:    "mfa_ids": []
Oct 10 23:22:53 np0005480824 frosty_nash[104577]: }
Oct 10 23:22:53 np0005480824 frosty_nash[104577]: 
Oct 10 23:22:53 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 6.13 scrub starts
Oct 10 23:22:53 np0005480824 systemd[1]: libpod-cf584a3d10030ddc7ec0d954648eed434a2bf395a961e2c93ef8fb3c0754e0a5.scope: Deactivated successfully.
Oct 10 23:22:53 np0005480824 podman[104562]: 2025-10-11 03:22:53.437113791 +0000 UTC m=+5.577837691 container died cf584a3d10030ddc7ec0d954648eed434a2bf395a961e2c93ef8fb3c0754e0a5 (image=quay.io/ceph/ceph:v18, name=frosty_nash, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:22:53 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 6.13 scrub ok
Oct 10 23:22:53 np0005480824 systemd[1]: var-lib-containers-storage-overlay-6525d8c817e2aa592366a393395f38aa6635d2e1d56bc776b2669ea70d7fa8b8-merged.mount: Deactivated successfully.
Oct 10 23:22:53 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:22:53 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Oct 10 23:22:53 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Oct 10 23:22:53 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Oct 10 23:22:53 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v158: 321 pgs: 16 activating+remapped, 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s; 103/245 objects misplaced (42.041%); 127 B/s, 1 objects/s recovering
Oct 10 23:22:53 np0005480824 podman[104562]: 2025-10-11 03:22:53.875002494 +0000 UTC m=+6.015726354 container remove cf584a3d10030ddc7ec0d954648eed434a2bf395a961e2c93ef8fb3c0754e0a5 (image=quay.io/ceph/ceph:v18, name=frosty_nash, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 10 23:22:53 np0005480824 systemd[1]: libpod-conmon-cf584a3d10030ddc7ec0d954648eed434a2bf395a961e2c93ef8fb3c0754e0a5.scope: Deactivated successfully.
Oct 10 23:22:54 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 64 pg[9.17( v 53'585 (0'0,53'585] local-lis/les=61/62 n=6 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=64 pruub=14.650074959s) [0] async=[0] r=-1 lpr=64 pi=[56,64)/1 crt=53'585 lcod 0'0 mlcod 0'0 active pruub 153.215072632s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:54 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 64 pg[9.17( v 53'585 (0'0,53'585] local-lis/les=61/62 n=6 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=64 pruub=14.649998665s) [0] r=-1 lpr=64 pi=[56,64)/1 crt=53'585 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 153.215072632s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:54 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 64 pg[9.b( v 53'585 (0'0,53'585] local-lis/les=61/62 n=7 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=64 pruub=14.649590492s) [0] async=[0] r=-1 lpr=64 pi=[56,64)/1 crt=53'585 lcod 0'0 mlcod 0'0 active pruub 153.215393066s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:54 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 64 pg[9.b( v 53'585 (0'0,53'585] local-lis/les=61/62 n=7 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=64 pruub=14.649542809s) [0] r=-1 lpr=64 pi=[56,64)/1 crt=53'585 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 153.215393066s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:54 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 64 pg[9.5( v 53'585 (0'0,53'585] local-lis/les=61/62 n=7 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=64 pruub=14.648327827s) [0] async=[0] r=-1 lpr=64 pi=[56,64)/1 crt=53'585 lcod 0'0 mlcod 0'0 active pruub 153.215087891s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:54 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 64 pg[9.13( v 53'585 (0'0,53'585] local-lis/les=61/62 n=6 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=64 pruub=14.469121933s) [0] async=[0] r=-1 lpr=64 pi=[56,64)/1 crt=53'585 lcod 0'0 mlcod 0'0 active pruub 153.036056519s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:54 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 64 pg[9.13( v 53'585 (0'0,53'585] local-lis/les=61/62 n=6 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=64 pruub=14.468978882s) [0] r=-1 lpr=64 pi=[56,64)/1 crt=53'585 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 153.036056519s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:54 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 64 pg[9.5( v 53'585 (0'0,53'585] local-lis/les=61/62 n=7 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=64 pruub=14.647971153s) [0] r=-1 lpr=64 pi=[56,64)/1 crt=53'585 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 153.215087891s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:54 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 64 pg[9.13( v 53'585 (0'0,53'585] local-lis/les=0/0 n=6 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=64) [0] r=0 lpr=64 pi=[56,64)/1 luod=0'0 crt=53'585 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:54 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 64 pg[9.13( v 53'585 (0'0,53'585] local-lis/les=0/0 n=6 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=64) [0] r=0 lpr=64 pi=[56,64)/1 crt=53'585 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:54 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 64 pg[9.5( v 53'585 (0'0,53'585] local-lis/les=0/0 n=7 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=64) [0] r=0 lpr=64 pi=[56,64)/1 luod=0'0 crt=53'585 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:54 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 64 pg[9.5( v 53'585 (0'0,53'585] local-lis/les=0/0 n=7 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=64) [0] r=0 lpr=64 pi=[56,64)/1 crt=53'585 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:54 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 64 pg[9.b( v 53'585 (0'0,53'585] local-lis/les=0/0 n=7 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=64) [0] r=0 lpr=64 pi=[56,64)/1 luod=0'0 crt=53'585 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:54 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 64 pg[9.b( v 53'585 (0'0,53'585] local-lis/les=0/0 n=7 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=64) [0] r=0 lpr=64 pi=[56,64)/1 crt=53'585 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:54 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 64 pg[9.17( v 53'585 (0'0,53'585] local-lis/les=0/0 n=6 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=64) [0] r=0 lpr=64 pi=[56,64)/1 luod=0'0 crt=53'585 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:54 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 64 pg[9.17( v 53'585 (0'0,53'585] local-lis/les=0/0 n=6 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=64) [0] r=0 lpr=64 pi=[56,64)/1 crt=53'585 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:54 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Oct 10 23:22:54 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 4.e scrub starts
Oct 10 23:22:54 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 4.e scrub ok
Oct 10 23:22:54 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Oct 10 23:22:54 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Oct 10 23:22:54 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Oct 10 23:22:54 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 65 pg[9.9( v 53'585 (0'0,53'585] local-lis/les=61/62 n=7 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=65 pruub=13.772694588s) [0] async=[0] r=-1 lpr=65 pi=[56,65)/1 crt=53'585 lcod 0'0 mlcod 0'0 active pruub 153.215286255s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:54 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 65 pg[9.9( v 53'585 (0'0,53'585] local-lis/les=61/62 n=7 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=65 pruub=13.772576332s) [0] r=-1 lpr=65 pi=[56,65)/1 crt=53'585 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 153.215286255s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:54 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 65 pg[9.9( v 53'585 (0'0,53'585] local-lis/les=0/0 n=7 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=65) [0] r=0 lpr=65 pi=[56,65)/1 luod=0'0 crt=53'585 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:54 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 65 pg[9.9( v 53'585 (0'0,53'585] local-lis/les=0/0 n=7 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=65) [0] r=0 lpr=65 pi=[56,65)/1 crt=53'585 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:54 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 65 pg[9.17( v 53'585 (0'0,53'585] local-lis/les=64/65 n=6 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=64) [0] r=0 lpr=64 pi=[56,64)/1 crt=53'585 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:54 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 65 pg[9.b( v 53'585 (0'0,53'585] local-lis/les=64/65 n=7 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=64) [0] r=0 lpr=64 pi=[56,64)/1 crt=53'585 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:54 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 65 pg[9.5( v 53'585 (0'0,53'585] local-lis/les=64/65 n=7 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=64) [0] r=0 lpr=64 pi=[56,64)/1 crt=53'585 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:54 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 65 pg[9.13( v 53'585 (0'0,53'585] local-lis/les=64/65 n=6 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=64) [0] r=0 lpr=64 pi=[56,64)/1 crt=53'585 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:55 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 6.f scrub starts
Oct 10 23:22:55 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 6.f scrub ok
Oct 10 23:22:55 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v160: 321 pgs: 16 activating+remapped, 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 919 B/s rd, 0 op/s; 103/245 objects misplaced (42.041%); 114 B/s, 1 objects/s recovering
Oct 10 23:22:55 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Oct 10 23:22:55 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Oct 10 23:22:56 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Oct 10 23:22:56 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 66 pg[9.11( v 53'585 (0'0,53'585] local-lis/les=61/62 n=7 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=66 pruub=12.263914108s) [0] async=[0] r=-1 lpr=66 pi=[56,66)/1 crt=53'585 lcod 0'0 mlcod 0'0 active pruub 153.215454102s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:56 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 66 pg[9.11( v 53'585 (0'0,53'585] local-lis/les=61/62 n=7 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=66 pruub=12.263821602s) [0] r=-1 lpr=66 pi=[56,66)/1 crt=53'585 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 153.215454102s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:56 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 66 pg[9.f( v 53'585 (0'0,53'585] local-lis/les=61/62 n=7 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=66 pruub=12.263510704s) [0] async=[0] r=-1 lpr=66 pi=[56,66)/1 crt=53'585 lcod 0'0 mlcod 0'0 active pruub 153.215484619s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:56 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 66 pg[9.f( v 53'585 (0'0,53'585] local-lis/les=61/62 n=7 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=66 pruub=12.263441086s) [0] r=-1 lpr=66 pi=[56,66)/1 crt=53'585 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 153.215484619s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:56 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 66 pg[9.11( v 53'585 (0'0,53'585] local-lis/les=0/0 n=7 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=66) [0] r=0 lpr=66 pi=[56,66)/1 luod=0'0 crt=53'585 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:56 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 66 pg[9.11( v 53'585 (0'0,53'585] local-lis/les=0/0 n=7 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=66) [0] r=0 lpr=66 pi=[56,66)/1 crt=53'585 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:56 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 66 pg[9.f( v 53'585 (0'0,53'585] local-lis/les=0/0 n=7 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=66) [0] r=0 lpr=66 pi=[56,66)/1 luod=0'0 crt=53'585 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:56 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 66 pg[9.f( v 53'585 (0'0,53'585] local-lis/les=0/0 n=7 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=66) [0] r=0 lpr=66 pi=[56,66)/1 crt=53'585 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:56 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 66 pg[9.9( v 53'585 (0'0,53'585] local-lis/les=65/66 n=7 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=65) [0] r=0 lpr=65 pi=[56,65)/1 crt=53'585 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:57 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Oct 10 23:22:57 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Oct 10 23:22:57 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v162: 321 pgs: 1 active+recovering+remapped, 2 peering, 1 active+remapped, 1 activating, 7 active+recovery_wait+remapped, 309 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 592 B/s rd, 0 op/s; 53/247 objects misplaced (21.457%); 178 B/s, 9 objects/s recovering
Oct 10 23:22:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:22:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:22:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:22:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:22:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:22:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:22:57 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Oct 10 23:22:58 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 67 pg[9.d( v 53'585 (0'0,53'585] local-lis/les=61/62 n=7 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=67 pruub=10.640684128s) [0] async=[0] r=-1 lpr=67 pi=[56,67)/1 crt=53'585 lcod 0'0 mlcod 0'0 active pruub 153.215744019s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:58 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 67 pg[9.d( v 53'585 (0'0,53'585] local-lis/les=61/62 n=7 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=67 pruub=10.640558243s) [0] r=-1 lpr=67 pi=[56,67)/1 crt=53'585 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 153.215744019s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:58 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 67 pg[9.d( v 53'585 (0'0,53'585] local-lis/les=0/0 n=7 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=67) [0] r=0 lpr=67 pi=[56,67)/1 luod=0'0 crt=53'585 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:58 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 67 pg[9.d( v 53'585 (0'0,53'585] local-lis/les=0/0 n=7 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=67) [0] r=0 lpr=67 pi=[56,67)/1 crt=53'585 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:58 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 67 pg[9.11( v 53'585 (0'0,53'585] local-lis/les=66/67 n=7 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=66) [0] r=0 lpr=66 pi=[56,66)/1 crt=53'585 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:58 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 67 pg[9.f( v 53'585 (0'0,53'585] local-lis/les=66/67 n=7 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=66) [0] r=0 lpr=66 pi=[56,66)/1 crt=53'585 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:58 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e67 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:22:58 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Oct 10 23:22:58 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Oct 10 23:22:58 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Oct 10 23:22:59 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 68 pg[9.1f( v 53'585 (0'0,53'585] local-lis/les=61/62 n=6 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=68 pruub=9.554499626s) [0] async=[0] r=-1 lpr=68 pi=[56,68)/1 crt=53'585 lcod 0'0 mlcod 0'0 active pruub 153.215850830s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:59 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 68 pg[9.1f( v 53'585 (0'0,53'585] local-lis/les=61/62 n=6 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=68 pruub=9.554429054s) [0] r=-1 lpr=68 pi=[56,68)/1 crt=53'585 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 153.215850830s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:59 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 68 pg[9.1d( v 53'585 (0'0,53'585] local-lis/les=61/62 n=6 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=68 pruub=9.554466248s) [0] async=[0] r=-1 lpr=68 pi=[56,68)/1 crt=53'585 lcod 0'0 mlcod 0'0 active pruub 153.216003418s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:59 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 68 pg[9.1d( v 53'585 (0'0,53'585] local-lis/les=61/62 n=6 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=68 pruub=9.554305077s) [0] r=-1 lpr=68 pi=[56,68)/1 crt=53'585 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 153.216003418s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:22:59 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 68 pg[9.1d( v 53'585 (0'0,53'585] local-lis/les=0/0 n=6 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=68) [0] r=0 lpr=68 pi=[56,68)/1 luod=0'0 crt=53'585 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:59 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 68 pg[9.1f( v 53'585 (0'0,53'585] local-lis/les=0/0 n=6 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=68) [0] r=0 lpr=68 pi=[56,68)/1 luod=0'0 crt=53'585 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:22:59 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 68 pg[9.1d( v 53'585 (0'0,53'585] local-lis/les=0/0 n=6 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=68) [0] r=0 lpr=68 pi=[56,68)/1 crt=53'585 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:59 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 68 pg[9.1f( v 53'585 (0'0,53'585] local-lis/les=0/0 n=6 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=68) [0] r=0 lpr=68 pi=[56,68)/1 crt=53'585 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:22:59 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 68 pg[9.d( v 53'585 (0'0,53'585] local-lis/les=67/68 n=7 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=67) [0] r=0 lpr=67 pi=[56,67)/1 crt=53'585 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:22:59 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Oct 10 23:22:59 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Oct 10 23:22:59 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v165: 321 pgs: 1 active+recovering+remapped, 2 peering, 1 active+remapped, 1 activating, 7 active+recovery_wait+remapped, 309 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 616 B/s rd, 0 op/s; 53/247 objects misplaced (21.457%); 185 B/s, 10 objects/s recovering
Oct 10 23:23:00 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Oct 10 23:23:00 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Oct 10 23:23:00 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Oct 10 23:23:00 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Oct 10 23:23:00 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Oct 10 23:23:00 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 69 pg[9.1( v 53'585 (0'0,53'585] local-lis/les=0/0 n=7 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=69) [0] r=0 lpr=69 pi=[56,69)/1 luod=0'0 crt=53'585 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:00 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 69 pg[9.1( v 53'585 (0'0,53'585] local-lis/les=61/62 n=7 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=69 pruub=15.870391846s) [0] async=[0] r=-1 lpr=69 pi=[56,69)/1 crt=53'585 lcod 0'0 mlcod 0'0 active pruub 161.216125488s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:00 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 69 pg[9.1( v 53'585 (0'0,53'585] local-lis/les=0/0 n=7 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=69) [0] r=0 lpr=69 pi=[56,69)/1 crt=53'585 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:23:00 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 69 pg[9.1( v 53'585 (0'0,53'585] local-lis/les=61/62 n=7 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=69 pruub=15.870288849s) [0] r=-1 lpr=69 pi=[56,69)/1 crt=53'585 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 161.216125488s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:23:00 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 69 pg[9.1d( v 53'585 (0'0,53'585] local-lis/les=68/69 n=6 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=68) [0] r=0 lpr=68 pi=[56,68)/1 crt=53'585 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:23:00 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 69 pg[9.1f( v 53'585 (0'0,53'585] local-lis/les=68/69 n=6 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=68) [0] r=0 lpr=68 pi=[56,68)/1 crt=53'585 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:23:01 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Oct 10 23:23:01 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Oct 10 23:23:01 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Oct 10 23:23:01 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 70 pg[9.3( v 53'585 (0'0,53'585] local-lis/les=61/62 n=7 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=70 pruub=14.882644653s) [0] async=[0] r=-1 lpr=70 pi=[56,70)/1 crt=53'585 lcod 0'0 mlcod 0'0 active pruub 161.216217041s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:01 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 70 pg[9.3( v 53'585 (0'0,53'585] local-lis/les=61/62 n=7 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=70 pruub=14.882510185s) [0] r=-1 lpr=70 pi=[56,70)/1 crt=53'585 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 161.216217041s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:23:01 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 70 pg[9.1b( v 53'585 (0'0,53'585] local-lis/les=61/62 n=6 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=70 pruub=14.881441116s) [0] async=[0] r=-1 lpr=70 pi=[56,70)/1 crt=53'585 lcod 0'0 mlcod 0'0 active pruub 161.216217041s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:01 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 70 pg[9.1b( v 53'585 (0'0,53'585] local-lis/les=61/62 n=6 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=70 pruub=14.881359100s) [0] r=-1 lpr=70 pi=[56,70)/1 crt=53'585 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 161.216217041s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:23:01 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v168: 321 pgs: 1 peering, 5 active+recovery_wait+remapped, 315 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 34/247 objects misplaced (13.765%); 82 B/s, 4 objects/s recovering
Oct 10 23:23:01 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 70 pg[9.3( v 53'585 (0'0,53'585] local-lis/les=0/0 n=7 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=70) [0] r=0 lpr=70 pi=[56,70)/1 luod=0'0 crt=53'585 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:01 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 70 pg[9.1b( v 53'585 (0'0,53'585] local-lis/les=0/0 n=6 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=70) [0] r=0 lpr=70 pi=[56,70)/1 luod=0'0 crt=53'585 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:01 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 70 pg[9.3( v 53'585 (0'0,53'585] local-lis/les=0/0 n=7 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=70) [0] r=0 lpr=70 pi=[56,70)/1 crt=53'585 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:23:01 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 70 pg[9.1b( v 53'585 (0'0,53'585] local-lis/les=0/0 n=6 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=70) [0] r=0 lpr=70 pi=[56,70)/1 crt=53'585 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:23:02 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 70 pg[9.1( v 53'585 (0'0,53'585] local-lis/les=69/70 n=7 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=69) [0] r=0 lpr=69 pi=[56,69)/1 crt=53'585 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:23:02 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 4.a deep-scrub starts
Oct 10 23:23:02 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 4.a deep-scrub ok
Oct 10 23:23:02 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Oct 10 23:23:02 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Oct 10 23:23:02 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Oct 10 23:23:02 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 71 pg[9.19( v 53'585 (0'0,53'585] local-lis/les=61/62 n=6 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=71 pruub=13.959978104s) [0] async=[0] r=-1 lpr=71 pi=[56,71)/1 crt=53'585 lcod 0'0 mlcod 0'0 active pruub 161.216293335s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:02 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 71 pg[9.19( v 53'585 (0'0,53'585] local-lis/les=61/62 n=6 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=71 pruub=13.958884239s) [0] r=-1 lpr=71 pi=[56,71)/1 crt=53'585 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 161.216293335s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:23:02 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 71 pg[9.19( v 53'585 (0'0,53'585] local-lis/les=0/0 n=6 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=71) [0] r=0 lpr=71 pi=[56,71)/1 luod=0'0 crt=53'585 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:02 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 71 pg[9.19( v 53'585 (0'0,53'585] local-lis/les=0/0 n=6 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=71) [0] r=0 lpr=71 pi=[56,71)/1 crt=53'585 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:23:02 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 71 pg[9.3( v 53'585 (0'0,53'585] local-lis/les=70/71 n=7 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=70) [0] r=0 lpr=70 pi=[56,70)/1 crt=53'585 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:23:02 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 71 pg[9.1b( v 53'585 (0'0,53'585] local-lis/les=70/71 n=6 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=70) [0] r=0 lpr=70 pi=[56,70)/1 crt=53'585 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:23:03 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Oct 10 23:23:03 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Oct 10 23:23:03 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 5.f deep-scrub starts
Oct 10 23:23:03 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 5.f deep-scrub ok
Oct 10 23:23:03 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v170: 321 pgs: 1 active+recovering+remapped, 1 peering, 4 active+recovery_wait+remapped, 315 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 29/247 objects misplaced (11.741%); 88 B/s, 4 objects/s recovering
Oct 10 23:23:03 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Oct 10 23:23:03 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e72 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:23:03 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 72 pg[9.15( v 53'585 (0'0,53'585] local-lis/les=0/0 n=6 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=72) [0] r=0 lpr=72 pi=[56,72)/1 luod=0'0 crt=53'585 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:03 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 72 pg[9.15( v 53'585 (0'0,53'585] local-lis/les=0/0 n=6 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=72) [0] r=0 lpr=72 pi=[56,72)/1 crt=53'585 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:23:03 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 72 pg[9.7( v 53'585 (0'0,53'585] local-lis/les=0/0 n=7 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=72) [0] r=0 lpr=72 pi=[56,72)/1 luod=0'0 crt=53'585 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:03 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 72 pg[9.7( v 53'585 (0'0,53'585] local-lis/les=0/0 n=7 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=72) [0] r=0 lpr=72 pi=[56,72)/1 crt=53'585 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:23:04 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 72 pg[9.7( v 53'585 (0'0,53'585] local-lis/les=61/62 n=7 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=72 pruub=12.642883301s) [0] async=[0] r=-1 lpr=72 pi=[56,72)/1 crt=53'585 lcod 0'0 mlcod 0'0 active pruub 161.215286255s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:04 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 72 pg[9.7( v 53'585 (0'0,53'585] local-lis/les=61/62 n=7 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=72 pruub=12.642784119s) [0] r=-1 lpr=72 pi=[56,72)/1 crt=53'585 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 161.215286255s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:23:04 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 72 pg[9.15( v 53'585 (0'0,53'585] local-lis/les=61/62 n=6 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=72 pruub=12.644405365s) [0] async=[0] r=-1 lpr=72 pi=[56,72)/1 crt=53'585 lcod 0'0 mlcod 0'0 active pruub 161.216918945s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:04 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 72 pg[9.15( v 53'585 (0'0,53'585] local-lis/les=61/62 n=6 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=72 pruub=12.644319534s) [0] r=-1 lpr=72 pi=[56,72)/1 crt=53'585 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 161.216918945s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:23:04 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 72 pg[9.19( v 53'585 (0'0,53'585] local-lis/les=71/72 n=6 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=71) [0] r=0 lpr=71 pi=[56,71)/1 crt=53'585 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:23:04 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Oct 10 23:23:04 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Oct 10 23:23:05 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Oct 10 23:23:05 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Oct 10 23:23:05 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Oct 10 23:23:05 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Oct 10 23:23:05 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v172: 321 pgs: 1 active+recovering+remapped, 1 peering, 4 active+recovery_wait+remapped, 315 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 29/247 objects misplaced (11.741%); 83 B/s, 4 objects/s recovering
Oct 10 23:23:05 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Oct 10 23:23:05 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 73 pg[9.15( v 53'585 (0'0,53'585] local-lis/les=72/73 n=6 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=72) [0] r=0 lpr=72 pi=[56,72)/1 crt=53'585 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:23:06 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 73 pg[9.7( v 53'585 (0'0,53'585] local-lis/les=72/73 n=7 ec=56/47 lis/c=61/56 les/c/f=62/57/0 sis=72) [0] r=0 lpr=72 pi=[56,72)/1 crt=53'585 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:23:06 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Oct 10 23:23:06 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Oct 10 23:23:07 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 2.18 deep-scrub starts
Oct 10 23:23:07 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 2.18 deep-scrub ok
Oct 10 23:23:07 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v174: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 164 B/s, 5 objects/s recovering
Oct 10 23:23:07 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0) v1
Oct 10 23:23:07 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct 10 23:23:08 np0005480824 systemd-logind[782]: New session 34 of user zuul.
Oct 10 23:23:08 np0005480824 systemd[1]: Started Session 34 of User zuul.
Oct 10 23:23:08 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Oct 10 23:23:08 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct 10 23:23:08 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Oct 10 23:23:08 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Oct 10 23:23:08 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Oct 10 23:23:08 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e74 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:23:09 np0005480824 python3.9[104830]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 23:23:09 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 6.1f scrub starts
Oct 10 23:23:09 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 6.1f scrub ok
Oct 10 23:23:09 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Oct 10 23:23:09 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Oct 10 23:23:09 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Oct 10 23:23:09 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v176: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 146 B/s, 4 objects/s recovering
Oct 10 23:23:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0) v1
Oct 10 23:23:09 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct 10 23:23:10 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Oct 10 23:23:10 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct 10 23:23:10 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Oct 10 23:23:10 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Oct 10 23:23:10 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Oct 10 23:23:10 np0005480824 python3.9[105048]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:23:11 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:23:11 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:23:11 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 10 23:23:11 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:23:11 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 10 23:23:11 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:23:11 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev d2b92b70-e3d5-4036-8d84-7c049bdb7de2 does not exist
Oct 10 23:23:11 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 84c7318f-795d-43cc-a188-c979c47f94bf does not exist
Oct 10 23:23:11 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev bf0c5b36-22aa-4272-9ccf-c6b20cf68270 does not exist
Oct 10 23:23:11 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 10 23:23:11 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 23:23:11 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 10 23:23:11 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:23:11 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:23:11 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:23:11 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v178: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 146 B/s, 4 objects/s recovering
Oct 10 23:23:11 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0) v1
Oct 10 23:23:11 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct 10 23:23:11 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Oct 10 23:23:11 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Oct 10 23:23:12 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Oct 10 23:23:12 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:23:12 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:23:12 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:23:12 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct 10 23:23:12 np0005480824 systemd[1]: packagekit.service: Deactivated successfully.
Oct 10 23:23:12 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 2.1d deep-scrub starts
Oct 10 23:23:12 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 2.1d deep-scrub ok
Oct 10 23:23:12 np0005480824 podman[105332]: 2025-10-11 03:23:12.28072741 +0000 UTC m=+0.022871584 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:23:12 np0005480824 podman[105332]: 2025-10-11 03:23:12.500136568 +0000 UTC m=+0.242280782 container create 8276ea4a9938dc8dc9e18d31942bc64c6870603877fbe244a55d43b622eb7cde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_brown, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 10 23:23:12 np0005480824 systemd[1]: Started libpod-conmon-8276ea4a9938dc8dc9e18d31942bc64c6870603877fbe244a55d43b622eb7cde.scope.
Oct 10 23:23:12 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:23:12 np0005480824 podman[105332]: 2025-10-11 03:23:12.672497926 +0000 UTC m=+0.414642120 container init 8276ea4a9938dc8dc9e18d31942bc64c6870603877fbe244a55d43b622eb7cde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:23:12 np0005480824 podman[105332]: 2025-10-11 03:23:12.684737683 +0000 UTC m=+0.426881897 container start 8276ea4a9938dc8dc9e18d31942bc64c6870603877fbe244a55d43b622eb7cde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 10 23:23:12 np0005480824 loving_brown[105350]: 167 167
Oct 10 23:23:12 np0005480824 systemd[1]: libpod-8276ea4a9938dc8dc9e18d31942bc64c6870603877fbe244a55d43b622eb7cde.scope: Deactivated successfully.
Oct 10 23:23:12 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Oct 10 23:23:12 np0005480824 podman[105332]: 2025-10-11 03:23:12.708222141 +0000 UTC m=+0.450366335 container attach 8276ea4a9938dc8dc9e18d31942bc64c6870603877fbe244a55d43b622eb7cde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_brown, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:23:12 np0005480824 podman[105332]: 2025-10-11 03:23:12.709713669 +0000 UTC m=+0.451857883 container died 8276ea4a9938dc8dc9e18d31942bc64c6870603877fbe244a55d43b622eb7cde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_brown, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 10 23:23:12 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Oct 10 23:23:12 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Oct 10 23:23:12 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Oct 10 23:23:12 np0005480824 systemd[1]: var-lib-containers-storage-overlay-1e8ce44c24981a9ea1730de597717c52f18a98579a16d6373d5856dd913c7e76-merged.mount: Deactivated successfully.
Oct 10 23:23:12 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Oct 10 23:23:12 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Oct 10 23:23:13 np0005480824 podman[105332]: 2025-10-11 03:23:13.065798321 +0000 UTC m=+0.807942495 container remove 8276ea4a9938dc8dc9e18d31942bc64c6870603877fbe244a55d43b622eb7cde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_brown, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:23:13 np0005480824 systemd[1]: libpod-conmon-8276ea4a9938dc8dc9e18d31942bc64c6870603877fbe244a55d43b622eb7cde.scope: Deactivated successfully.
Oct 10 23:23:13 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Oct 10 23:23:13 np0005480824 podman[105376]: 2025-10-11 03:23:13.251274188 +0000 UTC m=+0.085747249 container create 56c6d0abb5dbe7ad2cfa6ad46cb6cbc44f4772e1510e31ba31e43814fd84bc78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_rhodes, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 10 23:23:13 np0005480824 podman[105376]: 2025-10-11 03:23:13.192206038 +0000 UTC m=+0.026679079 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:23:13 np0005480824 systemd[1]: Started libpod-conmon-56c6d0abb5dbe7ad2cfa6ad46cb6cbc44f4772e1510e31ba31e43814fd84bc78.scope.
Oct 10 23:23:13 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:23:13 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/959f2eb23ef9f4c2f302325bbe3871a1820223579e3a615514245aa9caf0f056/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:23:13 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/959f2eb23ef9f4c2f302325bbe3871a1820223579e3a615514245aa9caf0f056/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:23:13 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/959f2eb23ef9f4c2f302325bbe3871a1820223579e3a615514245aa9caf0f056/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:23:13 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/959f2eb23ef9f4c2f302325bbe3871a1820223579e3a615514245aa9caf0f056/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:23:13 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/959f2eb23ef9f4c2f302325bbe3871a1820223579e3a615514245aa9caf0f056/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 23:23:13 np0005480824 podman[105376]: 2025-10-11 03:23:13.425890553 +0000 UTC m=+0.260363604 container init 56c6d0abb5dbe7ad2cfa6ad46cb6cbc44f4772e1510e31ba31e43814fd84bc78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:23:13 np0005480824 podman[105376]: 2025-10-11 03:23:13.432808537 +0000 UTC m=+0.267281568 container start 56c6d0abb5dbe7ad2cfa6ad46cb6cbc44f4772e1510e31ba31e43814fd84bc78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_rhodes, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:23:13 np0005480824 podman[105376]: 2025-10-11 03:23:13.43617346 +0000 UTC m=+0.270646491 container attach 56c6d0abb5dbe7ad2cfa6ad46cb6cbc44f4772e1510e31ba31e43814fd84bc78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 10 23:23:13 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v180: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:23:13 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0) v1
Oct 10 23:23:13 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct 10 23:23:13 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e76 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:23:13 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 76 pg[9.16( v 53'585 (0'0,53'585] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=76 pruub=11.747881889s) [2] r=-1 lpr=76 pi=[56,76)/1 crt=53'585 lcod 0'0 mlcod 0'0 active pruub 170.183074951s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:13 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 76 pg[9.16( v 53'585 (0'0,53'585] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=76 pruub=11.747839928s) [2] r=-1 lpr=76 pi=[56,76)/1 crt=53'585 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 170.183074951s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:23:13 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 76 pg[9.16( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=76) [2] r=0 lpr=76 pi=[56,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:23:13 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 76 pg[9.e( v 53'585 (0'0,53'585] local-lis/les=56/57 n=7 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=76 pruub=11.747801781s) [2] r=-1 lpr=76 pi=[56,76)/1 crt=53'585 lcod 0'0 mlcod 0'0 active pruub 170.183578491s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:13 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 76 pg[9.e( v 53'585 (0'0,53'585] local-lis/les=56/57 n=7 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=76 pruub=11.747752190s) [2] r=-1 lpr=76 pi=[56,76)/1 crt=53'585 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 170.183578491s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:23:13 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 76 pg[9.6( v 53'585 (0'0,53'585] local-lis/les=56/57 n=7 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=76 pruub=11.747783661s) [2] r=-1 lpr=76 pi=[56,76)/1 crt=53'585 lcod 0'0 mlcod 0'0 active pruub 170.183670044s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:13 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 76 pg[9.6( v 53'585 (0'0,53'585] local-lis/les=56/57 n=7 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=76 pruub=11.747764587s) [2] r=-1 lpr=76 pi=[56,76)/1 crt=53'585 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 170.183670044s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:23:13 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 76 pg[9.e( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=76) [2] r=0 lpr=76 pi=[56,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:23:13 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 76 pg[9.1e( v 53'585 (0'0,53'585] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=76 pruub=11.747715950s) [2] r=-1 lpr=76 pi=[56,76)/1 crt=53'585 lcod 0'0 mlcod 0'0 active pruub 170.184112549s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:13 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 76 pg[9.1e( v 53'585 (0'0,53'585] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=76 pruub=11.747689247s) [2] r=-1 lpr=76 pi=[56,76)/1 crt=53'585 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 170.184112549s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:23:13 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 76 pg[9.6( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=76) [2] r=0 lpr=76 pi=[56,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:23:13 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 76 pg[9.1e( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=76) [2] r=0 lpr=76 pi=[56,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:23:14 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Oct 10 23:23:14 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Oct 10 23:23:14 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Oct 10 23:23:14 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Oct 10 23:23:14 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 77 pg[9.16( v 53'585 (0'0,53'585] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=77) [2]/[1] r=0 lpr=77 pi=[56,77)/1 crt=53'585 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:14 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 77 pg[9.16( v 53'585 (0'0,53'585] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=77) [2]/[1] r=0 lpr=77 pi=[56,77)/1 crt=53'585 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 23:23:14 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 77 pg[9.e( v 53'585 (0'0,53'585] local-lis/les=56/57 n=7 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=77) [2]/[1] r=0 lpr=77 pi=[56,77)/1 crt=53'585 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:14 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 77 pg[9.6( v 53'585 (0'0,53'585] local-lis/les=56/57 n=7 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=77) [2]/[1] r=0 lpr=77 pi=[56,77)/1 crt=53'585 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:14 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 77 pg[9.6( v 53'585 (0'0,53'585] local-lis/les=56/57 n=7 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=77) [2]/[1] r=0 lpr=77 pi=[56,77)/1 crt=53'585 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 23:23:14 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 77 pg[9.1e( v 53'585 (0'0,53'585] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=77) [2]/[1] r=0 lpr=77 pi=[56,77)/1 crt=53'585 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:14 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 77 pg[9.1e( v 53'585 (0'0,53'585] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=77) [2]/[1] r=0 lpr=77 pi=[56,77)/1 crt=53'585 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 23:23:14 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 77 pg[9.e( v 53'585 (0'0,53'585] local-lis/les=56/57 n=7 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=77) [2]/[1] r=0 lpr=77 pi=[56,77)/1 crt=53'585 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 23:23:14 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 77 pg[9.6( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=77) [2]/[1] r=-1 lpr=77 pi=[56,77)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:14 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 77 pg[9.e( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=77) [2]/[1] r=-1 lpr=77 pi=[56,77)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:14 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 77 pg[9.e( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=77) [2]/[1] r=-1 lpr=77 pi=[56,77)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 23:23:14 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 77 pg[9.6( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=77) [2]/[1] r=-1 lpr=77 pi=[56,77)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 23:23:14 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 77 pg[9.1e( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=77) [2]/[1] r=-1 lpr=77 pi=[56,77)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:14 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct 10 23:23:14 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 77 pg[9.16( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=77) [2]/[1] r=-1 lpr=77 pi=[56,77)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:14 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 77 pg[9.16( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=77) [2]/[1] r=-1 lpr=77 pi=[56,77)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 23:23:14 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 77 pg[9.1e( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=77) [2]/[1] r=-1 lpr=77 pi=[56,77)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 23:23:14 np0005480824 musing_rhodes[105393]: --> passed data devices: 0 physical, 3 LVM
Oct 10 23:23:14 np0005480824 musing_rhodes[105393]: --> relative data size: 1.0
Oct 10 23:23:14 np0005480824 musing_rhodes[105393]: --> All data devices are unavailable
Oct 10 23:23:14 np0005480824 systemd[1]: libpod-56c6d0abb5dbe7ad2cfa6ad46cb6cbc44f4772e1510e31ba31e43814fd84bc78.scope: Deactivated successfully.
Oct 10 23:23:14 np0005480824 systemd[1]: libpod-56c6d0abb5dbe7ad2cfa6ad46cb6cbc44f4772e1510e31ba31e43814fd84bc78.scope: Consumed 1.008s CPU time.
Oct 10 23:23:14 np0005480824 podman[105376]: 2025-10-11 03:23:14.513034032 +0000 UTC m=+1.347507053 container died 56c6d0abb5dbe7ad2cfa6ad46cb6cbc44f4772e1510e31ba31e43814fd84bc78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True)
Oct 10 23:23:14 np0005480824 systemd[1]: var-lib-containers-storage-overlay-959f2eb23ef9f4c2f302325bbe3871a1820223579e3a615514245aa9caf0f056-merged.mount: Deactivated successfully.
Oct 10 23:23:14 np0005480824 podman[105376]: 2025-10-11 03:23:14.602660707 +0000 UTC m=+1.437133728 container remove 56c6d0abb5dbe7ad2cfa6ad46cb6cbc44f4772e1510e31ba31e43814fd84bc78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_rhodes, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:23:14 np0005480824 systemd[1]: libpod-conmon-56c6d0abb5dbe7ad2cfa6ad46cb6cbc44f4772e1510e31ba31e43814fd84bc78.scope: Deactivated successfully.
Oct 10 23:23:15 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Oct 10 23:23:15 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Oct 10 23:23:15 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Oct 10 23:23:15 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Oct 10 23:23:15 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 78 pg[9.1e( v 53'585 (0'0,53'585] local-lis/les=77/78 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=77) [2]/[1] async=[2] r=0 lpr=77 pi=[56,77)/1 crt=53'585 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:23:15 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 78 pg[9.e( v 53'585 (0'0,53'585] local-lis/les=77/78 n=7 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=77) [2]/[1] async=[2] r=0 lpr=77 pi=[56,77)/1 crt=53'585 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:23:15 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 78 pg[9.6( v 53'585 (0'0,53'585] local-lis/les=77/78 n=7 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=77) [2]/[1] async=[2] r=0 lpr=77 pi=[56,77)/1 crt=53'585 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:23:15 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 78 pg[9.16( v 53'585 (0'0,53'585] local-lis/les=77/78 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=77) [2]/[1] async=[2] r=0 lpr=77 pi=[56,77)/1 crt=53'585 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:23:15 np0005480824 podman[105587]: 2025-10-11 03:23:15.14194086 +0000 UTC m=+0.050356243 container create 243f5b00e985ffffd1a4fbe87f819603796e12b80b62625c1e7cedbdba71fd48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_torvalds, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 10 23:23:15 np0005480824 systemd[1]: Started libpod-conmon-243f5b00e985ffffd1a4fbe87f819603796e12b80b62625c1e7cedbdba71fd48.scope.
Oct 10 23:23:15 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:23:15 np0005480824 podman[105587]: 2025-10-11 03:23:15.110655996 +0000 UTC m=+0.019071399 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:23:15 np0005480824 podman[105587]: 2025-10-11 03:23:15.207200625 +0000 UTC m=+0.115616028 container init 243f5b00e985ffffd1a4fbe87f819603796e12b80b62625c1e7cedbdba71fd48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_torvalds, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 10 23:23:15 np0005480824 podman[105587]: 2025-10-11 03:23:15.215330819 +0000 UTC m=+0.123746202 container start 243f5b00e985ffffd1a4fbe87f819603796e12b80b62625c1e7cedbdba71fd48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:23:15 np0005480824 podman[105587]: 2025-10-11 03:23:15.218860858 +0000 UTC m=+0.127276251 container attach 243f5b00e985ffffd1a4fbe87f819603796e12b80b62625c1e7cedbdba71fd48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:23:15 np0005480824 clever_torvalds[105603]: 167 167
Oct 10 23:23:15 np0005480824 systemd[1]: libpod-243f5b00e985ffffd1a4fbe87f819603796e12b80b62625c1e7cedbdba71fd48.scope: Deactivated successfully.
Oct 10 23:23:15 np0005480824 podman[105587]: 2025-10-11 03:23:15.221207276 +0000 UTC m=+0.129622659 container died 243f5b00e985ffffd1a4fbe87f819603796e12b80b62625c1e7cedbdba71fd48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_torvalds, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 10 23:23:15 np0005480824 systemd[1]: var-lib-containers-storage-overlay-f52efbae13eea08522f4e948fd49ebc46c6631a638a69d6e99bd56b73ba0fd3b-merged.mount: Deactivated successfully.
Oct 10 23:23:15 np0005480824 podman[105587]: 2025-10-11 03:23:15.259532327 +0000 UTC m=+0.167947720 container remove 243f5b00e985ffffd1a4fbe87f819603796e12b80b62625c1e7cedbdba71fd48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 10 23:23:15 np0005480824 systemd[1]: libpod-conmon-243f5b00e985ffffd1a4fbe87f819603796e12b80b62625c1e7cedbdba71fd48.scope: Deactivated successfully.
Oct 10 23:23:15 np0005480824 podman[105627]: 2025-10-11 03:23:15.443191628 +0000 UTC m=+0.054194379 container create da45ac745369e24f82877269917ca854fcc38bfc3aeaa0a021fc321f93212d01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wescoff, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:23:15 np0005480824 systemd[1]: Started libpod-conmon-da45ac745369e24f82877269917ca854fcc38bfc3aeaa0a021fc321f93212d01.scope.
Oct 10 23:23:15 np0005480824 podman[105627]: 2025-10-11 03:23:15.422872278 +0000 UTC m=+0.033875069 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:23:15 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:23:15 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c564ddedb7ccfc59cbf5e1c02a76b3ad3c8fce4aa711190cb36fd4afc1a56f14/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:23:15 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c564ddedb7ccfc59cbf5e1c02a76b3ad3c8fce4aa711190cb36fd4afc1a56f14/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:23:15 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c564ddedb7ccfc59cbf5e1c02a76b3ad3c8fce4aa711190cb36fd4afc1a56f14/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:23:15 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c564ddedb7ccfc59cbf5e1c02a76b3ad3c8fce4aa711190cb36fd4afc1a56f14/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:23:15 np0005480824 podman[105627]: 2025-10-11 03:23:15.549029019 +0000 UTC m=+0.160031860 container init da45ac745369e24f82877269917ca854fcc38bfc3aeaa0a021fc321f93212d01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wescoff, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:23:15 np0005480824 podman[105627]: 2025-10-11 03:23:15.561965634 +0000 UTC m=+0.172968415 container start da45ac745369e24f82877269917ca854fcc38bfc3aeaa0a021fc321f93212d01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wescoff, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 10 23:23:15 np0005480824 podman[105627]: 2025-10-11 03:23:15.566434116 +0000 UTC m=+0.177436897 container attach da45ac745369e24f82877269917ca854fcc38bfc3aeaa0a021fc321f93212d01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wescoff, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:23:15 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 4.1 deep-scrub starts
Oct 10 23:23:15 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 4.1 deep-scrub ok
Oct 10 23:23:15 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v183: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:23:15 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0) v1
Oct 10 23:23:15 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct 10 23:23:16 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Oct 10 23:23:16 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct 10 23:23:16 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Oct 10 23:23:16 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Oct 10 23:23:16 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Oct 10 23:23:16 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 79 pg[9.16( v 53'585 (0'0,53'585] local-lis/les=77/78 n=6 ec=56/47 lis/c=77/56 les/c/f=78/57/0 sis=79 pruub=14.989492416s) [2] async=[2] r=-1 lpr=79 pi=[56,79)/1 crt=53'585 lcod 0'0 mlcod 0'0 active pruub 175.702606201s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:16 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 79 pg[9.16( v 53'585 (0'0,53'585] local-lis/les=77/78 n=6 ec=56/47 lis/c=77/56 les/c/f=78/57/0 sis=79 pruub=14.989424706s) [2] r=-1 lpr=79 pi=[56,79)/1 crt=53'585 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 175.702606201s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:23:16 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 79 pg[9.8( v 53'585 (0'0,53'585] local-lis/les=56/57 n=7 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=79 pruub=9.469306946s) [2] r=-1 lpr=79 pi=[56,79)/1 crt=53'585 lcod 0'0 mlcod 0'0 active pruub 170.183563232s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:16 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 79 pg[9.e( v 53'585 (0'0,53'585] local-lis/les=77/78 n=7 ec=56/47 lis/c=77/56 les/c/f=78/57/0 sis=79 pruub=14.988253593s) [2] async=[2] r=-1 lpr=79 pi=[56,79)/1 crt=53'585 lcod 0'0 mlcod 0'0 active pruub 175.702514648s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:16 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 79 pg[9.8( v 53'585 (0'0,53'585] local-lis/les=56/57 n=7 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=79 pruub=9.469264030s) [2] r=-1 lpr=79 pi=[56,79)/1 crt=53'585 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 170.183563232s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:23:16 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 79 pg[9.6( v 53'585 (0'0,53'585] local-lis/les=77/78 n=7 ec=56/47 lis/c=77/56 les/c/f=78/57/0 sis=79 pruub=14.988197327s) [2] async=[2] r=-1 lpr=79 pi=[56,79)/1 crt=53'585 lcod 0'0 mlcod 0'0 active pruub 175.702529907s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:16 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 79 pg[9.6( v 53'585 (0'0,53'585] local-lis/les=77/78 n=7 ec=56/47 lis/c=77/56 les/c/f=78/57/0 sis=79 pruub=14.988136292s) [2] r=-1 lpr=79 pi=[56,79)/1 crt=53'585 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 175.702529907s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:23:16 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 79 pg[9.e( v 53'585 (0'0,53'585] local-lis/les=77/78 n=7 ec=56/47 lis/c=77/56 les/c/f=78/57/0 sis=79 pruub=14.988123894s) [2] r=-1 lpr=79 pi=[56,79)/1 crt=53'585 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 175.702514648s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:23:16 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 79 pg[9.18( v 53'585 (0'0,53'585] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=79 pruub=9.469278336s) [2] r=-1 lpr=79 pi=[56,79)/1 crt=53'585 lcod 0'0 mlcod 0'0 active pruub 170.183929443s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:16 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 79 pg[9.18( v 53'585 (0'0,53'585] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=79 pruub=9.469258308s) [2] r=-1 lpr=79 pi=[56,79)/1 crt=53'585 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 170.183929443s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:23:16 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 79 pg[9.1e( v 53'585 (0'0,53'585] local-lis/les=77/78 n=6 ec=56/47 lis/c=77/56 les/c/f=78/57/0 sis=79 pruub=14.982062340s) [2] async=[2] r=-1 lpr=79 pi=[56,79)/1 crt=53'585 lcod 0'0 mlcod 0'0 active pruub 175.696792603s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:16 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 79 pg[9.1e( v 53'585 (0'0,53'585] local-lis/les=77/78 n=6 ec=56/47 lis/c=77/56 les/c/f=78/57/0 sis=79 pruub=14.982022285s) [2] r=-1 lpr=79 pi=[56,79)/1 crt=53'585 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 175.696792603s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:23:16 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 77 pg[9.1f( v 53'585 (0'0,53'585] local-lis/les=68/69 n=6 ec=56/47 lis/c=68/68 les/c/f=69/69/0 sis=77 pruub=8.808925629s) [2] r=-1 lpr=77 pi=[68,77)/1 crt=53'585 mlcod 0'0 active pruub 175.028396606s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:16 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 79 pg[9.1f( v 53'585 (0'0,53'585] local-lis/les=68/69 n=6 ec=56/47 lis/c=68/68 les/c/f=69/69/0 sis=77 pruub=8.808854103s) [2] r=-1 lpr=77 pi=[68,77)/1 crt=53'585 mlcod 0'0 unknown NOTIFY pruub 175.028396606s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:23:16 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 77 pg[9.f( v 53'585 (0'0,53'585] local-lis/les=66/67 n=7 ec=56/47 lis/c=66/66 les/c/f=67/67/0 sis=77 pruub=14.138860703s) [2] r=-1 lpr=77 pi=[66,77)/1 crt=53'585 mlcod 0'0 active pruub 180.358886719s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:16 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 79 pg[9.f( v 53'585 (0'0,53'585] local-lis/les=66/67 n=7 ec=56/47 lis/c=66/66 les/c/f=67/67/0 sis=77 pruub=14.138750076s) [2] r=-1 lpr=77 pi=[66,77)/1 crt=53'585 mlcod 0'0 unknown NOTIFY pruub 180.358886719s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:23:16 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 77 pg[9.17( v 53'585 (0'0,53'585] local-lis/les=64/65 n=6 ec=56/47 lis/c=64/64 les/c/f=65/65/0 sis=77 pruub=10.742995262s) [2] r=-1 lpr=77 pi=[64,77)/1 crt=53'585 mlcod 0'0 active pruub 176.963653564s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:16 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 79 pg[9.17( v 53'585 (0'0,53'585] local-lis/les=64/65 n=6 ec=56/47 lis/c=64/64 les/c/f=65/65/0 sis=77 pruub=10.742940903s) [2] r=-1 lpr=77 pi=[64,77)/1 crt=53'585 mlcod 0'0 unknown NOTIFY pruub 176.963653564s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:23:16 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 77 pg[9.7( v 53'585 (0'0,53'585] local-lis/les=72/73 n=7 ec=56/47 lis/c=72/72 les/c/f=73/73/0 sis=77 pruub=13.843365669s) [2] r=-1 lpr=77 pi=[72,77)/1 crt=53'585 mlcod 0'0 active pruub 180.064376831s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:16 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 79 pg[9.7( v 53'585 (0'0,53'585] local-lis/les=72/73 n=7 ec=56/47 lis/c=72/72 les/c/f=73/73/0 sis=77 pruub=13.842973709s) [2] r=-1 lpr=77 pi=[72,77)/1 crt=53'585 mlcod 0'0 unknown NOTIFY pruub 180.064376831s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:23:16 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 79 pg[9.17( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=64/64 les/c/f=65/65/0 sis=77) [2] r=0 lpr=79 pi=[64,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:23:16 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 79 pg[9.f( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=66/66 les/c/f=67/67/0 sis=77) [2] r=0 lpr=79 pi=[66,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:23:16 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 79 pg[9.6( v 53'585 (0'0,53'585] local-lis/les=0/0 n=7 ec=56/47 lis/c=77/56 les/c/f=78/57/0 sis=79) [2] r=0 lpr=79 pi=[56,79)/1 luod=0'0 crt=53'585 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:16 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 79 pg[9.6( v 53'585 (0'0,53'585] local-lis/les=0/0 n=7 ec=56/47 lis/c=77/56 les/c/f=78/57/0 sis=79) [2] r=0 lpr=79 pi=[56,79)/1 crt=53'585 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:23:16 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 79 pg[9.16( v 53'585 (0'0,53'585] local-lis/les=0/0 n=6 ec=56/47 lis/c=77/56 les/c/f=78/57/0 sis=79) [2] r=0 lpr=79 pi=[56,79)/1 luod=0'0 crt=53'585 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:16 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 79 pg[9.16( v 53'585 (0'0,53'585] local-lis/les=0/0 n=6 ec=56/47 lis/c=77/56 les/c/f=78/57/0 sis=79) [2] r=0 lpr=79 pi=[56,79)/1 crt=53'585 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:23:16 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 79 pg[9.1f( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=68/68 les/c/f=69/69/0 sis=77) [2] r=0 lpr=79 pi=[68,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:23:16 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 79 pg[9.1e( v 53'585 (0'0,53'585] local-lis/les=0/0 n=6 ec=56/47 lis/c=77/56 les/c/f=78/57/0 sis=79) [2] r=0 lpr=79 pi=[56,79)/1 luod=0'0 crt=53'585 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:16 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 79 pg[9.1e( v 53'585 (0'0,53'585] local-lis/les=0/0 n=6 ec=56/47 lis/c=77/56 les/c/f=78/57/0 sis=79) [2] r=0 lpr=79 pi=[56,79)/1 crt=53'585 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:23:16 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 79 pg[9.18( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=79) [2] r=0 lpr=79 pi=[56,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:23:16 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 79 pg[9.e( v 53'585 (0'0,53'585] local-lis/les=0/0 n=7 ec=56/47 lis/c=77/56 les/c/f=78/57/0 sis=79) [2] r=0 lpr=79 pi=[56,79)/1 luod=0'0 crt=53'585 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:16 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 79 pg[9.e( v 53'585 (0'0,53'585] local-lis/les=0/0 n=7 ec=56/47 lis/c=77/56 les/c/f=78/57/0 sis=79) [2] r=0 lpr=79 pi=[56,79)/1 crt=53'585 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:23:16 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 79 pg[9.8( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=79) [2] r=0 lpr=79 pi=[56,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:23:16 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 79 pg[9.7( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=72/72 les/c/f=73/73/0 sis=77) [2] r=0 lpr=79 pi=[72,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]: {
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:    "0": [
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:        {
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:            "devices": [
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:                "/dev/loop3"
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:            ],
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:            "lv_name": "ceph_lv0",
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:            "lv_size": "21470642176",
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0d82ce-20ea-470d-959e-f67202028a60,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:            "lv_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:            "name": "ceph_lv0",
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:            "tags": {
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:                "ceph.block_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:                "ceph.cluster_name": "ceph",
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:                "ceph.crush_device_class": "",
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:                "ceph.encrypted": "0",
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:                "ceph.osd_fsid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:                "ceph.osd_id": "0",
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:                "ceph.type": "block",
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:                "ceph.vdo": "0"
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:            },
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:            "type": "block",
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:            "vg_name": "ceph_vg0"
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:        }
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:    ],
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:    "1": [
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:        {
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:            "devices": [
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:                "/dev/loop4"
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:            ],
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:            "lv_name": "ceph_lv1",
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:            "lv_size": "21470642176",
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6875119e-c210-4ad1-aca9-6a8084a5ecc8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:            "lv_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:            "name": "ceph_lv1",
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:            "tags": {
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:                "ceph.block_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:                "ceph.cluster_name": "ceph",
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:                "ceph.crush_device_class": "",
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:                "ceph.encrypted": "0",
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:                "ceph.osd_fsid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:                "ceph.osd_id": "1",
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:                "ceph.type": "block",
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:                "ceph.vdo": "0"
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:            },
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:            "type": "block",
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:            "vg_name": "ceph_vg1"
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:        }
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:    ],
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:    "2": [
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:        {
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:            "devices": [
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:                "/dev/loop5"
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:            ],
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:            "lv_name": "ceph_lv2",
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:            "lv_size": "21470642176",
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e86945e8-6909-4584-9098-cee0dfe9add4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:            "lv_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:            "name": "ceph_lv2",
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:            "tags": {
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:                "ceph.block_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:                "ceph.cluster_name": "ceph",
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:                "ceph.crush_device_class": "",
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:                "ceph.encrypted": "0",
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:                "ceph.osd_fsid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:                "ceph.osd_id": "2",
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:                "ceph.type": "block",
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:                "ceph.vdo": "0"
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:            },
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:            "type": "block",
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:            "vg_name": "ceph_vg2"
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:        }
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]:    ]
Oct 10 23:23:16 np0005480824 eager_wescoff[105644]: }
Oct 10 23:23:16 np0005480824 systemd[1]: libpod-da45ac745369e24f82877269917ca854fcc38bfc3aeaa0a021fc321f93212d01.scope: Deactivated successfully.
Oct 10 23:23:16 np0005480824 podman[105627]: 2025-10-11 03:23:16.323012072 +0000 UTC m=+0.934014843 container died da45ac745369e24f82877269917ca854fcc38bfc3aeaa0a021fc321f93212d01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wescoff, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:23:16 np0005480824 systemd[1]: var-lib-containers-storage-overlay-c564ddedb7ccfc59cbf5e1c02a76b3ad3c8fce4aa711190cb36fd4afc1a56f14-merged.mount: Deactivated successfully.
Oct 10 23:23:16 np0005480824 podman[105627]: 2025-10-11 03:23:16.383432436 +0000 UTC m=+0.994435177 container remove da45ac745369e24f82877269917ca854fcc38bfc3aeaa0a021fc321f93212d01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wescoff, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:23:16 np0005480824 systemd[1]: libpod-conmon-da45ac745369e24f82877269917ca854fcc38bfc3aeaa0a021fc321f93212d01.scope: Deactivated successfully.
Oct 10 23:23:17 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Oct 10 23:23:17 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Oct 10 23:23:17 np0005480824 podman[105814]: 2025-10-11 03:23:17.174093177 +0000 UTC m=+0.085344480 container create b7e022ce015c3baa46c481adaed22c24c05356efb6c2755b2f8dac41ebf7499a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_sanderson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 10 23:23:17 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Oct 10 23:23:17 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Oct 10 23:23:17 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 80 pg[9.1f( v 53'585 (0'0,53'585] local-lis/les=68/69 n=6 ec=56/47 lis/c=68/68 les/c/f=69/69/0 sis=80) [2]/[0] r=0 lpr=80 pi=[68,80)/1 crt=53'585 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:17 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 80 pg[9.1f( v 53'585 (0'0,53'585] local-lis/les=68/69 n=6 ec=56/47 lis/c=68/68 les/c/f=69/69/0 sis=80) [2]/[0] r=0 lpr=80 pi=[68,80)/1 crt=53'585 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 23:23:17 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 80 pg[9.f( v 53'585 (0'0,53'585] local-lis/les=66/67 n=7 ec=56/47 lis/c=66/66 les/c/f=67/67/0 sis=80) [2]/[0] r=0 lpr=80 pi=[66,80)/1 crt=53'585 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:17 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 80 pg[9.f( v 53'585 (0'0,53'585] local-lis/les=66/67 n=7 ec=56/47 lis/c=66/66 les/c/f=67/67/0 sis=80) [2]/[0] r=0 lpr=80 pi=[66,80)/1 crt=53'585 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 23:23:17 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 80 pg[9.17( v 53'585 (0'0,53'585] local-lis/les=64/65 n=6 ec=56/47 lis/c=64/64 les/c/f=65/65/0 sis=80) [2]/[0] r=0 lpr=80 pi=[64,80)/1 crt=53'585 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:17 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 80 pg[9.7( v 53'585 (0'0,53'585] local-lis/les=72/73 n=7 ec=56/47 lis/c=72/72 les/c/f=73/73/0 sis=80) [2]/[0] r=0 lpr=80 pi=[72,80)/1 crt=53'585 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:17 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 80 pg[9.7( v 53'585 (0'0,53'585] local-lis/les=72/73 n=7 ec=56/47 lis/c=72/72 les/c/f=73/73/0 sis=80) [2]/[0] r=0 lpr=80 pi=[72,80)/1 crt=53'585 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 23:23:17 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 80 pg[9.17( v 53'585 (0'0,53'585] local-lis/les=64/65 n=6 ec=56/47 lis/c=64/64 les/c/f=65/65/0 sis=80) [2]/[0] r=0 lpr=80 pi=[64,80)/1 crt=53'585 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 23:23:17 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 80 pg[9.1f( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=68/68 les/c/f=69/69/0 sis=80) [2]/[0] r=-1 lpr=80 pi=[68,80)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:17 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 80 pg[9.18( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=80) [2]/[1] r=-1 lpr=80 pi=[56,80)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:17 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 80 pg[9.f( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=66/66 les/c/f=67/67/0 sis=80) [2]/[0] r=-1 lpr=80 pi=[66,80)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:17 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 80 pg[9.f( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=66/66 les/c/f=67/67/0 sis=80) [2]/[0] r=-1 lpr=80 pi=[66,80)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 23:23:17 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 80 pg[9.8( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=80) [2]/[1] r=-1 lpr=80 pi=[56,80)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:17 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 80 pg[9.8( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=80) [2]/[1] r=-1 lpr=80 pi=[56,80)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 23:23:17 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 80 pg[9.7( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=72/72 les/c/f=73/73/0 sis=80) [2]/[0] r=-1 lpr=80 pi=[72,80)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:17 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 80 pg[9.7( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=72/72 les/c/f=73/73/0 sis=80) [2]/[0] r=-1 lpr=80 pi=[72,80)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 23:23:17 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 80 pg[9.17( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=64/64 les/c/f=65/65/0 sis=80) [2]/[0] r=-1 lpr=80 pi=[64,80)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:17 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 80 pg[9.18( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=80) [2]/[1] r=-1 lpr=80 pi=[56,80)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 23:23:17 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 80 pg[9.17( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=64/64 les/c/f=65/65/0 sis=80) [2]/[0] r=-1 lpr=80 pi=[64,80)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 23:23:17 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 80 pg[9.1f( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=68/68 les/c/f=69/69/0 sis=80) [2]/[0] r=-1 lpr=80 pi=[68,80)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 23:23:17 np0005480824 podman[105814]: 2025-10-11 03:23:17.127077358 +0000 UTC m=+0.038328751 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:23:17 np0005480824 systemd[1]: Started libpod-conmon-b7e022ce015c3baa46c481adaed22c24c05356efb6c2755b2f8dac41ebf7499a.scope.
Oct 10 23:23:17 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:23:17 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 80 pg[9.8( v 53'585 (0'0,53'585] local-lis/les=56/57 n=7 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=80) [2]/[1] r=0 lpr=80 pi=[56,80)/1 crt=53'585 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:17 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 80 pg[9.8( v 53'585 (0'0,53'585] local-lis/les=56/57 n=7 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=80) [2]/[1] r=0 lpr=80 pi=[56,80)/1 crt=53'585 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 23:23:17 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 80 pg[9.18( v 53'585 (0'0,53'585] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=80) [2]/[1] r=0 lpr=80 pi=[56,80)/1 crt=53'585 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:17 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 80 pg[9.18( v 53'585 (0'0,53'585] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=80) [2]/[1] r=0 lpr=80 pi=[56,80)/1 crt=53'585 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 23:23:17 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 80 pg[9.e( v 53'585 (0'0,53'585] local-lis/les=79/80 n=7 ec=56/47 lis/c=77/56 les/c/f=78/57/0 sis=79) [2] r=0 lpr=79 pi=[56,79)/1 crt=53'585 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:23:17 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 80 pg[9.1e( v 53'585 (0'0,53'585] local-lis/les=79/80 n=6 ec=56/47 lis/c=77/56 les/c/f=78/57/0 sis=79) [2] r=0 lpr=79 pi=[56,79)/1 crt=53'585 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:23:17 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 80 pg[9.6( v 53'585 (0'0,53'585] local-lis/les=79/80 n=7 ec=56/47 lis/c=77/56 les/c/f=78/57/0 sis=79) [2] r=0 lpr=79 pi=[56,79)/1 crt=53'585 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:23:17 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 80 pg[9.16( v 53'585 (0'0,53'585] local-lis/les=79/80 n=6 ec=56/47 lis/c=77/56 les/c/f=78/57/0 sis=79) [2] r=0 lpr=79 pi=[56,79)/1 crt=53'585 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:23:17 np0005480824 podman[105814]: 2025-10-11 03:23:17.292988316 +0000 UTC m=+0.204239699 container init b7e022ce015c3baa46c481adaed22c24c05356efb6c2755b2f8dac41ebf7499a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_sanderson, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:23:17 np0005480824 podman[105814]: 2025-10-11 03:23:17.306480934 +0000 UTC m=+0.217732277 container start b7e022ce015c3baa46c481adaed22c24c05356efb6c2755b2f8dac41ebf7499a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_sanderson, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:23:17 np0005480824 affectionate_sanderson[105832]: 167 167
Oct 10 23:23:17 np0005480824 systemd[1]: libpod-b7e022ce015c3baa46c481adaed22c24c05356efb6c2755b2f8dac41ebf7499a.scope: Deactivated successfully.
Oct 10 23:23:17 np0005480824 podman[105814]: 2025-10-11 03:23:17.31511016 +0000 UTC m=+0.226361543 container attach b7e022ce015c3baa46c481adaed22c24c05356efb6c2755b2f8dac41ebf7499a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_sanderson, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:23:17 np0005480824 podman[105814]: 2025-10-11 03:23:17.318131926 +0000 UTC m=+0.229383259 container died b7e022ce015c3baa46c481adaed22c24c05356efb6c2755b2f8dac41ebf7499a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_sanderson, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 10 23:23:17 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Oct 10 23:23:17 np0005480824 systemd[1]: var-lib-containers-storage-overlay-eff6ef3296d7915cda9e3b1461c911bf9ec055c38ebcac05099d9d86a283a677-merged.mount: Deactivated successfully.
Oct 10 23:23:17 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Oct 10 23:23:17 np0005480824 podman[105814]: 2025-10-11 03:23:17.378639352 +0000 UTC m=+0.289890685 container remove b7e022ce015c3baa46c481adaed22c24c05356efb6c2755b2f8dac41ebf7499a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_sanderson, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 10 23:23:17 np0005480824 systemd[1]: libpod-conmon-b7e022ce015c3baa46c481adaed22c24c05356efb6c2755b2f8dac41ebf7499a.scope: Deactivated successfully.
Oct 10 23:23:17 np0005480824 podman[105856]: 2025-10-11 03:23:17.644682618 +0000 UTC m=+0.084569760 container create 1be161bad9af529c38a493e36aefaf97dd926cf5f3e3963bc8d92f41a8a4d4a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 10 23:23:17 np0005480824 podman[105856]: 2025-10-11 03:23:17.611810554 +0000 UTC m=+0.051697726 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:23:17 np0005480824 systemd[1]: Started libpod-conmon-1be161bad9af529c38a493e36aefaf97dd926cf5f3e3963bc8d92f41a8a4d4a5.scope.
Oct 10 23:23:17 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:23:17 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdd6c8594a7a171f1ae9cb5c7fede9fa8eec888a6c279a8ac8d3f255e9d93976/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:23:17 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdd6c8594a7a171f1ae9cb5c7fede9fa8eec888a6c279a8ac8d3f255e9d93976/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:23:17 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdd6c8594a7a171f1ae9cb5c7fede9fa8eec888a6c279a8ac8d3f255e9d93976/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:23:17 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdd6c8594a7a171f1ae9cb5c7fede9fa8eec888a6c279a8ac8d3f255e9d93976/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:23:17 np0005480824 podman[105856]: 2025-10-11 03:23:17.763073414 +0000 UTC m=+0.202960586 container init 1be161bad9af529c38a493e36aefaf97dd926cf5f3e3963bc8d92f41a8a4d4a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_knuth, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 10 23:23:17 np0005480824 podman[105856]: 2025-10-11 03:23:17.772459369 +0000 UTC m=+0.212346531 container start 1be161bad9af529c38a493e36aefaf97dd926cf5f3e3963bc8d92f41a8a4d4a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 10 23:23:17 np0005480824 podman[105856]: 2025-10-11 03:23:17.780107301 +0000 UTC m=+0.219994473 container attach 1be161bad9af529c38a493e36aefaf97dd926cf5f3e3963bc8d92f41a8a4d4a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_knuth, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 10 23:23:17 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v186: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 5 objects/s recovering
Oct 10 23:23:17 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0) v1
Oct 10 23:23:17 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct 10 23:23:18 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Oct 10 23:23:18 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct 10 23:23:18 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Oct 10 23:23:18 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Oct 10 23:23:18 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Oct 10 23:23:18 np0005480824 systemd[1]: session-34.scope: Deactivated successfully.
Oct 10 23:23:18 np0005480824 systemd[1]: session-34.scope: Consumed 8.497s CPU time.
Oct 10 23:23:18 np0005480824 systemd-logind[782]: Session 34 logged out. Waiting for processes to exit.
Oct 10 23:23:18 np0005480824 systemd-logind[782]: Removed session 34.
Oct 10 23:23:18 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Oct 10 23:23:18 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Oct 10 23:23:18 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 81 pg[9.1f( v 53'585 (0'0,53'585] local-lis/les=80/81 n=6 ec=56/47 lis/c=68/68 les/c/f=69/69/0 sis=80) [2]/[0] async=[2] r=0 lpr=80 pi=[68,80)/1 crt=53'585 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:23:18 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 81 pg[9.f( v 53'585 (0'0,53'585] local-lis/les=80/81 n=7 ec=56/47 lis/c=66/66 les/c/f=67/67/0 sis=80) [2]/[0] async=[2] r=0 lpr=80 pi=[66,80)/1 crt=53'585 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:23:18 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 81 pg[9.7( v 53'585 (0'0,53'585] local-lis/les=80/81 n=7 ec=56/47 lis/c=72/72 les/c/f=73/73/0 sis=80) [2]/[0] async=[2] r=0 lpr=80 pi=[72,80)/1 crt=53'585 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:23:18 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 81 pg[9.17( v 53'585 (0'0,53'585] local-lis/les=80/81 n=6 ec=56/47 lis/c=64/64 les/c/f=65/65/0 sis=80) [2]/[0] async=[2] r=0 lpr=80 pi=[64,80)/1 crt=53'585 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:23:18 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 81 pg[9.18( v 53'585 (0'0,53'585] local-lis/les=80/81 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=80) [2]/[1] async=[2] r=0 lpr=80 pi=[56,80)/1 crt=53'585 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:23:18 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 81 pg[9.8( v 53'585 (0'0,53'585] local-lis/les=80/81 n=7 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=80) [2]/[1] async=[2] r=0 lpr=80 pi=[56,80)/1 crt=53'585 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:23:18 np0005480824 adoring_knuth[105873]: {
Oct 10 23:23:18 np0005480824 adoring_knuth[105873]:    "1d0d82ce-20ea-470d-959e-f67202028a60": {
Oct 10 23:23:18 np0005480824 adoring_knuth[105873]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:23:18 np0005480824 adoring_knuth[105873]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 10 23:23:18 np0005480824 adoring_knuth[105873]:        "osd_id": 0,
Oct 10 23:23:18 np0005480824 adoring_knuth[105873]:        "osd_uuid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:23:18 np0005480824 adoring_knuth[105873]:        "type": "bluestore"
Oct 10 23:23:18 np0005480824 adoring_knuth[105873]:    },
Oct 10 23:23:18 np0005480824 adoring_knuth[105873]:    "6875119e-c210-4ad1-aca9-6a8084a5ecc8": {
Oct 10 23:23:18 np0005480824 adoring_knuth[105873]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:23:18 np0005480824 adoring_knuth[105873]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 10 23:23:18 np0005480824 adoring_knuth[105873]:        "osd_id": 1,
Oct 10 23:23:18 np0005480824 adoring_knuth[105873]:        "osd_uuid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:23:18 np0005480824 adoring_knuth[105873]:        "type": "bluestore"
Oct 10 23:23:18 np0005480824 adoring_knuth[105873]:    },
Oct 10 23:23:18 np0005480824 adoring_knuth[105873]:    "e86945e8-6909-4584-9098-cee0dfe9add4": {
Oct 10 23:23:18 np0005480824 adoring_knuth[105873]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:23:18 np0005480824 adoring_knuth[105873]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 10 23:23:18 np0005480824 adoring_knuth[105873]:        "osd_id": 2,
Oct 10 23:23:18 np0005480824 adoring_knuth[105873]:        "osd_uuid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:23:18 np0005480824 adoring_knuth[105873]:        "type": "bluestore"
Oct 10 23:23:18 np0005480824 adoring_knuth[105873]:    }
Oct 10 23:23:18 np0005480824 adoring_knuth[105873]: }
Oct 10 23:23:18 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:23:18 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Oct 10 23:23:18 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Oct 10 23:23:18 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Oct 10 23:23:18 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 82 pg[9.1f( v 53'585 (0'0,53'585] local-lis/les=80/81 n=6 ec=56/47 lis/c=80/68 les/c/f=81/69/0 sis=82 pruub=15.757507324s) [2] async=[2] r=-1 lpr=82 pi=[68,82)/1 crt=53'585 mlcod 53'585 active pruub 184.704681396s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:18 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 82 pg[9.f( v 53'585 (0'0,53'585] local-lis/les=80/81 n=7 ec=56/47 lis/c=80/66 les/c/f=81/67/0 sis=82 pruub=15.757213593s) [2] async=[2] r=-1 lpr=82 pi=[66,82)/1 crt=53'585 mlcod 53'585 active pruub 184.704757690s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:18 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 82 pg[9.f( v 53'585 (0'0,53'585] local-lis/les=80/81 n=7 ec=56/47 lis/c=80/66 les/c/f=81/67/0 sis=82 pruub=15.756521225s) [2] r=-1 lpr=82 pi=[66,82)/1 crt=53'585 mlcod 0'0 unknown NOTIFY pruub 184.704757690s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:23:18 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 82 pg[9.1f( v 53'585 (0'0,53'585] local-lis/les=80/81 n=6 ec=56/47 lis/c=80/68 les/c/f=81/69/0 sis=82 pruub=15.756466866s) [2] r=-1 lpr=82 pi=[68,82)/1 crt=53'585 mlcod 0'0 unknown NOTIFY pruub 184.704681396s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:23:18 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 82 pg[9.18( v 53'585 (0'0,53'585] local-lis/les=80/81 n=6 ec=56/47 lis/c=80/56 les/c/f=81/57/0 sis=82 pruub=15.764191628s) [2] async=[2] r=-1 lpr=82 pi=[56,82)/1 crt=53'585 lcod 0'0 mlcod 0'0 active pruub 179.214477539s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:18 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 82 pg[9.18( v 53'585 (0'0,53'585] local-lis/les=80/81 n=6 ec=56/47 lis/c=80/56 les/c/f=81/57/0 sis=82 pruub=15.764066696s) [2] r=-1 lpr=82 pi=[56,82)/1 crt=53'585 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.214477539s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:23:18 np0005480824 systemd[1]: libpod-1be161bad9af529c38a493e36aefaf97dd926cf5f3e3963bc8d92f41a8a4d4a5.scope: Deactivated successfully.
Oct 10 23:23:18 np0005480824 systemd[1]: libpod-1be161bad9af529c38a493e36aefaf97dd926cf5f3e3963bc8d92f41a8a4d4a5.scope: Consumed 1.103s CPU time.
Oct 10 23:23:18 np0005480824 podman[105856]: 2025-10-11 03:23:18.893122408 +0000 UTC m=+1.333009580 container died 1be161bad9af529c38a493e36aefaf97dd926cf5f3e3963bc8d92f41a8a4d4a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_knuth, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 10 23:23:18 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 82 pg[9.f( v 53'585 (0'0,53'585] local-lis/les=0/0 n=7 ec=56/47 lis/c=80/66 les/c/f=81/67/0 sis=82) [2] r=0 lpr=82 pi=[66,82)/1 luod=0'0 crt=53'585 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:18 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 82 pg[9.1f( v 53'585 (0'0,53'585] local-lis/les=0/0 n=6 ec=56/47 lis/c=80/68 les/c/f=81/69/0 sis=82) [2] r=0 lpr=82 pi=[68,82)/1 luod=0'0 crt=53'585 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:18 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 82 pg[9.f( v 53'585 (0'0,53'585] local-lis/les=0/0 n=7 ec=56/47 lis/c=80/66 les/c/f=81/67/0 sis=82) [2] r=0 lpr=82 pi=[66,82)/1 crt=53'585 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:23:18 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 82 pg[9.1f( v 53'585 (0'0,53'585] local-lis/les=0/0 n=6 ec=56/47 lis/c=80/68 les/c/f=81/69/0 sis=82) [2] r=0 lpr=82 pi=[68,82)/1 crt=53'585 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:23:18 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 82 pg[9.18( v 53'585 (0'0,53'585] local-lis/les=0/0 n=6 ec=56/47 lis/c=80/56 les/c/f=81/57/0 sis=82) [2] r=0 lpr=82 pi=[56,82)/1 luod=0'0 crt=53'585 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:18 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 82 pg[9.18( v 53'585 (0'0,53'585] local-lis/les=0/0 n=6 ec=56/47 lis/c=80/56 les/c/f=81/57/0 sis=82) [2] r=0 lpr=82 pi=[56,82)/1 crt=53'585 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:23:18 np0005480824 systemd[1]: var-lib-containers-storage-overlay-bdd6c8594a7a171f1ae9cb5c7fede9fa8eec888a6c279a8ac8d3f255e9d93976-merged.mount: Deactivated successfully.
Oct 10 23:23:18 np0005480824 podman[105856]: 2025-10-11 03:23:18.979559104 +0000 UTC m=+1.419446246 container remove 1be161bad9af529c38a493e36aefaf97dd926cf5f3e3963bc8d92f41a8a4d4a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_knuth, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef)
Oct 10 23:23:18 np0005480824 systemd[1]: libpod-conmon-1be161bad9af529c38a493e36aefaf97dd926cf5f3e3963bc8d92f41a8a4d4a5.scope: Deactivated successfully.
Oct 10 23:23:19 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 10 23:23:19 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:23:19 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 10 23:23:19 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:23:19 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev a5fc952e-ed1e-4337-82e1-40d1099b0880 does not exist
Oct 10 23:23:19 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 3adc2d61-1e50-4791-a4c5-b51032fce112 does not exist
Oct 10 23:23:19 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Oct 10 23:23:19 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:23:19 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:23:19 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Oct 10 23:23:19 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Oct 10 23:23:19 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Oct 10 23:23:19 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Oct 10 23:23:19 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v189: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 5 objects/s recovering
Oct 10 23:23:19 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0) v1
Oct 10 23:23:19 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct 10 23:23:19 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Oct 10 23:23:19 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Oct 10 23:23:19 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Oct 10 23:23:19 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Oct 10 23:23:19 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 83 pg[9.8( v 53'585 (0'0,53'585] local-lis/les=0/0 n=7 ec=56/47 lis/c=80/56 les/c/f=81/57/0 sis=83) [2] r=0 lpr=83 pi=[56,83)/1 luod=0'0 crt=53'585 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:19 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 83 pg[9.8( v 53'585 (0'0,53'585] local-lis/les=0/0 n=7 ec=56/47 lis/c=80/56 les/c/f=81/57/0 sis=83) [2] r=0 lpr=83 pi=[56,83)/1 crt=53'585 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:23:19 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 83 pg[9.7( v 53'585 (0'0,53'585] local-lis/les=0/0 n=7 ec=56/47 lis/c=80/72 les/c/f=81/73/0 sis=83) [2] r=0 lpr=83 pi=[72,83)/1 luod=0'0 crt=53'585 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:19 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 83 pg[9.7( v 53'585 (0'0,53'585] local-lis/les=0/0 n=7 ec=56/47 lis/c=80/72 les/c/f=81/73/0 sis=83) [2] r=0 lpr=83 pi=[72,83)/1 crt=53'585 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:23:19 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 83 pg[9.17( v 53'585 (0'0,53'585] local-lis/les=0/0 n=6 ec=56/47 lis/c=80/64 les/c/f=81/65/0 sis=83) [2] r=0 lpr=83 pi=[64,83)/1 luod=0'0 crt=53'585 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:19 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 83 pg[9.17( v 53'585 (0'0,53'585] local-lis/les=0/0 n=6 ec=56/47 lis/c=80/64 les/c/f=81/65/0 sis=83) [2] r=0 lpr=83 pi=[64,83)/1 crt=53'585 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:23:19 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 83 pg[9.8( v 53'585 (0'0,53'585] local-lis/les=80/81 n=7 ec=56/47 lis/c=80/56 les/c/f=81/57/0 sis=83 pruub=14.699091911s) [2] async=[2] r=-1 lpr=83 pi=[56,83)/1 crt=53'585 lcod 0'0 mlcod 0'0 active pruub 179.214645386s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:19 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 83 pg[9.8( v 53'585 (0'0,53'585] local-lis/les=80/81 n=7 ec=56/47 lis/c=80/56 les/c/f=81/57/0 sis=83 pruub=14.699004173s) [2] r=-1 lpr=83 pi=[56,83)/1 crt=53'585 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.214645386s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:23:19 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 83 pg[9.7( v 53'585 (0'0,53'585] local-lis/les=80/81 n=7 ec=56/47 lis/c=80/72 les/c/f=81/73/0 sis=83 pruub=14.695899963s) [2] async=[2] r=-1 lpr=83 pi=[72,83)/1 crt=53'585 mlcod 53'585 active pruub 184.714035034s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:19 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 83 pg[9.17( v 53'585 (0'0,53'585] local-lis/les=80/81 n=6 ec=56/47 lis/c=80/64 les/c/f=81/65/0 sis=83 pruub=14.695819855s) [2] async=[2] r=-1 lpr=83 pi=[64,83)/1 crt=53'585 mlcod 53'585 active pruub 184.714080811s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:19 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 83 pg[9.17( v 53'585 (0'0,53'585] local-lis/les=80/81 n=6 ec=56/47 lis/c=80/64 les/c/f=81/65/0 sis=83 pruub=14.695691109s) [2] r=-1 lpr=83 pi=[64,83)/1 crt=53'585 mlcod 0'0 unknown NOTIFY pruub 184.714080811s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:23:19 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 83 pg[9.7( v 53'585 (0'0,53'585] local-lis/les=80/81 n=7 ec=56/47 lis/c=80/72 les/c/f=81/73/0 sis=83 pruub=14.695582390s) [2] r=-1 lpr=83 pi=[72,83)/1 crt=53'585 mlcod 0'0 unknown NOTIFY pruub 184.714035034s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:23:19 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 83 pg[9.18( v 53'585 (0'0,53'585] local-lis/les=82/83 n=6 ec=56/47 lis/c=80/56 les/c/f=81/57/0 sis=82) [2] r=0 lpr=82 pi=[56,82)/1 crt=53'585 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:23:19 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 83 pg[9.1f( v 53'585 (0'0,53'585] local-lis/les=82/83 n=6 ec=56/47 lis/c=80/68 les/c/f=81/69/0 sis=82) [2] r=0 lpr=82 pi=[68,82)/1 crt=53'585 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:23:19 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 83 pg[9.f( v 53'585 (0'0,53'585] local-lis/les=82/83 n=7 ec=56/47 lis/c=80/66 les/c/f=81/67/0 sis=82) [2] r=0 lpr=82 pi=[66,82)/1 crt=53'585 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:23:20 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct 10 23:23:20 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Oct 10 23:23:20 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Oct 10 23:23:20 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Oct 10 23:23:20 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Oct 10 23:23:20 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Oct 10 23:23:20 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Oct 10 23:23:20 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Oct 10 23:23:20 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Oct 10 23:23:20 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 84 pg[9.8( v 53'585 (0'0,53'585] local-lis/les=83/84 n=7 ec=56/47 lis/c=80/56 les/c/f=81/57/0 sis=83) [2] r=0 lpr=83 pi=[56,83)/1 crt=53'585 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:23:20 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 84 pg[9.7( v 53'585 (0'0,53'585] local-lis/les=83/84 n=7 ec=56/47 lis/c=80/72 les/c/f=81/73/0 sis=83) [2] r=0 lpr=83 pi=[72,83)/1 crt=53'585 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:23:20 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 84 pg[9.17( v 53'585 (0'0,53'585] local-lis/les=83/84 n=6 ec=56/47 lis/c=80/64 les/c/f=81/65/0 sis=83) [2] r=0 lpr=83 pi=[64,83)/1 crt=53'585 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:23:21 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v192: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:23:21 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0) v1
Oct 10 23:23:21 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct 10 23:23:22 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Oct 10 23:23:22 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Oct 10 23:23:22 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Oct 10 23:23:22 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Oct 10 23:23:22 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct 10 23:23:22 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Oct 10 23:23:22 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Oct 10 23:23:23 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Oct 10 23:23:23 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Oct 10 23:23:23 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Oct 10 23:23:23 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Oct 10 23:23:23 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Oct 10 23:23:23 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Oct 10 23:23:23 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v194: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 154 B/s, 7 objects/s recovering
Oct 10 23:23:23 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0) v1
Oct 10 23:23:23 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct 10 23:23:23 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Oct 10 23:23:23 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:23:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Oct 10 23:23:24 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Oct 10 23:23:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Oct 10 23:23:24 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Oct 10 23:23:24 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct 10 23:23:24 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 2.a scrub starts
Oct 10 23:23:24 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 2.a scrub ok
Oct 10 23:23:25 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Oct 10 23:23:25 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 7.11 deep-scrub starts
Oct 10 23:23:25 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 7.11 deep-scrub ok
Oct 10 23:23:25 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 2.d scrub starts
Oct 10 23:23:25 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v196: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 130 B/s, 6 objects/s recovering
Oct 10 23:23:25 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0) v1
Oct 10 23:23:25 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct 10 23:23:25 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 2.d scrub ok
Oct 10 23:23:26 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Oct 10 23:23:26 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Oct 10 23:23:26 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Oct 10 23:23:26 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Oct 10 23:23:26 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct 10 23:23:26 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Oct 10 23:23:26 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Oct 10 23:23:27 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Oct 10 23:23:27 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Oct 10 23:23:27 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Oct 10 23:23:27 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Oct 10 23:23:27 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Oct 10 23:23:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Optimize plan auto_2025-10-11_03:23:27
Oct 10 23:23:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 23:23:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] do_upmap
Oct 10 23:23:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] pools ['cephfs.cephfs.data', '.mgr', 'default.rgw.meta', '.rgw.root', 'default.rgw.control', 'backups', 'vms', 'images', 'default.rgw.log', 'volumes', 'cephfs.cephfs.meta']
Oct 10 23:23:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] prepared 0/10 changes
Oct 10 23:23:27 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v198: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 128 B/s, 6 objects/s recovering
Oct 10 23:23:27 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0) v1
Oct 10 23:23:27 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct 10 23:23:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:23:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:23:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:23:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:23:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:23:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:23:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 23:23:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:23:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 23:23:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:23:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:23:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:23:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:23:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:23:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:23:27 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 86 pg[9.c( v 53'585 (0'0,53'585] local-lis/les=56/57 n=7 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=86 pruub=13.718134880s) [2] r=-1 lpr=86 pi=[56,86)/1 crt=53'585 lcod 0'0 mlcod 0'0 active pruub 186.183746338s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:27 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 87 pg[9.c( v 53'585 (0'0,53'585] local-lis/les=56/57 n=7 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=86 pruub=13.718037605s) [2] r=-1 lpr=86 pi=[56,86)/1 crt=53'585 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.183746338s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:23:27 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 86 pg[9.1c( v 53'585 (0'0,53'585] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=86 pruub=13.717743874s) [2] r=-1 lpr=86 pi=[56,86)/1 crt=53'585 lcod 0'0 mlcod 0'0 active pruub 186.184478760s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:27 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 87 pg[9.1c( v 53'585 (0'0,53'585] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=86 pruub=13.717635155s) [2] r=-1 lpr=86 pi=[56,86)/1 crt=53'585 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.184478760s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:23:27 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 87 pg[9.c( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=86) [2] r=0 lpr=87 pi=[56,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:23:27 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 87 pg[9.1c( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=86) [2] r=0 lpr=87 pi=[56,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:23:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:23:28 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Oct 10 23:23:28 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Oct 10 23:23:28 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Oct 10 23:23:28 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct 10 23:23:28 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 88 pg[9.1c( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=88) [2]/[1] r=-1 lpr=88 pi=[56,88)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:28 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 88 pg[9.1c( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=88) [2]/[1] r=-1 lpr=88 pi=[56,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 23:23:28 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 88 pg[9.c( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=88) [2]/[1] r=-1 lpr=88 pi=[56,88)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:28 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 88 pg[9.c( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=88) [2]/[1] r=-1 lpr=88 pi=[56,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 23:23:28 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Oct 10 23:23:28 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 88 pg[9.c( v 53'585 (0'0,53'585] local-lis/les=56/57 n=7 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=88) [2]/[1] r=0 lpr=88 pi=[56,88)/1 crt=53'585 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:28 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 88 pg[9.c( v 53'585 (0'0,53'585] local-lis/les=56/57 n=7 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=88) [2]/[1] r=0 lpr=88 pi=[56,88)/1 crt=53'585 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 23:23:28 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 88 pg[9.1c( v 53'585 (0'0,53'585] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=88) [2]/[1] r=0 lpr=88 pi=[56,88)/1 crt=53'585 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:28 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 88 pg[9.1c( v 53'585 (0'0,53'585] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=88) [2]/[1] r=0 lpr=88 pi=[56,88)/1 crt=53'585 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 23:23:28 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Oct 10 23:23:28 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Oct 10 23:23:28 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Oct 10 23:23:28 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:23:28 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Oct 10 23:23:29 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Oct 10 23:23:29 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Oct 10 23:23:29 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Oct 10 23:23:29 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Oct 10 23:23:29 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 89 pg[9.c( v 53'585 (0'0,53'585] local-lis/les=88/89 n=7 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=88) [2]/[1] async=[2] r=0 lpr=88 pi=[56,88)/1 crt=53'585 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:23:29 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 89 pg[9.1c( v 53'585 (0'0,53'585] local-lis/les=88/89 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=88) [2]/[1] async=[2] r=0 lpr=88 pi=[56,88)/1 crt=53'585 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:23:29 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Oct 10 23:23:29 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Oct 10 23:23:29 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v201: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:23:29 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0) v1
Oct 10 23:23:29 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct 10 23:23:30 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Oct 10 23:23:30 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct 10 23:23:30 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Oct 10 23:23:30 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Oct 10 23:23:30 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Oct 10 23:23:30 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 90 pg[9.c( v 53'585 (0'0,53'585] local-lis/les=88/89 n=7 ec=56/47 lis/c=88/56 les/c/f=89/57/0 sis=90 pruub=15.224799156s) [2] async=[2] r=-1 lpr=90 pi=[56,90)/1 crt=53'585 lcod 0'0 mlcod 0'0 active pruub 190.210815430s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:30 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 90 pg[9.c( v 53'585 (0'0,53'585] local-lis/les=88/89 n=7 ec=56/47 lis/c=88/56 les/c/f=89/57/0 sis=90 pruub=15.224540710s) [2] r=-1 lpr=90 pi=[56,90)/1 crt=53'585 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.210815430s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:23:30 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 90 pg[9.1c( v 53'585 (0'0,53'585] local-lis/les=88/89 n=6 ec=56/47 lis/c=88/56 les/c/f=89/57/0 sis=90 pruub=15.223984718s) [2] async=[2] r=-1 lpr=90 pi=[56,90)/1 crt=53'585 lcod 0'0 mlcod 0'0 active pruub 190.210937500s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:30 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 90 pg[9.1c( v 53'585 (0'0,53'585] local-lis/les=88/89 n=6 ec=56/47 lis/c=88/56 les/c/f=89/57/0 sis=90 pruub=15.223780632s) [2] r=-1 lpr=90 pi=[56,90)/1 crt=53'585 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.210937500s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:23:30 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 90 pg[9.1c( v 53'585 (0'0,53'585] local-lis/les=0/0 n=6 ec=56/47 lis/c=88/56 les/c/f=89/57/0 sis=90) [2] r=0 lpr=90 pi=[56,90)/1 luod=0'0 crt=53'585 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:30 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 90 pg[9.1c( v 53'585 (0'0,53'585] local-lis/les=0/0 n=6 ec=56/47 lis/c=88/56 les/c/f=89/57/0 sis=90) [2] r=0 lpr=90 pi=[56,90)/1 crt=53'585 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:23:30 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 90 pg[9.c( v 53'585 (0'0,53'585] local-lis/les=0/0 n=7 ec=56/47 lis/c=88/56 les/c/f=89/57/0 sis=90) [2] r=0 lpr=90 pi=[56,90)/1 luod=0'0 crt=53'585 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:30 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 90 pg[9.c( v 53'585 (0'0,53'585] local-lis/les=0/0 n=7 ec=56/47 lis/c=88/56 les/c/f=89/57/0 sis=90) [2] r=0 lpr=90 pi=[56,90)/1 crt=53'585 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:23:30 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Oct 10 23:23:30 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Oct 10 23:23:31 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 2.b scrub starts
Oct 10 23:23:31 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 2.b scrub ok
Oct 10 23:23:31 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Oct 10 23:23:31 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Oct 10 23:23:31 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Oct 10 23:23:31 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Oct 10 23:23:31 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 91 pg[9.1c( v 53'585 (0'0,53'585] local-lis/les=90/91 n=6 ec=56/47 lis/c=88/56 les/c/f=89/57/0 sis=90) [2] r=0 lpr=90 pi=[56,90)/1 crt=53'585 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:23:31 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 91 pg[9.c( v 53'585 (0'0,53'585] local-lis/les=90/91 n=7 ec=56/47 lis/c=88/56 les/c/f=89/57/0 sis=90) [2] r=0 lpr=90 pi=[56,90)/1 crt=53'585 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:23:31 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Oct 10 23:23:31 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Oct 10 23:23:31 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v204: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:23:31 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0) v1
Oct 10 23:23:31 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Oct 10 23:23:32 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Oct 10 23:23:32 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Oct 10 23:23:32 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Oct 10 23:23:32 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Oct 10 23:23:32 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Oct 10 23:23:32 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Oct 10 23:23:32 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Oct 10 23:23:32 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Oct 10 23:23:32 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Oct 10 23:23:33 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Oct 10 23:23:33 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Oct 10 23:23:33 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Oct 10 23:23:33 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Oct 10 23:23:33 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Oct 10 23:23:33 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v206: 321 pgs: 321 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 24 B/s, 2 objects/s recovering
Oct 10 23:23:33 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0) v1
Oct 10 23:23:33 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Oct 10 23:23:33 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:23:34 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Oct 10 23:23:34 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Oct 10 23:23:34 np0005480824 systemd-logind[782]: New session 35 of user zuul.
Oct 10 23:23:34 np0005480824 systemd[1]: Started Session 35 of User zuul.
Oct 10 23:23:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Oct 10 23:23:34 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Oct 10 23:23:34 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Oct 10 23:23:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Oct 10 23:23:34 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Oct 10 23:23:34 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 3.e scrub starts
Oct 10 23:23:34 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 3.e scrub ok
Oct 10 23:23:35 np0005480824 python3.9[106147]: ansible-ansible.legacy.ping Invoked with data=pong
Oct 10 23:23:35 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Oct 10 23:23:35 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Oct 10 23:23:35 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 7.a scrub starts
Oct 10 23:23:35 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 7.a scrub ok
Oct 10 23:23:35 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Oct 10 23:23:35 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v208: 321 pgs: 321 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 20 B/s, 2 objects/s recovering
Oct 10 23:23:35 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0) v1
Oct 10 23:23:35 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Oct 10 23:23:36 np0005480824 python3.9[106321]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 23:23:36 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Oct 10 23:23:36 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Oct 10 23:23:36 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Oct 10 23:23:36 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Oct 10 23:23:36 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Oct 10 23:23:36 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Oct 10 23:23:36 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Oct 10 23:23:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 23:23:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:23:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 23:23:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:23:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:23:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:23:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:23:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:23:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:23:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:23:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:23:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:23:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 23:23:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:23:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:23:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:23:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 10 23:23:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:23:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 23:23:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:23:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:23:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:23:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 10 23:23:37 np0005480824 python3.9[106477]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:23:37 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Oct 10 23:23:37 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Oct 10 23:23:37 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Oct 10 23:23:37 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v210: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Oct 10 23:23:37 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0) v1
Oct 10 23:23:37 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Oct 10 23:23:38 np0005480824 python3.9[106630]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 23:23:38 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Oct 10 23:23:38 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Oct 10 23:23:38 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Oct 10 23:23:38 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Oct 10 23:23:38 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Oct 10 23:23:38 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Oct 10 23:23:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 95 pg[9.13( v 53'585 (0'0,53'585] local-lis/les=64/65 n=6 ec=56/47 lis/c=64/64 les/c/f=65/65/0 sis=95 pruub=12.055387497s) [2] r=-1 lpr=95 pi=[64,95)/1 crt=53'585 mlcod 0'0 active pruub 200.964782715s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 95 pg[9.13( v 53'585 (0'0,53'585] local-lis/les=64/65 n=6 ec=56/47 lis/c=64/64 les/c/f=65/65/0 sis=95 pruub=12.055339813s) [2] r=-1 lpr=95 pi=[64,95)/1 crt=53'585 mlcod 0'0 unknown NOTIFY pruub 200.964782715s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:23:38 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Oct 10 23:23:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 95 pg[9.13( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=64/64 les/c/f=65/65/0 sis=95) [2] r=0 lpr=95 pi=[64,95)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:23:38 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:23:38 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Oct 10 23:23:38 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Oct 10 23:23:38 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Oct 10 23:23:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 96 pg[9.13( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=64/64 les/c/f=65/65/0 sis=96) [2]/[0] r=-1 lpr=96 pi=[64,96)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:38 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 96 pg[9.13( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=64/64 les/c/f=65/65/0 sis=96) [2]/[0] r=-1 lpr=96 pi=[64,96)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 23:23:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 96 pg[9.13( v 53'585 (0'0,53'585] local-lis/les=64/65 n=6 ec=56/47 lis/c=64/64 les/c/f=65/65/0 sis=96) [2]/[0] r=0 lpr=96 pi=[64,96)/1 crt=53'585 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:38 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 96 pg[9.13( v 53'585 (0'0,53'585] local-lis/les=64/65 n=6 ec=56/47 lis/c=64/64 les/c/f=65/65/0 sis=96) [2]/[0] r=0 lpr=96 pi=[64,96)/1 crt=53'585 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 23:23:39 np0005480824 python3.9[106784]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:23:39 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Oct 10 23:23:39 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Oct 10 23:23:39 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v213: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:23:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0) v1
Oct 10 23:23:39 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Oct 10 23:23:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Oct 10 23:23:39 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Oct 10 23:23:40 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Oct 10 23:23:40 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Oct 10 23:23:40 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Oct 10 23:23:40 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 97 pg[9.13( v 53'585 (0'0,53'585] local-lis/les=96/97 n=6 ec=56/47 lis/c=64/64 les/c/f=65/65/0 sis=96) [2]/[0] async=[2] r=0 lpr=96 pi=[64,96)/1 crt=53'585 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:23:40 np0005480824 python3.9[106934]: ansible-ansible.builtin.service_facts Invoked
Oct 10 23:23:40 np0005480824 network[106951]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 10 23:23:40 np0005480824 network[106952]: 'network-scripts' will be removed from distribution in near future.
Oct 10 23:23:40 np0005480824 network[106953]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 10 23:23:40 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Oct 10 23:23:40 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Oct 10 23:23:41 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Oct 10 23:23:41 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Oct 10 23:23:41 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Oct 10 23:23:41 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Oct 10 23:23:41 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Oct 10 23:23:41 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 98 pg[9.13( v 53'585 (0'0,53'585] local-lis/les=96/97 n=6 ec=56/47 lis/c=96/64 les/c/f=97/65/0 sis=98 pruub=15.014370918s) [2] async=[2] r=-1 lpr=98 pi=[64,98)/1 crt=53'585 mlcod 53'585 active pruub 206.227401733s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:41 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 98 pg[9.13( v 53'585 (0'0,53'585] local-lis/les=96/97 n=6 ec=56/47 lis/c=96/64 les/c/f=97/65/0 sis=98 pruub=15.014278412s) [2] r=-1 lpr=98 pi=[64,98)/1 crt=53'585 mlcod 0'0 unknown NOTIFY pruub 206.227401733s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:23:41 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 98 pg[9.13( v 53'585 (0'0,53'585] local-lis/les=0/0 n=6 ec=56/47 lis/c=96/64 les/c/f=97/65/0 sis=98) [2] r=0 lpr=98 pi=[64,98)/1 luod=0'0 crt=53'585 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:41 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 98 pg[9.13( v 53'585 (0'0,53'585] local-lis/les=0/0 n=6 ec=56/47 lis/c=96/64 les/c/f=97/65/0 sis=98) [2] r=0 lpr=98 pi=[64,98)/1 crt=53'585 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:23:41 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v216: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:23:41 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0) v1
Oct 10 23:23:41 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Oct 10 23:23:42 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Oct 10 23:23:42 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Oct 10 23:23:42 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Oct 10 23:23:42 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Oct 10 23:23:42 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 99 pg[9.15( v 53'585 (0'0,53'585] local-lis/les=72/73 n=6 ec=56/47 lis/c=72/72 les/c/f=73/73/0 sis=99 pruub=11.436083794s) [1] r=-1 lpr=99 pi=[72,99)/1 crt=53'585 mlcod 0'0 active pruub 204.059997559s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:42 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 99 pg[9.15( v 53'585 (0'0,53'585] local-lis/les=72/73 n=6 ec=56/47 lis/c=72/72 les/c/f=73/73/0 sis=99 pruub=11.434958458s) [1] r=-1 lpr=99 pi=[72,99)/1 crt=53'585 mlcod 0'0 unknown NOTIFY pruub 204.059997559s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:23:42 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Oct 10 23:23:42 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 99 pg[9.13( v 53'585 (0'0,53'585] local-lis/les=98/99 n=6 ec=56/47 lis/c=96/64 les/c/f=97/65/0 sis=98) [2] r=0 lpr=98 pi=[64,98)/1 crt=53'585 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:23:42 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 99 pg[9.15( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=72/72 les/c/f=73/73/0 sis=99) [1] r=0 lpr=99 pi=[72,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:23:42 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Oct 10 23:23:42 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Oct 10 23:23:43 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Oct 10 23:23:43 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Oct 10 23:23:43 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Oct 10 23:23:43 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Oct 10 23:23:43 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Oct 10 23:23:43 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Oct 10 23:23:43 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Oct 10 23:23:43 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v219: 321 pgs: 1 peering, 320 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Oct 10 23:23:43 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Oct 10 23:23:43 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 100 pg[9.15( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=72/72 les/c/f=73/73/0 sis=100) [1]/[0] r=-1 lpr=100 pi=[72,100)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:43 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 100 pg[9.15( v 53'585 (0'0,53'585] local-lis/les=72/73 n=6 ec=56/47 lis/c=72/72 les/c/f=73/73/0 sis=100) [1]/[0] r=0 lpr=100 pi=[72,100)/1 crt=53'585 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:43 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 100 pg[9.15( v 53'585 (0'0,53'585] local-lis/les=72/73 n=6 ec=56/47 lis/c=72/72 les/c/f=73/73/0 sis=100) [1]/[0] r=0 lpr=100 pi=[72,100)/1 crt=53'585 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 23:23:43 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 100 pg[9.15( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=72/72 les/c/f=73/73/0 sis=100) [1]/[0] r=-1 lpr=100 pi=[72,100)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 23:23:43 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e100 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:23:44 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 2.11 deep-scrub starts
Oct 10 23:23:44 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 2.11 deep-scrub ok
Oct 10 23:23:44 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Oct 10 23:23:45 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Oct 10 23:23:45 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Oct 10 23:23:45 np0005480824 python3.9[107216]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:23:45 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 2.f scrub starts
Oct 10 23:23:45 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 2.f scrub ok
Oct 10 23:23:45 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 7.c scrub starts
Oct 10 23:23:45 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 7.c scrub ok
Oct 10 23:23:45 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v221: 321 pgs: 1 peering, 320 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 22 B/s, 1 objects/s recovering
Oct 10 23:23:45 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Oct 10 23:23:45 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Oct 10 23:23:46 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 101 pg[9.15( v 53'585 (0'0,53'585] local-lis/les=100/101 n=6 ec=56/47 lis/c=72/72 les/c/f=73/73/0 sis=100) [1]/[0] async=[1] r=0 lpr=100 pi=[72,100)/1 crt=53'585 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:23:46 np0005480824 python3.9[107366]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 23:23:46 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 7.e scrub starts
Oct 10 23:23:46 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 7.e scrub ok
Oct 10 23:23:47 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Oct 10 23:23:47 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Oct 10 23:23:47 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Oct 10 23:23:47 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Oct 10 23:23:47 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Oct 10 23:23:47 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 102 pg[9.15( v 53'585 (0'0,53'585] local-lis/les=100/101 n=6 ec=56/47 lis/c=100/72 les/c/f=101/73/0 sis=102 pruub=14.806353569s) [1] async=[1] r=-1 lpr=102 pi=[72,102)/1 crt=53'585 mlcod 53'585 active pruub 212.121047974s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:47 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 102 pg[9.15( v 53'585 (0'0,53'585] local-lis/les=100/101 n=6 ec=56/47 lis/c=100/72 les/c/f=101/73/0 sis=102 pruub=14.806201935s) [1] r=-1 lpr=102 pi=[72,102)/1 crt=53'585 mlcod 0'0 unknown NOTIFY pruub 212.121047974s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:23:47 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 102 pg[9.15( v 53'585 (0'0,53'585] local-lis/les=0/0 n=6 ec=56/47 lis/c=100/72 les/c/f=101/73/0 sis=102) [1] r=0 lpr=102 pi=[72,102)/1 luod=0'0 crt=53'585 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:47 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 102 pg[9.15( v 53'585 (0'0,53'585] local-lis/les=0/0 n=6 ec=56/47 lis/c=100/72 les/c/f=101/73/0 sis=102) [1] r=0 lpr=102 pi=[72,102)/1 crt=53'585 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:23:47 np0005480824 python3.9[107520]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 23:23:47 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Oct 10 23:23:47 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Oct 10 23:23:47 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v223: 321 pgs: 1 active+remapped, 320 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 39 B/s, 1 objects/s recovering
Oct 10 23:23:47 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0) v1
Oct 10 23:23:47 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Oct 10 23:23:48 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Oct 10 23:23:48 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Oct 10 23:23:48 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Oct 10 23:23:48 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Oct 10 23:23:48 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Oct 10 23:23:48 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 103 pg[9.15( v 53'585 (0'0,53'585] local-lis/les=102/103 n=6 ec=56/47 lis/c=100/72 les/c/f=101/73/0 sis=102) [1] r=0 lpr=102 pi=[72,102)/1 crt=53'585 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:23:48 np0005480824 python3.9[107678]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 23:23:48 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e103 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:23:49 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Oct 10 23:23:49 np0005480824 python3.9[107762]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 23:23:49 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 103 pg[9.16( v 53'585 (0'0,53'585] local-lis/les=79/80 n=6 ec=56/47 lis/c=79/79 les/c/f=80/80/0 sis=103 pruub=15.598739624s) [0] r=-1 lpr=103 pi=[79,103)/1 crt=53'585 mlcod 0'0 active pruub 204.766464233s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:49 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 103 pg[9.16( v 53'585 (0'0,53'585] local-lis/les=79/80 n=6 ec=56/47 lis/c=79/79 les/c/f=80/80/0 sis=103 pruub=15.598184586s) [0] r=-1 lpr=103 pi=[79,103)/1 crt=53'585 mlcod 0'0 unknown NOTIFY pruub 204.766464233s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:23:49 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 103 pg[9.16( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=79/79 les/c/f=80/80/0 sis=103) [0] r=0 lpr=103 pi=[79,103)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:23:49 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v225: 321 pgs: 1 active+remapped, 320 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Oct 10 23:23:49 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0) v1
Oct 10 23:23:49 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Oct 10 23:23:50 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Oct 10 23:23:50 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Oct 10 23:23:50 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Oct 10 23:23:50 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Oct 10 23:23:50 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Oct 10 23:23:50 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 104 pg[9.16( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=79/79 les/c/f=80/80/0 sis=104) [0]/[2] r=-1 lpr=104 pi=[79,104)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:50 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 104 pg[9.16( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=79/79 les/c/f=80/80/0 sis=104) [0]/[2] r=-1 lpr=104 pi=[79,104)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 23:23:50 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 104 pg[9.16( v 53'585 (0'0,53'585] local-lis/les=79/80 n=6 ec=56/47 lis/c=79/79 les/c/f=80/80/0 sis=104) [0]/[2] r=0 lpr=104 pi=[79,104)/1 crt=53'585 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:50 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 104 pg[9.16( v 53'585 (0'0,53'585] local-lis/les=79/80 n=6 ec=56/47 lis/c=79/79 les/c/f=80/80/0 sis=104) [0]/[2] r=0 lpr=104 pi=[79,104)/1 crt=53'585 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 23:23:50 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Oct 10 23:23:50 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Oct 10 23:23:51 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 7.13 deep-scrub starts
Oct 10 23:23:51 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 7.13 deep-scrub ok
Oct 10 23:23:51 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Oct 10 23:23:51 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Oct 10 23:23:51 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Oct 10 23:23:51 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Oct 10 23:23:51 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Oct 10 23:23:51 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Oct 10 23:23:51 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 4.f scrub starts
Oct 10 23:23:51 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 4.f scrub ok
Oct 10 23:23:51 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v228: 321 pgs: 1 active+remapped, 320 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:23:51 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0) v1
Oct 10 23:23:51 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Oct 10 23:23:51 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 105 pg[9.16( v 53'585 (0'0,53'585] local-lis/les=104/105 n=6 ec=56/47 lis/c=79/79 les/c/f=80/80/0 sis=104) [0]/[2] async=[0] r=0 lpr=104 pi=[79,104)/1 crt=53'585 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:23:52 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Oct 10 23:23:52 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Oct 10 23:23:52 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Oct 10 23:23:52 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Oct 10 23:23:52 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 106 pg[9.16( v 53'585 (0'0,53'585] local-lis/les=0/0 n=6 ec=56/47 lis/c=104/79 les/c/f=105/80/0 sis=106) [0] r=0 lpr=106 pi=[79,106)/1 luod=0'0 crt=53'585 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:52 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 106 pg[9.16( v 53'585 (0'0,53'585] local-lis/les=0/0 n=6 ec=56/47 lis/c=104/79 les/c/f=105/80/0 sis=106) [0] r=0 lpr=106 pi=[79,106)/1 crt=53'585 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:23:52 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 106 pg[9.16( v 53'585 (0'0,53'585] local-lis/les=104/105 n=6 ec=56/47 lis/c=104/79 les/c/f=105/80/0 sis=106 pruub=15.655558586s) [0] async=[0] r=-1 lpr=106 pi=[79,106)/1 crt=53'585 mlcod 53'585 active pruub 207.472564697s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:52 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 106 pg[9.16( v 53'585 (0'0,53'585] local-lis/les=104/105 n=6 ec=56/47 lis/c=104/79 les/c/f=105/80/0 sis=106 pruub=15.655476570s) [0] r=-1 lpr=106 pi=[79,106)/1 crt=53'585 mlcod 0'0 unknown NOTIFY pruub 207.472564697s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:23:52 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Oct 10 23:23:52 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Oct 10 23:23:52 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Oct 10 23:23:52 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 6.d deep-scrub starts
Oct 10 23:23:52 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 6.d deep-scrub ok
Oct 10 23:23:53 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Oct 10 23:23:53 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Oct 10 23:23:53 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Oct 10 23:23:53 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Oct 10 23:23:53 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 107 pg[9.16( v 53'585 (0'0,53'585] local-lis/les=106/107 n=6 ec=56/47 lis/c=104/79 les/c/f=105/80/0 sis=106) [0] r=0 lpr=106 pi=[79,106)/1 crt=53'585 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:23:53 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v231: 321 pgs: 1 active+remapped, 320 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Oct 10 23:23:53 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0) v1
Oct 10 23:23:53 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Oct 10 23:23:53 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:23:54 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Oct 10 23:23:54 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Oct 10 23:23:54 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Oct 10 23:23:54 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Oct 10 23:23:54 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Oct 10 23:23:54 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Oct 10 23:23:54 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Oct 10 23:23:54 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 108 pg[9.19( v 53'585 (0'0,53'585] local-lis/les=71/72 n=6 ec=56/47 lis/c=71/71 les/c/f=72/72/0 sis=108 pruub=13.505969048s) [2] r=-1 lpr=108 pi=[71,108)/1 crt=53'585 mlcod 0'0 active pruub 218.116287231s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:54 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 108 pg[9.19( v 53'585 (0'0,53'585] local-lis/les=71/72 n=6 ec=56/47 lis/c=71/71 les/c/f=72/72/0 sis=108 pruub=13.505912781s) [2] r=-1 lpr=108 pi=[71,108)/1 crt=53'585 mlcod 0'0 unknown NOTIFY pruub 218.116287231s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:23:54 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 108 pg[9.19( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=71/71 les/c/f=72/72/0 sis=108) [2] r=0 lpr=108 pi=[71,108)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:23:54 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Oct 10 23:23:54 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Oct 10 23:23:55 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Oct 10 23:23:55 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Oct 10 23:23:55 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Oct 10 23:23:55 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Oct 10 23:23:55 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 109 pg[9.19( v 53'585 (0'0,53'585] local-lis/les=71/72 n=6 ec=56/47 lis/c=71/71 les/c/f=72/72/0 sis=109) [2]/[0] r=0 lpr=109 pi=[71,109)/1 crt=53'585 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:55 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 109 pg[9.19( v 53'585 (0'0,53'585] local-lis/les=71/72 n=6 ec=56/47 lis/c=71/71 les/c/f=72/72/0 sis=109) [2]/[0] r=0 lpr=109 pi=[71,109)/1 crt=53'585 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 23:23:55 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 109 pg[9.19( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=71/71 les/c/f=72/72/0 sis=109) [2]/[0] r=-1 lpr=109 pi=[71,109)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:55 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 109 pg[9.19( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=71/71 les/c/f=72/72/0 sis=109) [2]/[0] r=-1 lpr=109 pi=[71,109)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 23:23:55 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v234: 321 pgs: 1 active+remapped, 320 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Oct 10 23:23:55 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0) v1
Oct 10 23:23:55 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Oct 10 23:23:56 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Oct 10 23:23:56 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Oct 10 23:23:56 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Oct 10 23:23:56 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Oct 10 23:23:56 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Oct 10 23:23:56 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 110 pg[9.19( v 53'585 (0'0,53'585] local-lis/les=109/110 n=6 ec=56/47 lis/c=71/71 les/c/f=72/72/0 sis=109) [2]/[0] async=[2] r=0 lpr=109 pi=[71,109)/1 crt=53'585 mlcod 0'0 active+remapped mbc={255={(0+1)=11}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:23:57 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Oct 10 23:23:57 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Oct 10 23:23:57 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Oct 10 23:23:57 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Oct 10 23:23:57 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Oct 10 23:23:57 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Oct 10 23:23:57 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 111 pg[9.19( v 53'585 (0'0,53'585] local-lis/les=0/0 n=6 ec=56/47 lis/c=109/71 les/c/f=110/72/0 sis=111) [2] r=0 lpr=111 pi=[71,111)/1 luod=0'0 crt=53'585 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:57 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 111 pg[9.19( v 53'585 (0'0,53'585] local-lis/les=0/0 n=6 ec=56/47 lis/c=109/71 les/c/f=110/72/0 sis=111) [2] r=0 lpr=111 pi=[71,111)/1 crt=53'585 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:23:57 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 111 pg[9.19( v 53'585 (0'0,53'585] local-lis/les=109/110 n=6 ec=56/47 lis/c=109/71 les/c/f=110/72/0 sis=111 pruub=15.195292473s) [2] async=[2] r=-1 lpr=111 pi=[71,111)/1 crt=53'585 mlcod 53'585 active pruub 222.726989746s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:23:57 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 111 pg[9.19( v 53'585 (0'0,53'585] local-lis/les=109/110 n=6 ec=56/47 lis/c=109/71 les/c/f=110/72/0 sis=111 pruub=15.195062637s) [2] r=-1 lpr=111 pi=[71,111)/1 crt=53'585 mlcod 0'0 unknown NOTIFY pruub 222.726989746s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:23:57 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Oct 10 23:23:57 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Oct 10 23:23:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:23:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:23:57 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v237: 321 pgs: 1 active+recovering+remapped, 320 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 8/247 objects misplaced (3.239%); 27 B/s, 0 objects/s recovering
Oct 10 23:23:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:23:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:23:57 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Oct 10 23:23:57 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Oct 10 23:23:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:23:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:23:58 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Oct 10 23:23:58 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Oct 10 23:23:58 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Oct 10 23:23:58 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Oct 10 23:23:58 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Oct 10 23:23:58 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 112 pg[9.19( v 53'585 (0'0,53'585] local-lis/les=111/112 n=6 ec=56/47 lis/c=109/71 les/c/f=110/72/0 sis=111) [2] r=0 lpr=111 pi=[71,111)/1 crt=53'585 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:23:58 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:23:59 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Oct 10 23:23:59 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Oct 10 23:23:59 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Oct 10 23:23:59 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v239: 321 pgs: 1 active+recovering+remapped, 320 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 8/247 objects misplaced (3.239%); 24 B/s, 0 objects/s recovering
Oct 10 23:23:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0) v1
Oct 10 23:23:59 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Oct 10 23:24:00 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Oct 10 23:24:00 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Oct 10 23:24:00 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Oct 10 23:24:00 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Oct 10 23:24:00 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Oct 10 23:24:00 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 113 pg[9.1c( v 53'585 (0'0,53'585] local-lis/les=90/91 n=6 ec=56/47 lis/c=90/90 les/c/f=91/91/0 sis=113 pruub=11.074808121s) [0] r=-1 lpr=113 pi=[90,113)/1 crt=53'585 mlcod 0'0 active pruub 211.069747925s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:24:00 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 113 pg[9.1c( v 53'585 (0'0,53'585] local-lis/les=90/91 n=6 ec=56/47 lis/c=90/90 les/c/f=91/91/0 sis=113 pruub=11.074700356s) [0] r=-1 lpr=113 pi=[90,113)/1 crt=53'585 mlcod 0'0 unknown NOTIFY pruub 211.069747925s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:24:00 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 113 pg[9.1c( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=90/90 les/c/f=91/91/0 sis=113) [0] r=0 lpr=113 pi=[90,113)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:24:00 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Oct 10 23:24:00 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Oct 10 23:24:00 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 6.c scrub starts
Oct 10 23:24:00 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 6.c scrub ok
Oct 10 23:24:01 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Oct 10 23:24:01 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Oct 10 23:24:01 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Oct 10 23:24:01 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Oct 10 23:24:01 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 114 pg[9.1c( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=90/90 les/c/f=91/91/0 sis=114) [0]/[2] r=-1 lpr=114 pi=[90,114)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:24:01 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 114 pg[9.1c( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=90/90 les/c/f=91/91/0 sis=114) [0]/[2] r=-1 lpr=114 pi=[90,114)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 23:24:01 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 114 pg[9.1c( v 53'585 (0'0,53'585] local-lis/les=90/91 n=6 ec=56/47 lis/c=90/90 les/c/f=91/91/0 sis=114) [0]/[2] r=0 lpr=114 pi=[90,114)/1 crt=53'585 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:24:01 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 114 pg[9.1c( v 53'585 (0'0,53'585] local-lis/les=90/91 n=6 ec=56/47 lis/c=90/90 les/c/f=91/91/0 sis=114) [0]/[2] r=0 lpr=114 pi=[90,114)/1 crt=53'585 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 23:24:01 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v242: 321 pgs: 1 active+recovering+remapped, 320 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 8/247 objects misplaced (3.239%)
Oct 10 23:24:01 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0) v1
Oct 10 23:24:01 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Oct 10 23:24:02 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Oct 10 23:24:02 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Oct 10 23:24:02 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Oct 10 23:24:02 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Oct 10 23:24:02 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Oct 10 23:24:02 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 10.a scrub starts
Oct 10 23:24:02 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 10.a scrub ok
Oct 10 23:24:02 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 4.d scrub starts
Oct 10 23:24:02 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 4.d scrub ok
Oct 10 23:24:03 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 3.f scrub starts
Oct 10 23:24:03 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 115 pg[9.1c( v 53'585 (0'0,53'585] local-lis/les=114/115 n=6 ec=56/47 lis/c=90/90 les/c/f=91/91/0 sis=114) [0]/[2] async=[0] r=0 lpr=114 pi=[90,114)/1 crt=53'585 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:24:03 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 3.f scrub ok
Oct 10 23:24:03 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Oct 10 23:24:03 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Oct 10 23:24:03 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Oct 10 23:24:03 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Oct 10 23:24:03 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 116 pg[9.1c( v 53'585 (0'0,53'585] local-lis/les=114/115 n=6 ec=56/47 lis/c=114/90 les/c/f=115/91/0 sis=116 pruub=15.637898445s) [0] async=[0] r=-1 lpr=116 pi=[90,116)/1 crt=53'585 mlcod 53'585 active pruub 218.692459106s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:24:03 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 116 pg[9.1c( v 53'585 (0'0,53'585] local-lis/les=114/115 n=6 ec=56/47 lis/c=114/90 les/c/f=115/91/0 sis=116 pruub=15.637509346s) [0] r=-1 lpr=116 pi=[90,116)/1 crt=53'585 mlcod 0'0 unknown NOTIFY pruub 218.692459106s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:24:03 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 116 pg[9.1c( v 53'585 (0'0,53'585] local-lis/les=0/0 n=6 ec=56/47 lis/c=114/90 les/c/f=115/91/0 sis=116) [0] r=0 lpr=116 pi=[90,116)/1 luod=0'0 crt=53'585 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:24:03 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 116 pg[9.1c( v 53'585 (0'0,53'585] local-lis/les=0/0 n=6 ec=56/47 lis/c=114/90 les/c/f=115/91/0 sis=116) [0] r=0 lpr=116 pi=[90,116)/1 crt=53'585 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:24:03 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v245: 321 pgs: 1 remapped+peering, 320 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 1 objects/s recovering
Oct 10 23:24:03 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:24:04 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 3.c scrub starts
Oct 10 23:24:04 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 3.c scrub ok
Oct 10 23:24:04 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Oct 10 23:24:04 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Oct 10 23:24:04 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Oct 10 23:24:04 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 117 pg[9.1c( v 53'585 (0'0,53'585] local-lis/les=116/117 n=6 ec=56/47 lis/c=114/90 les/c/f=115/91/0 sis=116) [0] r=0 lpr=116 pi=[90,116)/1 crt=53'585 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:24:05 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Oct 10 23:24:05 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Oct 10 23:24:05 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v247: 321 pgs: 1 remapped+peering, 320 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 1 objects/s recovering
Oct 10 23:24:06 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 6.e scrub starts
Oct 10 23:24:06 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 6.e scrub ok
Oct 10 23:24:07 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v248: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 2 objects/s recovering
Oct 10 23:24:07 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0) v1
Oct 10 23:24:07 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Oct 10 23:24:08 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 10.c scrub starts
Oct 10 23:24:08 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 10.c scrub ok
Oct 10 23:24:08 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Oct 10 23:24:08 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Oct 10 23:24:08 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Oct 10 23:24:08 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Oct 10 23:24:08 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Oct 10 23:24:08 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:24:09 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Oct 10 23:24:09 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v250: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 17 B/s, 1 objects/s recovering
Oct 10 23:24:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 10 23:24:09 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 10 23:24:10 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 118 pg[9.1e( v 53'585 (0'0,53'585] local-lis/les=79/80 n=6 ec=56/47 lis/c=79/79 les/c/f=80/80/0 sis=118 pruub=11.123217583s) [0] r=-1 lpr=118 pi=[79,118)/1 crt=53'585 mlcod 0'0 active pruub 220.766189575s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:24:10 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 118 pg[9.1e( v 53'585 (0'0,53'585] local-lis/les=79/80 n=6 ec=56/47 lis/c=79/79 les/c/f=80/80/0 sis=118 pruub=11.123173714s) [0] r=-1 lpr=118 pi=[79,118)/1 crt=53'585 mlcod 0'0 unknown NOTIFY pruub 220.766189575s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:24:10 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 118 pg[9.1e( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=79/79 les/c/f=80/80/0 sis=118) [0] r=0 lpr=118 pi=[79,118)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:24:10 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 7.f scrub starts
Oct 10 23:24:10 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 7.f scrub ok
Oct 10 23:24:10 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 10.18 deep-scrub starts
Oct 10 23:24:10 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 10.18 deep-scrub ok
Oct 10 23:24:10 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Oct 10 23:24:10 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 10 23:24:10 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 10 23:24:10 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Oct 10 23:24:10 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Oct 10 23:24:10 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 119 pg[9.1e( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=79/79 les/c/f=80/80/0 sis=119) [0]/[2] r=-1 lpr=119 pi=[79,119)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:24:10 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 119 pg[9.1e( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=79/79 les/c/f=80/80/0 sis=119) [0]/[2] r=-1 lpr=119 pi=[79,119)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 23:24:10 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 119 pg[9.1e( v 53'585 (0'0,53'585] local-lis/les=79/80 n=6 ec=56/47 lis/c=79/79 les/c/f=80/80/0 sis=119) [0]/[2] r=0 lpr=119 pi=[79,119)/1 crt=53'585 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:24:10 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 119 pg[9.1e( v 53'585 (0'0,53'585] local-lis/les=79/80 n=6 ec=56/47 lis/c=79/79 les/c/f=80/80/0 sis=119) [0]/[2] r=0 lpr=119 pi=[79,119)/1 crt=53'585 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 23:24:10 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 119 pg[9.1f( v 53'585 (0'0,53'585] local-lis/les=82/83 n=6 ec=56/47 lis/c=82/82 les/c/f=83/83/0 sis=119 pruub=13.274467468s) [1] r=-1 lpr=119 pi=[82,119)/1 crt=53'585 mlcod 0'0 active pruub 223.458099365s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:24:10 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 119 pg[9.1f( v 53'585 (0'0,53'585] local-lis/les=82/83 n=6 ec=56/47 lis/c=82/82 les/c/f=83/83/0 sis=119 pruub=13.274421692s) [1] r=-1 lpr=119 pi=[82,119)/1 crt=53'585 mlcod 0'0 unknown NOTIFY pruub 223.458099365s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:24:10 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 119 pg[9.1f( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=82/82 les/c/f=83/83/0 sis=119) [1] r=0 lpr=119 pi=[82,119)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:24:11 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Oct 10 23:24:11 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Oct 10 23:24:11 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Oct 10 23:24:11 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 10 23:24:11 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Oct 10 23:24:11 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Oct 10 23:24:11 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 120 pg[9.1f( v 53'585 (0'0,53'585] local-lis/les=82/83 n=6 ec=56/47 lis/c=82/82 les/c/f=83/83/0 sis=120) [1]/[2] r=0 lpr=120 pi=[82,120)/1 crt=53'585 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:24:11 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 120 pg[9.1f( v 53'585 (0'0,53'585] local-lis/les=82/83 n=6 ec=56/47 lis/c=82/82 les/c/f=83/83/0 sis=120) [1]/[2] r=0 lpr=120 pi=[82,120)/1 crt=53'585 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 23:24:11 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 120 pg[9.1f( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=82/82 les/c/f=83/83/0 sis=120) [1]/[2] r=-1 lpr=120 pi=[82,120)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:24:11 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 120 pg[9.1f( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=82/82 les/c/f=83/83/0 sis=120) [1]/[2] r=-1 lpr=120 pi=[82,120)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 23:24:11 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 6.17 deep-scrub starts
Oct 10 23:24:11 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 6.17 deep-scrub ok
Oct 10 23:24:11 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v253: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Oct 10 23:24:12 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 7.6 deep-scrub starts
Oct 10 23:24:12 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 120 pg[9.1e( v 53'585 (0'0,53'585] local-lis/les=119/120 n=6 ec=56/47 lis/c=79/79 les/c/f=80/80/0 sis=119) [0]/[2] async=[0] r=0 lpr=119 pi=[79,119)/1 crt=53'585 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:24:12 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 7.6 deep-scrub ok
Oct 10 23:24:12 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Oct 10 23:24:12 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Oct 10 23:24:12 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Oct 10 23:24:12 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 121 pg[9.1e( v 53'585 (0'0,53'585] local-lis/les=119/120 n=6 ec=56/47 lis/c=119/79 les/c/f=120/80/0 sis=121 pruub=15.591158867s) [0] async=[0] r=-1 lpr=121 pi=[79,121)/1 crt=53'585 mlcod 53'585 active pruub 227.749435425s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:24:12 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 121 pg[9.1e( v 53'585 (0'0,53'585] local-lis/les=119/120 n=6 ec=56/47 lis/c=119/79 les/c/f=120/80/0 sis=121 pruub=15.590725899s) [0] r=-1 lpr=121 pi=[79,121)/1 crt=53'585 mlcod 0'0 unknown NOTIFY pruub 227.749435425s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:24:12 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 121 pg[9.1e( v 53'585 (0'0,53'585] local-lis/les=0/0 n=6 ec=56/47 lis/c=119/79 les/c/f=120/80/0 sis=121) [0] r=0 lpr=121 pi=[79,121)/1 luod=0'0 crt=53'585 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:24:12 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 121 pg[9.1e( v 53'585 (0'0,53'585] local-lis/les=0/0 n=6 ec=56/47 lis/c=119/79 les/c/f=120/80/0 sis=121) [0] r=0 lpr=121 pi=[79,121)/1 crt=53'585 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:24:12 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 121 pg[9.1f( v 53'585 (0'0,53'585] local-lis/les=120/121 n=6 ec=56/47 lis/c=82/82 les/c/f=83/83/0 sis=120) [1]/[2] async=[1] r=0 lpr=120 pi=[82,120)/1 crt=53'585 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:24:12 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Oct 10 23:24:12 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Oct 10 23:24:13 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Oct 10 23:24:13 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Oct 10 23:24:13 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Oct 10 23:24:13 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Oct 10 23:24:13 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Oct 10 23:24:13 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 122 pg[9.1f( v 53'585 (0'0,53'585] local-lis/les=120/121 n=6 ec=56/47 lis/c=120/82 les/c/f=121/83/0 sis=122 pruub=14.993274689s) [1] async=[1] r=-1 lpr=122 pi=[82,122)/1 crt=53'585 mlcod 53'585 active pruub 228.169586182s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:24:13 np0005480824 ceph-osd[90443]: osd.2 pg_epoch: 122 pg[9.1f( v 53'585 (0'0,53'585] local-lis/les=120/121 n=6 ec=56/47 lis/c=120/82 les/c/f=121/83/0 sis=122 pruub=14.993132591s) [1] r=-1 lpr=122 pi=[82,122)/1 crt=53'585 mlcod 0'0 unknown NOTIFY pruub 228.169586182s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 23:24:13 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 122 pg[9.1f( v 53'585 (0'0,53'585] local-lis/les=0/0 n=6 ec=56/47 lis/c=120/82 les/c/f=121/83/0 sis=122) [1] r=0 lpr=122 pi=[82,122)/1 luod=0'0 crt=53'585 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 10 23:24:13 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 122 pg[9.1f( v 53'585 (0'0,53'585] local-lis/les=0/0 n=6 ec=56/47 lis/c=120/82 les/c/f=121/83/0 sis=122) [1] r=0 lpr=122 pi=[82,122)/1 crt=53'585 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 23:24:13 np0005480824 ceph-osd[88325]: osd.0 pg_epoch: 122 pg[9.1e( v 53'585 (0'0,53'585] local-lis/les=121/122 n=6 ec=56/47 lis/c=119/79 les/c/f=120/80/0 sis=121) [0] r=0 lpr=121 pi=[79,121)/1 crt=53'585 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:24:13 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v256: 321 pgs: 1 remapped+peering, 1 active+remapped, 319 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 1 objects/s recovering
Oct 10 23:24:13 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:24:14 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Oct 10 23:24:14 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Oct 10 23:24:14 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Oct 10 23:24:14 np0005480824 ceph-osd[89401]: osd.1 pg_epoch: 123 pg[9.1f( v 53'585 (0'0,53'585] local-lis/les=122/123 n=6 ec=56/47 lis/c=120/82 les/c/f=121/83/0 sis=122) [1] r=0 lpr=122 pi=[82,122)/1 crt=53'585 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 23:24:14 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 6.1 deep-scrub starts
Oct 10 23:24:14 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 6.1 deep-scrub ok
Oct 10 23:24:15 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 10.1b deep-scrub starts
Oct 10 23:24:15 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 10.1b deep-scrub ok
Oct 10 23:24:15 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v258: 321 pgs: 1 remapped+peering, 1 active+remapped, 319 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 1 objects/s recovering
Oct 10 23:24:16 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 10.1c deep-scrub starts
Oct 10 23:24:16 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 10.1c deep-scrub ok
Oct 10 23:24:16 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Oct 10 23:24:16 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Oct 10 23:24:17 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 3.a deep-scrub starts
Oct 10 23:24:17 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 3.a deep-scrub ok
Oct 10 23:24:17 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Oct 10 23:24:17 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Oct 10 23:24:17 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v259: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Oct 10 23:24:18 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Oct 10 23:24:18 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Oct 10 23:24:18 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:24:19 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Oct 10 23:24:19 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Oct 10 23:24:19 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v260: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 15 B/s, 0 objects/s recovering
Oct 10 23:24:20 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:24:20 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:24:20 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 10 23:24:20 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:24:20 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 10 23:24:20 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:24:20 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 1ece89fa-acf3-47e3-9dee-7ba4548724a2 does not exist
Oct 10 23:24:20 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 2a730cfc-0717-4a56-b5d9-afdd181f1d41 does not exist
Oct 10 23:24:20 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 64ef7f2a-7d77-4925-a53f-3474c279bc36 does not exist
Oct 10 23:24:20 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 10 23:24:20 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 23:24:20 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 10 23:24:20 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:24:20 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:24:20 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:24:20 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:24:20 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:24:20 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:24:20 np0005480824 podman[108175]: 2025-10-11 03:24:20.654925193 +0000 UTC m=+0.057058334 container create 8f9d3af636d61d920c20f1adc5e1604c0e315db8e7fcffba33083a46e8c8593a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:24:20 np0005480824 systemd[1]: Started libpod-conmon-8f9d3af636d61d920c20f1adc5e1604c0e315db8e7fcffba33083a46e8c8593a.scope.
Oct 10 23:24:20 np0005480824 podman[108175]: 2025-10-11 03:24:20.635962099 +0000 UTC m=+0.038095240 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:24:20 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:24:20 np0005480824 podman[108175]: 2025-10-11 03:24:20.774273166 +0000 UTC m=+0.176406367 container init 8f9d3af636d61d920c20f1adc5e1604c0e315db8e7fcffba33083a46e8c8593a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:24:20 np0005480824 podman[108175]: 2025-10-11 03:24:20.781017181 +0000 UTC m=+0.183150302 container start 8f9d3af636d61d920c20f1adc5e1604c0e315db8e7fcffba33083a46e8c8593a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_borg, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 10 23:24:20 np0005480824 podman[108175]: 2025-10-11 03:24:20.78463726 +0000 UTC m=+0.186770461 container attach 8f9d3af636d61d920c20f1adc5e1604c0e315db8e7fcffba33083a46e8c8593a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_borg, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 10 23:24:20 np0005480824 priceless_borg[108191]: 167 167
Oct 10 23:24:20 np0005480824 systemd[1]: libpod-8f9d3af636d61d920c20f1adc5e1604c0e315db8e7fcffba33083a46e8c8593a.scope: Deactivated successfully.
Oct 10 23:24:20 np0005480824 conmon[108191]: conmon 8f9d3af636d61d920c20 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8f9d3af636d61d920c20f1adc5e1604c0e315db8e7fcffba33083a46e8c8593a.scope/container/memory.events
Oct 10 23:24:20 np0005480824 podman[108196]: 2025-10-11 03:24:20.823441776 +0000 UTC m=+0.023584747 container died 8f9d3af636d61d920c20f1adc5e1604c0e315db8e7fcffba33083a46e8c8593a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_borg, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:24:20 np0005480824 systemd[1]: var-lib-containers-storage-overlay-67f1f8185dcb890ab289faee7d63590c589151af037bf05248d9a8d210af897a-merged.mount: Deactivated successfully.
Oct 10 23:24:20 np0005480824 podman[108196]: 2025-10-11 03:24:20.85963127 +0000 UTC m=+0.059774211 container remove 8f9d3af636d61d920c20f1adc5e1604c0e315db8e7fcffba33083a46e8c8593a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_borg, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:24:20 np0005480824 systemd[1]: libpod-conmon-8f9d3af636d61d920c20f1adc5e1604c0e315db8e7fcffba33083a46e8c8593a.scope: Deactivated successfully.
Oct 10 23:24:21 np0005480824 podman[108218]: 2025-10-11 03:24:21.082396628 +0000 UTC m=+0.069385574 container create e230428ba77294824c985a210fcc1178a81f72f92154b8c6356f2430abd72d59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:24:21 np0005480824 systemd[1]: Started libpod-conmon-e230428ba77294824c985a210fcc1178a81f72f92154b8c6356f2430abd72d59.scope.
Oct 10 23:24:21 np0005480824 podman[108218]: 2025-10-11 03:24:21.053905273 +0000 UTC m=+0.040894289 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:24:21 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:24:21 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f7e7cfbbbd708614ad6c5e2afc559739536eaa16c8cf6690c0cd2521fa48440/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:24:21 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f7e7cfbbbd708614ad6c5e2afc559739536eaa16c8cf6690c0cd2521fa48440/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:24:21 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f7e7cfbbbd708614ad6c5e2afc559739536eaa16c8cf6690c0cd2521fa48440/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:24:21 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f7e7cfbbbd708614ad6c5e2afc559739536eaa16c8cf6690c0cd2521fa48440/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:24:21 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f7e7cfbbbd708614ad6c5e2afc559739536eaa16c8cf6690c0cd2521fa48440/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 23:24:21 np0005480824 podman[108218]: 2025-10-11 03:24:21.170781236 +0000 UTC m=+0.157770202 container init e230428ba77294824c985a210fcc1178a81f72f92154b8c6356f2430abd72d59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_mclaren, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 10 23:24:21 np0005480824 podman[108218]: 2025-10-11 03:24:21.183722192 +0000 UTC m=+0.170711138 container start e230428ba77294824c985a210fcc1178a81f72f92154b8c6356f2430abd72d59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_mclaren, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:24:21 np0005480824 podman[108218]: 2025-10-11 03:24:21.188540379 +0000 UTC m=+0.175529295 container attach e230428ba77294824c985a210fcc1178a81f72f92154b8c6356f2430abd72d59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_mclaren, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 10 23:24:21 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 6.b scrub starts
Oct 10 23:24:21 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 6.b scrub ok
Oct 10 23:24:21 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v261: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 0 objects/s recovering
Oct 10 23:24:22 np0005480824 recursing_mclaren[108235]: --> passed data devices: 0 physical, 3 LVM
Oct 10 23:24:22 np0005480824 recursing_mclaren[108235]: --> relative data size: 1.0
Oct 10 23:24:22 np0005480824 recursing_mclaren[108235]: --> All data devices are unavailable
Oct 10 23:24:22 np0005480824 systemd[1]: libpod-e230428ba77294824c985a210fcc1178a81f72f92154b8c6356f2430abd72d59.scope: Deactivated successfully.
Oct 10 23:24:22 np0005480824 systemd[1]: libpod-e230428ba77294824c985a210fcc1178a81f72f92154b8c6356f2430abd72d59.scope: Consumed 1.131s CPU time.
Oct 10 23:24:22 np0005480824 podman[108264]: 2025-10-11 03:24:22.414225691 +0000 UTC m=+0.030578927 container died e230428ba77294824c985a210fcc1178a81f72f92154b8c6356f2430abd72d59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:24:22 np0005480824 systemd[1]: var-lib-containers-storage-overlay-4f7e7cfbbbd708614ad6c5e2afc559739536eaa16c8cf6690c0cd2521fa48440-merged.mount: Deactivated successfully.
Oct 10 23:24:22 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Oct 10 23:24:22 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Oct 10 23:24:22 np0005480824 podman[108264]: 2025-10-11 03:24:22.485189464 +0000 UTC m=+0.101542620 container remove e230428ba77294824c985a210fcc1178a81f72f92154b8c6356f2430abd72d59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_mclaren, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 10 23:24:22 np0005480824 systemd[1]: libpod-conmon-e230428ba77294824c985a210fcc1178a81f72f92154b8c6356f2430abd72d59.scope: Deactivated successfully.
Oct 10 23:24:22 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Oct 10 23:24:22 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Oct 10 23:24:23 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Oct 10 23:24:23 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Oct 10 23:24:23 np0005480824 podman[108421]: 2025-10-11 03:24:23.186675428 +0000 UTC m=+0.043590445 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:24:23 np0005480824 podman[108421]: 2025-10-11 03:24:23.449168367 +0000 UTC m=+0.306083384 container create a572940bfab8a352dad67567ed43267f8585ccc630aeaa8ce29cdcc2ad18f95a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_williams, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:24:23 np0005480824 systemd[1]: Started libpod-conmon-a572940bfab8a352dad67567ed43267f8585ccc630aeaa8ce29cdcc2ad18f95a.scope.
Oct 10 23:24:23 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v262: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 10 B/s, 0 objects/s recovering
Oct 10 23:24:23 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:24:23 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:24:23 np0005480824 podman[108421]: 2025-10-11 03:24:23.974994353 +0000 UTC m=+0.831909380 container init a572940bfab8a352dad67567ed43267f8585ccc630aeaa8ce29cdcc2ad18f95a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_williams, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:24:23 np0005480824 podman[108421]: 2025-10-11 03:24:23.985458799 +0000 UTC m=+0.842373816 container start a572940bfab8a352dad67567ed43267f8585ccc630aeaa8ce29cdcc2ad18f95a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 10 23:24:23 np0005480824 agitated_williams[108437]: 167 167
Oct 10 23:24:23 np0005480824 systemd[1]: libpod-a572940bfab8a352dad67567ed43267f8585ccc630aeaa8ce29cdcc2ad18f95a.scope: Deactivated successfully.
Oct 10 23:24:24 np0005480824 podman[108421]: 2025-10-11 03:24:24.152930857 +0000 UTC m=+1.009845844 container attach a572940bfab8a352dad67567ed43267f8585ccc630aeaa8ce29cdcc2ad18f95a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_williams, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:24:24 np0005480824 podman[108421]: 2025-10-11 03:24:24.153703236 +0000 UTC m=+1.010618253 container died a572940bfab8a352dad67567ed43267f8585ccc630aeaa8ce29cdcc2ad18f95a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_williams, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:24:24 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Oct 10 23:24:24 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Oct 10 23:24:24 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Oct 10 23:24:24 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Oct 10 23:24:24 np0005480824 systemd[1]: var-lib-containers-storage-overlay-77ebd6febafbbd96fd9fc39c83bfde3fdfc2b22b3138f0f88bb49405ef9bf801-merged.mount: Deactivated successfully.
Oct 10 23:24:25 np0005480824 podman[108421]: 2025-10-11 03:24:25.055272556 +0000 UTC m=+1.912187573 container remove a572940bfab8a352dad67567ed43267f8585ccc630aeaa8ce29cdcc2ad18f95a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_williams, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 10 23:24:25 np0005480824 systemd[1]: libpod-conmon-a572940bfab8a352dad67567ed43267f8585ccc630aeaa8ce29cdcc2ad18f95a.scope: Deactivated successfully.
Oct 10 23:24:25 np0005480824 podman[108462]: 2025-10-11 03:24:25.230871122 +0000 UTC m=+0.026569530 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:24:25 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Oct 10 23:24:25 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Oct 10 23:24:25 np0005480824 podman[108462]: 2025-10-11 03:24:25.481807678 +0000 UTC m=+0.277506086 container create eba9757724ab803b14ac6d7c356d0f0e63babdcf7504a37e802d6615b46f6647 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_khayyam, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:24:25 np0005480824 systemd[1]: Started libpod-conmon-eba9757724ab803b14ac6d7c356d0f0e63babdcf7504a37e802d6615b46f6647.scope.
Oct 10 23:24:25 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:24:25 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8641da84f5977f4bcdfff32ac4f412cbb10526a917406babe0acc68e6a09c006/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:24:25 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8641da84f5977f4bcdfff32ac4f412cbb10526a917406babe0acc68e6a09c006/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:24:25 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8641da84f5977f4bcdfff32ac4f412cbb10526a917406babe0acc68e6a09c006/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:24:25 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8641da84f5977f4bcdfff32ac4f412cbb10526a917406babe0acc68e6a09c006/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:24:25 np0005480824 podman[108462]: 2025-10-11 03:24:25.748739835 +0000 UTC m=+0.544438333 container init eba9757724ab803b14ac6d7c356d0f0e63babdcf7504a37e802d6615b46f6647 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:24:25 np0005480824 podman[108462]: 2025-10-11 03:24:25.756262718 +0000 UTC m=+0.551961116 container start eba9757724ab803b14ac6d7c356d0f0e63babdcf7504a37e802d6615b46f6647 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:24:25 np0005480824 podman[108462]: 2025-10-11 03:24:25.770117446 +0000 UTC m=+0.565815894 container attach eba9757724ab803b14ac6d7c356d0f0e63babdcf7504a37e802d6615b46f6647 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_khayyam, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:24:25 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v263: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 9 B/s, 0 objects/s recovering
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]: {
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:    "0": [
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:        {
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:            "devices": [
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:                "/dev/loop3"
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:            ],
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:            "lv_name": "ceph_lv0",
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:            "lv_size": "21470642176",
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0d82ce-20ea-470d-959e-f67202028a60,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:            "lv_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:            "name": "ceph_lv0",
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:            "tags": {
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:                "ceph.block_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:                "ceph.cluster_name": "ceph",
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:                "ceph.crush_device_class": "",
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:                "ceph.encrypted": "0",
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:                "ceph.osd_fsid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:                "ceph.osd_id": "0",
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:                "ceph.type": "block",
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:                "ceph.vdo": "0"
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:            },
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:            "type": "block",
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:            "vg_name": "ceph_vg0"
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:        }
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:    ],
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:    "1": [
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:        {
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:            "devices": [
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:                "/dev/loop4"
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:            ],
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:            "lv_name": "ceph_lv1",
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:            "lv_size": "21470642176",
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6875119e-c210-4ad1-aca9-6a8084a5ecc8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:            "lv_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:            "name": "ceph_lv1",
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:            "tags": {
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:                "ceph.block_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:                "ceph.cluster_name": "ceph",
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:                "ceph.crush_device_class": "",
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:                "ceph.encrypted": "0",
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:                "ceph.osd_fsid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:                "ceph.osd_id": "1",
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:                "ceph.type": "block",
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:                "ceph.vdo": "0"
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:            },
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:            "type": "block",
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:            "vg_name": "ceph_vg1"
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:        }
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:    ],
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:    "2": [
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:        {
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:            "devices": [
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:                "/dev/loop5"
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:            ],
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:            "lv_name": "ceph_lv2",
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:            "lv_size": "21470642176",
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e86945e8-6909-4584-9098-cee0dfe9add4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:            "lv_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:            "name": "ceph_lv2",
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:            "tags": {
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:                "ceph.block_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:                "ceph.cluster_name": "ceph",
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:                "ceph.crush_device_class": "",
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:                "ceph.encrypted": "0",
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:                "ceph.osd_fsid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:                "ceph.osd_id": "2",
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:                "ceph.type": "block",
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:                "ceph.vdo": "0"
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:            },
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:            "type": "block",
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:            "vg_name": "ceph_vg2"
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:        }
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]:    ]
Oct 10 23:24:26 np0005480824 mystifying_khayyam[108479]: }
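The JSON block above (emitted by the short-lived `mystifying_khayyam` container, and shaped like `ceph-volume lvm list --format json` output) maps each OSD id to the LVM logical volume backing it. A minimal sketch of how such output can be parsed follows; the trimmed sample payload and the `osd_device_map` helper are illustrative, not part of any Ceph tooling.

```python
import json

# Trimmed sample of the per-OSD JSON seen in the log above: top-level keys
# are OSD ids, values are lists of LV records (only the fields used below).
ceph_volume_output = """
{
  "0": [{"devices": ["/dev/loop3"], "lv_path": "/dev/ceph_vg0/ceph_lv0",
         "tags": {"ceph.osd_fsid": "1d0d82ce-20ea-470d-959e-f67202028a60"}}],
  "1": [{"devices": ["/dev/loop4"], "lv_path": "/dev/ceph_vg1/ceph_lv1",
         "tags": {"ceph.osd_fsid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8"}}]
}
"""

def osd_device_map(raw: str) -> dict:
    """Map each OSD id to its backing devices, block LV path, and OSD fsid."""
    data = json.loads(raw)
    return {
        osd_id: {
            "devices": lv["devices"],
            "lv_path": lv["lv_path"],
            "osd_fsid": lv["tags"]["ceph.osd_fsid"],
        }
        for osd_id, lvs in data.items()
        for lv in lvs
    }

print(osd_device_map(ceph_volume_output)["0"]["devices"])  # ['/dev/loop3']
```

This is the kind of inventory cephadm gathers when it runs throwaway containers like the ones logged here: the `ceph.osd_id` and `ceph.osd_fsid` LV tags let the orchestrator re-associate an existing LV with its OSD daemon without consulting the cluster.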
Oct 10 23:24:26 np0005480824 systemd[1]: libpod-eba9757724ab803b14ac6d7c356d0f0e63babdcf7504a37e802d6615b46f6647.scope: Deactivated successfully.
Oct 10 23:24:26 np0005480824 podman[108462]: 2025-10-11 03:24:26.503265584 +0000 UTC m=+1.298963982 container died eba9757724ab803b14ac6d7c356d0f0e63babdcf7504a37e802d6615b46f6647 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_khayyam, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Oct 10 23:24:26 np0005480824 systemd[1]: var-lib-containers-storage-overlay-8641da84f5977f4bcdfff32ac4f412cbb10526a917406babe0acc68e6a09c006-merged.mount: Deactivated successfully.
Oct 10 23:24:26 np0005480824 podman[108462]: 2025-10-11 03:24:26.575370744 +0000 UTC m=+1.371069152 container remove eba9757724ab803b14ac6d7c356d0f0e63babdcf7504a37e802d6615b46f6647 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_khayyam, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:24:26 np0005480824 systemd[1]: libpod-conmon-eba9757724ab803b14ac6d7c356d0f0e63babdcf7504a37e802d6615b46f6647.scope: Deactivated successfully.
Oct 10 23:24:27 np0005480824 podman[108642]: 2025-10-11 03:24:27.201552611 +0000 UTC m=+0.039512326 container create 50c38e8293d91622f8f617393543e5e89b2d3b6790f027af0f2fd26c6dc92adf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 10 23:24:27 np0005480824 systemd[1]: Started libpod-conmon-50c38e8293d91622f8f617393543e5e89b2d3b6790f027af0f2fd26c6dc92adf.scope.
Oct 10 23:24:27 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:24:27 np0005480824 podman[108642]: 2025-10-11 03:24:27.267550943 +0000 UTC m=+0.105510698 container init 50c38e8293d91622f8f617393543e5e89b2d3b6790f027af0f2fd26c6dc92adf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 10 23:24:27 np0005480824 podman[108642]: 2025-10-11 03:24:27.274180834 +0000 UTC m=+0.112140549 container start 50c38e8293d91622f8f617393543e5e89b2d3b6790f027af0f2fd26c6dc92adf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_sutherland, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:24:27 np0005480824 podman[108642]: 2025-10-11 03:24:27.277194798 +0000 UTC m=+0.115154603 container attach 50c38e8293d91622f8f617393543e5e89b2d3b6790f027af0f2fd26c6dc92adf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_sutherland, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:24:27 np0005480824 wonderful_sutherland[108658]: 167 167
Oct 10 23:24:27 np0005480824 systemd[1]: libpod-50c38e8293d91622f8f617393543e5e89b2d3b6790f027af0f2fd26c6dc92adf.scope: Deactivated successfully.
Oct 10 23:24:27 np0005480824 podman[108642]: 2025-10-11 03:24:27.278967781 +0000 UTC m=+0.116927496 container died 50c38e8293d91622f8f617393543e5e89b2d3b6790f027af0f2fd26c6dc92adf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_sutherland, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:24:27 np0005480824 podman[108642]: 2025-10-11 03:24:27.183064439 +0000 UTC m=+0.021024214 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:24:27 np0005480824 systemd[1]: var-lib-containers-storage-overlay-7e14210a78342aec7692197809949d164086603445a251701da37a7450b68336-merged.mount: Deactivated successfully.
Oct 10 23:24:27 np0005480824 podman[108642]: 2025-10-11 03:24:27.312315675 +0000 UTC m=+0.150275390 container remove 50c38e8293d91622f8f617393543e5e89b2d3b6790f027af0f2fd26c6dc92adf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 10 23:24:27 np0005480824 systemd[1]: libpod-conmon-50c38e8293d91622f8f617393543e5e89b2d3b6790f027af0f2fd26c6dc92adf.scope: Deactivated successfully.
Oct 10 23:24:27 np0005480824 podman[108682]: 2025-10-11 03:24:27.514783477 +0000 UTC m=+0.066527784 container create cbae1ddea906ae1676e94806544cd3b6132fd19735d43a5c087bc0619978ddaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_merkle, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Oct 10 23:24:27 np0005480824 systemd[1]: Started libpod-conmon-cbae1ddea906ae1676e94806544cd3b6132fd19735d43a5c087bc0619978ddaf.scope.
Oct 10 23:24:27 np0005480824 podman[108682]: 2025-10-11 03:24:27.486662741 +0000 UTC m=+0.038407098 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:24:27 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:24:27 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa6766441dfd3a60e294d47451409831941a7d747a28af9366f0918acbec8589/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:24:27 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa6766441dfd3a60e294d47451409831941a7d747a28af9366f0918acbec8589/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:24:27 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa6766441dfd3a60e294d47451409831941a7d747a28af9366f0918acbec8589/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:24:27 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa6766441dfd3a60e294d47451409831941a7d747a28af9366f0918acbec8589/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:24:27 np0005480824 podman[108682]: 2025-10-11 03:24:27.638432627 +0000 UTC m=+0.190176934 container init cbae1ddea906ae1676e94806544cd3b6132fd19735d43a5c087bc0619978ddaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:24:27 np0005480824 podman[108682]: 2025-10-11 03:24:27.651775412 +0000 UTC m=+0.203519719 container start cbae1ddea906ae1676e94806544cd3b6132fd19735d43a5c087bc0619978ddaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_merkle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 10 23:24:27 np0005480824 podman[108682]: 2025-10-11 03:24:27.659811458 +0000 UTC m=+0.211555775 container attach cbae1ddea906ae1676e94806544cd3b6132fd19735d43a5c087bc0619978ddaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_merkle, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:24:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Optimize plan auto_2025-10-11_03:24:27
Oct 10 23:24:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 23:24:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] do_upmap
Oct 10 23:24:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] pools ['backups', 'default.rgw.log', 'volumes', '.rgw.root', 'default.rgw.control', 'vms', 'images', '.mgr', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.meta']
Oct 10 23:24:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] prepared 0/10 changes
Oct 10 23:24:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:24:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:24:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:24:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:24:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:24:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:24:27 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v264: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 9 B/s, 0 objects/s recovering
Oct 10 23:24:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 23:24:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:24:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 23:24:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:24:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:24:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:24:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:24:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:24:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:24:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:24:28 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Oct 10 23:24:28 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Oct 10 23:24:28 np0005480824 upbeat_merkle[108698]: {
Oct 10 23:24:28 np0005480824 upbeat_merkle[108698]:    "1d0d82ce-20ea-470d-959e-f67202028a60": {
Oct 10 23:24:28 np0005480824 upbeat_merkle[108698]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:24:28 np0005480824 upbeat_merkle[108698]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 10 23:24:28 np0005480824 upbeat_merkle[108698]:        "osd_id": 0,
Oct 10 23:24:28 np0005480824 upbeat_merkle[108698]:        "osd_uuid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:24:28 np0005480824 upbeat_merkle[108698]:        "type": "bluestore"
Oct 10 23:24:28 np0005480824 upbeat_merkle[108698]:    },
Oct 10 23:24:28 np0005480824 upbeat_merkle[108698]:    "6875119e-c210-4ad1-aca9-6a8084a5ecc8": {
Oct 10 23:24:28 np0005480824 upbeat_merkle[108698]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:24:28 np0005480824 upbeat_merkle[108698]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 10 23:24:28 np0005480824 upbeat_merkle[108698]:        "osd_id": 1,
Oct 10 23:24:28 np0005480824 upbeat_merkle[108698]:        "osd_uuid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:24:28 np0005480824 upbeat_merkle[108698]:        "type": "bluestore"
Oct 10 23:24:28 np0005480824 upbeat_merkle[108698]:    },
Oct 10 23:24:28 np0005480824 upbeat_merkle[108698]:    "e86945e8-6909-4584-9098-cee0dfe9add4": {
Oct 10 23:24:28 np0005480824 upbeat_merkle[108698]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:24:28 np0005480824 upbeat_merkle[108698]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 10 23:24:28 np0005480824 upbeat_merkle[108698]:        "osd_id": 2,
Oct 10 23:24:28 np0005480824 upbeat_merkle[108698]:        "osd_uuid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:24:28 np0005480824 upbeat_merkle[108698]:        "type": "bluestore"
Oct 10 23:24:28 np0005480824 upbeat_merkle[108698]:    }
Oct 10 23:24:28 np0005480824 upbeat_merkle[108698]: }
Oct 10 23:24:28 np0005480824 systemd[1]: libpod-cbae1ddea906ae1676e94806544cd3b6132fd19735d43a5c087bc0619978ddaf.scope: Deactivated successfully.
Oct 10 23:24:28 np0005480824 podman[108682]: 2025-10-11 03:24:28.798330541 +0000 UTC m=+1.350074828 container died cbae1ddea906ae1676e94806544cd3b6132fd19735d43a5c087bc0619978ddaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_merkle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:24:28 np0005480824 systemd[1]: libpod-cbae1ddea906ae1676e94806544cd3b6132fd19735d43a5c087bc0619978ddaf.scope: Consumed 1.145s CPU time.
Oct 10 23:24:28 np0005480824 systemd[1]: var-lib-containers-storage-overlay-aa6766441dfd3a60e294d47451409831941a7d747a28af9366f0918acbec8589-merged.mount: Deactivated successfully.
Oct 10 23:24:28 np0005480824 podman[108682]: 2025-10-11 03:24:28.86591278 +0000 UTC m=+1.417657047 container remove cbae1ddea906ae1676e94806544cd3b6132fd19735d43a5c087bc0619978ddaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:24:28 np0005480824 systemd[1]: libpod-conmon-cbae1ddea906ae1676e94806544cd3b6132fd19735d43a5c087bc0619978ddaf.scope: Deactivated successfully.
Oct 10 23:24:28 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:24:28 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 10 23:24:28 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:24:28 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 10 23:24:28 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:24:28 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 17c3d09e-93a9-4adb-9f73-0fa5e15ce206 does not exist
Oct 10 23:24:28 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 4b961fa5-f220-4817-9807-1469734e10c3 does not exist
Oct 10 23:24:28 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:24:28 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:24:29 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Oct 10 23:24:29 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Oct 10 23:24:29 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v265: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:24:30 np0005480824 python3.9[108946]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:24:30 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Oct 10 23:24:30 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Oct 10 23:24:30 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Oct 10 23:24:30 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Oct 10 23:24:31 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 7.3 deep-scrub starts
Oct 10 23:24:31 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 7.3 deep-scrub ok
Oct 10 23:24:31 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Oct 10 23:24:31 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Oct 10 23:24:31 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 6.1e scrub starts
Oct 10 23:24:31 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 6.1e scrub ok
Oct 10 23:24:31 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v266: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:24:32 np0005480824 python3.9[109233]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Oct 10 23:24:32 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Oct 10 23:24:32 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Oct 10 23:24:32 np0005480824 python3.9[109385]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Oct 10 23:24:33 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Oct 10 23:24:33 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Oct 10 23:24:33 np0005480824 python3.9[109537]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:24:33 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v267: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:24:33 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:24:34 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 11.d scrub starts
Oct 10 23:24:34 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 11.d scrub ok
Oct 10 23:24:34 np0005480824 python3.9[109689]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Oct 10 23:24:35 np0005480824 python3.9[109841]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:24:35 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v268: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:24:36 np0005480824 python3.9[109993]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:24:36 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Oct 10 23:24:36 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Oct 10 23:24:36 np0005480824 python3.9[110071]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:24:37 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Oct 10 23:24:37 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Oct 10 23:24:37 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Oct 10 23:24:37 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Oct 10 23:24:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 23:24:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:24:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 23:24:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:24:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:24:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:24:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:24:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:24:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:24:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:24:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:24:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:24:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 23:24:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:24:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:24:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:24:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 10 23:24:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:24:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 23:24:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:24:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:24:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:24:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 10 23:24:37 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 6.1c scrub starts
Oct 10 23:24:37 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 6.1c scrub ok
Oct 10 23:24:37 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v269: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:24:38 np0005480824 python3.9[110223]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Oct 10 23:24:38 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 8.b deep-scrub starts
Oct 10 23:24:38 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 8.b deep-scrub ok
Oct 10 23:24:38 np0005480824 python3.9[110376]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Oct 10 23:24:38 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:24:39 np0005480824 python3.9[110529]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 10 23:24:39 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v270: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:24:40 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 8.d scrub starts
Oct 10 23:24:40 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 8.d scrub ok
Oct 10 23:24:40 np0005480824 python3.9[110681]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Oct 10 23:24:41 np0005480824 python3.9[110833]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 23:24:41 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v271: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:24:42 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Oct 10 23:24:42 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Oct 10 23:24:43 np0005480824 python3.9[110986]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:24:43 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Oct 10 23:24:43 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Oct 10 23:24:43 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Oct 10 23:24:43 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Oct 10 23:24:43 np0005480824 python3.9[111138]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:24:43 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v272: 321 pgs: 1 active+clean+scrubbing, 320 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:24:43 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:24:44 np0005480824 python3.9[111216]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:24:44 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Oct 10 23:24:44 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Oct 10 23:24:44 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Oct 10 23:24:44 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Oct 10 23:24:44 np0005480824 python3.9[111368]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:24:45 np0005480824 python3.9[111446]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:24:45 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Oct 10 23:24:45 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Oct 10 23:24:45 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v273: 321 pgs: 1 active+clean+scrubbing, 320 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:24:46 np0005480824 python3.9[111598]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 23:24:46 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Oct 10 23:24:46 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Oct 10 23:24:46 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 6.1d scrub starts
Oct 10 23:24:46 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 6.1d scrub ok
Oct 10 23:24:47 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v274: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:24:48 np0005480824 python3.9[111749]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 23:24:48 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Oct 10 23:24:48 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Oct 10 23:24:48 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:24:49 np0005480824 python3.9[111901]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Oct 10 23:24:49 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v275: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:24:49 np0005480824 python3.9[112051]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 23:24:50 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 4.2 deep-scrub starts
Oct 10 23:24:50 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 4.2 deep-scrub ok
Oct 10 23:24:51 np0005480824 python3.9[112203]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 23:24:51 np0005480824 systemd[1]: Stopping Dynamic System Tuning Daemon...
Oct 10 23:24:51 np0005480824 systemd[1]: tuned.service: Deactivated successfully.
Oct 10 23:24:51 np0005480824 systemd[1]: Stopped Dynamic System Tuning Daemon.
Oct 10 23:24:51 np0005480824 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct 10 23:24:51 np0005480824 systemd[1]: Started Dynamic System Tuning Daemon.
Oct 10 23:24:51 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v276: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:24:52 np0005480824 python3.9[112365]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Oct 10 23:24:52 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Oct 10 23:24:52 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Oct 10 23:24:53 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Oct 10 23:24:53 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Oct 10 23:24:53 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v277: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:24:53 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:24:54 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Oct 10 23:24:54 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Oct 10 23:24:54 np0005480824 python3.9[112517]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 23:24:54 np0005480824 systemd[75890]: Created slice User Background Tasks Slice.
Oct 10 23:24:54 np0005480824 systemd[75890]: Starting Cleanup of User's Temporary Files and Directories...
Oct 10 23:24:54 np0005480824 systemd[75890]: Finished Cleanup of User's Temporary Files and Directories.
Oct 10 23:24:55 np0005480824 python3.9[112672]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 23:24:55 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v278: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:24:56 np0005480824 systemd[1]: session-35.scope: Deactivated successfully.
Oct 10 23:24:56 np0005480824 systemd[1]: session-35.scope: Consumed 1min 2.501s CPU time.
Oct 10 23:24:56 np0005480824 systemd-logind[782]: Session 35 logged out. Waiting for processes to exit.
Oct 10 23:24:56 np0005480824 systemd-logind[782]: Removed session 35.
Oct 10 23:24:57 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Oct 10 23:24:57 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Oct 10 23:24:57 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Oct 10 23:24:57 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Oct 10 23:24:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:24:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:24:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:24:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:24:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:24:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:24:57 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v279: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:24:58 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Oct 10 23:24:58 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Oct 10 23:24:58 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:24:59 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v280: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:25:00 np0005480824 systemd-logind[782]: New session 36 of user zuul.
Oct 10 23:25:00 np0005480824 systemd[1]: Started Session 36 of User zuul.
Oct 10 23:25:01 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 8.10 deep-scrub starts
Oct 10 23:25:01 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 8.10 deep-scrub ok
Oct 10 23:25:01 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v281: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:25:01 np0005480824 python3.9[112852]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 23:25:02 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Oct 10 23:25:02 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Oct 10 23:25:03 np0005480824 python3.9[113008]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Oct 10 23:25:03 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 8.5 deep-scrub starts
Oct 10 23:25:03 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 8.5 deep-scrub ok
Oct 10 23:25:03 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v282: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:25:03 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:25:04 np0005480824 python3.9[113161]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 23:25:04 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Oct 10 23:25:04 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Oct 10 23:25:04 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Oct 10 23:25:04 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Oct 10 23:25:05 np0005480824 python3.9[113245]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 10 23:25:05 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 8.f deep-scrub starts
Oct 10 23:25:05 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 8.f deep-scrub ok
Oct 10 23:25:05 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Oct 10 23:25:05 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Oct 10 23:25:05 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v283: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:25:06 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 8.e scrub starts
Oct 10 23:25:06 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 8.e scrub ok
Oct 10 23:25:07 np0005480824 python3.9[113398]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 23:25:07 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 8.a scrub starts
Oct 10 23:25:07 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 8.a scrub ok
Oct 10 23:25:07 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v284: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:25:08 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Oct 10 23:25:08 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Oct 10 23:25:08 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Oct 10 23:25:08 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Oct 10 23:25:08 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:25:09 np0005480824 python3.9[113551]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 10 23:25:09 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v285: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:25:10 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Oct 10 23:25:10 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Oct 10 23:25:10 np0005480824 python3.9[113705]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 23:25:11 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Oct 10 23:25:11 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Oct 10 23:25:11 np0005480824 python3.9[113857]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Oct 10 23:25:11 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v286: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:25:12 np0005480824 python3.9[114007]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 23:25:13 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 11.e deep-scrub starts
Oct 10 23:25:13 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 11.e deep-scrub ok
Oct 10 23:25:13 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Oct 10 23:25:13 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Oct 10 23:25:13 np0005480824 python3.9[114165]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 23:25:13 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v287: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:25:13 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:25:15 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Oct 10 23:25:15 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Oct 10 23:25:15 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 8.17 deep-scrub starts
Oct 10 23:25:15 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 8.17 deep-scrub ok
Oct 10 23:25:15 np0005480824 python3.9[114318]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:25:15 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v288: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:25:16 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Oct 10 23:25:16 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Oct 10 23:25:17 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Oct 10 23:25:17 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Oct 10 23:25:17 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Oct 10 23:25:17 np0005480824 python3.9[114605]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 10 23:25:17 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Oct 10 23:25:17 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v289: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:25:18 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Oct 10 23:25:18 np0005480824 ceph-osd[90443]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Oct 10 23:25:18 np0005480824 python3.9[114755]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 23:25:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:25:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:25:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:25:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:25:57 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v309: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:25:58 np0005480824 rsyslogd[1004]: imjournal: 692 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Oct 10 23:25:58 np0005480824 python3.9[119244]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 23:25:58 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:25:59 np0005480824 python3.9[119398]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 23:25:59 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Oct 10 23:25:59 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Oct 10 23:25:59 np0005480824 python3.9[119550]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 23:25:59 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v310: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:26:00 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Oct 10 23:26:00 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Oct 10 23:26:00 np0005480824 python3.9[119702]: ansible-service_facts Invoked
Oct 10 23:26:00 np0005480824 network[119719]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 10 23:26:00 np0005480824 network[119720]: 'network-scripts' will be removed from distribution in near future.
Oct 10 23:26:00 np0005480824 network[119721]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 10 23:26:01 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Oct 10 23:26:01 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Oct 10 23:26:01 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v311: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:26:02 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Oct 10 23:26:02 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Oct 10 23:26:03 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 9.b scrub starts
Oct 10 23:26:03 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 9.b scrub ok
Oct 10 23:26:03 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 10.b scrub starts
Oct 10 23:26:03 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 10.b scrub ok
Oct 10 23:26:03 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:26:03 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v312: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:26:04 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 10.10 deep-scrub starts
Oct 10 23:26:04 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 10.10 deep-scrub ok
Oct 10 23:26:05 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Oct 10 23:26:05 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Oct 10 23:26:05 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v313: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:26:07 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Oct 10 23:26:07 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Oct 10 23:26:07 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v314: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:26:08 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Oct 10 23:26:08 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Oct 10 23:26:08 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 10.6 deep-scrub starts
Oct 10 23:26:08 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 10.6 deep-scrub ok
Oct 10 23:26:08 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:26:09 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 10.f scrub starts
Oct 10 23:26:09 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 10.f scrub ok
Oct 10 23:26:09 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v315: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:26:10 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Oct 10 23:26:10 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Oct 10 23:26:11 np0005480824 python3.9[120176]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 23:26:11 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v316: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:26:13 np0005480824 python3.9[120329]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Oct 10 23:26:13 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:26:13 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v317: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:26:14 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Oct 10 23:26:14 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Oct 10 23:26:15 np0005480824 python3.9[120481]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:26:15 np0005480824 python3.9[120559]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:26:15 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v318: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:26:16 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 9.d scrub starts
Oct 10 23:26:16 np0005480824 python3.9[120711]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:26:16 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 9.d scrub ok
Oct 10 23:26:16 np0005480824 python3.9[120789]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:26:17 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Oct 10 23:26:17 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Oct 10 23:26:17 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v319: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:26:18 np0005480824 python3.9[120941]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:26:18 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Oct 10 23:26:18 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Oct 10 23:26:18 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:26:19 np0005480824 python3.9[121093]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 23:26:19 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v320: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:26:20 np0005480824 python3.9[121177]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 23:26:20 np0005480824 systemd[1]: session-38.scope: Deactivated successfully.
Oct 10 23:26:20 np0005480824 systemd[1]: session-38.scope: Consumed 25.796s CPU time.
Oct 10 23:26:20 np0005480824 systemd-logind[782]: Session 38 logged out. Waiting for processes to exit.
Oct 10 23:26:20 np0005480824 systemd-logind[782]: Removed session 38.
Oct 10 23:26:21 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v321: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:26:22 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Oct 10 23:26:22 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Oct 10 23:26:23 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Oct 10 23:26:23 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Oct 10 23:26:23 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:26:23 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v322: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:26:25 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v323: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:26:25 np0005480824 systemd-logind[782]: New session 39 of user zuul.
Oct 10 23:26:25 np0005480824 systemd[1]: Started Session 39 of User zuul.
Oct 10 23:26:26 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Oct 10 23:26:26 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Oct 10 23:26:26 np0005480824 python3.9[121360]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:26:27 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Oct 10 23:26:27 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Oct 10 23:26:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Optimize plan auto_2025-10-11_03:26:27
Oct 10 23:26:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 23:26:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] do_upmap
Oct 10 23:26:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] pools ['default.rgw.meta', 'images', 'default.rgw.control', 'backups', 'cephfs.cephfs.meta', 'vms', '.rgw.root', '.mgr', 'cephfs.cephfs.data', 'volumes', 'default.rgw.log']
Oct 10 23:26:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] prepared 0/10 changes
Oct 10 23:26:27 np0005480824 python3.9[121512]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:26:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:26:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:26:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:26:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:26:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:26:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:26:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 23:26:27 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v324: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:26:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 23:26:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:26:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:26:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:26:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:26:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:26:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:26:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:26:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:26:28 np0005480824 python3.9[121590]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:26:28 np0005480824 systemd[1]: session-39.scope: Deactivated successfully.
Oct 10 23:26:28 np0005480824 systemd[1]: session-39.scope: Consumed 1.636s CPU time.
Oct 10 23:26:28 np0005480824 systemd-logind[782]: Session 39 logged out. Waiting for processes to exit.
Oct 10 23:26:28 np0005480824 systemd-logind[782]: Removed session 39.
Oct 10 23:26:28 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:26:29 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v325: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:26:31 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 9.15 deep-scrub starts
Oct 10 23:26:31 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 9.15 deep-scrub ok
Oct 10 23:26:31 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v326: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:26:32 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 9.1b deep-scrub starts
Oct 10 23:26:32 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 9.1b deep-scrub ok
Oct 10 23:26:33 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Oct 10 23:26:33 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Oct 10 23:26:33 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Oct 10 23:26:33 np0005480824 ceph-osd[89401]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Oct 10 23:26:33 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:26:33 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v327: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:26:34 np0005480824 systemd-logind[782]: New session 40 of user zuul.
Oct 10 23:26:34 np0005480824 systemd[1]: Started Session 40 of User zuul.
Oct 10 23:26:35 np0005480824 python3.9[121769]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 23:26:35 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v328: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:26:36 np0005480824 python3.9[121925]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:26:37 np0005480824 python3.9[122175]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:26:37 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:26:37 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:26:37 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 10 23:26:37 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:26:37 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 10 23:26:37 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:26:37 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 9cfaf206-491c-47a0-a48e-933307498b4b does not exist
Oct 10 23:26:37 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev ba8b8c54-df36-4c0e-91a2-086d7fed023f does not exist
Oct 10 23:26:37 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev af9e778f-1a8c-40ca-9b4c-20b0dc59099b does not exist
Oct 10 23:26:37 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 10 23:26:37 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 23:26:37 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 10 23:26:37 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:26:37 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:26:37 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:26:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 23:26:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:26:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 23:26:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:26:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:26:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:26:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:26:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:26:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:26:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:26:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:26:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:26:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 23:26:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:26:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:26:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:26:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 10 23:26:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:26:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 23:26:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:26:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:26:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:26:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 10 23:26:37 np0005480824 python3.9[122309]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.fkbmfvo3 recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:26:37 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v329: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:26:37 np0005480824 podman[122476]: 2025-10-11 03:26:37.976129119 +0000 UTC m=+0.035083482 container create 2db3ce1387261785b54b21e478d3f46c5823b7eedaf09e96017322272a8d8cde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 10 23:26:38 np0005480824 systemd[1]: Started libpod-conmon-2db3ce1387261785b54b21e478d3f46c5823b7eedaf09e96017322272a8d8cde.scope.
Oct 10 23:26:38 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:26:38 np0005480824 podman[122476]: 2025-10-11 03:26:37.961186955 +0000 UTC m=+0.020141338 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:26:38 np0005480824 podman[122476]: 2025-10-11 03:26:38.068749284 +0000 UTC m=+0.127703697 container init 2db3ce1387261785b54b21e478d3f46c5823b7eedaf09e96017322272a8d8cde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_chaplygin, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 10 23:26:38 np0005480824 podman[122476]: 2025-10-11 03:26:38.075292228 +0000 UTC m=+0.134246591 container start 2db3ce1387261785b54b21e478d3f46c5823b7eedaf09e96017322272a8d8cde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_chaplygin, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 10 23:26:38 np0005480824 podman[122476]: 2025-10-11 03:26:38.078366021 +0000 UTC m=+0.137320384 container attach 2db3ce1387261785b54b21e478d3f46c5823b7eedaf09e96017322272a8d8cde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_chaplygin, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 10 23:26:38 np0005480824 xenodochial_chaplygin[122498]: 167 167
Oct 10 23:26:38 np0005480824 systemd[1]: libpod-2db3ce1387261785b54b21e478d3f46c5823b7eedaf09e96017322272a8d8cde.scope: Deactivated successfully.
Oct 10 23:26:38 np0005480824 podman[122476]: 2025-10-11 03:26:38.081458905 +0000 UTC m=+0.140413278 container died 2db3ce1387261785b54b21e478d3f46c5823b7eedaf09e96017322272a8d8cde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 10 23:26:38 np0005480824 systemd[1]: var-lib-containers-storage-overlay-64bdcfd2898a150b3b74e68be2d30f55522f798802718e278bf4d160607c56ec-merged.mount: Deactivated successfully.
Oct 10 23:26:38 np0005480824 podman[122476]: 2025-10-11 03:26:38.123727086 +0000 UTC m=+0.182681459 container remove 2db3ce1387261785b54b21e478d3f46c5823b7eedaf09e96017322272a8d8cde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_chaplygin, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 10 23:26:38 np0005480824 systemd[1]: libpod-conmon-2db3ce1387261785b54b21e478d3f46c5823b7eedaf09e96017322272a8d8cde.scope: Deactivated successfully.
Oct 10 23:26:38 np0005480824 podman[122591]: 2025-10-11 03:26:38.278057112 +0000 UTC m=+0.039675110 container create 98518e822daa1d6510c7d7a087bcbb9f1136def90a5e79af1b8e63aacae15f64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_blackburn, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:26:38 np0005480824 systemd[1]: Started libpod-conmon-98518e822daa1d6510c7d7a087bcbb9f1136def90a5e79af1b8e63aacae15f64.scope.
Oct 10 23:26:38 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:26:38 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/829a8184dfd6c6efb27c7031d8109fe62e8c86b9b93518f0e02766b5bc624ecd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:26:38 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/829a8184dfd6c6efb27c7031d8109fe62e8c86b9b93518f0e02766b5bc624ecd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:26:38 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/829a8184dfd6c6efb27c7031d8109fe62e8c86b9b93518f0e02766b5bc624ecd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:26:38 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/829a8184dfd6c6efb27c7031d8109fe62e8c86b9b93518f0e02766b5bc624ecd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:26:38 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/829a8184dfd6c6efb27c7031d8109fe62e8c86b9b93518f0e02766b5bc624ecd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 23:26:38 np0005480824 podman[122591]: 2025-10-11 03:26:38.352967468 +0000 UTC m=+0.114585466 container init 98518e822daa1d6510c7d7a087bcbb9f1136def90a5e79af1b8e63aacae15f64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:26:38 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:26:38 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:26:38 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:26:38 np0005480824 podman[122591]: 2025-10-11 03:26:38.260486217 +0000 UTC m=+0.022104215 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:26:38 np0005480824 podman[122591]: 2025-10-11 03:26:38.364293056 +0000 UTC m=+0.125911034 container start 98518e822daa1d6510c7d7a087bcbb9f1136def90a5e79af1b8e63aacae15f64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 10 23:26:38 np0005480824 podman[122591]: 2025-10-11 03:26:38.367163784 +0000 UTC m=+0.128781762 container attach 98518e822daa1d6510c7d7a087bcbb9f1136def90a5e79af1b8e63aacae15f64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_blackburn, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:26:38 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Oct 10 23:26:38 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Oct 10 23:26:38 np0005480824 python3.9[122663]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:26:38 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:26:38 np0005480824 python3.9[122743]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.maqb38vp recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:26:39 np0005480824 brave_blackburn[122658]: --> passed data devices: 0 physical, 3 LVM
Oct 10 23:26:39 np0005480824 brave_blackburn[122658]: --> relative data size: 1.0
Oct 10 23:26:39 np0005480824 brave_blackburn[122658]: --> All data devices are unavailable
Oct 10 23:26:39 np0005480824 systemd[1]: libpod-98518e822daa1d6510c7d7a087bcbb9f1136def90a5e79af1b8e63aacae15f64.scope: Deactivated successfully.
Oct 10 23:26:39 np0005480824 podman[122591]: 2025-10-11 03:26:39.361194917 +0000 UTC m=+1.122812895 container died 98518e822daa1d6510c7d7a087bcbb9f1136def90a5e79af1b8e63aacae15f64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 10 23:26:39 np0005480824 systemd[1]: var-lib-containers-storage-overlay-829a8184dfd6c6efb27c7031d8109fe62e8c86b9b93518f0e02766b5bc624ecd-merged.mount: Deactivated successfully.
Oct 10 23:26:39 np0005480824 python3.9[122930]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:26:39 np0005480824 podman[122591]: 2025-10-11 03:26:39.639689526 +0000 UTC m=+1.401307534 container remove 98518e822daa1d6510c7d7a087bcbb9f1136def90a5e79af1b8e63aacae15f64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_blackburn, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 10 23:26:39 np0005480824 systemd[1]: libpod-conmon-98518e822daa1d6510c7d7a087bcbb9f1136def90a5e79af1b8e63aacae15f64.scope: Deactivated successfully.
Oct 10 23:26:39 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v330: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:26:40 np0005480824 python3.9[123203]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:26:40 np0005480824 podman[123225]: 2025-10-11 03:26:40.285238103 +0000 UTC m=+0.055290122 container create 5a294f14de34307556bb72095cb6c861ff460a3425435887bce4bda974c1078f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 10 23:26:40 np0005480824 systemd[1]: Started libpod-conmon-5a294f14de34307556bb72095cb6c861ff460a3425435887bce4bda974c1078f.scope.
Oct 10 23:26:40 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:26:40 np0005480824 podman[123225]: 2025-10-11 03:26:40.266370715 +0000 UTC m=+0.036422744 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:26:40 np0005480824 podman[123225]: 2025-10-11 03:26:40.40922585 +0000 UTC m=+0.179277909 container init 5a294f14de34307556bb72095cb6c861ff460a3425435887bce4bda974c1078f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_cerf, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:26:40 np0005480824 podman[123225]: 2025-10-11 03:26:40.417272621 +0000 UTC m=+0.187324670 container start 5a294f14de34307556bb72095cb6c861ff460a3425435887bce4bda974c1078f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:26:40 np0005480824 stoic_cerf[123243]: 167 167
Oct 10 23:26:40 np0005480824 systemd[1]: libpod-5a294f14de34307556bb72095cb6c861ff460a3425435887bce4bda974c1078f.scope: Deactivated successfully.
Oct 10 23:26:40 np0005480824 podman[123225]: 2025-10-11 03:26:40.428723382 +0000 UTC m=+0.198775431 container attach 5a294f14de34307556bb72095cb6c861ff460a3425435887bce4bda974c1078f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_cerf, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:26:40 np0005480824 podman[123225]: 2025-10-11 03:26:40.430071714 +0000 UTC m=+0.200123753 container died 5a294f14de34307556bb72095cb6c861ff460a3425435887bce4bda974c1078f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_cerf, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:26:40 np0005480824 systemd[1]: var-lib-containers-storage-overlay-f3c67be1b4f3fd2576b75df432a8baacd88a96623de084e14d0130fd5fced7d5-merged.mount: Deactivated successfully.
Oct 10 23:26:40 np0005480824 podman[123225]: 2025-10-11 03:26:40.533975356 +0000 UTC m=+0.304027365 container remove 5a294f14de34307556bb72095cb6c861ff460a3425435887bce4bda974c1078f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_cerf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 10 23:26:40 np0005480824 systemd[1]: libpod-conmon-5a294f14de34307556bb72095cb6c861ff460a3425435887bce4bda974c1078f.scope: Deactivated successfully.
Oct 10 23:26:40 np0005480824 python3.9[123337]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:26:40 np0005480824 podman[123343]: 2025-10-11 03:26:40.829651922 +0000 UTC m=+0.122397861 container create 69dfe1e27dd8aa45e5670d7f7eb1d5f4a876d6f805f832df06fd16f9217bc9dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_montalcini, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 10 23:26:40 np0005480824 podman[123343]: 2025-10-11 03:26:40.753435216 +0000 UTC m=+0.046181195 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:26:40 np0005480824 systemd[1]: Started libpod-conmon-69dfe1e27dd8aa45e5670d7f7eb1d5f4a876d6f805f832df06fd16f9217bc9dd.scope.
Oct 10 23:26:40 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:26:40 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7feb53748d0307cc53a6b7acbd1077f10009f719b969cbf1e649fd8dbb7e37ee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:26:40 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7feb53748d0307cc53a6b7acbd1077f10009f719b969cbf1e649fd8dbb7e37ee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:26:40 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7feb53748d0307cc53a6b7acbd1077f10009f719b969cbf1e649fd8dbb7e37ee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:26:40 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7feb53748d0307cc53a6b7acbd1077f10009f719b969cbf1e649fd8dbb7e37ee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:26:40 np0005480824 podman[123343]: 2025-10-11 03:26:40.964680702 +0000 UTC m=+0.257426691 container init 69dfe1e27dd8aa45e5670d7f7eb1d5f4a876d6f805f832df06fd16f9217bc9dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 10 23:26:40 np0005480824 podman[123343]: 2025-10-11 03:26:40.977967947 +0000 UTC m=+0.270713846 container start 69dfe1e27dd8aa45e5670d7f7eb1d5f4a876d6f805f832df06fd16f9217bc9dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_montalcini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 10 23:26:40 np0005480824 podman[123343]: 2025-10-11 03:26:40.990684748 +0000 UTC m=+0.283430747 container attach 69dfe1e27dd8aa45e5670d7f7eb1d5f4a876d6f805f832df06fd16f9217bc9dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_montalcini, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:26:41 np0005480824 python3.9[123515]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]: {
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:    "0": [
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:        {
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:            "devices": [
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:                "/dev/loop3"
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:            ],
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:            "lv_name": "ceph_lv0",
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:            "lv_size": "21470642176",
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0d82ce-20ea-470d-959e-f67202028a60,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:            "lv_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:            "name": "ceph_lv0",
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:            "tags": {
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:                "ceph.block_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:                "ceph.cluster_name": "ceph",
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:                "ceph.crush_device_class": "",
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:                "ceph.encrypted": "0",
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:                "ceph.osd_fsid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:                "ceph.osd_id": "0",
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:                "ceph.type": "block",
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:                "ceph.vdo": "0"
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:            },
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:            "type": "block",
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:            "vg_name": "ceph_vg0"
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:        }
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:    ],
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:    "1": [
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:        {
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:            "devices": [
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:                "/dev/loop4"
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:            ],
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:            "lv_name": "ceph_lv1",
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:            "lv_size": "21470642176",
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6875119e-c210-4ad1-aca9-6a8084a5ecc8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:            "lv_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:            "name": "ceph_lv1",
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:            "tags": {
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:                "ceph.block_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:                "ceph.cluster_name": "ceph",
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:                "ceph.crush_device_class": "",
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:                "ceph.encrypted": "0",
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:                "ceph.osd_fsid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:                "ceph.osd_id": "1",
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:                "ceph.type": "block",
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:                "ceph.vdo": "0"
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:            },
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:            "type": "block",
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:            "vg_name": "ceph_vg1"
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:        }
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:    ],
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:    "2": [
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:        {
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:            "devices": [
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:                "/dev/loop5"
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:            ],
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:            "lv_name": "ceph_lv2",
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:            "lv_size": "21470642176",
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e86945e8-6909-4584-9098-cee0dfe9add4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:            "lv_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:            "name": "ceph_lv2",
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:            "tags": {
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:                "ceph.block_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:                "ceph.cluster_name": "ceph",
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:                "ceph.crush_device_class": "",
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:                "ceph.encrypted": "0",
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:                "ceph.osd_fsid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:                "ceph.osd_id": "2",
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:                "ceph.type": "block",
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:                "ceph.vdo": "0"
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:            },
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:            "type": "block",
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:            "vg_name": "ceph_vg2"
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:        }
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]:    ]
Oct 10 23:26:41 np0005480824 nifty_montalcini[123389]: }
Oct 10 23:26:41 np0005480824 systemd[1]: libpod-69dfe1e27dd8aa45e5670d7f7eb1d5f4a876d6f805f832df06fd16f9217bc9dd.scope: Deactivated successfully.
Oct 10 23:26:41 np0005480824 podman[123343]: 2025-10-11 03:26:41.791188325 +0000 UTC m=+1.083934214 container died 69dfe1e27dd8aa45e5670d7f7eb1d5f4a876d6f805f832df06fd16f9217bc9dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_montalcini, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:26:41 np0005480824 systemd[1]: var-lib-containers-storage-overlay-7feb53748d0307cc53a6b7acbd1077f10009f719b969cbf1e649fd8dbb7e37ee-merged.mount: Deactivated successfully.
Oct 10 23:26:41 np0005480824 podman[123343]: 2025-10-11 03:26:41.871613271 +0000 UTC m=+1.164359180 container remove 69dfe1e27dd8aa45e5670d7f7eb1d5f4a876d6f805f832df06fd16f9217bc9dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_montalcini, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:26:41 np0005480824 systemd[1]: libpod-conmon-69dfe1e27dd8aa45e5670d7f7eb1d5f4a876d6f805f832df06fd16f9217bc9dd.scope: Deactivated successfully.
Oct 10 23:26:41 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v331: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:26:42 np0005480824 python3.9[123609]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:26:42 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Oct 10 23:26:42 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Oct 10 23:26:42 np0005480824 podman[123869]: 2025-10-11 03:26:42.524613664 +0000 UTC m=+0.046205336 container create b4532a2e4d304c651f701e9e3b74c4cab738dd9bdc93ecf4e6fcfba18d50571e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 10 23:26:42 np0005480824 systemd[1]: Started libpod-conmon-b4532a2e4d304c651f701e9e3b74c4cab738dd9bdc93ecf4e6fcfba18d50571e.scope.
Oct 10 23:26:42 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:26:42 np0005480824 podman[123869]: 2025-10-11 03:26:42.506780771 +0000 UTC m=+0.028372453 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:26:42 np0005480824 podman[123869]: 2025-10-11 03:26:42.61346446 +0000 UTC m=+0.135056142 container init b4532a2e4d304c651f701e9e3b74c4cab738dd9bdc93ecf4e6fcfba18d50571e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_saha, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 10 23:26:42 np0005480824 podman[123869]: 2025-10-11 03:26:42.619529243 +0000 UTC m=+0.141120905 container start b4532a2e4d304c651f701e9e3b74c4cab738dd9bdc93ecf4e6fcfba18d50571e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:26:42 np0005480824 heuristic_saha[123918]: 167 167
Oct 10 23:26:42 np0005480824 systemd[1]: libpod-b4532a2e4d304c651f701e9e3b74c4cab738dd9bdc93ecf4e6fcfba18d50571e.scope: Deactivated successfully.
Oct 10 23:26:42 np0005480824 podman[123869]: 2025-10-11 03:26:42.624636884 +0000 UTC m=+0.146228586 container attach b4532a2e4d304c651f701e9e3b74c4cab738dd9bdc93ecf4e6fcfba18d50571e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 10 23:26:42 np0005480824 podman[123869]: 2025-10-11 03:26:42.625150786 +0000 UTC m=+0.146742488 container died b4532a2e4d304c651f701e9e3b74c4cab738dd9bdc93ecf4e6fcfba18d50571e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:26:42 np0005480824 systemd[1]: var-lib-containers-storage-overlay-5b88f700825bb3d46dc67986fac9ce91ea06e9450a9ba4f05c507a432710f5ee-merged.mount: Deactivated successfully.
Oct 10 23:26:42 np0005480824 podman[123869]: 2025-10-11 03:26:42.679682508 +0000 UTC m=+0.201274200 container remove b4532a2e4d304c651f701e9e3b74c4cab738dd9bdc93ecf4e6fcfba18d50571e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_saha, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:26:42 np0005480824 systemd[1]: libpod-conmon-b4532a2e4d304c651f701e9e3b74c4cab738dd9bdc93ecf4e6fcfba18d50571e.scope: Deactivated successfully.
Oct 10 23:26:42 np0005480824 python3.9[123923]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:26:42 np0005480824 podman[123945]: 2025-10-11 03:26:42.881446359 +0000 UTC m=+0.078135763 container create f4dc647cddb2fc43fd31bdcb7b3070c8b362d2773752094c52c9f1276f499730 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:26:42 np0005480824 systemd[1]: Started libpod-conmon-f4dc647cddb2fc43fd31bdcb7b3070c8b362d2773752094c52c9f1276f499730.scope.
Oct 10 23:26:42 np0005480824 podman[123945]: 2025-10-11 03:26:42.845967059 +0000 UTC m=+0.042656543 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:26:42 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:26:42 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fa787964a6de1675005fb230011651fdc03afe46be3b234c3a941f6d3a772d4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:26:42 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fa787964a6de1675005fb230011651fdc03afe46be3b234c3a941f6d3a772d4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:26:42 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fa787964a6de1675005fb230011651fdc03afe46be3b234c3a941f6d3a772d4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:26:42 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fa787964a6de1675005fb230011651fdc03afe46be3b234c3a941f6d3a772d4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:26:42 np0005480824 podman[123945]: 2025-10-11 03:26:42.966228608 +0000 UTC m=+0.162917992 container init f4dc647cddb2fc43fd31bdcb7b3070c8b362d2773752094c52c9f1276f499730 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:26:42 np0005480824 podman[123945]: 2025-10-11 03:26:42.980121588 +0000 UTC m=+0.176810992 container start f4dc647cddb2fc43fd31bdcb7b3070c8b362d2773752094c52c9f1276f499730 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 10 23:26:42 np0005480824 podman[123945]: 2025-10-11 03:26:42.984380418 +0000 UTC m=+0.181069822 container attach f4dc647cddb2fc43fd31bdcb7b3070c8b362d2773752094c52c9f1276f499730 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_carson, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 10 23:26:43 np0005480824 python3.9[124118]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:26:43 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:26:43 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v332: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:26:43 np0005480824 nifty_carson[123975]: {
Oct 10 23:26:43 np0005480824 nifty_carson[123975]:    "1d0d82ce-20ea-470d-959e-f67202028a60": {
Oct 10 23:26:43 np0005480824 nifty_carson[123975]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:26:43 np0005480824 nifty_carson[123975]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 10 23:26:43 np0005480824 nifty_carson[123975]:        "osd_id": 0,
Oct 10 23:26:43 np0005480824 nifty_carson[123975]:        "osd_uuid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:26:43 np0005480824 nifty_carson[123975]:        "type": "bluestore"
Oct 10 23:26:43 np0005480824 nifty_carson[123975]:    },
Oct 10 23:26:43 np0005480824 nifty_carson[123975]:    "6875119e-c210-4ad1-aca9-6a8084a5ecc8": {
Oct 10 23:26:43 np0005480824 nifty_carson[123975]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:26:43 np0005480824 nifty_carson[123975]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 10 23:26:43 np0005480824 nifty_carson[123975]:        "osd_id": 1,
Oct 10 23:26:43 np0005480824 nifty_carson[123975]:        "osd_uuid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:26:43 np0005480824 nifty_carson[123975]:        "type": "bluestore"
Oct 10 23:26:43 np0005480824 nifty_carson[123975]:    },
Oct 10 23:26:43 np0005480824 nifty_carson[123975]:    "e86945e8-6909-4584-9098-cee0dfe9add4": {
Oct 10 23:26:43 np0005480824 nifty_carson[123975]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:26:43 np0005480824 nifty_carson[123975]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 10 23:26:43 np0005480824 nifty_carson[123975]:        "osd_id": 2,
Oct 10 23:26:43 np0005480824 nifty_carson[123975]:        "osd_uuid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:26:43 np0005480824 nifty_carson[123975]:        "type": "bluestore"
Oct 10 23:26:43 np0005480824 nifty_carson[123975]:    }
Oct 10 23:26:43 np0005480824 nifty_carson[123975]: }
Oct 10 23:26:43 np0005480824 systemd[1]: libpod-f4dc647cddb2fc43fd31bdcb7b3070c8b362d2773752094c52c9f1276f499730.scope: Deactivated successfully.
Oct 10 23:26:43 np0005480824 podman[123945]: 2025-10-11 03:26:43.998744173 +0000 UTC m=+1.195433557 container died f4dc647cddb2fc43fd31bdcb7b3070c8b362d2773752094c52c9f1276f499730 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:26:43 np0005480824 systemd[1]: libpod-f4dc647cddb2fc43fd31bdcb7b3070c8b362d2773752094c52c9f1276f499730.scope: Consumed 1.024s CPU time.
Oct 10 23:26:44 np0005480824 systemd[1]: var-lib-containers-storage-overlay-7fa787964a6de1675005fb230011651fdc03afe46be3b234c3a941f6d3a772d4-merged.mount: Deactivated successfully.
Oct 10 23:26:44 np0005480824 podman[123945]: 2025-10-11 03:26:44.07206129 +0000 UTC m=+1.268750664 container remove f4dc647cddb2fc43fd31bdcb7b3070c8b362d2773752094c52c9f1276f499730 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 10 23:26:44 np0005480824 systemd[1]: libpod-conmon-f4dc647cddb2fc43fd31bdcb7b3070c8b362d2773752094c52c9f1276f499730.scope: Deactivated successfully.
Oct 10 23:26:44 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 10 23:26:44 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:26:44 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 10 23:26:44 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:26:44 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev c7534325-a1bf-4cd1-9828-01fcb656afa7 does not exist
Oct 10 23:26:44 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev f03a0cde-fb44-441e-a5da-3f44c0170164 does not exist
Oct 10 23:26:44 np0005480824 python3.9[124234]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:26:44 np0005480824 python3.9[124440]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:26:45 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:26:45 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:26:45 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Oct 10 23:26:45 np0005480824 ceph-osd[88325]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Oct 10 23:26:45 np0005480824 python3.9[124518]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:26:45 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v333: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:26:46 np0005480824 python3.9[124670]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 23:26:46 np0005480824 systemd[1]: Reloading.
Oct 10 23:26:47 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:26:47 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:26:47 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v334: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:26:48 np0005480824 python3.9[124860]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:26:48 np0005480824 python3.9[124938]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:26:48 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:26:49 np0005480824 python3.9[125090]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:26:49 np0005480824 python3.9[125168]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:26:49 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v335: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:26:50 np0005480824 python3.9[125320]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 23:26:50 np0005480824 systemd[1]: Reloading.
Oct 10 23:26:50 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:26:50 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:26:51 np0005480824 systemd[1]: Starting Create netns directory...
Oct 10 23:26:51 np0005480824 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 10 23:26:51 np0005480824 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 10 23:26:51 np0005480824 systemd[1]: Finished Create netns directory.
Oct 10 23:26:51 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v336: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:26:52 np0005480824 python3.9[125512]: ansible-ansible.builtin.service_facts Invoked
Oct 10 23:26:52 np0005480824 network[125529]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 10 23:26:52 np0005480824 network[125530]: 'network-scripts' will be removed from distribution in near future.
Oct 10 23:26:52 np0005480824 network[125531]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 10 23:26:53 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:26:53 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v337: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:26:55 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v338: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:26:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:26:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:26:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:26:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:26:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:26:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:26:57 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v339: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:26:58 np0005480824 python3.9[125796]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:26:58 np0005480824 python3.9[125874]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:26:58 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:26:59 np0005480824 python3.9[126026]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:26:59 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v340: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:27:00 np0005480824 python3.9[126178]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:27:01 np0005480824 python3.9[126256]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:27:01 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v341: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:27:02 np0005480824 python3.9[126408]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct 10 23:27:02 np0005480824 systemd[1]: Starting Time & Date Service...
Oct 10 23:27:02 np0005480824 systemd[1]: Started Time & Date Service.
Oct 10 23:27:02 np0005480824 python3.9[126564]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:27:03 np0005480824 python3.9[126716]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:27:03 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:27:03 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v342: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:27:04 np0005480824 python3.9[126794]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:27:05 np0005480824 python3.9[126946]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:27:05 np0005480824 python3.9[127024]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.wm7nfl5o recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:27:05 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v343: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:27:06 np0005480824 python3.9[127176]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:27:06 np0005480824 python3.9[127254]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:27:07 np0005480824 python3.9[127406]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:27:07 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v344: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:27:08 np0005480824 python3[127559]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 10 23:27:08 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:27:09 np0005480824 python3.9[127711]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:27:09 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v345: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:27:10 np0005480824 python3.9[127789]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:27:10 np0005480824 python3.9[127941]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:27:11 np0005480824 python3.9[128019]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:27:11 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v346: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:27:12 np0005480824 python3.9[128171]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:27:12 np0005480824 python3.9[128249]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:27:13 np0005480824 python3.9[128401]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:27:13 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:27:13 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v347: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:27:14 np0005480824 python3.9[128479]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:27:14 np0005480824 python3.9[128631]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:27:15 np0005480824 python3.9[128709]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:27:15 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v348: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:27:16 np0005480824 python3.9[128861]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:27:17 np0005480824 python3.9[129016]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:27:17 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v349: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:27:18 np0005480824 python3.9[129168]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:27:18 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:27:19 np0005480824 python3.9[129320]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:27:19 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v350: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:27:19 np0005480824 python3.9[129472]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct 10 23:27:20 np0005480824 python3.9[129624]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct 10 23:27:21 np0005480824 systemd[1]: session-40.scope: Deactivated successfully.
Oct 10 23:27:21 np0005480824 systemd[1]: session-40.scope: Consumed 34.108s CPU time.
Oct 10 23:27:21 np0005480824 systemd-logind[782]: Session 40 logged out. Waiting for processes to exit.
Oct 10 23:27:21 np0005480824 systemd-logind[782]: Removed session 40.
Oct 10 23:27:21 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v351: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:27:23 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:27:23 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v352: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:27:25 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v353: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:27:26 np0005480824 systemd-logind[782]: New session 41 of user zuul.
Oct 10 23:27:26 np0005480824 systemd[1]: Started Session 41 of User zuul.
Oct 10 23:27:27 np0005480824 python3.9[129804]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Oct 10 23:27:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Optimize plan auto_2025-10-11_03:27:27
Oct 10 23:27:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 23:27:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] do_upmap
Oct 10 23:27:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', '.mgr', 'volumes', 'vms', 'images', 'cephfs.cephfs.data', 'default.rgw.meta', 'cephfs.cephfs.meta', 'backups', 'default.rgw.control']
Oct 10 23:27:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] prepared 0/10 changes
Oct 10 23:27:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:27:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:27:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:27:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:27:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:27:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:27:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 23:27:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:27:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 23:27:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:27:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:27:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:27:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:27:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:27:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:27:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:27:27 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v354: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:27:28 np0005480824 python3.9[129956]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 23:27:28 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:27:29 np0005480824 python3.9[130110]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Oct 10 23:27:29 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v355: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:27:30 np0005480824 python3.9[130262]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.b04abwu5 follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:27:31 np0005480824 python3.9[130387]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.b04abwu5 mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760153249.7217827-44-87941461894612/.source.b04abwu5 _original_basename=.pf7sq40f follow=False checksum=823ccd4c7d1ead2f1667dbbc221f40214ab5f536 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:27:31 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v356: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:27:32 np0005480824 python3.9[130539]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 23:27:32 np0005480824 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 10 23:27:33 np0005480824 python3.9[130693]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCuaghxuFn5N7g3goz6jbrCuMuntUZ/KPqqCfNc3GmoqpkCGnl9cL4t+DrEpTfDHAfkLeeRF9uL85ptfxRqGgNSiyvd6ROXYbkubfKL7ihbFefj28MUgBmxXyN6dLZJe5ctDokqTrz5xUs68UD7AX98wjV0CjvdN053AKQKgnIaXFC9GnKf7JFFGofUOHHFAyplUr5NLa7vMmueq5s8/BJji3itNm/SZhxGRrmnrIO8c7OyNz7mtHSx4jw67bT1IGMRXaB3lT36FavxSG9pVIIf5Z9C8ejT/CDdOqLyCPx4DilkmI9vESmmtizkmNkIJH4vli9DPR17VJQlsoiSX+1KhuYZFoNDapfW2LRZ3NZp+OBFrMhurnRRU4RW7/mU4jDioVC36a6Pd6lfacE1Ry0QxWpdnf4lA9VIQy4NFp/Lx8OLZHy4i+LVHUWYE68hPIvWV2Gi5FscGqT0LSnv3jo/1ZIAhcd2bAGcooManLGpZU3BYHmJQrn0Yu5iRIJr5I0=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFaeUBbzAX9xKqQNRO4zBxAap0/KOun2IfzZdcCA8z0M#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMTXIERBPc8ILYMg5XePo7yQXX+O1LhwShKOskfgLVi04dlPv7WSDSt52XOdokKAKFBaRrtFt4Sftp0eim5u/R0=#012 create=True mode=0644 path=/tmp/ansible.b04abwu5 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:27:33 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:27:33 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v357: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:27:34 np0005480824 python3.9[130845]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.b04abwu5' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:27:34 np0005480824 python3.9[130999]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.b04abwu5 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:27:35 np0005480824 systemd-logind[782]: Session 41 logged out. Waiting for processes to exit.
Oct 10 23:27:35 np0005480824 systemd[1]: session-41.scope: Deactivated successfully.
Oct 10 23:27:35 np0005480824 systemd[1]: session-41.scope: Consumed 5.877s CPU time.
Oct 10 23:27:35 np0005480824 systemd-logind[782]: Removed session 41.
Oct 10 23:27:35 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v358: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:27:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 23:27:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:27:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 23:27:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:27:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:27:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:27:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:27:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:27:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:27:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:27:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:27:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:27:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 23:27:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:27:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:27:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:27:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 10 23:27:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:27:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 23:27:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:27:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:27:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:27:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 10 23:27:37 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v359: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:27:38 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:27:39 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v360: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:27:41 np0005480824 systemd-logind[782]: New session 42 of user zuul.
Oct 10 23:27:41 np0005480824 systemd[1]: Started Session 42 of User zuul.
Oct 10 23:27:41 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v361: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:27:42 np0005480824 python3.9[131177]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 23:27:43 np0005480824 python3.9[131333]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct 10 23:27:43 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:27:43 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v362: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:27:44 np0005480824 python3.9[131487]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 10 23:27:45 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:27:45 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:27:45 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 10 23:27:45 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:27:45 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 10 23:27:45 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:27:45 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 0af4a264-6e8a-4be5-8856-91ae3bf80edb does not exist
Oct 10 23:27:45 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev e559b896-a072-4c28-ab17-32253c00ddd9 does not exist
Oct 10 23:27:45 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 0815850c-fcb7-496b-8ff9-0960de520ac9 does not exist
Oct 10 23:27:45 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 10 23:27:45 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 23:27:45 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 10 23:27:45 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:27:45 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:27:45 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:27:45 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:27:45 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:27:45 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:27:45 np0005480824 python3.9[131796]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:27:45 np0005480824 podman[131956]: 2025-10-11 03:27:45.835045898 +0000 UTC m=+0.056353934 container create 66a1feed8ee0c80a32423397a681365266807e28b7e86525e7dc5c26cd56644e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:27:45 np0005480824 systemd[1]: Started libpod-conmon-66a1feed8ee0c80a32423397a681365266807e28b7e86525e7dc5c26cd56644e.scope.
Oct 10 23:27:45 np0005480824 podman[131956]: 2025-10-11 03:27:45.814770989 +0000 UTC m=+0.036079055 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:27:45 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:27:45 np0005480824 podman[131956]: 2025-10-11 03:27:45.932737936 +0000 UTC m=+0.154045992 container init 66a1feed8ee0c80a32423397a681365266807e28b7e86525e7dc5c26cd56644e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_kare, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 10 23:27:45 np0005480824 podman[131956]: 2025-10-11 03:27:45.945160769 +0000 UTC m=+0.166468795 container start 66a1feed8ee0c80a32423397a681365266807e28b7e86525e7dc5c26cd56644e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_kare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:27:45 np0005480824 podman[131956]: 2025-10-11 03:27:45.948802657 +0000 UTC m=+0.170110723 container attach 66a1feed8ee0c80a32423397a681365266807e28b7e86525e7dc5c26cd56644e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_kare, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 10 23:27:45 np0005480824 pedantic_kare[132006]: 167 167
Oct 10 23:27:45 np0005480824 systemd[1]: libpod-66a1feed8ee0c80a32423397a681365266807e28b7e86525e7dc5c26cd56644e.scope: Deactivated successfully.
Oct 10 23:27:45 np0005480824 conmon[132006]: conmon 66a1feed8ee0c80a3242 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-66a1feed8ee0c80a32423397a681365266807e28b7e86525e7dc5c26cd56644e.scope/container/memory.events
Oct 10 23:27:45 np0005480824 podman[131956]: 2025-10-11 03:27:45.954712262 +0000 UTC m=+0.176020298 container died 66a1feed8ee0c80a32423397a681365266807e28b7e86525e7dc5c26cd56644e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:27:45 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v363: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:27:45 np0005480824 systemd[1]: var-lib-containers-storage-overlay-3abdb87f4f5a9932b2eb2797f293b096e53378044d020d11cfcf6f1ad48e87e3-merged.mount: Deactivated successfully.
Oct 10 23:27:46 np0005480824 podman[131956]: 2025-10-11 03:27:46.012027345 +0000 UTC m=+0.233335401 container remove 66a1feed8ee0c80a32423397a681365266807e28b7e86525e7dc5c26cd56644e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_kare, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:27:46 np0005480824 systemd[1]: libpod-conmon-66a1feed8ee0c80a32423397a681365266807e28b7e86525e7dc5c26cd56644e.scope: Deactivated successfully.
Oct 10 23:27:46 np0005480824 podman[132054]: 2025-10-11 03:27:46.2333221 +0000 UTC m=+0.065654591 container create 42d98096f557286af66c0bdb0fdee05e9a9425e99328c38ee365e6287c699c27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 10 23:27:46 np0005480824 systemd[1]: Started libpod-conmon-42d98096f557286af66c0bdb0fdee05e9a9425e99328c38ee365e6287c699c27.scope.
Oct 10 23:27:46 np0005480824 podman[132054]: 2025-10-11 03:27:46.207978993 +0000 UTC m=+0.040311554 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:27:46 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:27:46 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b374e13dfa63a0f7e9c64e80c047fbcd0ad4c8e29e16bbd5c1a45ae80fef27e9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:27:46 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b374e13dfa63a0f7e9c64e80c047fbcd0ad4c8e29e16bbd5c1a45ae80fef27e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:27:46 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b374e13dfa63a0f7e9c64e80c047fbcd0ad4c8e29e16bbd5c1a45ae80fef27e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:27:46 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b374e13dfa63a0f7e9c64e80c047fbcd0ad4c8e29e16bbd5c1a45ae80fef27e9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:27:46 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b374e13dfa63a0f7e9c64e80c047fbcd0ad4c8e29e16bbd5c1a45ae80fef27e9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 23:27:46 np0005480824 podman[132054]: 2025-10-11 03:27:46.373095739 +0000 UTC m=+0.205428290 container init 42d98096f557286af66c0bdb0fdee05e9a9425e99328c38ee365e6287c699c27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_moore, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:27:46 np0005480824 podman[132054]: 2025-10-11 03:27:46.386634985 +0000 UTC m=+0.218967506 container start 42d98096f557286af66c0bdb0fdee05e9a9425e99328c38ee365e6287c699c27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:27:46 np0005480824 podman[132054]: 2025-10-11 03:27:46.390993728 +0000 UTC m=+0.223326239 container attach 42d98096f557286af66c0bdb0fdee05e9a9425e99328c38ee365e6287c699c27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Oct 10 23:27:46 np0005480824 python3.9[132126]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 23:27:47 np0005480824 sweet_moore[132118]: --> passed data devices: 0 physical, 3 LVM
Oct 10 23:27:47 np0005480824 sweet_moore[132118]: --> relative data size: 1.0
Oct 10 23:27:47 np0005480824 sweet_moore[132118]: --> All data devices are unavailable
Oct 10 23:27:47 np0005480824 systemd[1]: libpod-42d98096f557286af66c0bdb0fdee05e9a9425e99328c38ee365e6287c699c27.scope: Deactivated successfully.
Oct 10 23:27:47 np0005480824 podman[132054]: 2025-10-11 03:27:47.577972647 +0000 UTC m=+1.410305128 container died 42d98096f557286af66c0bdb0fdee05e9a9425e99328c38ee365e6287c699c27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_moore, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 10 23:27:47 np0005480824 systemd[1]: libpod-42d98096f557286af66c0bdb0fdee05e9a9425e99328c38ee365e6287c699c27.scope: Consumed 1.123s CPU time.
Oct 10 23:27:47 np0005480824 systemd[1]: var-lib-containers-storage-overlay-b374e13dfa63a0f7e9c64e80c047fbcd0ad4c8e29e16bbd5c1a45ae80fef27e9-merged.mount: Deactivated successfully.
Oct 10 23:27:47 np0005480824 podman[132054]: 2025-10-11 03:27:47.65940238 +0000 UTC m=+1.491734881 container remove 42d98096f557286af66c0bdb0fdee05e9a9425e99328c38ee365e6287c699c27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_moore, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:27:47 np0005480824 systemd[1]: libpod-conmon-42d98096f557286af66c0bdb0fdee05e9a9425e99328c38ee365e6287c699c27.scope: Deactivated successfully.
Oct 10 23:27:47 np0005480824 python3.9[132303]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:27:47 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v364: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:27:48 np0005480824 systemd[1]: session-42.scope: Deactivated successfully.
Oct 10 23:27:48 np0005480824 systemd[1]: session-42.scope: Consumed 4.617s CPU time.
Oct 10 23:27:48 np0005480824 systemd-logind[782]: Session 42 logged out. Waiting for processes to exit.
Oct 10 23:27:48 np0005480824 systemd-logind[782]: Removed session 42.
Oct 10 23:27:48 np0005480824 podman[132483]: 2025-10-11 03:27:48.300467962 +0000 UTC m=+0.048560988 container create c4a9b276d5d8c66ac4468e0d1d322696527c80c046124a2bb8d4436540c17596 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 10 23:27:48 np0005480824 systemd[1]: Started libpod-conmon-c4a9b276d5d8c66ac4468e0d1d322696527c80c046124a2bb8d4436540c17596.scope.
Oct 10 23:27:48 np0005480824 podman[132483]: 2025-10-11 03:27:48.280336397 +0000 UTC m=+0.028429443 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:27:48 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:27:48 np0005480824 podman[132483]: 2025-10-11 03:27:48.40944149 +0000 UTC m=+0.157534586 container init c4a9b276d5d8c66ac4468e0d1d322696527c80c046124a2bb8d4436540c17596 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_burnell, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:27:48 np0005480824 podman[132483]: 2025-10-11 03:27:48.424159992 +0000 UTC m=+0.172253038 container start c4a9b276d5d8c66ac4468e0d1d322696527c80c046124a2bb8d4436540c17596 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 10 23:27:48 np0005480824 podman[132483]: 2025-10-11 03:27:48.428461533 +0000 UTC m=+0.176554629 container attach c4a9b276d5d8c66ac4468e0d1d322696527c80c046124a2bb8d4436540c17596 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 10 23:27:48 np0005480824 heuristic_burnell[132499]: 167 167
Oct 10 23:27:48 np0005480824 systemd[1]: libpod-c4a9b276d5d8c66ac4468e0d1d322696527c80c046124a2bb8d4436540c17596.scope: Deactivated successfully.
Oct 10 23:27:48 np0005480824 podman[132483]: 2025-10-11 03:27:48.433814756 +0000 UTC m=+0.181907842 container died c4a9b276d5d8c66ac4468e0d1d322696527c80c046124a2bb8d4436540c17596 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507)
Oct 10 23:27:48 np0005480824 systemd[1]: var-lib-containers-storage-overlay-76a2ea74b9cde80bbb5e9fcd546ec3937b82ee982dae1cd4c49ffa4e1aff80be-merged.mount: Deactivated successfully.
Oct 10 23:27:48 np0005480824 podman[132483]: 2025-10-11 03:27:48.487209716 +0000 UTC m=+0.235302752 container remove c4a9b276d5d8c66ac4468e0d1d322696527c80c046124a2bb8d4436540c17596 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Oct 10 23:27:48 np0005480824 systemd[1]: libpod-conmon-c4a9b276d5d8c66ac4468e0d1d322696527c80c046124a2bb8d4436540c17596.scope: Deactivated successfully.
Oct 10 23:27:48 np0005480824 podman[132523]: 2025-10-11 03:27:48.720557856 +0000 UTC m=+0.051519701 container create ae69c780505341f980cb468efbe45150ba4064d44f3db23f813f66ba1e4eaab3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:27:48 np0005480824 systemd[1]: Started libpod-conmon-ae69c780505341f980cb468efbe45150ba4064d44f3db23f813f66ba1e4eaab3.scope.
Oct 10 23:27:48 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:27:48 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5327e153a996b9a4b89c6fe606ebfec28caac867047b0a2b73478923f6c72ed8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:27:48 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5327e153a996b9a4b89c6fe606ebfec28caac867047b0a2b73478923f6c72ed8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:27:48 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5327e153a996b9a4b89c6fe606ebfec28caac867047b0a2b73478923f6c72ed8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:27:48 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5327e153a996b9a4b89c6fe606ebfec28caac867047b0a2b73478923f6c72ed8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:27:48 np0005480824 podman[132523]: 2025-10-11 03:27:48.700577963 +0000 UTC m=+0.031539848 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:27:48 np0005480824 podman[132523]: 2025-10-11 03:27:48.810073171 +0000 UTC m=+0.141035066 container init ae69c780505341f980cb468efbe45150ba4064d44f3db23f813f66ba1e4eaab3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:27:48 np0005480824 podman[132523]: 2025-10-11 03:27:48.816989458 +0000 UTC m=+0.147951293 container start ae69c780505341f980cb468efbe45150ba4064d44f3db23f813f66ba1e4eaab3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 10 23:27:48 np0005480824 podman[132523]: 2025-10-11 03:27:48.820114543 +0000 UTC m=+0.151076388 container attach ae69c780505341f980cb468efbe45150ba4064d44f3db23f813f66ba1e4eaab3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_banach, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:27:48 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]: {
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:    "0": [
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:        {
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:            "devices": [
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:                "/dev/loop3"
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:            ],
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:            "lv_name": "ceph_lv0",
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:            "lv_size": "21470642176",
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0d82ce-20ea-470d-959e-f67202028a60,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:            "lv_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:            "name": "ceph_lv0",
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:            "tags": {
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:                "ceph.block_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:                "ceph.cluster_name": "ceph",
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:                "ceph.crush_device_class": "",
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:                "ceph.encrypted": "0",
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:                "ceph.osd_fsid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:                "ceph.osd_id": "0",
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:                "ceph.type": "block",
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:                "ceph.vdo": "0"
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:            },
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:            "type": "block",
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:            "vg_name": "ceph_vg0"
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:        }
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:    ],
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:    "1": [
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:        {
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:            "devices": [
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:                "/dev/loop4"
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:            ],
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:            "lv_name": "ceph_lv1",
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:            "lv_size": "21470642176",
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6875119e-c210-4ad1-aca9-6a8084a5ecc8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:            "lv_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:            "name": "ceph_lv1",
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:            "tags": {
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:                "ceph.block_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:                "ceph.cluster_name": "ceph",
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:                "ceph.crush_device_class": "",
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:                "ceph.encrypted": "0",
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:                "ceph.osd_fsid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:                "ceph.osd_id": "1",
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:                "ceph.type": "block",
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:                "ceph.vdo": "0"
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:            },
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:            "type": "block",
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:            "vg_name": "ceph_vg1"
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:        }
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:    ],
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:    "2": [
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:        {
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:            "devices": [
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:                "/dev/loop5"
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:            ],
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:            "lv_name": "ceph_lv2",
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:            "lv_size": "21470642176",
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e86945e8-6909-4584-9098-cee0dfe9add4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:            "lv_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:            "name": "ceph_lv2",
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:            "tags": {
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:                "ceph.block_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:                "ceph.cluster_name": "ceph",
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:                "ceph.crush_device_class": "",
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:                "ceph.encrypted": "0",
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:                "ceph.osd_fsid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:                "ceph.osd_id": "2",
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:                "ceph.type": "block",
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:                "ceph.vdo": "0"
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:            },
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:            "type": "block",
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:            "vg_name": "ceph_vg2"
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:        }
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]:    ]
Oct 10 23:27:49 np0005480824 suspicious_banach[132540]: }
Oct 10 23:27:49 np0005480824 systemd[1]: libpod-ae69c780505341f980cb468efbe45150ba4064d44f3db23f813f66ba1e4eaab3.scope: Deactivated successfully.
Oct 10 23:27:49 np0005480824 podman[132523]: 2025-10-11 03:27:49.517402736 +0000 UTC m=+0.848364611 container died ae69c780505341f980cb468efbe45150ba4064d44f3db23f813f66ba1e4eaab3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 10 23:27:49 np0005480824 systemd[1]: var-lib-containers-storage-overlay-5327e153a996b9a4b89c6fe606ebfec28caac867047b0a2b73478923f6c72ed8-merged.mount: Deactivated successfully.
Oct 10 23:27:49 np0005480824 podman[132523]: 2025-10-11 03:27:49.597532033 +0000 UTC m=+0.928493908 container remove ae69c780505341f980cb468efbe45150ba4064d44f3db23f813f66ba1e4eaab3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_banach, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True)
Oct 10 23:27:49 np0005480824 systemd[1]: libpod-conmon-ae69c780505341f980cb468efbe45150ba4064d44f3db23f813f66ba1e4eaab3.scope: Deactivated successfully.
Oct 10 23:27:49 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v365: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:27:50 np0005480824 podman[132702]: 2025-10-11 03:27:50.390874748 +0000 UTC m=+0.069737078 container create 51903e780182f97db420e50ff69f146263230cc03801aef734e3b232602c2779 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True)
Oct 10 23:27:50 np0005480824 systemd[1]: Started libpod-conmon-51903e780182f97db420e50ff69f146263230cc03801aef734e3b232602c2779.scope.
Oct 10 23:27:50 np0005480824 podman[132702]: 2025-10-11 03:27:50.363754884 +0000 UTC m=+0.042617274 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:27:50 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:27:50 np0005480824 podman[132702]: 2025-10-11 03:27:50.493868329 +0000 UTC m=+0.172730649 container init 51903e780182f97db420e50ff69f146263230cc03801aef734e3b232602c2779 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 10 23:27:50 np0005480824 podman[132702]: 2025-10-11 03:27:50.507670551 +0000 UTC m=+0.186532851 container start 51903e780182f97db420e50ff69f146263230cc03801aef734e3b232602c2779 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_heyrovsky, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:27:50 np0005480824 podman[132702]: 2025-10-11 03:27:50.511717547 +0000 UTC m=+0.190579917 container attach 51903e780182f97db420e50ff69f146263230cc03801aef734e3b232602c2779 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 10 23:27:50 np0005480824 gallant_heyrovsky[132719]: 167 167
Oct 10 23:27:50 np0005480824 systemd[1]: libpod-51903e780182f97db420e50ff69f146263230cc03801aef734e3b232602c2779.scope: Deactivated successfully.
Oct 10 23:27:50 np0005480824 podman[132702]: 2025-10-11 03:27:50.515555477 +0000 UTC m=+0.194417807 container died 51903e780182f97db420e50ff69f146263230cc03801aef734e3b232602c2779 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:27:50 np0005480824 systemd[1]: var-lib-containers-storage-overlay-32e874c58e9cbd693ae5f85de79bbe5cf5f296c8b9e348e53c52c95cbecef70a-merged.mount: Deactivated successfully.
Oct 10 23:27:50 np0005480824 podman[132702]: 2025-10-11 03:27:50.566159528 +0000 UTC m=+0.245021848 container remove 51903e780182f97db420e50ff69f146263230cc03801aef734e3b232602c2779 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_heyrovsky, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:27:50 np0005480824 systemd[1]: libpod-conmon-51903e780182f97db420e50ff69f146263230cc03801aef734e3b232602c2779.scope: Deactivated successfully.
Oct 10 23:27:50 np0005480824 podman[132743]: 2025-10-11 03:27:50.793968712 +0000 UTC m=+0.057594691 container create 9552420df6aaa769f63827cea230cb018d883a6188d59687b6d144474778efd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_dhawan, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 10 23:27:50 np0005480824 systemd[1]: Started libpod-conmon-9552420df6aaa769f63827cea230cb018d883a6188d59687b6d144474778efd1.scope.
Oct 10 23:27:50 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:27:50 np0005480824 podman[132743]: 2025-10-11 03:27:50.77689227 +0000 UTC m=+0.040518259 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:27:50 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d6bdea7b11944c63bf15bef11757fbcf2508674558a7c320a2f25dd5e18c353/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:27:50 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d6bdea7b11944c63bf15bef11757fbcf2508674558a7c320a2f25dd5e18c353/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:27:50 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d6bdea7b11944c63bf15bef11757fbcf2508674558a7c320a2f25dd5e18c353/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:27:50 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d6bdea7b11944c63bf15bef11757fbcf2508674558a7c320a2f25dd5e18c353/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:27:50 np0005480824 podman[132743]: 2025-10-11 03:27:50.894821347 +0000 UTC m=+0.158447406 container init 9552420df6aaa769f63827cea230cb018d883a6188d59687b6d144474778efd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_dhawan, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Oct 10 23:27:50 np0005480824 podman[132743]: 2025-10-11 03:27:50.906476214 +0000 UTC m=+0.170102213 container start 9552420df6aaa769f63827cea230cb018d883a6188d59687b6d144474778efd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 10 23:27:50 np0005480824 podman[132743]: 2025-10-11 03:27:50.909952557 +0000 UTC m=+0.173578556 container attach 9552420df6aaa769f63827cea230cb018d883a6188d59687b6d144474778efd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Oct 10 23:27:51 np0005480824 compassionate_dhawan[132759]: {
Oct 10 23:27:51 np0005480824 compassionate_dhawan[132759]:    "1d0d82ce-20ea-470d-959e-f67202028a60": {
Oct 10 23:27:51 np0005480824 compassionate_dhawan[132759]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:27:51 np0005480824 compassionate_dhawan[132759]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 10 23:27:51 np0005480824 compassionate_dhawan[132759]:        "osd_id": 0,
Oct 10 23:27:51 np0005480824 compassionate_dhawan[132759]:        "osd_uuid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:27:51 np0005480824 compassionate_dhawan[132759]:        "type": "bluestore"
Oct 10 23:27:51 np0005480824 compassionate_dhawan[132759]:    },
Oct 10 23:27:51 np0005480824 compassionate_dhawan[132759]:    "6875119e-c210-4ad1-aca9-6a8084a5ecc8": {
Oct 10 23:27:51 np0005480824 compassionate_dhawan[132759]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:27:51 np0005480824 compassionate_dhawan[132759]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 10 23:27:51 np0005480824 compassionate_dhawan[132759]:        "osd_id": 1,
Oct 10 23:27:51 np0005480824 compassionate_dhawan[132759]:        "osd_uuid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:27:51 np0005480824 compassionate_dhawan[132759]:        "type": "bluestore"
Oct 10 23:27:51 np0005480824 compassionate_dhawan[132759]:    },
Oct 10 23:27:51 np0005480824 compassionate_dhawan[132759]:    "e86945e8-6909-4584-9098-cee0dfe9add4": {
Oct 10 23:27:51 np0005480824 compassionate_dhawan[132759]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:27:51 np0005480824 compassionate_dhawan[132759]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 10 23:27:51 np0005480824 compassionate_dhawan[132759]:        "osd_id": 2,
Oct 10 23:27:51 np0005480824 compassionate_dhawan[132759]:        "osd_uuid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:27:51 np0005480824 compassionate_dhawan[132759]:        "type": "bluestore"
Oct 10 23:27:51 np0005480824 compassionate_dhawan[132759]:    }
Oct 10 23:27:51 np0005480824 compassionate_dhawan[132759]: }
Oct 10 23:27:51 np0005480824 systemd[1]: libpod-9552420df6aaa769f63827cea230cb018d883a6188d59687b6d144474778efd1.scope: Deactivated successfully.
Oct 10 23:27:51 np0005480824 systemd[1]: libpod-9552420df6aaa769f63827cea230cb018d883a6188d59687b6d144474778efd1.scope: Consumed 1.046s CPU time.
Oct 10 23:27:51 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v366: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:27:51 np0005480824 podman[132792]: 2025-10-11 03:27:51.992784151 +0000 UTC m=+0.029065785 container died 9552420df6aaa769f63827cea230cb018d883a6188d59687b6d144474778efd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_dhawan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 10 23:27:52 np0005480824 systemd[1]: var-lib-containers-storage-overlay-0d6bdea7b11944c63bf15bef11757fbcf2508674558a7c320a2f25dd5e18c353-merged.mount: Deactivated successfully.
Oct 10 23:27:52 np0005480824 podman[132792]: 2025-10-11 03:27:52.049424831 +0000 UTC m=+0.085706495 container remove 9552420df6aaa769f63827cea230cb018d883a6188d59687b6d144474778efd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_dhawan, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 10 23:27:52 np0005480824 systemd[1]: libpod-conmon-9552420df6aaa769f63827cea230cb018d883a6188d59687b6d144474778efd1.scope: Deactivated successfully.
Oct 10 23:27:52 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 10 23:27:52 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:27:52 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 10 23:27:52 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:27:52 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 0aab2d86-4550-45b2-8dfa-57da35aacccd does not exist
Oct 10 23:27:52 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 0f2f75fc-c28a-407b-9ef5-ff9182ee51cb does not exist
Oct 10 23:27:53 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:27:53 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:27:53 np0005480824 systemd[1]: session-18.scope: Deactivated successfully.
Oct 10 23:27:53 np0005480824 systemd[1]: session-18.scope: Consumed 1min 30.659s CPU time.
Oct 10 23:27:53 np0005480824 systemd-logind[782]: Session 18 logged out. Waiting for processes to exit.
Oct 10 23:27:53 np0005480824 systemd-logind[782]: Removed session 18.
Oct 10 23:27:53 np0005480824 systemd-logind[782]: New session 43 of user zuul.
Oct 10 23:27:53 np0005480824 systemd[1]: Started Session 43 of User zuul.
Oct 10 23:27:53 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:27:53 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v367: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:27:54 np0005480824 python3.9[133008]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 23:27:55 np0005480824 python3.9[133164]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 23:27:55 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v368: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:27:56 np0005480824 python3.9[133248]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 10 23:27:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:27:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:27:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:27:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:27:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:27:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:27:57 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v369: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:27:58 np0005480824 python3.9[133399]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:27:58 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:27:59 np0005480824 python3.9[133550]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 10 23:27:59 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v370: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:28:00 np0005480824 python3.9[133700]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 23:28:01 np0005480824 python3.9[133850]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 23:28:01 np0005480824 systemd[1]: session-43.scope: Deactivated successfully.
Oct 10 23:28:01 np0005480824 systemd[1]: session-43.scope: Consumed 6.002s CPU time.
Oct 10 23:28:01 np0005480824 systemd-logind[782]: Session 43 logged out. Waiting for processes to exit.
Oct 10 23:28:01 np0005480824 systemd-logind[782]: Removed session 43.
Oct 10 23:28:01 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v371: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:28:03 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:28:03 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v372: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:28:05 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v373: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:28:06 np0005480824 systemd-logind[782]: New session 44 of user zuul.
Oct 10 23:28:06 np0005480824 systemd[1]: Started Session 44 of User zuul.
Oct 10 23:28:07 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v374: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:28:08 np0005480824 python3.9[134028]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 23:28:08 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:28:09 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v375: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:28:10 np0005480824 python3.9[134184]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:28:10 np0005480824 python3.9[134336]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:28:11 np0005480824 python3.9[134488]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:28:11 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v376: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:28:12 np0005480824 python3.9[134611]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760153291.0072405-65-276814180015774/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=e1b5be145af7e654866981cb447ca95c68947f1c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:28:13 np0005480824 python3.9[134763]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:28:13 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:28:13 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v377: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:28:14 np0005480824 python3.9[134886]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760153292.8229587-65-213579529890604/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=dd7cadbc7072b35ce79ae35366137079ebdc1368 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:28:14 np0005480824 python3.9[135038]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:28:15 np0005480824 python3.9[135161]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760153294.2720807-65-260870390188303/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=64a11f85593b88a7c9c1ebb567f42bb76c7dbc92 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:28:15 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v378: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:28:16 np0005480824 python3.9[135313]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:28:17 np0005480824 python3.9[135465]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:28:17 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v379: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:28:18 np0005480824 python3.9[135617]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:28:18 np0005480824 python3.9[135740]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760153297.4861372-124-51028200126112/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=70091bf5a246e6077b3b02ab1e289f2ef8c6c8c3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:28:18 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:28:18 np0005480824 ceph-mon[74326]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Oct 10 23:28:18 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:28:18.940142) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 23:28:18 np0005480824 ceph-mon[74326]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Oct 10 23:28:18 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760153298940198, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1586, "num_deletes": 251, "total_data_size": 2345871, "memory_usage": 2381608, "flush_reason": "Manual Compaction"}
Oct 10 23:28:18 np0005480824 ceph-mon[74326]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Oct 10 23:28:18 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760153298950397, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 1365597, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7359, "largest_seqno": 8944, "table_properties": {"data_size": 1360313, "index_size": 2360, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14996, "raw_average_key_size": 20, "raw_value_size": 1347995, "raw_average_value_size": 1851, "num_data_blocks": 111, "num_entries": 728, "num_filter_entries": 728, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760153143, "oldest_key_time": 1760153143, "file_creation_time": 1760153298, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc2c00b6-74ab-4bd1-957a-6c6a75fb61ca", "db_session_id": "RJ9TM4FJNNQ2AWDFT4YB", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Oct 10 23:28:18 np0005480824 ceph-mon[74326]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 10302 microseconds, and 3958 cpu microseconds.
Oct 10 23:28:18 np0005480824 ceph-mon[74326]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 23:28:18 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:28:18.950451) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 1365597 bytes OK
Oct 10 23:28:18 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:28:18.950468) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Oct 10 23:28:18 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:28:18.952239) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Oct 10 23:28:18 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:28:18.952250) EVENT_LOG_v1 {"time_micros": 1760153298952246, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 23:28:18 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:28:18.952265) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 23:28:18 np0005480824 ceph-mon[74326]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 2338790, prev total WAL file size 2338790, number of live WAL files 2.
Oct 10 23:28:18 np0005480824 ceph-mon[74326]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 23:28:18 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:28:18.953030) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323532' seq:0, type:0; will stop at (end)
Oct 10 23:28:18 np0005480824 ceph-mon[74326]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 23:28:18 np0005480824 ceph-mon[74326]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(1333KB)], [20(7026KB)]
Oct 10 23:28:18 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760153298953193, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 8560341, "oldest_snapshot_seqno": -1}
Oct 10 23:28:19 np0005480824 ceph-mon[74326]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3362 keys, 6791814 bytes, temperature: kUnknown
Oct 10 23:28:19 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760153299001004, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 6791814, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6765918, "index_size": 16379, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8453, "raw_key_size": 80661, "raw_average_key_size": 23, "raw_value_size": 6701735, "raw_average_value_size": 1993, "num_data_blocks": 727, "num_entries": 3362, "num_filter_entries": 3362, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760152715, "oldest_key_time": 0, "file_creation_time": 1760153298, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc2c00b6-74ab-4bd1-957a-6c6a75fb61ca", "db_session_id": "RJ9TM4FJNNQ2AWDFT4YB", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Oct 10 23:28:19 np0005480824 ceph-mon[74326]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 23:28:19 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:28:19.001378) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 6791814 bytes
Oct 10 23:28:19 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:28:19.003198) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 178.6 rd, 141.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 6.9 +0.0 blob) out(6.5 +0.0 blob), read-write-amplify(11.2) write-amplify(5.0) OK, records in: 3804, records dropped: 442 output_compression: NoCompression
Oct 10 23:28:19 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:28:19.003234) EVENT_LOG_v1 {"time_micros": 1760153299003218, "job": 6, "event": "compaction_finished", "compaction_time_micros": 47930, "compaction_time_cpu_micros": 30838, "output_level": 6, "num_output_files": 1, "total_output_size": 6791814, "num_input_records": 3804, "num_output_records": 3362, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 23:28:19 np0005480824 ceph-mon[74326]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 23:28:19 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760153299003844, "job": 6, "event": "table_file_deletion", "file_number": 22}
Oct 10 23:28:19 np0005480824 ceph-mon[74326]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 23:28:19 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760153299006423, "job": 6, "event": "table_file_deletion", "file_number": 20}
Oct 10 23:28:19 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:28:18.952896) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:28:19 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:28:19.006481) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:28:19 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:28:19.006487) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:28:19 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:28:19.006489) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:28:19 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:28:19.006491) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:28:19 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:28:19.006493) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:28:19 np0005480824 python3.9[135892]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:28:19 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v380: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:28:20 np0005480824 python3.9[136015]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760153299.0231545-124-3018494219069/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=5ce9383de25711fcd385b2d80e30a50235da1fbd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:28:20 np0005480824 python3.9[136167]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:28:21 np0005480824 python3.9[136290]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760153300.3591065-124-67976549519679/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=9fc76c89f62ccfbd1efc8842511df0fd073efbeb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:28:21 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v381: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:28:22 np0005480824 python3.9[136442]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:28:23 np0005480824 python3.9[136594]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:28:23 np0005480824 python3.9[136746]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:28:23 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:28:23 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v382: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:28:24 np0005480824 python3.9[136869]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760153303.2835822-183-150461710517999/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=7ea7b0dafc72454a3c59eae3da84616f3b304ac2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:28:25 np0005480824 python3.9[137021]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:28:25 np0005480824 python3.9[137144]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760153304.7582989-183-193365053724684/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=5ce9383de25711fcd385b2d80e30a50235da1fbd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:28:25 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v383: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:28:26 np0005480824 python3.9[137296]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:28:27 np0005480824 python3.9[137419]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760153306.090161-183-242540193328818/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=eaa68645fe2a234f8e28b770d7225e3b08a57e79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:28:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Optimize plan auto_2025-10-11_03:28:27
Oct 10 23:28:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 23:28:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] do_upmap
Oct 10 23:28:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data', 'backups', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta', '.rgw.root', 'images', '.mgr']
Oct 10 23:28:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] prepared 0/10 changes
Oct 10 23:28:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:28:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:28:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:28:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:28:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:28:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:28:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 23:28:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:28:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 23:28:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:28:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:28:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:28:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:28:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:28:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:28:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:28:27 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v384: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:28:28 np0005480824 python3.9[137571]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:28:28 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:28:29 np0005480824 python3.9[137723]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:28:29 np0005480824 python3.9[137846]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760153308.6933663-251-41095173758939/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=014b1ef6a5f22a009f711144013b78e6d26cdf65 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:28:29 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v385: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:28:30 np0005480824 python3.9[137998]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:28:31 np0005480824 python3.9[138150]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:28:31 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v386: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:28:32 np0005480824 python3.9[138273]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760153310.8866615-275-197710358950507/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=014b1ef6a5f22a009f711144013b78e6d26cdf65 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:28:32 np0005480824 python3.9[138425]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:28:33 np0005480824 python3.9[138577]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:28:33 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:28:33 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v387: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:28:34 np0005480824 python3.9[138700]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760153313.1340897-299-213750223389623/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=014b1ef6a5f22a009f711144013b78e6d26cdf65 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:28:35 np0005480824 python3.9[138852]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:28:35 np0005480824 python3.9[139004]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:28:35 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v388: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:28:36 np0005480824 python3.9[139127]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760153315.3609986-323-45327144747460/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=014b1ef6a5f22a009f711144013b78e6d26cdf65 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:28:37 np0005480824 python3.9[139279]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:28:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 23:28:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:28:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 23:28:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:28:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:28:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:28:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:28:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:28:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:28:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:28:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:28:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:28:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 23:28:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:28:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:28:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:28:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 10 23:28:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:28:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 23:28:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:28:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:28:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:28:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 10 23:28:37 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v389: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:28:38 np0005480824 python3.9[139433]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:28:38 np0005480824 ceph-mon[74326]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 10 23:28:38 np0005480824 ceph-mon[74326]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 2010 writes, 8946 keys, 2010 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s#012Cumulative WAL: 2010 writes, 2010 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2010 writes, 8946 keys, 2010 commit groups, 1.0 writes per commit group, ingest: 10.90 MB, 0.02 MB/s#012Interval WAL: 2010 writes, 2010 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    109.1      0.08              0.02         3    0.025       0      0       0.0       0.0#012  L6      1/0    6.48 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.6    144.9    128.2      0.10              0.06         2    0.052    7170    732       0.0       0.0#012 Sum      1/0    6.48 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     84.1    120.2      0.18              0.08         5    0.036    7170    732       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     86.1    122.9      0.17              0.08         4    0.044    7170    732       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    144.9    128.2      0.10              0.06         2    0.052    7170    732       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    115.0      0.07              0.02         2    0.035       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.2      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.008, interval 0.008#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.02 GB write, 0.04 MB/s write, 0.01 GB read, 0.03 MB/s read, 0.2 seconds#012Interval compaction: 0.02 GB write, 0.04 MB/s write, 0.01 GB read, 0.03 MB/s read, 0.2 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5617dbc851f0#2 capacity: 308.00 MB usage: 553.41 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 4.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(37,465.69 KB,0.147654%) FilterBlock(6,28.55 KB,0.00905124%) IndexBlock(6,59.17 KB,0.0187614%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct 10 23:28:38 np0005480824 python3.9[139556]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760153317.7871585-347-29013827803642/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=014b1ef6a5f22a009f711144013b78e6d26cdf65 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:28:38 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:28:39 np0005480824 python3.9[139708]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:28:39 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v390: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:28:40 np0005480824 python3.9[139860]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:28:40 np0005480824 python3.9[139983]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760153319.82351-371-75791148182229/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=014b1ef6a5f22a009f711144013b78e6d26cdf65 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:28:41 np0005480824 systemd[1]: session-44.scope: Deactivated successfully.
Oct 10 23:28:41 np0005480824 systemd[1]: session-44.scope: Consumed 27.582s CPU time.
Oct 10 23:28:41 np0005480824 systemd-logind[782]: Session 44 logged out. Waiting for processes to exit.
Oct 10 23:28:41 np0005480824 systemd-logind[782]: Removed session 44.
Oct 10 23:28:41 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v391: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:28:43 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:28:43 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v392: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:28:45 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v393: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:28:46 np0005480824 systemd-logind[782]: New session 45 of user zuul.
Oct 10 23:28:46 np0005480824 systemd[1]: Started Session 45 of User zuul.
Oct 10 23:28:47 np0005480824 python3.9[140163]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:28:47 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v394: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:28:48 np0005480824 python3.9[140315]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:28:48 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:28:49 np0005480824 python3.9[140438]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760153327.7824957-34-42665725321167/.source.conf _original_basename=ceph.conf follow=False checksum=8df7a873239877171b4da44927e29cf689b115b8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:28:49 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v395: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:28:50 np0005480824 python3.9[140590]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:28:50 np0005480824 python3.9[140713]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760153329.5413842-34-274840746343731/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=3cf916d5f4489610cb8b254ce9c8bcc669faf03d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:28:51 np0005480824 systemd[1]: session-45.scope: Deactivated successfully.
Oct 10 23:28:51 np0005480824 systemd[1]: session-45.scope: Consumed 3.218s CPU time.
Oct 10 23:28:51 np0005480824 systemd-logind[782]: Session 45 logged out. Waiting for processes to exit.
Oct 10 23:28:51 np0005480824 systemd-logind[782]: Removed session 45.
Oct 10 23:28:51 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v396: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:28:53 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:28:53 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:28:53 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 10 23:28:53 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:28:53 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 10 23:28:53 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:28:53 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 62d3ea36-74e5-4148-9fb5-e7ce66c4136e does not exist
Oct 10 23:28:53 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 1f13e386-5a4c-47e9-8d5c-e08a055183dd does not exist
Oct 10 23:28:53 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 52387cd6-bcc1-4a0e-83dc-3414615c857f does not exist
Oct 10 23:28:53 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 10 23:28:53 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 23:28:53 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 10 23:28:53 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:28:53 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:28:53 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:28:53 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:28:53 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:28:53 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:28:53 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:28:53 np0005480824 podman[141010]: 2025-10-11 03:28:53.94358189 +0000 UTC m=+0.050099604 container create 62f2c9685fdbedf59ed1996c62a0ca3072c52b42641b6f63dbedb023a9fc0775 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_maxwell, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 10 23:28:53 np0005480824 systemd[1]: Started libpod-conmon-62f2c9685fdbedf59ed1996c62a0ca3072c52b42641b6f63dbedb023a9fc0775.scope.
Oct 10 23:28:53 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v397: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:28:54 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:28:54 np0005480824 podman[141010]: 2025-10-11 03:28:53.916758828 +0000 UTC m=+0.023276622 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:28:54 np0005480824 podman[141010]: 2025-10-11 03:28:54.014919019 +0000 UTC m=+0.121436763 container init 62f2c9685fdbedf59ed1996c62a0ca3072c52b42641b6f63dbedb023a9fc0775 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_maxwell, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:28:54 np0005480824 podman[141010]: 2025-10-11 03:28:54.021241601 +0000 UTC m=+0.127759315 container start 62f2c9685fdbedf59ed1996c62a0ca3072c52b42641b6f63dbedb023a9fc0775 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 10 23:28:54 np0005480824 podman[141010]: 2025-10-11 03:28:54.024926374 +0000 UTC m=+0.131444128 container attach 62f2c9685fdbedf59ed1996c62a0ca3072c52b42641b6f63dbedb023a9fc0775 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 10 23:28:54 np0005480824 relaxed_maxwell[141027]: 167 167
Oct 10 23:28:54 np0005480824 systemd[1]: libpod-62f2c9685fdbedf59ed1996c62a0ca3072c52b42641b6f63dbedb023a9fc0775.scope: Deactivated successfully.
Oct 10 23:28:54 np0005480824 conmon[141027]: conmon 62f2c9685fdbedf59ed1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-62f2c9685fdbedf59ed1996c62a0ca3072c52b42641b6f63dbedb023a9fc0775.scope/container/memory.events
Oct 10 23:28:54 np0005480824 podman[141010]: 2025-10-11 03:28:54.027473771 +0000 UTC m=+0.133991515 container died 62f2c9685fdbedf59ed1996c62a0ca3072c52b42641b6f63dbedb023a9fc0775 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_maxwell, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:28:54 np0005480824 systemd[1]: var-lib-containers-storage-overlay-e20917c2d05e7eb6c56fb22a90e0f88e22d5c1668c6f270b3ece9abb8e7ebb92-merged.mount: Deactivated successfully.
Oct 10 23:28:54 np0005480824 podman[141010]: 2025-10-11 03:28:54.073458801 +0000 UTC m=+0.179976545 container remove 62f2c9685fdbedf59ed1996c62a0ca3072c52b42641b6f63dbedb023a9fc0775 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_maxwell, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Oct 10 23:28:54 np0005480824 systemd[1]: libpod-conmon-62f2c9685fdbedf59ed1996c62a0ca3072c52b42641b6f63dbedb023a9fc0775.scope: Deactivated successfully.
Oct 10 23:28:54 np0005480824 podman[141051]: 2025-10-11 03:28:54.30536254 +0000 UTC m=+0.067934654 container create 4c3ce03a276e96634b9c2dcbbe2e940a60ac58f052e790fd1e856fe455cde0d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Oct 10 23:28:54 np0005480824 systemd[1]: Started libpod-conmon-4c3ce03a276e96634b9c2dcbbe2e940a60ac58f052e790fd1e856fe455cde0d9.scope.
Oct 10 23:28:54 np0005480824 podman[141051]: 2025-10-11 03:28:54.276671907 +0000 UTC m=+0.039244061 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:28:54 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:28:54 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cebb67ba2b674aadabe375fe53be802292bac6689c275a988e577afb66812c73/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:28:54 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cebb67ba2b674aadabe375fe53be802292bac6689c275a988e577afb66812c73/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:28:54 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cebb67ba2b674aadabe375fe53be802292bac6689c275a988e577afb66812c73/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:28:54 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cebb67ba2b674aadabe375fe53be802292bac6689c275a988e577afb66812c73/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:28:54 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cebb67ba2b674aadabe375fe53be802292bac6689c275a988e577afb66812c73/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 23:28:54 np0005480824 podman[141051]: 2025-10-11 03:28:54.403474879 +0000 UTC m=+0.166047033 container init 4c3ce03a276e96634b9c2dcbbe2e940a60ac58f052e790fd1e856fe455cde0d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 10 23:28:54 np0005480824 podman[141051]: 2025-10-11 03:28:54.416725407 +0000 UTC m=+0.179297481 container start 4c3ce03a276e96634b9c2dcbbe2e940a60ac58f052e790fd1e856fe455cde0d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_goldberg, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True)
Oct 10 23:28:54 np0005480824 podman[141051]: 2025-10-11 03:28:54.420522911 +0000 UTC m=+0.183095065 container attach 4c3ce03a276e96634b9c2dcbbe2e940a60ac58f052e790fd1e856fe455cde0d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:28:55 np0005480824 charming_goldberg[141069]: --> passed data devices: 0 physical, 3 LVM
Oct 10 23:28:55 np0005480824 charming_goldberg[141069]: --> relative data size: 1.0
Oct 10 23:28:55 np0005480824 charming_goldberg[141069]: --> All data devices are unavailable
Oct 10 23:28:55 np0005480824 systemd[1]: libpod-4c3ce03a276e96634b9c2dcbbe2e940a60ac58f052e790fd1e856fe455cde0d9.scope: Deactivated successfully.
Oct 10 23:28:55 np0005480824 systemd[1]: libpod-4c3ce03a276e96634b9c2dcbbe2e940a60ac58f052e790fd1e856fe455cde0d9.scope: Consumed 1.191s CPU time.
Oct 10 23:28:55 np0005480824 podman[141051]: 2025-10-11 03:28:55.640776905 +0000 UTC m=+1.403349009 container died 4c3ce03a276e96634b9c2dcbbe2e940a60ac58f052e790fd1e856fe455cde0d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_goldberg, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:28:55 np0005480824 systemd[1]: var-lib-containers-storage-overlay-cebb67ba2b674aadabe375fe53be802292bac6689c275a988e577afb66812c73-merged.mount: Deactivated successfully.
Oct 10 23:28:55 np0005480824 podman[141051]: 2025-10-11 03:28:55.706774045 +0000 UTC m=+1.469346129 container remove 4c3ce03a276e96634b9c2dcbbe2e940a60ac58f052e790fd1e856fe455cde0d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_goldberg, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:28:55 np0005480824 systemd[1]: libpod-conmon-4c3ce03a276e96634b9c2dcbbe2e940a60ac58f052e790fd1e856fe455cde0d9.scope: Deactivated successfully.
Oct 10 23:28:55 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v398: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:28:56 np0005480824 systemd-logind[782]: New session 46 of user zuul.
Oct 10 23:28:56 np0005480824 systemd[1]: Started Session 46 of User zuul.
Oct 10 23:28:56 np0005480824 podman[141280]: 2025-10-11 03:28:56.550852296 +0000 UTC m=+0.056872606 container create 699575627a52e2c7b2eae7997f44bc431934bec198d6ddb1b864f4d61d69dbff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_agnesi, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:28:56 np0005480824 systemd[1]: Started libpod-conmon-699575627a52e2c7b2eae7997f44bc431934bec198d6ddb1b864f4d61d69dbff.scope.
Oct 10 23:28:56 np0005480824 podman[141280]: 2025-10-11 03:28:56.521914197 +0000 UTC m=+0.027934607 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:28:56 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:28:56 np0005480824 podman[141280]: 2025-10-11 03:28:56.648403292 +0000 UTC m=+0.154423672 container init 699575627a52e2c7b2eae7997f44bc431934bec198d6ddb1b864f4d61d69dbff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:28:56 np0005480824 podman[141280]: 2025-10-11 03:28:56.660258358 +0000 UTC m=+0.166278718 container start 699575627a52e2c7b2eae7997f44bc431934bec198d6ddb1b864f4d61d69dbff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_agnesi, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 10 23:28:56 np0005480824 podman[141280]: 2025-10-11 03:28:56.664081884 +0000 UTC m=+0.170102234 container attach 699575627a52e2c7b2eae7997f44bc431934bec198d6ddb1b864f4d61d69dbff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 10 23:28:56 np0005480824 sharp_agnesi[141325]: 167 167
Oct 10 23:28:56 np0005480824 systemd[1]: libpod-699575627a52e2c7b2eae7997f44bc431934bec198d6ddb1b864f4d61d69dbff.scope: Deactivated successfully.
Oct 10 23:28:56 np0005480824 podman[141280]: 2025-10-11 03:28:56.670981339 +0000 UTC m=+0.177001689 container died 699575627a52e2c7b2eae7997f44bc431934bec198d6ddb1b864f4d61d69dbff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_agnesi, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 10 23:28:56 np0005480824 systemd[1]: var-lib-containers-storage-overlay-4146222833f2fae17b03795a1ba81fdb0e6cd36cb7c67f76e5204b8044d90387-merged.mount: Deactivated successfully.
Oct 10 23:28:56 np0005480824 podman[141280]: 2025-10-11 03:28:56.722167866 +0000 UTC m=+0.228188216 container remove 699575627a52e2c7b2eae7997f44bc431934bec198d6ddb1b864f4d61d69dbff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:28:56 np0005480824 systemd[1]: libpod-conmon-699575627a52e2c7b2eae7997f44bc431934bec198d6ddb1b864f4d61d69dbff.scope: Deactivated successfully.
Oct 10 23:28:56 np0005480824 podman[141350]: 2025-10-11 03:28:56.922877225 +0000 UTC m=+0.050225506 container create 07f47e0a2838a36e3c3ed3e2913b7102942e0032ec60d996683a0301c66ca255 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_visvesvaraya, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 10 23:28:56 np0005480824 systemd[1]: Started libpod-conmon-07f47e0a2838a36e3c3ed3e2913b7102942e0032ec60d996683a0301c66ca255.scope.
Oct 10 23:28:56 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:28:56 np0005480824 podman[141350]: 2025-10-11 03:28:56.90300438 +0000 UTC m=+0.030352681 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:28:57 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2da9cb3bda3e777019f9cb3253c6c018674ed73f06d83c30b342822ed1fc25c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:28:57 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2da9cb3bda3e777019f9cb3253c6c018674ed73f06d83c30b342822ed1fc25c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:28:57 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2da9cb3bda3e777019f9cb3253c6c018674ed73f06d83c30b342822ed1fc25c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:28:57 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2da9cb3bda3e777019f9cb3253c6c018674ed73f06d83c30b342822ed1fc25c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:28:57 np0005480824 podman[141350]: 2025-10-11 03:28:57.011155725 +0000 UTC m=+0.138504066 container init 07f47e0a2838a36e3c3ed3e2913b7102942e0032ec60d996683a0301c66ca255 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_visvesvaraya, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 10 23:28:57 np0005480824 podman[141350]: 2025-10-11 03:28:57.02166907 +0000 UTC m=+0.149017341 container start 07f47e0a2838a36e3c3ed3e2913b7102942e0032ec60d996683a0301c66ca255 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_visvesvaraya, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 10 23:28:57 np0005480824 podman[141350]: 2025-10-11 03:28:57.025509356 +0000 UTC m=+0.152857687 container attach 07f47e0a2838a36e3c3ed3e2913b7102942e0032ec60d996683a0301c66ca255 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:28:57 np0005480824 python3.9[141468]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]: {
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:    "0": [
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:        {
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:            "devices": [
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:                "/dev/loop3"
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:            ],
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:            "lv_name": "ceph_lv0",
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:            "lv_size": "21470642176",
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0d82ce-20ea-470d-959e-f67202028a60,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:            "lv_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:            "name": "ceph_lv0",
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:            "tags": {
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:                "ceph.block_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:                "ceph.cluster_name": "ceph",
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:                "ceph.crush_device_class": "",
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:                "ceph.encrypted": "0",
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:                "ceph.osd_fsid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:                "ceph.osd_id": "0",
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:                "ceph.type": "block",
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:                "ceph.vdo": "0"
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:            },
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:            "type": "block",
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:            "vg_name": "ceph_vg0"
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:        }
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:    ],
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:    "1": [
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:        {
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:            "devices": [
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:                "/dev/loop4"
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:            ],
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:            "lv_name": "ceph_lv1",
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:            "lv_size": "21470642176",
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6875119e-c210-4ad1-aca9-6a8084a5ecc8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:            "lv_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:            "name": "ceph_lv1",
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:            "tags": {
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:                "ceph.block_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:                "ceph.cluster_name": "ceph",
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:                "ceph.crush_device_class": "",
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:                "ceph.encrypted": "0",
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:                "ceph.osd_fsid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:                "ceph.osd_id": "1",
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:                "ceph.type": "block",
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:                "ceph.vdo": "0"
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:            },
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:            "type": "block",
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:            "vg_name": "ceph_vg1"
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:        }
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:    ],
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:    "2": [
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:        {
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:            "devices": [
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:                "/dev/loop5"
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:            ],
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:            "lv_name": "ceph_lv2",
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:            "lv_size": "21470642176",
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e86945e8-6909-4584-9098-cee0dfe9add4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:            "lv_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:            "name": "ceph_lv2",
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:            "tags": {
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:                "ceph.block_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:                "ceph.cluster_name": "ceph",
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:                "ceph.crush_device_class": "",
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:                "ceph.encrypted": "0",
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:                "ceph.osd_fsid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:                "ceph.osd_id": "2",
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:                "ceph.type": "block",
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:                "ceph.vdo": "0"
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:            },
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:            "type": "block",
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:            "vg_name": "ceph_vg2"
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:        }
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]:    ]
Oct 10 23:28:57 np0005480824 practical_visvesvaraya[141403]: }
Oct 10 23:28:57 np0005480824 systemd[1]: libpod-07f47e0a2838a36e3c3ed3e2913b7102942e0032ec60d996683a0301c66ca255.scope: Deactivated successfully.
Oct 10 23:28:57 np0005480824 podman[141350]: 2025-10-11 03:28:57.769296449 +0000 UTC m=+0.896644770 container died 07f47e0a2838a36e3c3ed3e2913b7102942e0032ec60d996683a0301c66ca255 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_visvesvaraya, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 10 23:28:57 np0005480824 systemd[1]: var-lib-containers-storage-overlay-e2da9cb3bda3e777019f9cb3253c6c018674ed73f06d83c30b342822ed1fc25c-merged.mount: Deactivated successfully.
Oct 10 23:28:57 np0005480824 podman[141350]: 2025-10-11 03:28:57.842216374 +0000 UTC m=+0.969564695 container remove 07f47e0a2838a36e3c3ed3e2913b7102942e0032ec60d996683a0301c66ca255 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_visvesvaraya, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:28:57 np0005480824 systemd[1]: libpod-conmon-07f47e0a2838a36e3c3ed3e2913b7102942e0032ec60d996683a0301c66ca255.scope: Deactivated successfully.
Oct 10 23:28:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:28:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:28:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:28:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:28:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:28:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:28:57 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v399: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:28:58 np0005480824 podman[141784]: 2025-10-11 03:28:58.645275846 +0000 UTC m=+0.051812743 container create 8e8f85055f64fa8cf5c4c5118491023ef8f55941d34aa664eed4f32b015bda9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:28:58 np0005480824 systemd[1]: Started libpod-conmon-8e8f85055f64fa8cf5c4c5118491023ef8f55941d34aa664eed4f32b015bda9c.scope.
Oct 10 23:28:58 np0005480824 python3.9[141767]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:28:58 np0005480824 podman[141784]: 2025-10-11 03:28:58.62270613 +0000 UTC m=+0.029243047 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:28:58 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:28:58 np0005480824 podman[141784]: 2025-10-11 03:28:58.755643839 +0000 UTC m=+0.162180806 container init 8e8f85055f64fa8cf5c4c5118491023ef8f55941d34aa664eed4f32b015bda9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_lumiere, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 10 23:28:58 np0005480824 podman[141784]: 2025-10-11 03:28:58.76279997 +0000 UTC m=+0.169336877 container start 8e8f85055f64fa8cf5c4c5118491023ef8f55941d34aa664eed4f32b015bda9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 10 23:28:58 np0005480824 stoic_lumiere[141800]: 167 167
Oct 10 23:28:58 np0005480824 podman[141784]: 2025-10-11 03:28:58.767205579 +0000 UTC m=+0.173742536 container attach 8e8f85055f64fa8cf5c4c5118491023ef8f55941d34aa664eed4f32b015bda9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_lumiere, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 10 23:28:58 np0005480824 systemd[1]: libpod-8e8f85055f64fa8cf5c4c5118491023ef8f55941d34aa664eed4f32b015bda9c.scope: Deactivated successfully.
Oct 10 23:28:58 np0005480824 podman[141784]: 2025-10-11 03:28:58.76950079 +0000 UTC m=+0.176037707 container died 8e8f85055f64fa8cf5c4c5118491023ef8f55941d34aa664eed4f32b015bda9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_lumiere, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:28:58 np0005480824 systemd[1]: var-lib-containers-storage-overlay-8bf49c78d55e54c199423317eb107e38ea9cc108a8d4bca9237f5357bd18f2e1-merged.mount: Deactivated successfully.
Oct 10 23:28:58 np0005480824 podman[141784]: 2025-10-11 03:28:58.817913015 +0000 UTC m=+0.224449922 container remove 8e8f85055f64fa8cf5c4c5118491023ef8f55941d34aa664eed4f32b015bda9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_lumiere, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 10 23:28:58 np0005480824 systemd[1]: libpod-conmon-8e8f85055f64fa8cf5c4c5118491023ef8f55941d34aa664eed4f32b015bda9c.scope: Deactivated successfully.
Oct 10 23:28:58 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:28:59 np0005480824 podman[141891]: 2025-10-11 03:28:59.006177415 +0000 UTC m=+0.046035642 container create 4713b2a497d8890497feffa722c2dff9e164aa5e268bd99a496d5765377c2d6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 10 23:28:59 np0005480824 systemd[1]: Started libpod-conmon-4713b2a497d8890497feffa722c2dff9e164aa5e268bd99a496d5765377c2d6f.scope.
Oct 10 23:28:59 np0005480824 podman[141891]: 2025-10-11 03:28:58.980436198 +0000 UTC m=+0.020294415 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:28:59 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:28:59 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b5df6e7ac2750e62716c57f9e8b81baad545557b280b7bd76574998ebbcf719/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:28:59 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b5df6e7ac2750e62716c57f9e8b81baad545557b280b7bd76574998ebbcf719/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:28:59 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b5df6e7ac2750e62716c57f9e8b81baad545557b280b7bd76574998ebbcf719/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:28:59 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b5df6e7ac2750e62716c57f9e8b81baad545557b280b7bd76574998ebbcf719/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:28:59 np0005480824 podman[141891]: 2025-10-11 03:28:59.116210022 +0000 UTC m=+0.156068309 container init 4713b2a497d8890497feffa722c2dff9e164aa5e268bd99a496d5765377c2d6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_rhodes, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 10 23:28:59 np0005480824 podman[141891]: 2025-10-11 03:28:59.13488538 +0000 UTC m=+0.174743587 container start 4713b2a497d8890497feffa722c2dff9e164aa5e268bd99a496d5765377c2d6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_rhodes, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:28:59 np0005480824 podman[141891]: 2025-10-11 03:28:59.138819209 +0000 UTC m=+0.178677426 container attach 4713b2a497d8890497feffa722c2dff9e164aa5e268bd99a496d5765377c2d6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_rhodes, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:28:59 np0005480824 python3.9[141994]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:28:59 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v400: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:29:00 np0005480824 awesome_rhodes[141937]: {
Oct 10 23:29:00 np0005480824 awesome_rhodes[141937]:    "1d0d82ce-20ea-470d-959e-f67202028a60": {
Oct 10 23:29:00 np0005480824 awesome_rhodes[141937]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:29:00 np0005480824 awesome_rhodes[141937]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 10 23:29:00 np0005480824 awesome_rhodes[141937]:        "osd_id": 0,
Oct 10 23:29:00 np0005480824 awesome_rhodes[141937]:        "osd_uuid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:29:00 np0005480824 awesome_rhodes[141937]:        "type": "bluestore"
Oct 10 23:29:00 np0005480824 awesome_rhodes[141937]:    },
Oct 10 23:29:00 np0005480824 awesome_rhodes[141937]:    "6875119e-c210-4ad1-aca9-6a8084a5ecc8": {
Oct 10 23:29:00 np0005480824 awesome_rhodes[141937]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:29:00 np0005480824 awesome_rhodes[141937]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 10 23:29:00 np0005480824 awesome_rhodes[141937]:        "osd_id": 1,
Oct 10 23:29:00 np0005480824 awesome_rhodes[141937]:        "osd_uuid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:29:00 np0005480824 awesome_rhodes[141937]:        "type": "bluestore"
Oct 10 23:29:00 np0005480824 awesome_rhodes[141937]:    },
Oct 10 23:29:00 np0005480824 awesome_rhodes[141937]:    "e86945e8-6909-4584-9098-cee0dfe9add4": {
Oct 10 23:29:00 np0005480824 awesome_rhodes[141937]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:29:00 np0005480824 awesome_rhodes[141937]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 10 23:29:00 np0005480824 awesome_rhodes[141937]:        "osd_id": 2,
Oct 10 23:29:00 np0005480824 awesome_rhodes[141937]:        "osd_uuid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:29:00 np0005480824 awesome_rhodes[141937]:        "type": "bluestore"
Oct 10 23:29:00 np0005480824 awesome_rhodes[141937]:    }
Oct 10 23:29:00 np0005480824 awesome_rhodes[141937]: }
Oct 10 23:29:00 np0005480824 systemd[1]: libpod-4713b2a497d8890497feffa722c2dff9e164aa5e268bd99a496d5765377c2d6f.scope: Deactivated successfully.
Oct 10 23:29:00 np0005480824 podman[142173]: 2025-10-11 03:29:00.143106101 +0000 UTC m=+0.029028591 container died 4713b2a497d8890497feffa722c2dff9e164aa5e268bd99a496d5765377c2d6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_rhodes, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:29:00 np0005480824 systemd[1]: var-lib-containers-storage-overlay-3b5df6e7ac2750e62716c57f9e8b81baad545557b280b7bd76574998ebbcf719-merged.mount: Deactivated successfully.
Oct 10 23:29:00 np0005480824 python3.9[142155]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 23:29:00 np0005480824 podman[142173]: 2025-10-11 03:29:00.212549798 +0000 UTC m=+0.098472208 container remove 4713b2a497d8890497feffa722c2dff9e164aa5e268bd99a496d5765377c2d6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_rhodes, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 10 23:29:00 np0005480824 systemd[1]: libpod-conmon-4713b2a497d8890497feffa722c2dff9e164aa5e268bd99a496d5765377c2d6f.scope: Deactivated successfully.
Oct 10 23:29:00 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 10 23:29:00 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:29:00 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 10 23:29:00 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:29:00 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 30953b07-43f0-4740-80d5-8f899f1292d3 does not exist
Oct 10 23:29:00 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 23adc559-285d-460b-a4fd-3d7444c6cd69 does not exist
Oct 10 23:29:01 np0005480824 python3.9[142390]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct 10 23:29:01 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:29:01 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:29:01 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v401: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:29:02 np0005480824 dbus-broker-launch[770]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Oct 10 23:29:03 np0005480824 python3.9[142546]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 23:29:03 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:29:03 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v402: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:29:04 np0005480824 python3.9[142630]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 23:29:05 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v403: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:29:06 np0005480824 python3.9[142783]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 10 23:29:07 np0005480824 python3[142938]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Oct 10 23:29:07 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v404: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:29:08 np0005480824 python3.9[143090]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:29:08 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:29:09 np0005480824 python3.9[143242]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:29:09 np0005480824 python3.9[143320]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:29:10 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v405: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:29:10 np0005480824 python3.9[143472]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:29:11 np0005480824 python3.9[143550]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.mssj1ogi recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:29:11 np0005480824 python3.9[143702]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:29:12 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v406: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:29:12 np0005480824 python3.9[143780]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:29:13 np0005480824 python3.9[143932]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:29:13 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:29:14 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v407: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:29:14 np0005480824 python3[144085]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 10 23:29:15 np0005480824 python3.9[144237]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:29:16 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v408: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:29:16 np0005480824 python3.9[144362]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760153354.6529548-157-147072313215047/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:29:16 np0005480824 python3.9[144514]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:29:17 np0005480824 python3.9[144639]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760153356.2365818-172-70132204129965/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:29:18 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v409: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:29:18 np0005480824 python3.9[144791]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:29:18 np0005480824 python3.9[144916]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760153357.6573179-187-206536839348279/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:29:18 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:29:19 np0005480824 python3.9[145068]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:29:20 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v410: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:29:20 np0005480824 python3.9[145193]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760153359.03894-202-65479212578532/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:29:20 np0005480824 python3.9[145345]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:29:21 np0005480824 python3.9[145470]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760153360.3349683-217-194747629630653/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:29:22 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v411: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:29:22 np0005480824 python3.9[145622]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:29:22 np0005480824 python3.9[145774]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:29:23 np0005480824 python3.9[145929]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:29:23 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:29:24 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v412: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:29:24 np0005480824 python3.9[146081]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:29:25 np0005480824 python3.9[146235]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 23:29:25 np0005480824 python3.9[146389]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:29:26 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v413: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:29:26 np0005480824 python3.9[146544]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:29:27 np0005480824 ceph-mon[74326]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Oct 10 23:29:27 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:29:27.470345) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 23:29:27 np0005480824 ceph-mon[74326]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Oct 10 23:29:27 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760153367470427, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 770, "num_deletes": 251, "total_data_size": 1010752, "memory_usage": 1024424, "flush_reason": "Manual Compaction"}
Oct 10 23:29:27 np0005480824 ceph-mon[74326]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Oct 10 23:29:27 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760153367478265, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 1001752, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8945, "largest_seqno": 9714, "table_properties": {"data_size": 997818, "index_size": 1714, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8404, "raw_average_key_size": 18, "raw_value_size": 989923, "raw_average_value_size": 2190, "num_data_blocks": 80, "num_entries": 452, "num_filter_entries": 452, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760153299, "oldest_key_time": 1760153299, "file_creation_time": 1760153367, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc2c00b6-74ab-4bd1-957a-6c6a75fb61ca", "db_session_id": "RJ9TM4FJNNQ2AWDFT4YB", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Oct 10 23:29:27 np0005480824 ceph-mon[74326]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 7935 microseconds, and 3124 cpu microseconds.
Oct 10 23:29:27 np0005480824 ceph-mon[74326]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 23:29:27 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:29:27.478302) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 1001752 bytes OK
Oct 10 23:29:27 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:29:27.478317) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Oct 10 23:29:27 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:29:27.480367) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Oct 10 23:29:27 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:29:27.480379) EVENT_LOG_v1 {"time_micros": 1760153367480375, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 23:29:27 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:29:27.480397) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 23:29:27 np0005480824 ceph-mon[74326]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 1006863, prev total WAL file size 1006863, number of live WAL files 2.
Oct 10 23:29:27 np0005480824 ceph-mon[74326]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 23:29:27 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:29:27.480924) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Oct 10 23:29:27 np0005480824 ceph-mon[74326]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 23:29:27 np0005480824 ceph-mon[74326]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(978KB)], [23(6632KB)]
Oct 10 23:29:27 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760153367481029, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 7793566, "oldest_snapshot_seqno": -1}
Oct 10 23:29:27 np0005480824 python3.9[146694]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 23:29:27 np0005480824 ceph-mon[74326]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3300 keys, 6270338 bytes, temperature: kUnknown
Oct 10 23:29:27 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760153367524278, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 6270338, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6246033, "index_size": 14980, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8261, "raw_key_size": 80140, "raw_average_key_size": 24, "raw_value_size": 6184079, "raw_average_value_size": 1873, "num_data_blocks": 655, "num_entries": 3300, "num_filter_entries": 3300, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760152715, "oldest_key_time": 0, "file_creation_time": 1760153367, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc2c00b6-74ab-4bd1-957a-6c6a75fb61ca", "db_session_id": "RJ9TM4FJNNQ2AWDFT4YB", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Oct 10 23:29:27 np0005480824 ceph-mon[74326]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 23:29:27 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:29:27.524658) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 6270338 bytes
Oct 10 23:29:27 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:29:27.526130) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 179.8 rd, 144.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 6.5 +0.0 blob) out(6.0 +0.0 blob), read-write-amplify(14.0) write-amplify(6.3) OK, records in: 3814, records dropped: 514 output_compression: NoCompression
Oct 10 23:29:27 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:29:27.526160) EVENT_LOG_v1 {"time_micros": 1760153367526146, "job": 8, "event": "compaction_finished", "compaction_time_micros": 43342, "compaction_time_cpu_micros": 26318, "output_level": 6, "num_output_files": 1, "total_output_size": 6270338, "num_input_records": 3814, "num_output_records": 3300, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 23:29:27 np0005480824 ceph-mon[74326]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 23:29:27 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760153367526579, "job": 8, "event": "table_file_deletion", "file_number": 25}
Oct 10 23:29:27 np0005480824 ceph-mon[74326]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 23:29:27 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760153367528842, "job": 8, "event": "table_file_deletion", "file_number": 23}
Oct 10 23:29:27 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:29:27.480802) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:29:27 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:29:27.528897) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:29:27 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:29:27.528903) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:29:27 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:29:27.528906) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:29:27 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:29:27.528909) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:29:27 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:29:27.528912) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:29:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Optimize plan auto_2025-10-11_03:29:27
Oct 10 23:29:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 23:29:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] do_upmap
Oct 10 23:29:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', 'default.rgw.log', '.mgr', '.rgw.root', 'volumes', 'backups', 'images', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control']
Oct 10 23:29:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] prepared 0/10 changes
Oct 10 23:29:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:29:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:29:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:29:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:29:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:29:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:29:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 23:29:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:29:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 23:29:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:29:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:29:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:29:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:29:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:29:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:29:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:29:28 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v414: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:29:28 np0005480824 python3.9[146847]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:3e:0a:c0:16:5a:16" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:29:28 np0005480824 ovs-vsctl[146848]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:3e:0a:c0:16:5a:16 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Oct 10 23:29:28 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:29:29 np0005480824 python3.9[147000]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:29:29 np0005480824 python3.9[147155]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:29:29 np0005480824 ovs-vsctl[147156]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Oct 10 23:29:30 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v415: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:29:30 np0005480824 python3.9[147306]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 23:29:31 np0005480824 python3.9[147460]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:29:32 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v416: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:29:32 np0005480824 python3.9[147612]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:29:32 np0005480824 python3.9[147690]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:29:33 np0005480824 python3.9[147842]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:29:33 np0005480824 python3.9[147920]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:29:33 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:29:34 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v417: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:29:34 np0005480824 python3.9[148072]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:29:35 np0005480824 python3.9[148224]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:29:35 np0005480824 python3.9[148302]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:29:36 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v418: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:29:36 np0005480824 python3.9[148454]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:29:37 np0005480824 python3.9[148532]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:29:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 23:29:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:29:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 23:29:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:29:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:29:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:29:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:29:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:29:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:29:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:29:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:29:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:29:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 23:29:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:29:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:29:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:29:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 10 23:29:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:29:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 23:29:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:29:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:29:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:29:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 10 23:29:38 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v419: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:29:38 np0005480824 python3.9[148684]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 23:29:38 np0005480824 systemd[1]: Reloading.
Oct 10 23:29:38 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:29:38 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:29:38 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:29:39 np0005480824 python3.9[148873]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:29:39 np0005480824 python3.9[148951]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:29:40 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v420: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:29:40 np0005480824 python3.9[149103]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:29:40 np0005480824 python3.9[149181]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:29:41 np0005480824 python3.9[149333]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 23:29:41 np0005480824 systemd[1]: Reloading.
Oct 10 23:29:41 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:29:41 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:29:41 np0005480824 systemd[1]: Starting Create netns directory...
Oct 10 23:29:41 np0005480824 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 10 23:29:41 np0005480824 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 10 23:29:41 np0005480824 systemd[1]: Finished Create netns directory.
Oct 10 23:29:42 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v421: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:29:42 np0005480824 python3.9[149526]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:29:43 np0005480824 python3.9[149678]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:29:43 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:29:43 np0005480824 python3.9[149801]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760153382.8846729-468-251064451699470/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:29:44 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v422: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:29:44 np0005480824 python3.9[149953]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:29:45 np0005480824 python3.9[150105]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:29:46 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v423: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:29:46 np0005480824 python3.9[150228]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760153385.0763855-493-242219883612844/.source.json _original_basename=.66ukwf56 follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:29:47 np0005480824 python3.9[150380]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:29:48 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v424: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:29:48 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:29:49 np0005480824 python3.9[150807]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Oct 10 23:29:50 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v425: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:29:50 np0005480824 python3.9[150959]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 10 23:29:51 np0005480824 python3.9[151111]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 10 23:29:52 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v426: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:29:53 np0005480824 python3[151290]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 10 23:29:53 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:29:54 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v427: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:29:56 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v428: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:29:57 np0005480824 podman[151303]: 2025-10-11 03:29:57.844449798 +0000 UTC m=+4.701765774 image pull 3b86aea1acd0e80af91d8a3efa79cc99f54489e3c22377193c4282a256797350 quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Oct 10 23:29:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:29:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:29:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:29:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:29:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:29:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:29:57 np0005480824 podman[151424]: 2025-10-11 03:29:57.988768022 +0000 UTC m=+0.057283391 container create 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 10 23:29:57 np0005480824 podman[151424]: 2025-10-11 03:29:57.956561547 +0000 UTC m=+0.025077006 image pull 3b86aea1acd0e80af91d8a3efa79cc99f54489e3c22377193c4282a256797350 quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Oct 10 23:29:57 np0005480824 python3[151290]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Oct 10 23:29:58 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v429: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:29:58 np0005480824 python3.9[151614]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 23:29:58 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:29:59 np0005480824 python3.9[151768]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:30:00 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v430: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:30:00 np0005480824 python3.9[151844]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 23:30:00 np0005480824 python3.9[152095]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760153400.1695006-581-260048175890105/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:30:00 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 10 23:30:00 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:30:00 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 10 23:30:00 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:30:01 np0005480824 python3.9[152269]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 10 23:30:01 np0005480824 systemd[1]: Reloading.
Oct 10 23:30:01 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:30:01 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:30:01 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:30:01 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:30:01 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 10 23:30:01 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:30:01 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 10 23:30:01 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:30:01 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev e7a14b4f-b974-441c-9124-b349b290fa9a does not exist
Oct 10 23:30:01 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 8a282891-edd0-4354-a5cf-3ff9e564e764 does not exist
Oct 10 23:30:01 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 810bf67c-9760-497b-b0ae-ed339526154c does not exist
Oct 10 23:30:01 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 10 23:30:01 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 23:30:01 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 10 23:30:01 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:30:01 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:30:01 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:30:01 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:30:01 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:30:01 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:30:01 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:30:01 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:30:02 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v431: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:30:02 np0005480824 podman[152575]: 2025-10-11 03:30:02.243079987 +0000 UTC m=+0.059730198 container create 932619d6cec8580e5e9daabbc455915b48172c2356e7ff607ce253f5e9a0fd2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_goldstine, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:30:02 np0005480824 python3.9[152535]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 23:30:02 np0005480824 systemd[1]: Started libpod-conmon-932619d6cec8580e5e9daabbc455915b48172c2356e7ff607ce253f5e9a0fd2e.scope.
Oct 10 23:30:02 np0005480824 podman[152575]: 2025-10-11 03:30:02.210582656 +0000 UTC m=+0.027232827 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:30:02 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:30:02 np0005480824 podman[152575]: 2025-10-11 03:30:02.339191478 +0000 UTC m=+0.155841689 container init 932619d6cec8580e5e9daabbc455915b48172c2356e7ff607ce253f5e9a0fd2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_goldstine, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:30:02 np0005480824 systemd[1]: Reloading.
Oct 10 23:30:02 np0005480824 podman[152575]: 2025-10-11 03:30:02.353368684 +0000 UTC m=+0.170018865 container start 932619d6cec8580e5e9daabbc455915b48172c2356e7ff607ce253f5e9a0fd2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_goldstine, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:30:02 np0005480824 podman[152575]: 2025-10-11 03:30:02.357048252 +0000 UTC m=+0.173698473 container attach 932619d6cec8580e5e9daabbc455915b48172c2356e7ff607ce253f5e9a0fd2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_goldstine, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 10 23:30:02 np0005480824 cool_goldstine[152593]: 167 167
Oct 10 23:30:02 np0005480824 podman[152575]: 2025-10-11 03:30:02.360758889 +0000 UTC m=+0.177409160 container died 932619d6cec8580e5e9daabbc455915b48172c2356e7ff607ce253f5e9a0fd2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_goldstine, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:30:02 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:30:02 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:30:02 np0005480824 systemd[1]: libpod-932619d6cec8580e5e9daabbc455915b48172c2356e7ff607ce253f5e9a0fd2e.scope: Deactivated successfully.
Oct 10 23:30:02 np0005480824 systemd[1]: var-lib-containers-storage-overlay-1b5d8de8965c8d7bf0d8302c6d6537a89ab6a2c091cf333eb0b7bd2866f220db-merged.mount: Deactivated successfully.
Oct 10 23:30:02 np0005480824 systemd[1]: Starting ovn_controller container...
Oct 10 23:30:02 np0005480824 podman[152575]: 2025-10-11 03:30:02.64658404 +0000 UTC m=+0.463234251 container remove 932619d6cec8580e5e9daabbc455915b48172c2356e7ff607ce253f5e9a0fd2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 10 23:30:02 np0005480824 systemd[1]: libpod-conmon-932619d6cec8580e5e9daabbc455915b48172c2356e7ff607ce253f5e9a0fd2e.scope: Deactivated successfully.
Oct 10 23:30:02 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:30:02 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/221faf47a3aac534811827a618c1a1e4d780f88ba77355c91c1a609fc5a29146/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Oct 10 23:30:02 np0005480824 systemd[1]: Started /usr/bin/podman healthcheck run 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2.
Oct 10 23:30:02 np0005480824 podman[152651]: 2025-10-11 03:30:02.809485876 +0000 UTC m=+0.141384796 container init 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2)
Oct 10 23:30:02 np0005480824 ovn_controller[152667]: + sudo -E kolla_set_configs
Oct 10 23:30:02 np0005480824 podman[152651]: 2025-10-11 03:30:02.842399016 +0000 UTC m=+0.174297926 container start 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 10 23:30:02 np0005480824 edpm-start-podman-container[152651]: ovn_controller
Oct 10 23:30:02 np0005480824 systemd[1]: Created slice User Slice of UID 0.
Oct 10 23:30:02 np0005480824 systemd[1]: Starting User Runtime Directory /run/user/0...
Oct 10 23:30:02 np0005480824 podman[152675]: 2025-10-11 03:30:02.882345205 +0000 UTC m=+0.069999582 container create 476e8c1be57e1ae1db9d1f24ae3504f5e464996d97b31613af09ba8a60d5df3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 10 23:30:02 np0005480824 systemd[1]: Finished User Runtime Directory /run/user/0.
Oct 10 23:30:02 np0005480824 systemd[1]: Starting User Manager for UID 0...
Oct 10 23:30:02 np0005480824 podman[152686]: 2025-10-11 03:30:02.929438971 +0000 UTC m=+0.074210911 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 10 23:30:02 np0005480824 systemd[1]: Started libpod-conmon-476e8c1be57e1ae1db9d1f24ae3504f5e464996d97b31613af09ba8a60d5df3d.scope.
Oct 10 23:30:02 np0005480824 systemd[1]: 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2-d9f1cc3a1b07bd0.service: Main process exited, code=exited, status=1/FAILURE
Oct 10 23:30:02 np0005480824 systemd[1]: 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2-d9f1cc3a1b07bd0.service: Failed with result 'exit-code'.
Oct 10 23:30:02 np0005480824 podman[152675]: 2025-10-11 03:30:02.85857083 +0000 UTC m=+0.046225247 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:30:02 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:30:02 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/209fc9cd07266bfeb956a9802d223d252ca6c76efc000b9b37624cd9d28f4c59/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:30:02 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/209fc9cd07266bfeb956a9802d223d252ca6c76efc000b9b37624cd9d28f4c59/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:30:02 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/209fc9cd07266bfeb956a9802d223d252ca6c76efc000b9b37624cd9d28f4c59/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:30:02 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/209fc9cd07266bfeb956a9802d223d252ca6c76efc000b9b37624cd9d28f4c59/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:30:02 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/209fc9cd07266bfeb956a9802d223d252ca6c76efc000b9b37624cd9d28f4c59/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 23:30:02 np0005480824 podman[152675]: 2025-10-11 03:30:02.997196399 +0000 UTC m=+0.184850796 container init 476e8c1be57e1ae1db9d1f24ae3504f5e464996d97b31613af09ba8a60d5df3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Oct 10 23:30:03 np0005480824 podman[152675]: 2025-10-11 03:30:03.006808668 +0000 UTC m=+0.194463035 container start 476e8c1be57e1ae1db9d1f24ae3504f5e464996d97b31613af09ba8a60d5df3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_kapitsa, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 10 23:30:03 np0005480824 podman[152675]: 2025-10-11 03:30:03.01748354 +0000 UTC m=+0.205137937 container attach 476e8c1be57e1ae1db9d1f24ae3504f5e464996d97b31613af09ba8a60d5df3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_kapitsa, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 10 23:30:03 np0005480824 edpm-start-podman-container[152650]: Creating additional drop-in dependency for "ovn_controller" (65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2)
Oct 10 23:30:03 np0005480824 systemd[1]: Reloading.
Oct 10 23:30:03 np0005480824 systemd[152720]: Queued start job for default target Main User Target.
Oct 10 23:30:03 np0005480824 systemd[152720]: Created slice User Application Slice.
Oct 10 23:30:03 np0005480824 systemd[152720]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct 10 23:30:03 np0005480824 systemd[152720]: Started Daily Cleanup of User's Temporary Directories.
Oct 10 23:30:03 np0005480824 systemd[152720]: Reached target Paths.
Oct 10 23:30:03 np0005480824 systemd[152720]: Reached target Timers.
Oct 10 23:30:03 np0005480824 systemd[152720]: Starting D-Bus User Message Bus Socket...
Oct 10 23:30:03 np0005480824 systemd[152720]: Starting Create User's Volatile Files and Directories...
Oct 10 23:30:03 np0005480824 systemd[152720]: Finished Create User's Volatile Files and Directories.
Oct 10 23:30:03 np0005480824 systemd[152720]: Listening on D-Bus User Message Bus Socket.
Oct 10 23:30:03 np0005480824 systemd[152720]: Reached target Sockets.
Oct 10 23:30:03 np0005480824 systemd[152720]: Reached target Basic System.
Oct 10 23:30:03 np0005480824 systemd[152720]: Reached target Main User Target.
Oct 10 23:30:03 np0005480824 systemd[152720]: Startup finished in 178ms.
Oct 10 23:30:03 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:30:03 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:30:03 np0005480824 systemd[1]: Started User Manager for UID 0.
Oct 10 23:30:03 np0005480824 systemd[1]: Started ovn_controller container.
Oct 10 23:30:03 np0005480824 systemd[1]: Started Session c1 of User root.
Oct 10 23:30:03 np0005480824 ovn_controller[152667]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 10 23:30:03 np0005480824 ovn_controller[152667]: INFO:__main__:Validating config file
Oct 10 23:30:03 np0005480824 ovn_controller[152667]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 10 23:30:03 np0005480824 ovn_controller[152667]: INFO:__main__:Writing out command to execute
Oct 10 23:30:03 np0005480824 systemd[1]: session-c1.scope: Deactivated successfully.
Oct 10 23:30:03 np0005480824 ovn_controller[152667]: ++ cat /run_command
Oct 10 23:30:03 np0005480824 ovn_controller[152667]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Oct 10 23:30:03 np0005480824 ovn_controller[152667]: + ARGS=
Oct 10 23:30:03 np0005480824 ovn_controller[152667]: + sudo kolla_copy_cacerts
Oct 10 23:30:03 np0005480824 systemd[1]: Started Session c2 of User root.
Oct 10 23:30:03 np0005480824 ovn_controller[152667]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Oct 10 23:30:03 np0005480824 ovn_controller[152667]: + [[ ! -n '' ]]
Oct 10 23:30:03 np0005480824 ovn_controller[152667]: + . kolla_extend_start
Oct 10 23:30:03 np0005480824 ovn_controller[152667]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Oct 10 23:30:03 np0005480824 ovn_controller[152667]: + umask 0022
Oct 10 23:30:03 np0005480824 ovn_controller[152667]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Oct 10 23:30:03 np0005480824 systemd[1]: session-c2.scope: Deactivated successfully.
Oct 10 23:30:03 np0005480824 ovn_controller[152667]: 2025-10-11T03:30:03Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Oct 10 23:30:03 np0005480824 ovn_controller[152667]: 2025-10-11T03:30:03Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Oct 10 23:30:03 np0005480824 ovn_controller[152667]: 2025-10-11T03:30:03Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Oct 10 23:30:03 np0005480824 ovn_controller[152667]: 2025-10-11T03:30:03Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Oct 10 23:30:03 np0005480824 ovn_controller[152667]: 2025-10-11T03:30:03Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct 10 23:30:03 np0005480824 ovn_controller[152667]: 2025-10-11T03:30:03Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Oct 10 23:30:03 np0005480824 NetworkManager[44969]: <info>  [1760153403.5360] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Oct 10 23:30:03 np0005480824 NetworkManager[44969]: <info>  [1760153403.5367] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 10 23:30:03 np0005480824 kernel: br-int: entered promiscuous mode
Oct 10 23:30:03 np0005480824 NetworkManager[44969]: <info>  [1760153403.5376] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Oct 10 23:30:03 np0005480824 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 10 23:30:03 np0005480824 NetworkManager[44969]: <info>  [1760153403.5380] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Oct 10 23:30:03 np0005480824 NetworkManager[44969]: <info>  [1760153403.5382] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 10 23:30:03 np0005480824 ovn_controller[152667]: 2025-10-11T03:30:03Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct 10 23:30:03 np0005480824 ovn_controller[152667]: 2025-10-11T03:30:03Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 10 23:30:03 np0005480824 ovn_controller[152667]: 2025-10-11T03:30:03Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 10 23:30:03 np0005480824 ovn_controller[152667]: 2025-10-11T03:30:03Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Oct 10 23:30:03 np0005480824 ovn_controller[152667]: 2025-10-11T03:30:03Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Oct 10 23:30:03 np0005480824 ovn_controller[152667]: 2025-10-11T03:30:03Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Oct 10 23:30:03 np0005480824 ovn_controller[152667]: 2025-10-11T03:30:03Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Oct 10 23:30:03 np0005480824 ovn_controller[152667]: 2025-10-11T03:30:03Z|00014|main|INFO|OVS feature set changed, force recompute.
Oct 10 23:30:03 np0005480824 ovn_controller[152667]: 2025-10-11T03:30:03Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 10 23:30:03 np0005480824 ovn_controller[152667]: 2025-10-11T03:30:03Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 10 23:30:03 np0005480824 ovn_controller[152667]: 2025-10-11T03:30:03Z|00017|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Oct 10 23:30:03 np0005480824 ovn_controller[152667]: 2025-10-11T03:30:03Z|00018|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Oct 10 23:30:03 np0005480824 ovn_controller[152667]: 2025-10-11T03:30:03Z|00019|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Oct 10 23:30:03 np0005480824 ovn_controller[152667]: 2025-10-11T03:30:03Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 10 23:30:03 np0005480824 ovn_controller[152667]: 2025-10-11T03:30:03Z|00021|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 10 23:30:03 np0005480824 ovn_controller[152667]: 2025-10-11T03:30:03Z|00022|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Oct 10 23:30:03 np0005480824 ovn_controller[152667]: 2025-10-11T03:30:03Z|00023|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Oct 10 23:30:03 np0005480824 ovn_controller[152667]: 2025-10-11T03:30:03Z|00024|main|INFO|OVS feature set changed, force recompute.
Oct 10 23:30:03 np0005480824 ovn_controller[152667]: 2025-10-11T03:30:03Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 10 23:30:03 np0005480824 ovn_controller[152667]: 2025-10-11T03:30:03Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 10 23:30:03 np0005480824 ovn_controller[152667]: 2025-10-11T03:30:03Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 10 23:30:03 np0005480824 ovn_controller[152667]: 2025-10-11T03:30:03Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 10 23:30:03 np0005480824 ovn_controller[152667]: 2025-10-11T03:30:03Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 10 23:30:03 np0005480824 ovn_controller[152667]: 2025-10-11T03:30:03Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 10 23:30:03 np0005480824 NetworkManager[44969]: <info>  [1760153403.5671] manager: (ovn-52d58c-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Oct 10 23:30:03 np0005480824 systemd-udevd[152867]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 23:30:03 np0005480824 systemd-udevd[152869]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 23:30:03 np0005480824 kernel: genev_sys_6081: entered promiscuous mode
Oct 10 23:30:03 np0005480824 NetworkManager[44969]: <info>  [1760153403.5837] device (genev_sys_6081): carrier: link connected
Oct 10 23:30:03 np0005480824 NetworkManager[44969]: <info>  [1760153403.5840] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Oct 10 23:30:03 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:30:04 np0005480824 python3.9[152971]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:30:04 np0005480824 ovs-vsctl[152983]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Oct 10 23:30:04 np0005480824 interesting_kapitsa[152734]: --> passed data devices: 0 physical, 3 LVM
Oct 10 23:30:04 np0005480824 interesting_kapitsa[152734]: --> relative data size: 1.0
Oct 10 23:30:04 np0005480824 interesting_kapitsa[152734]: --> All data devices are unavailable
Oct 10 23:30:04 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v432: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:30:04 np0005480824 systemd[1]: libpod-476e8c1be57e1ae1db9d1f24ae3504f5e464996d97b31613af09ba8a60d5df3d.scope: Deactivated successfully.
Oct 10 23:30:04 np0005480824 podman[152675]: 2025-10-11 03:30:04.062968505 +0000 UTC m=+1.250622872 container died 476e8c1be57e1ae1db9d1f24ae3504f5e464996d97b31613af09ba8a60d5df3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 10 23:30:04 np0005480824 systemd[1]: var-lib-containers-storage-overlay-209fc9cd07266bfeb956a9802d223d252ca6c76efc000b9b37624cd9d28f4c59-merged.mount: Deactivated successfully.
Oct 10 23:30:04 np0005480824 podman[152675]: 2025-10-11 03:30:04.127382004 +0000 UTC m=+1.315036391 container remove 476e8c1be57e1ae1db9d1f24ae3504f5e464996d97b31613af09ba8a60d5df3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_kapitsa, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:30:04 np0005480824 systemd[1]: libpod-conmon-476e8c1be57e1ae1db9d1f24ae3504f5e464996d97b31613af09ba8a60d5df3d.scope: Deactivated successfully.
Oct 10 23:30:04 np0005480824 python3.9[153250]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:30:04 np0005480824 ovs-vsctl[153306]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Oct 10 23:30:04 np0005480824 podman[153291]: 2025-10-11 03:30:04.69455037 +0000 UTC m=+0.042779697 container create 18208ebc1a95165ac5c8c279470b77b05b0f48a042f1f6ea09628de6fe6e0821 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 10 23:30:04 np0005480824 systemd[1]: Started libpod-conmon-18208ebc1a95165ac5c8c279470b77b05b0f48a042f1f6ea09628de6fe6e0821.scope.
Oct 10 23:30:04 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:30:04 np0005480824 podman[153291]: 2025-10-11 03:30:04.768644548 +0000 UTC m=+0.116873875 container init 18208ebc1a95165ac5c8c279470b77b05b0f48a042f1f6ea09628de6fe6e0821 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_booth, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:30:04 np0005480824 podman[153291]: 2025-10-11 03:30:04.677129457 +0000 UTC m=+0.025358794 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:30:04 np0005480824 podman[153291]: 2025-10-11 03:30:04.77464854 +0000 UTC m=+0.122877897 container start 18208ebc1a95165ac5c8c279470b77b05b0f48a042f1f6ea09628de6fe6e0821 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 10 23:30:04 np0005480824 podman[153291]: 2025-10-11 03:30:04.77800399 +0000 UTC m=+0.126233347 container attach 18208ebc1a95165ac5c8c279470b77b05b0f48a042f1f6ea09628de6fe6e0821 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_booth, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 10 23:30:04 np0005480824 dreamy_booth[153311]: 167 167
Oct 10 23:30:04 np0005480824 systemd[1]: libpod-18208ebc1a95165ac5c8c279470b77b05b0f48a042f1f6ea09628de6fe6e0821.scope: Deactivated successfully.
Oct 10 23:30:04 np0005480824 podman[153291]: 2025-10-11 03:30:04.78055073 +0000 UTC m=+0.128780057 container died 18208ebc1a95165ac5c8c279470b77b05b0f48a042f1f6ea09628de6fe6e0821 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_booth, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 10 23:30:04 np0005480824 systemd[1]: var-lib-containers-storage-overlay-e8cdcf88b28e56b21af01f845df76d4fa34ddd5aa7190029a291f55e23068970-merged.mount: Deactivated successfully.
Oct 10 23:30:04 np0005480824 podman[153291]: 2025-10-11 03:30:04.820338685 +0000 UTC m=+0.168568012 container remove 18208ebc1a95165ac5c8c279470b77b05b0f48a042f1f6ea09628de6fe6e0821 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_booth, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:30:04 np0005480824 systemd[1]: libpod-conmon-18208ebc1a95165ac5c8c279470b77b05b0f48a042f1f6ea09628de6fe6e0821.scope: Deactivated successfully.
Oct 10 23:30:04 np0005480824 podman[153359]: 2025-10-11 03:30:04.969934614 +0000 UTC m=+0.043089804 container create 99b9f8dfbf0b4dd8e3b2565398cf23610707888b2b55f80dea9bc6e06ed55178 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_spence, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 10 23:30:05 np0005480824 systemd[1]: Started libpod-conmon-99b9f8dfbf0b4dd8e3b2565398cf23610707888b2b55f80dea9bc6e06ed55178.scope.
Oct 10 23:30:05 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:30:05 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fd16a397ebc1e1b82e44a6307be84364a05936ef21f1b5b3aa4caec2ef736bb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:30:05 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fd16a397ebc1e1b82e44a6307be84364a05936ef21f1b5b3aa4caec2ef736bb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:30:05 np0005480824 podman[153359]: 2025-10-11 03:30:04.947945352 +0000 UTC m=+0.021100582 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:30:05 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fd16a397ebc1e1b82e44a6307be84364a05936ef21f1b5b3aa4caec2ef736bb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:30:05 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fd16a397ebc1e1b82e44a6307be84364a05936ef21f1b5b3aa4caec2ef736bb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:30:05 np0005480824 podman[153359]: 2025-10-11 03:30:05.056266972 +0000 UTC m=+0.129422202 container init 99b9f8dfbf0b4dd8e3b2565398cf23610707888b2b55f80dea9bc6e06ed55178 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_spence, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:30:05 np0005480824 podman[153359]: 2025-10-11 03:30:05.064421895 +0000 UTC m=+0.137577085 container start 99b9f8dfbf0b4dd8e3b2565398cf23610707888b2b55f80dea9bc6e06ed55178 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_spence, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:30:05 np0005480824 podman[153359]: 2025-10-11 03:30:05.067941629 +0000 UTC m=+0.141096849 container attach 99b9f8dfbf0b4dd8e3b2565398cf23610707888b2b55f80dea9bc6e06ed55178 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:30:05 np0005480824 python3.9[153507]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:30:05 np0005480824 ovs-vsctl[153508]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Oct 10 23:30:05 np0005480824 eager_spence[153398]: {
Oct 10 23:30:05 np0005480824 eager_spence[153398]:    "0": [
Oct 10 23:30:05 np0005480824 eager_spence[153398]:        {
Oct 10 23:30:05 np0005480824 eager_spence[153398]:            "devices": [
Oct 10 23:30:05 np0005480824 eager_spence[153398]:                "/dev/loop3"
Oct 10 23:30:05 np0005480824 eager_spence[153398]:            ],
Oct 10 23:30:05 np0005480824 eager_spence[153398]:            "lv_name": "ceph_lv0",
Oct 10 23:30:05 np0005480824 eager_spence[153398]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:30:05 np0005480824 eager_spence[153398]:            "lv_size": "21470642176",
Oct 10 23:30:05 np0005480824 eager_spence[153398]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0d82ce-20ea-470d-959e-f67202028a60,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:30:05 np0005480824 eager_spence[153398]:            "lv_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:30:05 np0005480824 eager_spence[153398]:            "name": "ceph_lv0",
Oct 10 23:30:05 np0005480824 eager_spence[153398]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:30:05 np0005480824 eager_spence[153398]:            "tags": {
Oct 10 23:30:05 np0005480824 eager_spence[153398]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:30:05 np0005480824 eager_spence[153398]:                "ceph.block_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:30:05 np0005480824 eager_spence[153398]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:30:05 np0005480824 eager_spence[153398]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:30:05 np0005480824 eager_spence[153398]:                "ceph.cluster_name": "ceph",
Oct 10 23:30:05 np0005480824 eager_spence[153398]:                "ceph.crush_device_class": "",
Oct 10 23:30:05 np0005480824 eager_spence[153398]:                "ceph.encrypted": "0",
Oct 10 23:30:05 np0005480824 eager_spence[153398]:                "ceph.osd_fsid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:30:05 np0005480824 eager_spence[153398]:                "ceph.osd_id": "0",
Oct 10 23:30:05 np0005480824 eager_spence[153398]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:30:05 np0005480824 eager_spence[153398]:                "ceph.type": "block",
Oct 10 23:30:05 np0005480824 eager_spence[153398]:                "ceph.vdo": "0"
Oct 10 23:30:05 np0005480824 eager_spence[153398]:            },
Oct 10 23:30:05 np0005480824 eager_spence[153398]:            "type": "block",
Oct 10 23:30:05 np0005480824 eager_spence[153398]:            "vg_name": "ceph_vg0"
Oct 10 23:30:05 np0005480824 eager_spence[153398]:        }
Oct 10 23:30:05 np0005480824 eager_spence[153398]:    ],
Oct 10 23:30:05 np0005480824 eager_spence[153398]:    "1": [
Oct 10 23:30:05 np0005480824 eager_spence[153398]:        {
Oct 10 23:30:05 np0005480824 eager_spence[153398]:            "devices": [
Oct 10 23:30:05 np0005480824 eager_spence[153398]:                "/dev/loop4"
Oct 10 23:30:05 np0005480824 eager_spence[153398]:            ],
Oct 10 23:30:05 np0005480824 eager_spence[153398]:            "lv_name": "ceph_lv1",
Oct 10 23:30:05 np0005480824 eager_spence[153398]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:30:05 np0005480824 eager_spence[153398]:            "lv_size": "21470642176",
Oct 10 23:30:05 np0005480824 eager_spence[153398]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6875119e-c210-4ad1-aca9-6a8084a5ecc8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:30:05 np0005480824 eager_spence[153398]:            "lv_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:30:05 np0005480824 eager_spence[153398]:            "name": "ceph_lv1",
Oct 10 23:30:05 np0005480824 eager_spence[153398]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:30:05 np0005480824 eager_spence[153398]:            "tags": {
Oct 10 23:30:05 np0005480824 eager_spence[153398]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:30:05 np0005480824 eager_spence[153398]:                "ceph.block_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:30:05 np0005480824 eager_spence[153398]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:30:05 np0005480824 eager_spence[153398]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:30:05 np0005480824 eager_spence[153398]:                "ceph.cluster_name": "ceph",
Oct 10 23:30:05 np0005480824 eager_spence[153398]:                "ceph.crush_device_class": "",
Oct 10 23:30:05 np0005480824 eager_spence[153398]:                "ceph.encrypted": "0",
Oct 10 23:30:05 np0005480824 eager_spence[153398]:                "ceph.osd_fsid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:30:05 np0005480824 eager_spence[153398]:                "ceph.osd_id": "1",
Oct 10 23:30:05 np0005480824 eager_spence[153398]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:30:05 np0005480824 eager_spence[153398]:                "ceph.type": "block",
Oct 10 23:30:05 np0005480824 eager_spence[153398]:                "ceph.vdo": "0"
Oct 10 23:30:05 np0005480824 eager_spence[153398]:            },
Oct 10 23:30:05 np0005480824 eager_spence[153398]:            "type": "block",
Oct 10 23:30:05 np0005480824 eager_spence[153398]:            "vg_name": "ceph_vg1"
Oct 10 23:30:05 np0005480824 eager_spence[153398]:        }
Oct 10 23:30:05 np0005480824 eager_spence[153398]:    ],
Oct 10 23:30:05 np0005480824 eager_spence[153398]:    "2": [
Oct 10 23:30:05 np0005480824 eager_spence[153398]:        {
Oct 10 23:30:05 np0005480824 eager_spence[153398]:            "devices": [
Oct 10 23:30:05 np0005480824 eager_spence[153398]:                "/dev/loop5"
Oct 10 23:30:05 np0005480824 eager_spence[153398]:            ],
Oct 10 23:30:05 np0005480824 eager_spence[153398]:            "lv_name": "ceph_lv2",
Oct 10 23:30:05 np0005480824 eager_spence[153398]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:30:05 np0005480824 eager_spence[153398]:            "lv_size": "21470642176",
Oct 10 23:30:05 np0005480824 eager_spence[153398]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e86945e8-6909-4584-9098-cee0dfe9add4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:30:05 np0005480824 eager_spence[153398]:            "lv_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:30:05 np0005480824 eager_spence[153398]:            "name": "ceph_lv2",
Oct 10 23:30:05 np0005480824 eager_spence[153398]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:30:05 np0005480824 eager_spence[153398]:            "tags": {
Oct 10 23:30:05 np0005480824 eager_spence[153398]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:30:05 np0005480824 eager_spence[153398]:                "ceph.block_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:30:05 np0005480824 eager_spence[153398]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:30:05 np0005480824 eager_spence[153398]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:30:05 np0005480824 eager_spence[153398]:                "ceph.cluster_name": "ceph",
Oct 10 23:30:05 np0005480824 eager_spence[153398]:                "ceph.crush_device_class": "",
Oct 10 23:30:05 np0005480824 eager_spence[153398]:                "ceph.encrypted": "0",
Oct 10 23:30:05 np0005480824 eager_spence[153398]:                "ceph.osd_fsid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:30:05 np0005480824 eager_spence[153398]:                "ceph.osd_id": "2",
Oct 10 23:30:05 np0005480824 eager_spence[153398]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:30:05 np0005480824 eager_spence[153398]:                "ceph.type": "block",
Oct 10 23:30:05 np0005480824 eager_spence[153398]:                "ceph.vdo": "0"
Oct 10 23:30:05 np0005480824 eager_spence[153398]:            },
Oct 10 23:30:05 np0005480824 eager_spence[153398]:            "type": "block",
Oct 10 23:30:05 np0005480824 eager_spence[153398]:            "vg_name": "ceph_vg2"
Oct 10 23:30:05 np0005480824 eager_spence[153398]:        }
Oct 10 23:30:05 np0005480824 eager_spence[153398]:    ]
Oct 10 23:30:05 np0005480824 eager_spence[153398]: }
Oct 10 23:30:05 np0005480824 systemd[1]: libpod-99b9f8dfbf0b4dd8e3b2565398cf23610707888b2b55f80dea9bc6e06ed55178.scope: Deactivated successfully.
Oct 10 23:30:05 np0005480824 podman[153359]: 2025-10-11 03:30:05.840495638 +0000 UTC m=+0.913650888 container died 99b9f8dfbf0b4dd8e3b2565398cf23610707888b2b55f80dea9bc6e06ed55178 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_spence, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 10 23:30:06 np0005480824 systemd[1]: session-46.scope: Deactivated successfully.
Oct 10 23:30:06 np0005480824 systemd[1]: session-46.scope: Consumed 1min 1.439s CPU time.
Oct 10 23:30:06 np0005480824 systemd-logind[782]: Session 46 logged out. Waiting for processes to exit.
Oct 10 23:30:06 np0005480824 systemd-logind[782]: Removed session 46.
Oct 10 23:30:06 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v433: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:30:06 np0005480824 systemd[1]: var-lib-containers-storage-overlay-9fd16a397ebc1e1b82e44a6307be84364a05936ef21f1b5b3aa4caec2ef736bb-merged.mount: Deactivated successfully.
Oct 10 23:30:06 np0005480824 podman[153359]: 2025-10-11 03:30:06.894150357 +0000 UTC m=+1.967305547 container remove 99b9f8dfbf0b4dd8e3b2565398cf23610707888b2b55f80dea9bc6e06ed55178 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_spence, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 10 23:30:06 np0005480824 systemd[1]: libpod-conmon-99b9f8dfbf0b4dd8e3b2565398cf23610707888b2b55f80dea9bc6e06ed55178.scope: Deactivated successfully.
Oct 10 23:30:07 np0005480824 podman[153698]: 2025-10-11 03:30:07.602581155 +0000 UTC m=+0.065871724 container create 63d9a54c3294707a8fd9a1162a9cc6419ade03e5365d02fab3c9296d453737e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_almeida, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:30:07 np0005480824 systemd[1]: Started libpod-conmon-63d9a54c3294707a8fd9a1162a9cc6419ade03e5365d02fab3c9296d453737e6.scope.
Oct 10 23:30:07 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:30:07 np0005480824 podman[153698]: 2025-10-11 03:30:07.582372156 +0000 UTC m=+0.045662765 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:30:07 np0005480824 podman[153698]: 2025-10-11 03:30:07.693872801 +0000 UTC m=+0.157163460 container init 63d9a54c3294707a8fd9a1162a9cc6419ade03e5365d02fab3c9296d453737e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Oct 10 23:30:07 np0005480824 podman[153698]: 2025-10-11 03:30:07.705548668 +0000 UTC m=+0.168839257 container start 63d9a54c3294707a8fd9a1162a9cc6419ade03e5365d02fab3c9296d453737e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:30:07 np0005480824 podman[153698]: 2025-10-11 03:30:07.709567013 +0000 UTC m=+0.172857682 container attach 63d9a54c3294707a8fd9a1162a9cc6419ade03e5365d02fab3c9296d453737e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_almeida, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:30:07 np0005480824 romantic_almeida[153714]: 167 167
Oct 10 23:30:07 np0005480824 systemd[1]: libpod-63d9a54c3294707a8fd9a1162a9cc6419ade03e5365d02fab3c9296d453737e6.scope: Deactivated successfully.
Oct 10 23:30:07 np0005480824 podman[153719]: 2025-10-11 03:30:07.755855351 +0000 UTC m=+0.029197733 container died 63d9a54c3294707a8fd9a1162a9cc6419ade03e5365d02fab3c9296d453737e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_almeida, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:30:07 np0005480824 systemd[1]: var-lib-containers-storage-overlay-09a4e083686eeaf65b430e4d3cbcf052d4c1afca3ec9a0c95568141211cdbcb5-merged.mount: Deactivated successfully.
Oct 10 23:30:07 np0005480824 podman[153719]: 2025-10-11 03:30:07.809432453 +0000 UTC m=+0.082774835 container remove 63d9a54c3294707a8fd9a1162a9cc6419ade03e5365d02fab3c9296d453737e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_almeida, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 10 23:30:07 np0005480824 systemd[1]: libpod-conmon-63d9a54c3294707a8fd9a1162a9cc6419ade03e5365d02fab3c9296d453737e6.scope: Deactivated successfully.
Oct 10 23:30:08 np0005480824 podman[153741]: 2025-10-11 03:30:08.035414954 +0000 UTC m=+0.054378271 container create 81329a0028adbdda781e144d51c23e49a53295d5390165c803e26f6e5ab789c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_clarke, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 10 23:30:08 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v434: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:30:08 np0005480824 systemd[1]: Started libpod-conmon-81329a0028adbdda781e144d51c23e49a53295d5390165c803e26f6e5ab789c9.scope.
Oct 10 23:30:08 np0005480824 podman[153741]: 2025-10-11 03:30:08.008183768 +0000 UTC m=+0.027147095 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:30:08 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:30:08 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5295da6c919c72865631dcf6d2b670641281c68b47b3edfec4d6e30648f22659/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:30:08 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5295da6c919c72865631dcf6d2b670641281c68b47b3edfec4d6e30648f22659/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:30:08 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5295da6c919c72865631dcf6d2b670641281c68b47b3edfec4d6e30648f22659/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:30:08 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5295da6c919c72865631dcf6d2b670641281c68b47b3edfec4d6e30648f22659/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:30:08 np0005480824 podman[153741]: 2025-10-11 03:30:08.140418655 +0000 UTC m=+0.159381992 container init 81329a0028adbdda781e144d51c23e49a53295d5390165c803e26f6e5ab789c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 10 23:30:08 np0005480824 podman[153741]: 2025-10-11 03:30:08.148546098 +0000 UTC m=+0.167509425 container start 81329a0028adbdda781e144d51c23e49a53295d5390165c803e26f6e5ab789c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_clarke, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:30:08 np0005480824 podman[153741]: 2025-10-11 03:30:08.152979813 +0000 UTC m=+0.171943110 container attach 81329a0028adbdda781e144d51c23e49a53295d5390165c803e26f6e5ab789c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 10 23:30:08 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:30:09 np0005480824 silly_clarke[153758]: {
Oct 10 23:30:09 np0005480824 silly_clarke[153758]:    "1d0d82ce-20ea-470d-959e-f67202028a60": {
Oct 10 23:30:09 np0005480824 silly_clarke[153758]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:30:09 np0005480824 silly_clarke[153758]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 10 23:30:09 np0005480824 silly_clarke[153758]:        "osd_id": 0,
Oct 10 23:30:09 np0005480824 silly_clarke[153758]:        "osd_uuid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:30:09 np0005480824 silly_clarke[153758]:        "type": "bluestore"
Oct 10 23:30:09 np0005480824 silly_clarke[153758]:    },
Oct 10 23:30:09 np0005480824 silly_clarke[153758]:    "6875119e-c210-4ad1-aca9-6a8084a5ecc8": {
Oct 10 23:30:09 np0005480824 silly_clarke[153758]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:30:09 np0005480824 silly_clarke[153758]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 10 23:30:09 np0005480824 silly_clarke[153758]:        "osd_id": 1,
Oct 10 23:30:09 np0005480824 silly_clarke[153758]:        "osd_uuid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:30:09 np0005480824 silly_clarke[153758]:        "type": "bluestore"
Oct 10 23:30:09 np0005480824 silly_clarke[153758]:    },
Oct 10 23:30:09 np0005480824 silly_clarke[153758]:    "e86945e8-6909-4584-9098-cee0dfe9add4": {
Oct 10 23:30:09 np0005480824 silly_clarke[153758]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:30:09 np0005480824 silly_clarke[153758]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 10 23:30:09 np0005480824 silly_clarke[153758]:        "osd_id": 2,
Oct 10 23:30:09 np0005480824 silly_clarke[153758]:        "osd_uuid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:30:09 np0005480824 silly_clarke[153758]:        "type": "bluestore"
Oct 10 23:30:09 np0005480824 silly_clarke[153758]:    }
Oct 10 23:30:09 np0005480824 silly_clarke[153758]: }
Oct 10 23:30:09 np0005480824 systemd[1]: libpod-81329a0028adbdda781e144d51c23e49a53295d5390165c803e26f6e5ab789c9.scope: Deactivated successfully.
Oct 10 23:30:09 np0005480824 podman[153741]: 2025-10-11 03:30:09.168488597 +0000 UTC m=+1.187451884 container died 81329a0028adbdda781e144d51c23e49a53295d5390165c803e26f6e5ab789c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:30:09 np0005480824 systemd[1]: libpod-81329a0028adbdda781e144d51c23e49a53295d5390165c803e26f6e5ab789c9.scope: Consumed 1.029s CPU time.
Oct 10 23:30:09 np0005480824 systemd[1]: var-lib-containers-storage-overlay-5295da6c919c72865631dcf6d2b670641281c68b47b3edfec4d6e30648f22659-merged.mount: Deactivated successfully.
Oct 10 23:30:09 np0005480824 podman[153741]: 2025-10-11 03:30:09.247803739 +0000 UTC m=+1.266767026 container remove 81329a0028adbdda781e144d51c23e49a53295d5390165c803e26f6e5ab789c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_clarke, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 10 23:30:09 np0005480824 systemd[1]: libpod-conmon-81329a0028adbdda781e144d51c23e49a53295d5390165c803e26f6e5ab789c9.scope: Deactivated successfully.
Oct 10 23:30:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 10 23:30:09 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:30:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 10 23:30:09 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:30:09 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 4939155b-51b2-476e-8d05-b0186a506fe4 does not exist
Oct 10 23:30:09 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev b6e59b59-c24b-4f76-996c-e8df40a26c36 does not exist
Oct 10 23:30:10 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v435: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:30:10 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:30:10 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:30:11 np0005480824 systemd-logind[782]: New session 48 of user zuul.
Oct 10 23:30:11 np0005480824 systemd[1]: Started Session 48 of User zuul.
Oct 10 23:30:12 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v436: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:30:12 np0005480824 python3.9[154005]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 23:30:13 np0005480824 python3.9[154161]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:30:13 np0005480824 systemd[1]: Stopping User Manager for UID 0...
Oct 10 23:30:13 np0005480824 systemd[152720]: Activating special unit Exit the Session...
Oct 10 23:30:13 np0005480824 systemd[152720]: Stopped target Main User Target.
Oct 10 23:30:13 np0005480824 systemd[152720]: Stopped target Basic System.
Oct 10 23:30:13 np0005480824 systemd[152720]: Stopped target Paths.
Oct 10 23:30:13 np0005480824 systemd[152720]: Stopped target Sockets.
Oct 10 23:30:13 np0005480824 systemd[152720]: Stopped target Timers.
Oct 10 23:30:13 np0005480824 systemd[152720]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 10 23:30:13 np0005480824 systemd[152720]: Closed D-Bus User Message Bus Socket.
Oct 10 23:30:13 np0005480824 systemd[152720]: Stopped Create User's Volatile Files and Directories.
Oct 10 23:30:13 np0005480824 systemd[152720]: Removed slice User Application Slice.
Oct 10 23:30:13 np0005480824 systemd[152720]: Reached target Shutdown.
Oct 10 23:30:13 np0005480824 systemd[152720]: Finished Exit the Session.
Oct 10 23:30:13 np0005480824 systemd[152720]: Reached target Exit the Session.
Oct 10 23:30:13 np0005480824 systemd[1]: user@0.service: Deactivated successfully.
Oct 10 23:30:13 np0005480824 systemd[1]: Stopped User Manager for UID 0.
Oct 10 23:30:13 np0005480824 systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct 10 23:30:13 np0005480824 systemd[1]: run-user-0.mount: Deactivated successfully.
Oct 10 23:30:13 np0005480824 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct 10 23:30:13 np0005480824 systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct 10 23:30:13 np0005480824 systemd[1]: Removed slice User Slice of UID 0.
Oct 10 23:30:13 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:30:14 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v437: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:30:14 np0005480824 python3.9[154315]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:30:15 np0005480824 python3.9[154467]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:30:15 np0005480824 python3.9[154619]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:30:16 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v438: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:30:16 np0005480824 python3.9[154771]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:30:17 np0005480824 python3.9[154921]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 23:30:18 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v439: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:30:18 np0005480824 python3.9[155073]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct 10 23:30:18 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:30:19 np0005480824 python3.9[155223]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:30:20 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v440: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:30:20 np0005480824 python3.9[155344]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760153419.2261643-86-22859180467053/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:30:21 np0005480824 python3.9[155494]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:30:22 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v441: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:30:22 np0005480824 python3.9[155616]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760153420.9205837-101-221177510229707/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:30:23 np0005480824 python3.9[155768]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 23:30:23 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:30:24 np0005480824 python3.9[155852]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 23:30:24 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v442: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:30:26 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v443: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:30:26 np0005480824 python3.9[156005]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 10 23:30:27 np0005480824 python3.9[156158]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:30:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Optimize plan auto_2025-10-11_03:30:27
Oct 10 23:30:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 23:30:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] do_upmap
Oct 10 23:30:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] pools ['vms', 'default.rgw.meta', '.rgw.root', 'backups', 'cephfs.cephfs.data', '.mgr', 'default.rgw.control', 'volumes', 'cephfs.cephfs.meta', 'images', 'default.rgw.log']
Oct 10 23:30:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] prepared 0/10 changes
Oct 10 23:30:27 np0005480824 python3.9[156279]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760153426.6763928-138-259275905097964/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:30:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:30:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:30:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:30:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:30:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:30:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:30:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 23:30:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 23:30:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:30:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:30:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:30:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:30:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:30:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:30:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:30:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:30:28 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v444: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:30:28 np0005480824 python3.9[156429]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:30:28 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:30:29 np0005480824 python3.9[156550]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760153428.0190823-138-98083233399068/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:30:29 np0005480824 ceph-osd[88325]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 10 23:30:29 np0005480824 ceph-osd[88325]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 5424 writes, 23K keys, 5424 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 5424 writes, 778 syncs, 6.97 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5424 writes, 23K keys, 5424 commit groups, 1.0 writes per commit group, ingest: 18.49 MB, 0.03 MB/s#012Interval WAL: 5424 writes, 778 syncs, 6.97 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.033       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.033       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.033       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55b254ea91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55b254ea91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Oct 10 23:30:30 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v445: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:30:30 np0005480824 python3.9[156700]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:30:31 np0005480824 python3.9[156821]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760153429.9156818-182-122121352680228/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:30:31 np0005480824 python3.9[156971]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:30:32 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v446: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:30:32 np0005480824 python3.9[157092]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760153431.2377427-182-163627707291895/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:30:33 np0005480824 python3.9[157242]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 23:30:33 np0005480824 ovn_controller[152667]: 2025-10-11T03:30:33Z|00025|memory|INFO|16512 kB peak resident set size after 30.1 seconds
Oct 10 23:30:33 np0005480824 ovn_controller[152667]: 2025-10-11T03:30:33Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Oct 10 23:30:33 np0005480824 podman[157368]: 2025-10-11 03:30:33.642641298 +0000 UTC m=+0.100033336 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 10 23:30:33 np0005480824 python3.9[157419]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:30:33 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:30:34 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v447: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:30:34 np0005480824 python3.9[157575]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:30:34 np0005480824 python3.9[157653]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:30:35 np0005480824 ceph-osd[89401]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 10 23:30:35 np0005480824 ceph-osd[89401]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 6725 writes, 28K keys, 6725 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 6725 writes, 1131 syncs, 5.95 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 6725 writes, 28K keys, 6725 commit groups, 1.0 writes per commit group, ingest: 19.57 MB, 0.03 MB/s#012Interval WAL: 6725 writes, 1131 syncs, 5.95 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55dbdc4a91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55dbdc4a91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 
seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
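[Editor's note] The multi-line RocksDB stats dumps above are stored by rsyslog/journald with control characters escaped as three-digit octal sequences: `#012` is a newline (octal 012 = LF) and `#011` a tab. A minimal sketch for restoring the original multi-line layout when post-processing logs like these:

```python
import re

def decode_syslog_escapes(line: str) -> str:
    """Turn rsyslog octal escapes such as #012 (newline) and #011 (tab)
    back into the control characters they encode."""
    return re.sub(r"#([0-7]{3})", lambda m: chr(int(m.group(1), 8)), line)

# Example: a fragment of the DB Stats dump above becomes two real lines.
decoded = decode_syslog_escapes("** DB Stats **#012Uptime(secs): 600.1 total")
```

Note that the pattern requires exactly three octal digits, so incidental `#` characters in the dump (e.g. the cache tag `BinnedLRUCache@0x55dbdc4a91f0#2`) are left untouched.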
Oct 10 23:30:35 np0005480824 python3.9[157805]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:30:36 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v448: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
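[Editor's note] The `pgmap vN: …` debug lines from ceph-mgr recur every couple of seconds throughout this excerpt and all carry the same fields: map version, PG count, PG state summary, stored data, raw usage, and available/total capacity. A hedged parsing sketch (the regex and field names are my own, not a Ceph API) for extracting them:

```python
import re
from typing import Optional

# Field layout inferred from the pgmap lines in this log; the state summary
# can in general be a comma-separated list, so match everything up to ';'.
PGMAP_RE = re.compile(
    r"pgmap v(?P<version>\d+): (?P<pgs>\d+) pgs: (?P<states>[^;]+); "
    r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
    r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail"
)

def parse_pgmap(line: str) -> Optional[dict]:
    """Return the pgmap fields as a dict of strings, or None if absent."""
    m = PGMAP_RE.search(line)
    return m.groupdict() if m else None
```

Applied to the record above, this yields version `448`, `321` PGs all `active+clean`, and a `60 GiB` cluster that is nearly empty.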
Oct 10 23:30:36 np0005480824 python3.9[157883]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:30:36 np0005480824 python3.9[158035]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:30:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 23:30:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:30:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 23:30:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:30:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:30:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:30:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:30:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:30:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:30:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:30:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:30:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:30:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 23:30:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:30:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:30:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:30:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 10 23:30:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:30:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 23:30:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:30:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:30:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:30:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
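[Editor's note] Each pg_autoscaler line above derives a raw PG target from the pool's share of cluster capacity. The printed values are consistent with raw_target = capacity_ratio × bias × (target PGs per OSD × OSD count), assuming the Ceph default `mon_target_pg_per_osd = 100` and a 3-OSD cluster — both assumptions on my part, inferred from the numbers rather than stated in the log:

```python
# Illustrative reconstruction of the pg_autoscaler "pg target" arithmetic.
# Both constants are assumptions, not read from the log itself.
TARGET_PG_PER_OSD = 100  # Ceph default mon_target_pg_per_osd (assumed)
NUM_OSDS = 3             # assumed; consistent with the logged targets

def raw_pg_target(capacity_ratio: float, bias: float) -> float:
    """Pool's share of capacity, scaled by its bias and the cluster PG budget."""
    return capacity_ratio * bias * TARGET_PG_PER_OSD * NUM_OSDS

# '.mgr' pool above: using 7.185749983720779e-06 of space, bias 1.0
# reproduces the logged pg target of ~0.0021557
mgr_target = raw_pg_target(7.185749983720779e-06, 1.0)
```

The "quantized to 1/16/32" step additionally applies power-of-two rounding, per-pool minimums, and damping against the current pg_num; that logic is omitted here.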
Oct 10 23:30:37 np0005480824 python3.9[158187]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:30:38 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v449: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:30:38 np0005480824 python3.9[158265]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:30:38 np0005480824 python3.9[158417]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:30:38 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:30:39 np0005480824 python3.9[158495]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:30:39 np0005480824 python3.9[158647]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 23:30:39 np0005480824 systemd[1]: Reloading.
Oct 10 23:30:40 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:30:40 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:30:40 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v450: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:30:40 np0005480824 ceph-osd[90443]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 10 23:30:40 np0005480824 ceph-osd[90443]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 5479 writes, 23K keys, 5479 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 5479 writes, 779 syncs, 7.03 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5479 writes, 23K keys, 5479 commit groups, 1.0 writes per commit group, ingest: 18.51 MB, 0.03 MB/s#012Interval WAL: 5479 writes, 779 syncs, 7.03 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5607b0be0dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5607b0be0dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 2 last_copies: 8 last_secs: 2.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 
seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Oct 10 23:30:41 np0005480824 python3.9[158836]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:30:41 np0005480824 python3.9[158914]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:30:42 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v451: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:30:42 np0005480824 python3.9[159066]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:30:42 np0005480824 python3.9[159144]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:30:42 np0005480824 ceph-mgr[74617]: [devicehealth INFO root] Check health
Oct 10 23:30:43 np0005480824 python3.9[159296]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 23:30:43 np0005480824 systemd[1]: Reloading.
Oct 10 23:30:43 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:30:43 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:30:43 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:30:44 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v452: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:30:44 np0005480824 systemd[1]: Starting Create netns directory...
Oct 10 23:30:44 np0005480824 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 10 23:30:44 np0005480824 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 10 23:30:44 np0005480824 systemd[1]: Finished Create netns directory.
Oct 10 23:30:44 np0005480824 python3.9[159489]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:30:45 np0005480824 python3.9[159641]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:30:46 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v453: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:30:46 np0005480824 python3.9[159764]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760153445.2049377-333-73937814290535/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:30:47 np0005480824 python3.9[159916]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:30:47 np0005480824 python3.9[160068]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:30:48 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v454: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:30:48 np0005480824 python3.9[160191]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760153447.3599663-358-231297057467184/.source.json _original_basename=.lpqufdan follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:30:48 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:30:49 np0005480824 python3.9[160343]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:30:50 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v455: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:30:51 np0005480824 python3.9[160770]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Oct 10 23:30:52 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v456: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:30:52 np0005480824 python3.9[160922]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 10 23:30:53 np0005480824 python3.9[161074]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 10 23:30:53 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:30:54 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v457: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:30:54 np0005480824 python3[161254]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 10 23:30:56 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v458: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:30:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:30:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:30:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:30:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:30:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:30:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:30:58 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v459: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:30:58 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:31:00 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v460: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:31:02 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v461: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:31:03 np0005480824 podman[161267]: 2025-10-11 03:31:03.528016973 +0000 UTC m=+8.727951211 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 10 23:31:03 np0005480824 podman[161400]: 2025-10-11 03:31:03.801225896 +0000 UTC m=+0.106091371 container create dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 10 23:31:03 np0005480824 podman[161400]: 2025-10-11 03:31:03.732221061 +0000 UTC m=+0.037086596 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 10 23:31:03 np0005480824 python3[161254]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 10 23:31:03 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:31:04 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v462: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:31:04 np0005480824 podman[161432]: 2025-10-11 03:31:04.106648687 +0000 UTC m=+0.151138474 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 10 23:31:04 np0005480824 python3.9[161617]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 23:31:05 np0005480824 python3.9[161771]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:31:05 np0005480824 python3.9[161847]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 23:31:06 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v463: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:31:06 np0005480824 python3.9[161998]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760153465.91418-446-33736957871442/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:31:07 np0005480824 python3.9[162074]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 10 23:31:07 np0005480824 systemd[1]: Reloading.
Oct 10 23:31:07 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:31:07 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:31:08 np0005480824 python3.9[162185]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 23:31:08 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v464: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:31:08 np0005480824 systemd[1]: Reloading.
Oct 10 23:31:08 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:31:08 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:31:08 np0005480824 systemd[1]: Starting ovn_metadata_agent container...
Oct 10 23:31:08 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:31:08 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b7797612072be3e382f1201758259cdadf087660dcd7a1269c4ba7c8e105662/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Oct 10 23:31:08 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b7797612072be3e382f1201758259cdadf087660dcd7a1269c4ba7c8e105662/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 10 23:31:08 np0005480824 systemd[1]: Started /usr/bin/podman healthcheck run dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab.
Oct 10 23:31:08 np0005480824 podman[162225]: 2025-10-11 03:31:08.590071514 +0000 UTC m=+0.149612507 container init dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:31:08 np0005480824 ovn_metadata_agent[162240]: + sudo -E kolla_set_configs
Oct 10 23:31:08 np0005480824 podman[162225]: 2025-10-11 03:31:08.624107206 +0000 UTC m=+0.183648139 container start dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:31:08 np0005480824 edpm-start-podman-container[162225]: ovn_metadata_agent
Oct 10 23:31:08 np0005480824 ovn_metadata_agent[162240]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 10 23:31:08 np0005480824 ovn_metadata_agent[162240]: INFO:__main__:Validating config file
Oct 10 23:31:08 np0005480824 ovn_metadata_agent[162240]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 10 23:31:08 np0005480824 edpm-start-podman-container[162224]: Creating additional drop-in dependency for "ovn_metadata_agent" (dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab)
Oct 10 23:31:08 np0005480824 ovn_metadata_agent[162240]: INFO:__main__:Copying service configuration files
Oct 10 23:31:08 np0005480824 ovn_metadata_agent[162240]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Oct 10 23:31:08 np0005480824 ovn_metadata_agent[162240]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Oct 10 23:31:08 np0005480824 podman[162247]: 2025-10-11 03:31:08.699538334 +0000 UTC m=+0.064068738 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, 
org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 10 23:31:08 np0005480824 ovn_metadata_agent[162240]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Oct 10 23:31:08 np0005480824 ovn_metadata_agent[162240]: INFO:__main__:Writing out command to execute
Oct 10 23:31:08 np0005480824 ovn_metadata_agent[162240]: INFO:__main__:Setting permission for /var/lib/neutron
Oct 10 23:31:08 np0005480824 ovn_metadata_agent[162240]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Oct 10 23:31:08 np0005480824 ovn_metadata_agent[162240]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Oct 10 23:31:08 np0005480824 ovn_metadata_agent[162240]: INFO:__main__:Setting permission for /var/lib/neutron/external
Oct 10 23:31:08 np0005480824 ovn_metadata_agent[162240]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Oct 10 23:31:08 np0005480824 ovn_metadata_agent[162240]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Oct 10 23:31:08 np0005480824 ovn_metadata_agent[162240]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Oct 10 23:31:08 np0005480824 ovn_metadata_agent[162240]: ++ cat /run_command
Oct 10 23:31:08 np0005480824 ovn_metadata_agent[162240]: + CMD=neutron-ovn-metadata-agent
Oct 10 23:31:08 np0005480824 ovn_metadata_agent[162240]: + ARGS=
Oct 10 23:31:08 np0005480824 ovn_metadata_agent[162240]: + sudo kolla_copy_cacerts
Oct 10 23:31:08 np0005480824 systemd[1]: Reloading.
Oct 10 23:31:08 np0005480824 ovn_metadata_agent[162240]: + [[ ! -n '' ]]
Oct 10 23:31:08 np0005480824 ovn_metadata_agent[162240]: + . kolla_extend_start
Oct 10 23:31:08 np0005480824 ovn_metadata_agent[162240]: Running command: 'neutron-ovn-metadata-agent'
Oct 10 23:31:08 np0005480824 ovn_metadata_agent[162240]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Oct 10 23:31:08 np0005480824 ovn_metadata_agent[162240]: + umask 0022
Oct 10 23:31:08 np0005480824 ovn_metadata_agent[162240]: + exec neutron-ovn-metadata-agent
Oct 10 23:31:08 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:31:08 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:31:08 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:31:09 np0005480824 systemd[1]: Started ovn_metadata_agent container.
Oct 10 23:31:09 np0005480824 systemd[1]: session-48.scope: Deactivated successfully.
Oct 10 23:31:09 np0005480824 systemd[1]: session-48.scope: Consumed 57.579s CPU time.
Oct 10 23:31:09 np0005480824 systemd-logind[782]: Session 48 logged out. Waiting for processes to exit.
Oct 10 23:31:09 np0005480824 systemd-logind[782]: Removed session 48.
Oct 10 23:31:10 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v465: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:31:10 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 10 23:31:10 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 10 23:31:10 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:31:10 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:31:10 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 10 23:31:10 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:31:10 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 10 23:31:10 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:31:10 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev fac4357a-7625-4107-a1ed-7f00f5d1e08c does not exist
Oct 10 23:31:10 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 674ffb26-40f3-4458-a82b-5a10e432786e does not exist
Oct 10 23:31:10 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev c4422e92-ea12-4a5e-b19d-2e28c5e9b1fe does not exist
Oct 10 23:31:10 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 10 23:31:10 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 23:31:10 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 10 23:31:10 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:31:10 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:31:10 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.434 162245 INFO neutron.common.config [-] Logging enabled!#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.434 162245 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.434 162245 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.434 162245 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.434 162245 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.434 162245 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.435 162245 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.435 162245 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.435 162245 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.435 162245 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.435 162245 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.435 162245 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.435 162245 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.435 162245 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.436 162245 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.436 162245 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.436 162245 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.436 162245 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.436 162245 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.436 162245 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.436 162245 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.436 162245 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.436 162245 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.436 162245 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.437 162245 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.437 162245 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.437 162245 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.437 162245 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.437 162245 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.437 162245 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.437 162245 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.437 162245 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.437 162245 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.437 162245 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.438 162245 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.438 162245 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.438 162245 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.438 162245 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.438 162245 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.438 162245 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.438 162245 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.438 162245 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.439 162245 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.439 162245 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.439 162245 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.439 162245 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.439 162245 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.439 162245 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.439 162245 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.439 162245 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.439 162245 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.439 162245 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.439 162245 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.440 162245 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.440 162245 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.440 162245 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.440 162245 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.440 162245 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.440 162245 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.440 162245 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.440 162245 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.440 162245 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.441 162245 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.441 162245 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.441 162245 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.441 162245 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.441 162245 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.441 162245 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.441 162245 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.441 162245 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.442 162245 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.442 162245 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.442 162245 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.442 162245 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.442 162245 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.442 162245 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.442 162245 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.442 162245 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.443 162245 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.443 162245 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.443 162245 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.443 162245 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.443 162245 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.443 162245 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.443 162245 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.443 162245 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.443 162245 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.444 162245 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.444 162245 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.444 162245 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.444 162245 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.444 162245 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.444 162245 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.444 162245 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.444 162245 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.444 162245 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.444 162245 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.445 162245 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.445 162245 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.445 162245 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.445 162245 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.445 162245 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.445 162245 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.445 162245 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.445 162245 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.445 162245 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.445 162245 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.446 162245 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.446 162245 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.446 162245 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.446 162245 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.446 162245 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.446 162245 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.446 162245 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.447 162245 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.447 162245 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.447 162245 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.447 162245 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.447 162245 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.447 162245 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.447 162245 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.447 162245 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.447 162245 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.448 162245 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.448 162245 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.448 162245 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.448 162245 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.448 162245 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.448 162245 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.448 162245 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.448 162245 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.448 162245 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.449 162245 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.449 162245 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.449 162245 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.449 162245 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.449 162245 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.449 162245 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.449 162245 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.449 162245 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.449 162245 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.450 162245 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.450 162245 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.450 162245 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.450 162245 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.450 162245 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.450 162245 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.450 162245 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.450 162245 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.450 162245 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.451 162245 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.451 162245 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.451 162245 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.451 162245 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.451 162245 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.451 162245 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.451 162245 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.451 162245 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.451 162245 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.452 162245 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.452 162245 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.452 162245 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.452 162245 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.452 162245 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.452 162245 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.452 162245 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.452 162245 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.452 162245 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.453 162245 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.453 162245 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.453 162245 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.453 162245 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.453 162245 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.453 162245 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.453 162245 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.453 162245 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.453 162245 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.454 162245 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.454 162245 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.454 162245 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.454 162245 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.454 162245 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.454 162245 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.454 162245 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.454 162245 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.454 162245 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.455 162245 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.455 162245 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.455 162245 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.455 162245 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.455 162245 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.455 162245 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.455 162245 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.455 162245 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.455 162245 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.456 162245 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.456 162245 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.456 162245 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.456 162245 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.456 162245 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.456 162245 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.456 162245 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.456 162245 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.456 162245 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.457 162245 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.457 162245 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.457 162245 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.457 162245 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.457 162245 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.457 162245 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.457 162245 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.457 162245 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.457 162245 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.457 162245 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.458 162245 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.458 162245 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.458 162245 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.458 162245 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.458 162245 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.458 162245 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.458 162245 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.458 162245 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.458 162245 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.459 162245 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.459 162245 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.459 162245 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.459 162245 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.459 162245 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.459 162245 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.459 162245 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.459 162245 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.459 162245 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.459 162245 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.460 162245 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.460 162245 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.460 162245 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.460 162245 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.460 162245 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.460 162245 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.460 162245 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.460 162245 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.460 162245 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.461 162245 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.461 162245 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.461 162245 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.461 162245 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.461 162245 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.461 162245 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.461 162245 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.461 162245 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.461 162245 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.461 162245 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.462 162245 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.462 162245 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.462 162245 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.462 162245 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.462 162245 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.462 162245 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.462 162245 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.462 162245 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.463 162245 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.463 162245 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.463 162245 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.463 162245 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.463 162245 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.463 162245 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.463 162245 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.463 162245 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.463 162245 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.464 162245 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.464 162245 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.464 162245 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.464 162245 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.464 162245 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.464 162245 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.464 162245 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.464 162245 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.464 162245 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.464 162245 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.465 162245 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.465 162245 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.465 162245 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.465 162245 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.465 162245 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.465 162245 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.465 162245 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.465 162245 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.465 162245 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.466 162245 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.466 162245 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.466 162245 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.466 162245 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.466 162245 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.466 162245 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.466 162245 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.466 162245 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.466 162245 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.467 162245 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.467 162245 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.467 162245 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.467 162245 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.467 162245 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.467 162245 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.467 162245 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.475 162245 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.475 162245 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.475 162245 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.476 162245 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.476 162245 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m
Oct 10 23:31:10 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 10 23:31:10 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:31:10 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:31:10 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.488 162245 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 14b06507-d00b-4e27-a47d-46a5c2644635 (UUID: 14b06507-d00b-4e27-a47d-46a5c2644635) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.510 162245 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.511 162245 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.511 162245 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.511 162245 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.516 162245 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.521 162245 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.532 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '14b06507-d00b-4e27-a47d-46a5c2644635'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], external_ids={}, name=14b06507-d00b-4e27-a47d-46a5c2644635, nb_cfg_timestamp=1760153411575, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.533 162245 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f22d3a5e310>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.534 162245 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.534 162245 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.534 162245 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.535 162245 INFO oslo_service.service [-] Starting 1 workers#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.539 162245 DEBUG oslo_service.service [-] Started child 162583 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.542 162245 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpf884qsmb/privsep.sock']#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.542 162583 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-428839'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.566 162583 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.566 162583 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.566 162583 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.569 162583 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.575 162583 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Oct 10 23:31:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:10.580 162583 INFO eventlet.wsgi.server [-] (162583) wsgi starting up on http:/var/lib/neutron/metadata_proxy#033[00m
Oct 10 23:31:10 np0005480824 podman[162630]: 2025-10-11 03:31:10.872008357 +0000 UTC m=+0.040958547 container create c0600376c9f156bab991f4a89a0f1d51daa276dac961b763f9d99992163ffc9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 10 23:31:10 np0005480824 systemd[1]: Started libpod-conmon-c0600376c9f156bab991f4a89a0f1d51daa276dac961b763f9d99992163ffc9a.scope.
Oct 10 23:31:10 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:31:10 np0005480824 podman[162630]: 2025-10-11 03:31:10.852166585 +0000 UTC m=+0.021116805 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:31:10 np0005480824 podman[162630]: 2025-10-11 03:31:10.954449943 +0000 UTC m=+0.123400163 container init c0600376c9f156bab991f4a89a0f1d51daa276dac961b763f9d99992163ffc9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_stonebraker, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:31:10 np0005480824 podman[162630]: 2025-10-11 03:31:10.960965158 +0000 UTC m=+0.129915348 container start c0600376c9f156bab991f4a89a0f1d51daa276dac961b763f9d99992163ffc9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 10 23:31:10 np0005480824 podman[162630]: 2025-10-11 03:31:10.967211457 +0000 UTC m=+0.136161667 container attach c0600376c9f156bab991f4a89a0f1d51daa276dac961b763f9d99992163ffc9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_stonebraker, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Oct 10 23:31:10 np0005480824 funny_stonebraker[162645]: 167 167
Oct 10 23:31:10 np0005480824 systemd[1]: libpod-c0600376c9f156bab991f4a89a0f1d51daa276dac961b763f9d99992163ffc9a.scope: Deactivated successfully.
Oct 10 23:31:10 np0005480824 podman[162630]: 2025-10-11 03:31:10.979356717 +0000 UTC m=+0.148306917 container died c0600376c9f156bab991f4a89a0f1d51daa276dac961b763f9d99992163ffc9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:31:11 np0005480824 systemd[1]: var-lib-containers-storage-overlay-610b371bc8028475681a6743402f3f6dd624648bbcd0707e8ff0f776662c399d-merged.mount: Deactivated successfully.
Oct 10 23:31:11 np0005480824 podman[162630]: 2025-10-11 03:31:11.019983056 +0000 UTC m=+0.188933246 container remove c0600376c9f156bab991f4a89a0f1d51daa276dac961b763f9d99992163ffc9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_stonebraker, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:31:11 np0005480824 systemd[1]: libpod-conmon-c0600376c9f156bab991f4a89a0f1d51daa276dac961b763f9d99992163ffc9a.scope: Deactivated successfully.
Oct 10 23:31:11 np0005480824 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Oct 10 23:31:11 np0005480824 podman[162672]: 2025-10-11 03:31:11.18506564 +0000 UTC m=+0.036453440 container create 89514c871ff06bf7a736a8ebba5081a7f397e7d03b4292b1f4fa71ebfb5d3470 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True)
Oct 10 23:31:11 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:11.192 162245 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Oct 10 23:31:11 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:11.192 162245 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpf884qsmb/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Oct 10 23:31:11 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:11.086 162666 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Oct 10 23:31:11 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:11.092 162666 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Oct 10 23:31:11 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:11.095 162666 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Oct 10 23:31:11 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:11.095 162666 INFO oslo.privsep.daemon [-] privsep daemon running as pid 162666#033[00m
Oct 10 23:31:11 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:11.194 162666 DEBUG oslo.privsep.daemon [-] privsep: reply[79771154-887c-4af8-b934-3abb931a9599]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:31:11 np0005480824 systemd[1]: Started libpod-conmon-89514c871ff06bf7a736a8ebba5081a7f397e7d03b4292b1f4fa71ebfb5d3470.scope.
Oct 10 23:31:11 np0005480824 podman[162672]: 2025-10-11 03:31:11.168824874 +0000 UTC m=+0.020212684 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:31:11 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:31:11 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01a2e45fd7168df5928b24c4febcd9572f7c95bdbb58c8ec773fc11c539eac7c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:31:11 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01a2e45fd7168df5928b24c4febcd9572f7c95bdbb58c8ec773fc11c539eac7c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:31:11 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01a2e45fd7168df5928b24c4febcd9572f7c95bdbb58c8ec773fc11c539eac7c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:31:11 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01a2e45fd7168df5928b24c4febcd9572f7c95bdbb58c8ec773fc11c539eac7c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:31:11 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01a2e45fd7168df5928b24c4febcd9572f7c95bdbb58c8ec773fc11c539eac7c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 23:31:11 np0005480824 podman[162672]: 2025-10-11 03:31:11.315399168 +0000 UTC m=+0.166786998 container init 89514c871ff06bf7a736a8ebba5081a7f397e7d03b4292b1f4fa71ebfb5d3470 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_easley, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:31:11 np0005480824 podman[162672]: 2025-10-11 03:31:11.324220048 +0000 UTC m=+0.175607868 container start 89514c871ff06bf7a736a8ebba5081a7f397e7d03b4292b1f4fa71ebfb5d3470 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_easley, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 10 23:31:11 np0005480824 podman[162672]: 2025-10-11 03:31:11.331847 +0000 UTC m=+0.183234850 container attach 89514c871ff06bf7a736a8ebba5081a7f397e7d03b4292b1f4fa71ebfb5d3470 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 10 23:31:11 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:11.650 162666 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:31:11 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:11.650 162666 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:31:11 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:11.650 162666 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:31:12 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v466: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.167 162666 DEBUG oslo.privsep.daemon [-] privsep: reply[6667df0a-f21d-442e-9db5-1623b71be1dd]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.169 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=14b06507-d00b-4e27-a47d-46a5c2644635, column=external_ids, values=({'neutron:ovn-metadata-id': '6d1e1a3c-da55-5008-8698-e6f661700aa5'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.184 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=14b06507-d00b-4e27-a47d-46a5c2644635, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.191 162245 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.191 162245 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.191 162245 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.191 162245 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.191 162245 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.192 162245 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.192 162245 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.192 162245 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.193 162245 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.193 162245 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.193 162245 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.193 162245 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.194 162245 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.194 162245 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.194 162245 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.194 162245 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.194 162245 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.195 162245 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.195 162245 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.195 162245 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.195 162245 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.195 162245 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.195 162245 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.196 162245 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.196 162245 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.196 162245 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.196 162245 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.196 162245 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.197 162245 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.197 162245 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.197 162245 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.197 162245 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.197 162245 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.197 162245 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.198 162245 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.198 162245 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.199 162245 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.199 162245 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.199 162245 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.199 162245 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.199 162245 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.199 162245 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.200 162245 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.200 162245 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.200 162245 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.200 162245 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.200 162245 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.201 162245 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.201 162245 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.202 162245 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.202 162245 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.202 162245 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.202 162245 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.202 162245 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.202 162245 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.203 162245 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.203 162245 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.203 162245 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.203 162245 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.203 162245 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.203 162245 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.204 162245 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.204 162245 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.204 162245 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.204 162245 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.204 162245 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.205 162245 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.205 162245 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.205 162245 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.205 162245 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.205 162245 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.205 162245 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.206 162245 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.206 162245 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.206 162245 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.206 162245 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.206 162245 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.206 162245 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.207 162245 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.207 162245 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.207 162245 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.207 162245 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.207 162245 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.207 162245 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.208 162245 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.208 162245 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.208 162245 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.208 162245 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.208 162245 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.208 162245 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.209 162245 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.209 162245 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.209 162245 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.209 162245 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.209 162245 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.209 162245 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.209 162245 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.210 162245 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.210 162245 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.210 162245 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.210 162245 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.210 162245 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.210 162245 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.211 162245 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.211 162245 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.211 162245 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.211 162245 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.211 162245 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.212 162245 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.212 162245 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.212 162245 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.212 162245 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.212 162245 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.212 162245 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.213 162245 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.213 162245 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.213 162245 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.213 162245 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.213 162245 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.213 162245 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.214 162245 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.214 162245 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.214 162245 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.214 162245 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.214 162245 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.215 162245 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.215 162245 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.215 162245 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.215 162245 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.215 162245 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.215 162245 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.216 162245 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.216 162245 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.216 162245 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.216 162245 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.216 162245 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.216 162245 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.217 162245 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.217 162245 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.217 162245 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.217 162245 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.217 162245 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.217 162245 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.218 162245 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.218 162245 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.218 162245 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.218 162245 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.218 162245 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.218 162245 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.219 162245 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.219 162245 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.219 162245 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.219 162245 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.219 162245 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.219 162245 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.220 162245 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.220 162245 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.220 162245 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.220 162245 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.220 162245 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.220 162245 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.220 162245 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.221 162245 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.221 162245 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.221 162245 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.221 162245 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.221 162245 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.221 162245 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.222 162245 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.222 162245 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.222 162245 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.223 162245 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.223 162245 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.223 162245 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.223 162245 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.224 162245 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.224 162245 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.224 162245 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.224 162245 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.225 162245 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.225 162245 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.225 162245 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.225 162245 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.226 162245 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.226 162245 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.226 162245 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.226 162245 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.226 162245 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.226 162245 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.227 162245 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.227 162245 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.227 162245 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.227 162245 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.227 162245 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.227 162245 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.227 162245 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.227 162245 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.227 162245 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.228 162245 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.228 162245 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.228 162245 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.228 162245 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.228 162245 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.228 162245 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.228 162245 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.228 162245 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.228 162245 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.228 162245 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.229 162245 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.229 162245 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.229 162245 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.229 162245 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.229 162245 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.229 162245 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.229 162245 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.229 162245 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.229 162245 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.229 162245 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.230 162245 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.230 162245 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.230 162245 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.230 162245 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.230 162245 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.230 162245 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.230 162245 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.230 162245 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.230 162245 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.230 162245 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.231 162245 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.231 162245 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.231 162245 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.231 162245 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.231 162245 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.231 162245 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.231 162245 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.231 162245 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.231 162245 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.231 162245 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.231 162245 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.232 162245 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.232 162245 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.232 162245 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.232 162245 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.232 162245 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.232 162245 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.232 162245 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.232 162245 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.232 162245 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.233 162245 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.233 162245 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.233 162245 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.233 162245 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.233 162245 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.233 162245 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.233 162245 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.233 162245 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.233 162245 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.233 162245 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.234 162245 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.234 162245 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.234 162245 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.234 162245 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.234 162245 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.234 162245 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.234 162245 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.234 162245 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.234 162245 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.234 162245 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.235 162245 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.235 162245 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.235 162245 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.235 162245 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.235 162245 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.235 162245 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.235 162245 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.235 162245 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.235 162245 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.235 162245 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.236 162245 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.236 162245 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.236 162245 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.236 162245 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.236 162245 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.236 162245 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.236 162245 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.236 162245 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.236 162245 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.237 162245 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.237 162245 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.237 162245 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.237 162245 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.237 162245 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.237 162245 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.237 162245 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.237 162245 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.237 162245 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.237 162245 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.238 162245 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.238 162245 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.238 162245 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.238 162245 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.238 162245 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.238 162245 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:31:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:31:12.238 162245 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Oct 10 23:31:12 np0005480824 tender_easley[162692]: --> passed data devices: 0 physical, 3 LVM
Oct 10 23:31:12 np0005480824 tender_easley[162692]: --> relative data size: 1.0
Oct 10 23:31:12 np0005480824 tender_easley[162692]: --> All data devices are unavailable
Oct 10 23:31:12 np0005480824 systemd[1]: libpod-89514c871ff06bf7a736a8ebba5081a7f397e7d03b4292b1f4fa71ebfb5d3470.scope: Deactivated successfully.
Oct 10 23:31:12 np0005480824 systemd[1]: libpod-89514c871ff06bf7a736a8ebba5081a7f397e7d03b4292b1f4fa71ebfb5d3470.scope: Consumed 1.045s CPU time.
Oct 10 23:31:12 np0005480824 podman[162672]: 2025-10-11 03:31:12.464295448 +0000 UTC m=+1.315683288 container died 89514c871ff06bf7a736a8ebba5081a7f397e7d03b4292b1f4fa71ebfb5d3470 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_easley, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:31:12 np0005480824 systemd[1]: var-lib-containers-storage-overlay-01a2e45fd7168df5928b24c4febcd9572f7c95bdbb58c8ec773fc11c539eac7c-merged.mount: Deactivated successfully.
Oct 10 23:31:12 np0005480824 podman[162672]: 2025-10-11 03:31:12.535746652 +0000 UTC m=+1.387134462 container remove 89514c871ff06bf7a736a8ebba5081a7f397e7d03b4292b1f4fa71ebfb5d3470 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_easley, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:31:12 np0005480824 systemd[1]: libpod-conmon-89514c871ff06bf7a736a8ebba5081a7f397e7d03b4292b1f4fa71ebfb5d3470.scope: Deactivated successfully.
Oct 10 23:31:13 np0005480824 podman[162873]: 2025-10-11 03:31:13.14387776 +0000 UTC m=+0.048594119 container create 395f0b46fdc67420d61369332e4fdc640abae30b8bbafaaa888c2a87596f38d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 10 23:31:13 np0005480824 systemd[1]: Started libpod-conmon-395f0b46fdc67420d61369332e4fdc640abae30b8bbafaaa888c2a87596f38d2.scope.
Oct 10 23:31:13 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:31:13 np0005480824 podman[162873]: 2025-10-11 03:31:13.122382677 +0000 UTC m=+0.027099056 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:31:13 np0005480824 podman[162873]: 2025-10-11 03:31:13.222434723 +0000 UTC m=+0.127151112 container init 395f0b46fdc67420d61369332e4fdc640abae30b8bbafaaa888c2a87596f38d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_kowalevski, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:31:13 np0005480824 podman[162873]: 2025-10-11 03:31:13.229316097 +0000 UTC m=+0.134032476 container start 395f0b46fdc67420d61369332e4fdc640abae30b8bbafaaa888c2a87596f38d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:31:13 np0005480824 podman[162873]: 2025-10-11 03:31:13.232609966 +0000 UTC m=+0.137326355 container attach 395f0b46fdc67420d61369332e4fdc640abae30b8bbafaaa888c2a87596f38d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_kowalevski, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 10 23:31:13 np0005480824 heuristic_kowalevski[162890]: 167 167
Oct 10 23:31:13 np0005480824 systemd[1]: libpod-395f0b46fdc67420d61369332e4fdc640abae30b8bbafaaa888c2a87596f38d2.scope: Deactivated successfully.
Oct 10 23:31:13 np0005480824 podman[162873]: 2025-10-11 03:31:13.236118819 +0000 UTC m=+0.140835188 container died 395f0b46fdc67420d61369332e4fdc640abae30b8bbafaaa888c2a87596f38d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_kowalevski, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 10 23:31:13 np0005480824 systemd[1]: var-lib-containers-storage-overlay-10a5a5d993198709868f6b11065601dcec244687f359071a6eb7c47a2da2682a-merged.mount: Deactivated successfully.
Oct 10 23:31:13 np0005480824 podman[162873]: 2025-10-11 03:31:13.283454207 +0000 UTC m=+0.188170576 container remove 395f0b46fdc67420d61369332e4fdc640abae30b8bbafaaa888c2a87596f38d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_kowalevski, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 10 23:31:13 np0005480824 systemd[1]: libpod-conmon-395f0b46fdc67420d61369332e4fdc640abae30b8bbafaaa888c2a87596f38d2.scope: Deactivated successfully.
Oct 10 23:31:13 np0005480824 podman[162916]: 2025-10-11 03:31:13.498080734 +0000 UTC m=+0.044553632 container create bca9a90abcc0c13a328111d7eeb50fda327890a2d9668da47dc14517c2e36326 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_wozniak, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:31:13 np0005480824 systemd[1]: Started libpod-conmon-bca9a90abcc0c13a328111d7eeb50fda327890a2d9668da47dc14517c2e36326.scope.
Oct 10 23:31:13 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:31:13 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e244ea5975e11d6f0ce0b6eb626d687558dbc5578e44e4970a1297658489eefd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:31:13 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e244ea5975e11d6f0ce0b6eb626d687558dbc5578e44e4970a1297658489eefd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:31:13 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e244ea5975e11d6f0ce0b6eb626d687558dbc5578e44e4970a1297658489eefd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:31:13 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e244ea5975e11d6f0ce0b6eb626d687558dbc5578e44e4970a1297658489eefd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:31:13 np0005480824 podman[162916]: 2025-10-11 03:31:13.480460904 +0000 UTC m=+0.026933812 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:31:13 np0005480824 podman[162916]: 2025-10-11 03:31:13.587503036 +0000 UTC m=+0.133975934 container init bca9a90abcc0c13a328111d7eeb50fda327890a2d9668da47dc14517c2e36326 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_wozniak, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 10 23:31:13 np0005480824 podman[162916]: 2025-10-11 03:31:13.599747448 +0000 UTC m=+0.146220346 container start bca9a90abcc0c13a328111d7eeb50fda327890a2d9668da47dc14517c2e36326 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_wozniak, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:31:13 np0005480824 podman[162916]: 2025-10-11 03:31:13.602938574 +0000 UTC m=+0.149411542 container attach bca9a90abcc0c13a328111d7eeb50fda327890a2d9668da47dc14517c2e36326 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 10 23:31:13 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:31:14 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v467: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]: {
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:    "0": [
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:        {
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:            "devices": [
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:                "/dev/loop3"
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:            ],
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:            "lv_name": "ceph_lv0",
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:            "lv_size": "21470642176",
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0d82ce-20ea-470d-959e-f67202028a60,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:            "lv_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:            "name": "ceph_lv0",
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:            "tags": {
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:                "ceph.block_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:                "ceph.cluster_name": "ceph",
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:                "ceph.crush_device_class": "",
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:                "ceph.encrypted": "0",
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:                "ceph.osd_fsid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:                "ceph.osd_id": "0",
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:                "ceph.type": "block",
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:                "ceph.vdo": "0"
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:            },
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:            "type": "block",
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:            "vg_name": "ceph_vg0"
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:        }
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:    ],
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:    "1": [
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:        {
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:            "devices": [
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:                "/dev/loop4"
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:            ],
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:            "lv_name": "ceph_lv1",
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:            "lv_size": "21470642176",
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6875119e-c210-4ad1-aca9-6a8084a5ecc8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:            "lv_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:            "name": "ceph_lv1",
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:            "tags": {
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:                "ceph.block_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:                "ceph.cluster_name": "ceph",
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:                "ceph.crush_device_class": "",
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:                "ceph.encrypted": "0",
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:                "ceph.osd_fsid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:                "ceph.osd_id": "1",
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:                "ceph.type": "block",
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:                "ceph.vdo": "0"
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:            },
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:            "type": "block",
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:            "vg_name": "ceph_vg1"
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:        }
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:    ],
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:    "2": [
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:        {
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:            "devices": [
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:                "/dev/loop5"
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:            ],
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:            "lv_name": "ceph_lv2",
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:            "lv_size": "21470642176",
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e86945e8-6909-4584-9098-cee0dfe9add4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:            "lv_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:            "name": "ceph_lv2",
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:            "tags": {
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:                "ceph.block_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:                "ceph.cluster_name": "ceph",
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:                "ceph.crush_device_class": "",
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:                "ceph.encrypted": "0",
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:                "ceph.osd_fsid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:                "ceph.osd_id": "2",
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:                "ceph.type": "block",
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:                "ceph.vdo": "0"
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:            },
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:            "type": "block",
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:            "vg_name": "ceph_vg2"
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:        }
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]:    ]
Oct 10 23:31:14 np0005480824 pensive_wozniak[162932]: }
Oct 10 23:31:14 np0005480824 systemd[1]: libpod-bca9a90abcc0c13a328111d7eeb50fda327890a2d9668da47dc14517c2e36326.scope: Deactivated successfully.
Oct 10 23:31:14 np0005480824 podman[162916]: 2025-10-11 03:31:14.387635112 +0000 UTC m=+0.934108010 container died bca9a90abcc0c13a328111d7eeb50fda327890a2d9668da47dc14517c2e36326 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 10 23:31:14 np0005480824 systemd[1]: var-lib-containers-storage-overlay-e244ea5975e11d6f0ce0b6eb626d687558dbc5578e44e4970a1297658489eefd-merged.mount: Deactivated successfully.
Oct 10 23:31:14 np0005480824 podman[162916]: 2025-10-11 03:31:14.462175299 +0000 UTC m=+1.008648227 container remove bca9a90abcc0c13a328111d7eeb50fda327890a2d9668da47dc14517c2e36326 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 10 23:31:14 np0005480824 systemd[1]: libpod-conmon-bca9a90abcc0c13a328111d7eeb50fda327890a2d9668da47dc14517c2e36326.scope: Deactivated successfully.
Oct 10 23:31:14 np0005480824 systemd-logind[782]: New session 49 of user zuul.
Oct 10 23:31:14 np0005480824 systemd[1]: Started Session 49 of User zuul.
Oct 10 23:31:15 np0005480824 podman[163174]: 2025-10-11 03:31:15.244895519 +0000 UTC m=+0.055947965 container create ca521f39058bb7e603fd133eec9f742be3b0afcdf470a7d3ace26cc2ad314793 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 10 23:31:15 np0005480824 systemd[1]: Started libpod-conmon-ca521f39058bb7e603fd133eec9f742be3b0afcdf470a7d3ace26cc2ad314793.scope.
Oct 10 23:31:15 np0005480824 podman[163174]: 2025-10-11 03:31:15.226665205 +0000 UTC m=+0.037717671 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:31:15 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:31:15 np0005480824 podman[163174]: 2025-10-11 03:31:15.35103714 +0000 UTC m=+0.162089626 container init ca521f39058bb7e603fd133eec9f742be3b0afcdf470a7d3ace26cc2ad314793 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_cerf, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:31:15 np0005480824 podman[163174]: 2025-10-11 03:31:15.363272922 +0000 UTC m=+0.174325408 container start ca521f39058bb7e603fd133eec9f742be3b0afcdf470a7d3ace26cc2ad314793 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_cerf, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 10 23:31:15 np0005480824 podman[163174]: 2025-10-11 03:31:15.367634386 +0000 UTC m=+0.178686862 container attach ca521f39058bb7e603fd133eec9f742be3b0afcdf470a7d3ace26cc2ad314793 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_cerf, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 10 23:31:15 np0005480824 reverent_cerf[163212]: 167 167
Oct 10 23:31:15 np0005480824 systemd[1]: libpod-ca521f39058bb7e603fd133eec9f742be3b0afcdf470a7d3ace26cc2ad314793.scope: Deactivated successfully.
Oct 10 23:31:15 np0005480824 podman[163174]: 2025-10-11 03:31:15.371976119 +0000 UTC m=+0.183028635 container died ca521f39058bb7e603fd133eec9f742be3b0afcdf470a7d3ace26cc2ad314793 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_cerf, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 10 23:31:15 np0005480824 systemd[1]: var-lib-containers-storage-overlay-51baaeb996e8e763ad170829f5aa1caee8fba8f300abd3e036811e4c7960190c-merged.mount: Deactivated successfully.
Oct 10 23:31:15 np0005480824 podman[163174]: 2025-10-11 03:31:15.410042507 +0000 UTC m=+0.221094963 container remove ca521f39058bb7e603fd133eec9f742be3b0afcdf470a7d3ace26cc2ad314793 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_cerf, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 10 23:31:15 np0005480824 systemd[1]: libpod-conmon-ca521f39058bb7e603fd133eec9f742be3b0afcdf470a7d3ace26cc2ad314793.scope: Deactivated successfully.
Oct 10 23:31:15 np0005480824 podman[163287]: 2025-10-11 03:31:15.646549945 +0000 UTC m=+0.061113988 container create 752664fcddb81a23834783e12cd6184b766616de79174ec6bf8f2d9b9cc27567 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:31:15 np0005480824 systemd[1]: Started libpod-conmon-752664fcddb81a23834783e12cd6184b766616de79174ec6bf8f2d9b9cc27567.scope.
Oct 10 23:31:15 np0005480824 podman[163287]: 2025-10-11 03:31:15.627142692 +0000 UTC m=+0.041706755 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:31:15 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:31:15 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd852c28889ad46e6940a05d45715786cd3a4aa39d733e714476f41ce2f55a3b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:31:15 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd852c28889ad46e6940a05d45715786cd3a4aa39d733e714476f41ce2f55a3b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:31:15 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd852c28889ad46e6940a05d45715786cd3a4aa39d733e714476f41ce2f55a3b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:31:15 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd852c28889ad46e6940a05d45715786cd3a4aa39d733e714476f41ce2f55a3b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:31:15 np0005480824 podman[163287]: 2025-10-11 03:31:15.759123219 +0000 UTC m=+0.173687292 container init 752664fcddb81a23834783e12cd6184b766616de79174ec6bf8f2d9b9cc27567 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:31:15 np0005480824 podman[163287]: 2025-10-11 03:31:15.772730783 +0000 UTC m=+0.187294856 container start 752664fcddb81a23834783e12cd6184b766616de79174ec6bf8f2d9b9cc27567 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_aryabhata, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 10 23:31:15 np0005480824 podman[163287]: 2025-10-11 03:31:15.779457774 +0000 UTC m=+0.194021857 container attach 752664fcddb81a23834783e12cd6184b766616de79174ec6bf8f2d9b9cc27567 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:31:15 np0005480824 python3.9[163281]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 23:31:16 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v468: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:31:16 np0005480824 vigilant_aryabhata[163304]: {
Oct 10 23:31:16 np0005480824 vigilant_aryabhata[163304]:    "1d0d82ce-20ea-470d-959e-f67202028a60": {
Oct 10 23:31:16 np0005480824 vigilant_aryabhata[163304]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:31:16 np0005480824 vigilant_aryabhata[163304]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 10 23:31:16 np0005480824 vigilant_aryabhata[163304]:        "osd_id": 0,
Oct 10 23:31:16 np0005480824 vigilant_aryabhata[163304]:        "osd_uuid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:31:16 np0005480824 vigilant_aryabhata[163304]:        "type": "bluestore"
Oct 10 23:31:16 np0005480824 vigilant_aryabhata[163304]:    },
Oct 10 23:31:16 np0005480824 vigilant_aryabhata[163304]:    "6875119e-c210-4ad1-aca9-6a8084a5ecc8": {
Oct 10 23:31:16 np0005480824 vigilant_aryabhata[163304]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:31:16 np0005480824 vigilant_aryabhata[163304]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 10 23:31:16 np0005480824 vigilant_aryabhata[163304]:        "osd_id": 1,
Oct 10 23:31:16 np0005480824 vigilant_aryabhata[163304]:        "osd_uuid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:31:16 np0005480824 vigilant_aryabhata[163304]:        "type": "bluestore"
Oct 10 23:31:16 np0005480824 vigilant_aryabhata[163304]:    },
Oct 10 23:31:16 np0005480824 vigilant_aryabhata[163304]:    "e86945e8-6909-4584-9098-cee0dfe9add4": {
Oct 10 23:31:16 np0005480824 vigilant_aryabhata[163304]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:31:16 np0005480824 vigilant_aryabhata[163304]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 10 23:31:16 np0005480824 vigilant_aryabhata[163304]:        "osd_id": 2,
Oct 10 23:31:16 np0005480824 vigilant_aryabhata[163304]:        "osd_uuid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:31:16 np0005480824 vigilant_aryabhata[163304]:        "type": "bluestore"
Oct 10 23:31:16 np0005480824 vigilant_aryabhata[163304]:    }
Oct 10 23:31:16 np0005480824 vigilant_aryabhata[163304]: }
Oct 10 23:31:16 np0005480824 systemd[1]: libpod-752664fcddb81a23834783e12cd6184b766616de79174ec6bf8f2d9b9cc27567.scope: Deactivated successfully.
Oct 10 23:31:16 np0005480824 systemd[1]: libpod-752664fcddb81a23834783e12cd6184b766616de79174ec6bf8f2d9b9cc27567.scope: Consumed 1.020s CPU time.
Oct 10 23:31:16 np0005480824 podman[163287]: 2025-10-11 03:31:16.796188403 +0000 UTC m=+1.210752436 container died 752664fcddb81a23834783e12cd6184b766616de79174ec6bf8f2d9b9cc27567 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 10 23:31:16 np0005480824 systemd[1]: var-lib-containers-storage-overlay-bd852c28889ad46e6940a05d45715786cd3a4aa39d733e714476f41ce2f55a3b-merged.mount: Deactivated successfully.
Oct 10 23:31:16 np0005480824 podman[163287]: 2025-10-11 03:31:16.846743728 +0000 UTC m=+1.261307761 container remove 752664fcddb81a23834783e12cd6184b766616de79174ec6bf8f2d9b9cc27567 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:31:16 np0005480824 systemd[1]: libpod-conmon-752664fcddb81a23834783e12cd6184b766616de79174ec6bf8f2d9b9cc27567.scope: Deactivated successfully.
Oct 10 23:31:16 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 10 23:31:16 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:31:16 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 10 23:31:16 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:31:16 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev e44be1b5-9a21-4932-a85d-1ff75e5d08fb does not exist
Oct 10 23:31:16 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev fe69b3fe-9e39-4497-a0b2-623f6020248f does not exist
Oct 10 23:31:17 np0005480824 python3.9[163500]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:31:17 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:31:17 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:31:18 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v469: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:31:18 np0005480824 python3.9[163720]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 10 23:31:18 np0005480824 systemd[1]: Reloading.
Oct 10 23:31:18 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:31:18 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:31:18 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:31:19 np0005480824 python3.9[163904]: ansible-ansible.builtin.service_facts Invoked
Oct 10 23:31:19 np0005480824 network[163921]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 10 23:31:19 np0005480824 network[163922]: 'network-scripts' will be removed from distribution in near future.
Oct 10 23:31:19 np0005480824 network[163923]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 10 23:31:20 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v470: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:31:22 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v471: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:31:23 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:31:24 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v472: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:31:25 np0005480824 python3.9[164188]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 23:31:26 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v473: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:31:26 np0005480824 python3.9[164341]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 23:31:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Optimize plan auto_2025-10-11_03:31:27
Oct 10 23:31:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 23:31:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] do_upmap
Oct 10 23:31:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] pools ['images', 'cephfs.cephfs.data', 'default.rgw.log', 'volumes', '.rgw.root', 'vms', 'default.rgw.control', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.meta', 'backups']
Oct 10 23:31:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] prepared 0/10 changes
Oct 10 23:31:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:31:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:31:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:31:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:31:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:31:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:31:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 23:31:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:31:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 23:31:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:31:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:31:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:31:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:31:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:31:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:31:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:31:28 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v474: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:31:28 np0005480824 python3.9[164494]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 23:31:28 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:31:29 np0005480824 python3.9[164647]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 23:31:29 np0005480824 python3.9[164800]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 23:31:30 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v475: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:31:30 np0005480824 python3.9[164953]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 23:31:31 np0005480824 python3.9[165106]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 23:31:32 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v476: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:31:32 np0005480824 python3.9[165259]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:31:33 np0005480824 python3.9[165411]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:31:33 np0005480824 python3.9[165563]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:31:33 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:31:34 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v477: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:31:34 np0005480824 podman[165687]: 2025-10-11 03:31:34.448470713 +0000 UTC m=+0.157815887 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 10 23:31:35 np0005480824 python3.9[165733]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:31:35 np0005480824 python3.9[165895]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:31:36 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v478: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:31:36 np0005480824 python3.9[166047]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:31:37 np0005480824 python3.9[166199]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:31:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 23:31:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:31:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 23:31:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:31:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:31:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:31:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:31:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:31:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:31:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:31:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:31:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:31:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 23:31:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:31:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:31:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:31:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 10 23:31:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:31:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 23:31:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:31:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:31:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:31:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 10 23:31:38 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v479: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:31:38 np0005480824 python3.9[166351]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:31:38 np0005480824 python3.9[166503]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:31:38 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:31:39 np0005480824 podman[166548]: 2025-10-11 03:31:39.044070837 +0000 UTC m=+0.096328880 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct 10 23:31:39 np0005480824 python3.9[166672]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:31:39 np0005480824 python3.9[166824]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:31:40 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v480: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:31:40 np0005480824 python3.9[166976]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:31:41 np0005480824 python3.9[167128]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:31:41 np0005480824 python3.9[167280]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:31:42 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v481: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:31:42 np0005480824 python3.9[167432]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:31:43 np0005480824 python3.9[167584]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 10 23:31:43 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:31:44 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v482: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:31:44 np0005480824 python3.9[167736]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 10 23:31:44 np0005480824 systemd[1]: Reloading.
Oct 10 23:31:44 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:31:44 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:31:45 np0005480824 python3.9[167922]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:31:45 np0005480824 python3.9[168075]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:31:46 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v483: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:31:46 np0005480824 python3.9[168228]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:31:47 np0005480824 python3.9[168381]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:31:47 np0005480824 python3.9[168534]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:31:48 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v484: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:31:48 np0005480824 python3.9[168687]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:31:48 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:31:49 np0005480824 python3.9[168840]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:31:50 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v485: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:31:50 np0005480824 python3.9[168993]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Oct 10 23:31:51 np0005480824 python3.9[169146]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 10 23:31:52 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v486: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:31:52 np0005480824 python3.9[169304]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 10 23:31:53 np0005480824 python3.9[169464]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 23:31:53 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:31:54 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v487: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:31:54 np0005480824 python3.9[169548]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 23:31:56 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v488: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:31:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:31:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:31:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:31:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:31:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:31:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:31:58 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v489: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:31:58 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:32:00 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v490: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:32:02 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v491: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 22 op/s
Oct 10 23:32:03 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:32:04 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v492: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 10 23:32:05 np0005480824 podman[169720]: 2025-10-11 03:32:05.094717691 +0000 UTC m=+0.144539247 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 10 23:32:06 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v493: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 10 23:32:08 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v494: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 10 23:32:08 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:32:10 np0005480824 podman[169764]: 2025-10-11 03:32:10.013421761 +0000 UTC m=+0.066586656 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Oct 10 23:32:10 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v495: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 10 23:32:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:32:10.469 162245 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:32:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:32:10.470 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:32:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:32:10.470 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:32:12 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v496: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 10 23:32:13 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:32:14 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v497: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 37 op/s
Oct 10 23:32:16 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v498: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:32:17 np0005480824 podman[169954]: 2025-10-11 03:32:17.780315921 +0000 UTC m=+0.061968488 container exec a848fe58749db588a5a4b8471e0c9916b9e4a1ccc899f04343e6491a43c45c05 (image=quay.io/ceph/ceph:v18, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 10 23:32:17 np0005480824 podman[169974]: 2025-10-11 03:32:17.923708759 +0000 UTC m=+0.047920600 container exec_died a848fe58749db588a5a4b8471e0c9916b9e4a1ccc899f04343e6491a43c45c05 (image=quay.io/ceph/ceph:v18, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mon-compute-0, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:32:17 np0005480824 podman[169954]: 2025-10-11 03:32:17.931379798 +0000 UTC m=+0.213032395 container exec_died a848fe58749db588a5a4b8471e0c9916b9e4a1ccc899f04343e6491a43c45c05 (image=quay.io/ceph/ceph:v18, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 10 23:32:18 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v499: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:32:18 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 10 23:32:18 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:32:18 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 10 23:32:18 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:32:18 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:32:19 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:32:19 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:32:19 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 10 23:32:19 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:32:19 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 10 23:32:19 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:32:19 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 9f54ec49-6f0d-4249-9b26-69ac08c34971 does not exist
Oct 10 23:32:19 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 281e51a3-feb1-4ac5-ac43-cc84382e704a does not exist
Oct 10 23:32:19 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 6fab69ac-2e58-4110-b34b-a537752dd360 does not exist
Oct 10 23:32:19 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 10 23:32:19 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 23:32:19 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 10 23:32:19 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:32:19 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:32:19 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:32:19 np0005480824 kernel: SELinux:  Converting 2766 SID table entries...
Oct 10 23:32:19 np0005480824 kernel: SELinux:  policy capability network_peer_controls=1
Oct 10 23:32:19 np0005480824 kernel: SELinux:  policy capability open_perms=1
Oct 10 23:32:19 np0005480824 kernel: SELinux:  policy capability extended_socket_class=1
Oct 10 23:32:19 np0005480824 kernel: SELinux:  policy capability always_check_network=0
Oct 10 23:32:19 np0005480824 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 10 23:32:19 np0005480824 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 10 23:32:19 np0005480824 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 10 23:32:19 np0005480824 dbus-broker-launch[770]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Oct 10 23:32:19 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:32:19 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:32:19 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:32:19 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:32:19 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:32:19 np0005480824 podman[170391]: 2025-10-11 03:32:19.859153775 +0000 UTC m=+0.046240761 container create 4110a6331fadb408e1e5cd9d907400b1be91f285d6ad23780cb9e29a0deb7301 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_carson, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:32:19 np0005480824 systemd[1]: Started libpod-conmon-4110a6331fadb408e1e5cd9d907400b1be91f285d6ad23780cb9e29a0deb7301.scope.
Oct 10 23:32:19 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:32:19 np0005480824 podman[170391]: 2025-10-11 03:32:19.830655169 +0000 UTC m=+0.017742175 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:32:19 np0005480824 podman[170391]: 2025-10-11 03:32:19.938926478 +0000 UTC m=+0.126013474 container init 4110a6331fadb408e1e5cd9d907400b1be91f285d6ad23780cb9e29a0deb7301 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:32:19 np0005480824 podman[170391]: 2025-10-11 03:32:19.9467576 +0000 UTC m=+0.133844586 container start 4110a6331fadb408e1e5cd9d907400b1be91f285d6ad23780cb9e29a0deb7301 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_carson, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:32:19 np0005480824 podman[170391]: 2025-10-11 03:32:19.950219061 +0000 UTC m=+0.137306067 container attach 4110a6331fadb408e1e5cd9d907400b1be91f285d6ad23780cb9e29a0deb7301 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Oct 10 23:32:19 np0005480824 modest_carson[170408]: 167 167
Oct 10 23:32:19 np0005480824 systemd[1]: libpod-4110a6331fadb408e1e5cd9d907400b1be91f285d6ad23780cb9e29a0deb7301.scope: Deactivated successfully.
Oct 10 23:32:19 np0005480824 podman[170391]: 2025-10-11 03:32:19.951436019 +0000 UTC m=+0.138523005 container died 4110a6331fadb408e1e5cd9d907400b1be91f285d6ad23780cb9e29a0deb7301 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 10 23:32:19 np0005480824 systemd[1]: var-lib-containers-storage-overlay-430e9e9bdd8f85aa6b875f630a90f36244da49124cba40528589b3acc8d3bd73-merged.mount: Deactivated successfully.
Oct 10 23:32:20 np0005480824 podman[170391]: 2025-10-11 03:32:20.015022655 +0000 UTC m=+0.202109641 container remove 4110a6331fadb408e1e5cd9d907400b1be91f285d6ad23780cb9e29a0deb7301 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_carson, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 10 23:32:20 np0005480824 systemd[1]: libpod-conmon-4110a6331fadb408e1e5cd9d907400b1be91f285d6ad23780cb9e29a0deb7301.scope: Deactivated successfully.
Oct 10 23:32:20 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v500: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:32:20 np0005480824 podman[170434]: 2025-10-11 03:32:20.172780989 +0000 UTC m=+0.054418202 container create 42388d7153732b44aff5cba8ff8eeea7cb5ce40d9bedc42cbb59b4b2c6e14918 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kalam, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:32:20 np0005480824 systemd[1]: Started libpod-conmon-42388d7153732b44aff5cba8ff8eeea7cb5ce40d9bedc42cbb59b4b2c6e14918.scope.
Oct 10 23:32:20 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:32:20 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9825b3346cac96ef921496a2308cbaebfc7842b401193b0465b4133d6476a0d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:32:20 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9825b3346cac96ef921496a2308cbaebfc7842b401193b0465b4133d6476a0d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:32:20 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9825b3346cac96ef921496a2308cbaebfc7842b401193b0465b4133d6476a0d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:32:20 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9825b3346cac96ef921496a2308cbaebfc7842b401193b0465b4133d6476a0d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:32:20 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9825b3346cac96ef921496a2308cbaebfc7842b401193b0465b4133d6476a0d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 23:32:20 np0005480824 podman[170434]: 2025-10-11 03:32:20.13943493 +0000 UTC m=+0.021072153 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:32:20 np0005480824 podman[170434]: 2025-10-11 03:32:20.254240661 +0000 UTC m=+0.135877874 container init 42388d7153732b44aff5cba8ff8eeea7cb5ce40d9bedc42cbb59b4b2c6e14918 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 10 23:32:20 np0005480824 podman[170434]: 2025-10-11 03:32:20.26661763 +0000 UTC m=+0.148254833 container start 42388d7153732b44aff5cba8ff8eeea7cb5ce40d9bedc42cbb59b4b2c6e14918 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kalam, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:32:20 np0005480824 podman[170434]: 2025-10-11 03:32:20.269841615 +0000 UTC m=+0.151478828 container attach 42388d7153732b44aff5cba8ff8eeea7cb5ce40d9bedc42cbb59b4b2c6e14918 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kalam, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:32:21 np0005480824 eloquent_kalam[170451]: --> passed data devices: 0 physical, 3 LVM
Oct 10 23:32:21 np0005480824 eloquent_kalam[170451]: --> relative data size: 1.0
Oct 10 23:32:21 np0005480824 eloquent_kalam[170451]: --> All data devices are unavailable
Oct 10 23:32:21 np0005480824 systemd[1]: libpod-42388d7153732b44aff5cba8ff8eeea7cb5ce40d9bedc42cbb59b4b2c6e14918.scope: Deactivated successfully.
Oct 10 23:32:21 np0005480824 systemd[1]: libpod-42388d7153732b44aff5cba8ff8eeea7cb5ce40d9bedc42cbb59b4b2c6e14918.scope: Consumed 1.053s CPU time.
Oct 10 23:32:21 np0005480824 podman[170481]: 2025-10-11 03:32:21.439072628 +0000 UTC m=+0.027807950 container died 42388d7153732b44aff5cba8ff8eeea7cb5ce40d9bedc42cbb59b4b2c6e14918 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:32:21 np0005480824 systemd[1]: var-lib-containers-storage-overlay-b9825b3346cac96ef921496a2308cbaebfc7842b401193b0465b4133d6476a0d-merged.mount: Deactivated successfully.
Oct 10 23:32:21 np0005480824 podman[170481]: 2025-10-11 03:32:21.523039039 +0000 UTC m=+0.111774331 container remove 42388d7153732b44aff5cba8ff8eeea7cb5ce40d9bedc42cbb59b4b2c6e14918 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kalam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 10 23:32:21 np0005480824 systemd[1]: libpod-conmon-42388d7153732b44aff5cba8ff8eeea7cb5ce40d9bedc42cbb59b4b2c6e14918.scope: Deactivated successfully.
Oct 10 23:32:22 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v501: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:32:22 np0005480824 podman[170638]: 2025-10-11 03:32:22.208363792 +0000 UTC m=+0.041868779 container create 8f412738eb1cd3ecfc896fa38acb0022ff1b950d2a0b1f64dc4aef9942be7f7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_benz, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 10 23:32:22 np0005480824 systemd[1]: Started libpod-conmon-8f412738eb1cd3ecfc896fa38acb0022ff1b950d2a0b1f64dc4aef9942be7f7e.scope.
Oct 10 23:32:22 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:32:22 np0005480824 podman[170638]: 2025-10-11 03:32:22.188350185 +0000 UTC m=+0.021855222 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:32:22 np0005480824 podman[170638]: 2025-10-11 03:32:22.297984245 +0000 UTC m=+0.131489252 container init 8f412738eb1cd3ecfc896fa38acb0022ff1b950d2a0b1f64dc4aef9942be7f7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_benz, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 10 23:32:22 np0005480824 podman[170638]: 2025-10-11 03:32:22.309873923 +0000 UTC m=+0.143378930 container start 8f412738eb1cd3ecfc896fa38acb0022ff1b950d2a0b1f64dc4aef9942be7f7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_benz, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:32:22 np0005480824 podman[170638]: 2025-10-11 03:32:22.313332913 +0000 UTC m=+0.146837930 container attach 8f412738eb1cd3ecfc896fa38acb0022ff1b950d2a0b1f64dc4aef9942be7f7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_benz, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Oct 10 23:32:22 np0005480824 amazing_benz[170654]: 167 167
Oct 10 23:32:22 np0005480824 systemd[1]: libpod-8f412738eb1cd3ecfc896fa38acb0022ff1b950d2a0b1f64dc4aef9942be7f7e.scope: Deactivated successfully.
Oct 10 23:32:22 np0005480824 podman[170638]: 2025-10-11 03:32:22.316391186 +0000 UTC m=+0.149896173 container died 8f412738eb1cd3ecfc896fa38acb0022ff1b950d2a0b1f64dc4aef9942be7f7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 10 23:32:22 np0005480824 systemd[1]: var-lib-containers-storage-overlay-aba638b4c1323ade1cb89bb357e3f64de4941f2858138d26344191cfafa0ba88-merged.mount: Deactivated successfully.
Oct 10 23:32:22 np0005480824 podman[170638]: 2025-10-11 03:32:22.365135073 +0000 UTC m=+0.198640090 container remove 8f412738eb1cd3ecfc896fa38acb0022ff1b950d2a0b1f64dc4aef9942be7f7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_benz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:32:22 np0005480824 systemd[1]: libpod-conmon-8f412738eb1cd3ecfc896fa38acb0022ff1b950d2a0b1f64dc4aef9942be7f7e.scope: Deactivated successfully.
Oct 10 23:32:22 np0005480824 podman[170680]: 2025-10-11 03:32:22.591793576 +0000 UTC m=+0.064337363 container create a8e12f22c6f10c70df2d76f6bbf01e1b4c58b7edc3e53c97261d11c8663706df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_elgamal, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 10 23:32:22 np0005480824 systemd[1]: Started libpod-conmon-a8e12f22c6f10c70df2d76f6bbf01e1b4c58b7edc3e53c97261d11c8663706df.scope.
Oct 10 23:32:22 np0005480824 podman[170680]: 2025-10-11 03:32:22.557962266 +0000 UTC m=+0.030506063 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:32:22 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:32:22 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/411cd0ef39ff9cc6483c3e0e1860426c852c183c9dd0cf8e629147490f810850/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:32:22 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/411cd0ef39ff9cc6483c3e0e1860426c852c183c9dd0cf8e629147490f810850/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:32:22 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/411cd0ef39ff9cc6483c3e0e1860426c852c183c9dd0cf8e629147490f810850/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:32:22 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/411cd0ef39ff9cc6483c3e0e1860426c852c183c9dd0cf8e629147490f810850/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:32:22 np0005480824 podman[170680]: 2025-10-11 03:32:22.685496824 +0000 UTC m=+0.158040581 container init a8e12f22c6f10c70df2d76f6bbf01e1b4c58b7edc3e53c97261d11c8663706df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_elgamal, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:32:22 np0005480824 podman[170680]: 2025-10-11 03:32:22.699488741 +0000 UTC m=+0.172032538 container start a8e12f22c6f10c70df2d76f6bbf01e1b4c58b7edc3e53c97261d11c8663706df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_elgamal, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:32:22 np0005480824 podman[170680]: 2025-10-11 03:32:22.705077881 +0000 UTC m=+0.177621668 container attach a8e12f22c6f10c70df2d76f6bbf01e1b4c58b7edc3e53c97261d11c8663706df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_elgamal, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]: {
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:    "0": [
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:        {
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:            "devices": [
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:                "/dev/loop3"
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:            ],
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:            "lv_name": "ceph_lv0",
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:            "lv_size": "21470642176",
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0d82ce-20ea-470d-959e-f67202028a60,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:            "lv_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:            "name": "ceph_lv0",
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:            "tags": {
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:                "ceph.block_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:                "ceph.cluster_name": "ceph",
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:                "ceph.crush_device_class": "",
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:                "ceph.encrypted": "0",
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:                "ceph.osd_fsid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:                "ceph.osd_id": "0",
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:                "ceph.type": "block",
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:                "ceph.vdo": "0"
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:            },
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:            "type": "block",
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:            "vg_name": "ceph_vg0"
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:        }
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:    ],
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:    "1": [
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:        {
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:            "devices": [
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:                "/dev/loop4"
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:            ],
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:            "lv_name": "ceph_lv1",
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:            "lv_size": "21470642176",
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6875119e-c210-4ad1-aca9-6a8084a5ecc8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:            "lv_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:            "name": "ceph_lv1",
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:            "tags": {
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:                "ceph.block_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:                "ceph.cluster_name": "ceph",
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:                "ceph.crush_device_class": "",
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:                "ceph.encrypted": "0",
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:                "ceph.osd_fsid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:                "ceph.osd_id": "1",
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:                "ceph.type": "block",
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:                "ceph.vdo": "0"
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:            },
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:            "type": "block",
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:            "vg_name": "ceph_vg1"
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:        }
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:    ],
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:    "2": [
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:        {
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:            "devices": [
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:                "/dev/loop5"
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:            ],
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:            "lv_name": "ceph_lv2",
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:            "lv_size": "21470642176",
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e86945e8-6909-4584-9098-cee0dfe9add4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:            "lv_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:            "name": "ceph_lv2",
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:            "tags": {
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:                "ceph.block_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:                "ceph.cluster_name": "ceph",
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:                "ceph.crush_device_class": "",
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:                "ceph.encrypted": "0",
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:                "ceph.osd_fsid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:                "ceph.osd_id": "2",
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:                "ceph.type": "block",
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:                "ceph.vdo": "0"
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:            },
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:            "type": "block",
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:            "vg_name": "ceph_vg2"
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:        }
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]:    ]
Oct 10 23:32:23 np0005480824 modest_elgamal[170696]: }
Oct 10 23:32:23 np0005480824 systemd[1]: libpod-a8e12f22c6f10c70df2d76f6bbf01e1b4c58b7edc3e53c97261d11c8663706df.scope: Deactivated successfully.
Oct 10 23:32:23 np0005480824 podman[170680]: 2025-10-11 03:32:23.471048238 +0000 UTC m=+0.943592035 container died a8e12f22c6f10c70df2d76f6bbf01e1b4c58b7edc3e53c97261d11c8663706df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:32:23 np0005480824 systemd[1]: var-lib-containers-storage-overlay-411cd0ef39ff9cc6483c3e0e1860426c852c183c9dd0cf8e629147490f810850-merged.mount: Deactivated successfully.
Oct 10 23:32:23 np0005480824 podman[170680]: 2025-10-11 03:32:23.555485811 +0000 UTC m=+1.028029578 container remove a8e12f22c6f10c70df2d76f6bbf01e1b4c58b7edc3e53c97261d11c8663706df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_elgamal, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:32:23 np0005480824 systemd[1]: libpod-conmon-a8e12f22c6f10c70df2d76f6bbf01e1b4c58b7edc3e53c97261d11c8663706df.scope: Deactivated successfully.
Oct 10 23:32:23 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:32:24 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v502: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:32:24 np0005480824 podman[170859]: 2025-10-11 03:32:24.226243944 +0000 UTC m=+0.044623353 container create 543fc6fda0952af04e6de002f67bcad6a0ce2763c8225b41806076aea92d8a38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 10 23:32:24 np0005480824 systemd[1]: Started libpod-conmon-543fc6fda0952af04e6de002f67bcad6a0ce2763c8225b41806076aea92d8a38.scope.
Oct 10 23:32:24 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:32:24 np0005480824 podman[170859]: 2025-10-11 03:32:24.202299094 +0000 UTC m=+0.020678533 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:32:24 np0005480824 podman[170859]: 2025-10-11 03:32:24.304772797 +0000 UTC m=+0.123152296 container init 543fc6fda0952af04e6de002f67bcad6a0ce2763c8225b41806076aea92d8a38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 10 23:32:24 np0005480824 podman[170859]: 2025-10-11 03:32:24.31173699 +0000 UTC m=+0.130116429 container start 543fc6fda0952af04e6de002f67bcad6a0ce2763c8225b41806076aea92d8a38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_bouman, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:32:24 np0005480824 festive_bouman[170875]: 167 167
Oct 10 23:32:24 np0005480824 systemd[1]: libpod-543fc6fda0952af04e6de002f67bcad6a0ce2763c8225b41806076aea92d8a38.scope: Deactivated successfully.
Oct 10 23:32:24 np0005480824 podman[170859]: 2025-10-11 03:32:24.316718076 +0000 UTC m=+0.135097525 container attach 543fc6fda0952af04e6de002f67bcad6a0ce2763c8225b41806076aea92d8a38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_bouman, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:32:24 np0005480824 podman[170859]: 2025-10-11 03:32:24.317152366 +0000 UTC m=+0.135531805 container died 543fc6fda0952af04e6de002f67bcad6a0ce2763c8225b41806076aea92d8a38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_bouman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:32:24 np0005480824 systemd[1]: var-lib-containers-storage-overlay-fc7de4c4a978e0aac9c4c00a5170cf63f3406f6c9db428542ad01fb4b6497ef5-merged.mount: Deactivated successfully.
Oct 10 23:32:24 np0005480824 podman[170859]: 2025-10-11 03:32:24.366005397 +0000 UTC m=+0.184384836 container remove 543fc6fda0952af04e6de002f67bcad6a0ce2763c8225b41806076aea92d8a38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_bouman, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:32:24 np0005480824 systemd[1]: libpod-conmon-543fc6fda0952af04e6de002f67bcad6a0ce2763c8225b41806076aea92d8a38.scope: Deactivated successfully.
Oct 10 23:32:24 np0005480824 podman[170899]: 2025-10-11 03:32:24.615822521 +0000 UTC m=+0.068699665 container create 6dfaf82cabb42e80a282385f79ce89c8ef80bd80b556468c7bd2f7244374dd52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:32:24 np0005480824 systemd[1]: Started libpod-conmon-6dfaf82cabb42e80a282385f79ce89c8ef80bd80b556468c7bd2f7244374dd52.scope.
Oct 10 23:32:24 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:32:24 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c336c3499dc71a029b942ae81bfd1629ed87bf5af1dd861ec6c7d9159a5cf41c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:32:24 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c336c3499dc71a029b942ae81bfd1629ed87bf5af1dd861ec6c7d9159a5cf41c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:32:24 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c336c3499dc71a029b942ae81bfd1629ed87bf5af1dd861ec6c7d9159a5cf41c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:32:24 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c336c3499dc71a029b942ae81bfd1629ed87bf5af1dd861ec6c7d9159a5cf41c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:32:24 np0005480824 podman[170899]: 2025-10-11 03:32:24.586726161 +0000 UTC m=+0.039603385 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:32:24 np0005480824 podman[170899]: 2025-10-11 03:32:24.686764037 +0000 UTC m=+0.139641181 container init 6dfaf82cabb42e80a282385f79ce89c8ef80bd80b556468c7bd2f7244374dd52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_kowalevski, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:32:24 np0005480824 podman[170899]: 2025-10-11 03:32:24.693723609 +0000 UTC m=+0.146600743 container start 6dfaf82cabb42e80a282385f79ce89c8ef80bd80b556468c7bd2f7244374dd52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_kowalevski, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 10 23:32:24 np0005480824 podman[170899]: 2025-10-11 03:32:24.697286833 +0000 UTC m=+0.150163997 container attach 6dfaf82cabb42e80a282385f79ce89c8ef80bd80b556468c7bd2f7244374dd52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Oct 10 23:32:25 np0005480824 elated_kowalevski[170916]: {
Oct 10 23:32:25 np0005480824 elated_kowalevski[170916]:    "1d0d82ce-20ea-470d-959e-f67202028a60": {
Oct 10 23:32:25 np0005480824 elated_kowalevski[170916]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:32:25 np0005480824 elated_kowalevski[170916]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 10 23:32:25 np0005480824 elated_kowalevski[170916]:        "osd_id": 0,
Oct 10 23:32:25 np0005480824 elated_kowalevski[170916]:        "osd_uuid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:32:25 np0005480824 elated_kowalevski[170916]:        "type": "bluestore"
Oct 10 23:32:25 np0005480824 elated_kowalevski[170916]:    },
Oct 10 23:32:25 np0005480824 elated_kowalevski[170916]:    "6875119e-c210-4ad1-aca9-6a8084a5ecc8": {
Oct 10 23:32:25 np0005480824 elated_kowalevski[170916]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:32:25 np0005480824 elated_kowalevski[170916]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 10 23:32:25 np0005480824 elated_kowalevski[170916]:        "osd_id": 1,
Oct 10 23:32:25 np0005480824 elated_kowalevski[170916]:        "osd_uuid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:32:25 np0005480824 elated_kowalevski[170916]:        "type": "bluestore"
Oct 10 23:32:25 np0005480824 elated_kowalevski[170916]:    },
Oct 10 23:32:25 np0005480824 elated_kowalevski[170916]:    "e86945e8-6909-4584-9098-cee0dfe9add4": {
Oct 10 23:32:25 np0005480824 elated_kowalevski[170916]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:32:25 np0005480824 elated_kowalevski[170916]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 10 23:32:25 np0005480824 elated_kowalevski[170916]:        "osd_id": 2,
Oct 10 23:32:25 np0005480824 elated_kowalevski[170916]:        "osd_uuid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:32:25 np0005480824 elated_kowalevski[170916]:        "type": "bluestore"
Oct 10 23:32:25 np0005480824 elated_kowalevski[170916]:    }
Oct 10 23:32:25 np0005480824 elated_kowalevski[170916]: }
Oct 10 23:32:25 np0005480824 systemd[1]: libpod-6dfaf82cabb42e80a282385f79ce89c8ef80bd80b556468c7bd2f7244374dd52.scope: Deactivated successfully.
Oct 10 23:32:25 np0005480824 systemd[1]: libpod-6dfaf82cabb42e80a282385f79ce89c8ef80bd80b556468c7bd2f7244374dd52.scope: Consumed 1.051s CPU time.
Oct 10 23:32:25 np0005480824 podman[170949]: 2025-10-11 03:32:25.779507475 +0000 UTC m=+0.025655790 container died 6dfaf82cabb42e80a282385f79ce89c8ef80bd80b556468c7bd2f7244374dd52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_kowalevski, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 10 23:32:25 np0005480824 systemd[1]: var-lib-containers-storage-overlay-c336c3499dc71a029b942ae81bfd1629ed87bf5af1dd861ec6c7d9159a5cf41c-merged.mount: Deactivated successfully.
Oct 10 23:32:25 np0005480824 podman[170949]: 2025-10-11 03:32:25.822802696 +0000 UTC m=+0.068951001 container remove 6dfaf82cabb42e80a282385f79ce89c8ef80bd80b556468c7bd2f7244374dd52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_kowalevski, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 10 23:32:25 np0005480824 systemd[1]: libpod-conmon-6dfaf82cabb42e80a282385f79ce89c8ef80bd80b556468c7bd2f7244374dd52.scope: Deactivated successfully.
Oct 10 23:32:25 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 10 23:32:25 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:32:25 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 10 23:32:25 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:32:25 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 331bfbd6-7f99-40f4-aed0-85cd55739b70 does not exist
Oct 10 23:32:25 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev eeba0f5b-d8f4-4a43-b3c6-ae50c1455118 does not exist
Oct 10 23:32:26 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v503: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:32:26 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:32:26 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:32:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Optimize plan auto_2025-10-11_03:32:27
Oct 10 23:32:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 23:32:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] do_upmap
Oct 10 23:32:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.log', 'images', 'vms', 'backups', '.mgr', 'cephfs.cephfs.data', 'volumes']
Oct 10 23:32:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] prepared 0/10 changes
Oct 10 23:32:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:32:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:32:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:32:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:32:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:32:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:32:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 23:32:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:32:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 23:32:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:32:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:32:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:32:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:32:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:32:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:32:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:32:28 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v504: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:32:28 np0005480824 kernel: SELinux:  Converting 2766 SID table entries...
Oct 10 23:32:28 np0005480824 kernel: SELinux:  policy capability network_peer_controls=1
Oct 10 23:32:28 np0005480824 kernel: SELinux:  policy capability open_perms=1
Oct 10 23:32:28 np0005480824 kernel: SELinux:  policy capability extended_socket_class=1
Oct 10 23:32:28 np0005480824 kernel: SELinux:  policy capability always_check_network=0
Oct 10 23:32:28 np0005480824 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 10 23:32:28 np0005480824 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 10 23:32:28 np0005480824 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 10 23:32:28 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:32:30 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v505: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:32:32 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v506: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:32:33 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:32:34 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v507: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:32:35 np0005480824 dbus-broker-launch[770]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Oct 10 23:32:36 np0005480824 podman[171021]: 2025-10-11 03:32:36.039390049 +0000 UTC m=+0.097545929 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 10 23:32:36 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v508: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:32:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 23:32:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:32:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 23:32:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:32:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:32:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:32:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:32:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:32:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:32:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:32:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:32:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:32:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 23:32:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:32:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:32:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:32:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 10 23:32:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:32:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 23:32:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:32:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:32:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:32:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 10 23:32:38 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v509: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:32:38 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:32:40 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v510: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:32:41 np0005480824 podman[171118]: 2025-10-11 03:32:41.019511015 +0000 UTC m=+0.069126517 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_metadata_agent)
Oct 10 23:32:42 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v511: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:32:43 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:32:44 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v512: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:32:46 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v513: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:32:48 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v514: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:32:48 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:32:50 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v515: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:32:52 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v516: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:32:53 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:32:54 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v517: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:32:56 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v518: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:32:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:32:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:32:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:32:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:32:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:32:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:32:58 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v519: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:32:58 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:33:00 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v520: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:33:02 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v521: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:33:03 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:33:04 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v522: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:33:06 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v523: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:33:07 np0005480824 podman[185120]: 2025-10-11 03:33:07.056904779 +0000 UTC m=+0.105888020 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 10 23:33:08 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v524: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:33:08 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:33:10 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v525: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:33:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:33:10.470 162245 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:33:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:33:10.470 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:33:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:33:10.470 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:33:12 np0005480824 podman[187831]: 2025-10-11 03:33:12.004822967 +0000 UTC m=+0.062398478 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:33:12 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v526: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:33:13 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:33:14 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v527: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:33:16 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v528: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:33:18 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v529: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:33:18 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:33:20 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v530: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:33:22 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v531: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:33:23 np0005480824 ceph-mon[74326]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Oct 10 23:33:23 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:33:23.160993) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 23:33:23 np0005480824 ceph-mon[74326]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Oct 10 23:33:23 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760153603161092, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2040, "num_deletes": 251, "total_data_size": 3536354, "memory_usage": 3583808, "flush_reason": "Manual Compaction"}
Oct 10 23:33:23 np0005480824 ceph-mon[74326]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Oct 10 23:33:23 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760153603184863, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 3461250, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9715, "largest_seqno": 11754, "table_properties": {"data_size": 3451940, "index_size": 5932, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17767, "raw_average_key_size": 19, "raw_value_size": 3433547, "raw_average_value_size": 3756, "num_data_blocks": 269, "num_entries": 914, "num_filter_entries": 914, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760153368, "oldest_key_time": 1760153368, "file_creation_time": 1760153603, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc2c00b6-74ab-4bd1-957a-6c6a75fb61ca", "db_session_id": "RJ9TM4FJNNQ2AWDFT4YB", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Oct 10 23:33:23 np0005480824 ceph-mon[74326]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 23893 microseconds, and 8946 cpu microseconds.
Oct 10 23:33:23 np0005480824 ceph-mon[74326]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 23:33:23 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:33:23.184911) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 3461250 bytes OK
Oct 10 23:33:23 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:33:23.184935) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Oct 10 23:33:23 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:33:23.187433) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Oct 10 23:33:23 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:33:23.187448) EVENT_LOG_v1 {"time_micros": 1760153603187443, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 23:33:23 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:33:23.187463) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 23:33:23 np0005480824 ceph-mon[74326]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3527847, prev total WAL file size 3527847, number of live WAL files 2.
Oct 10 23:33:23 np0005480824 ceph-mon[74326]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 23:33:23 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:33:23.188559) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Oct 10 23:33:23 np0005480824 ceph-mon[74326]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 23:33:23 np0005480824 ceph-mon[74326]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(3380KB)], [26(6123KB)]
Oct 10 23:33:23 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760153603188663, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 9731588, "oldest_snapshot_seqno": -1}
Oct 10 23:33:23 np0005480824 ceph-mon[74326]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3700 keys, 7954351 bytes, temperature: kUnknown
Oct 10 23:33:23 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760153603235782, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 7954351, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7925806, "index_size": 18195, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9285, "raw_key_size": 88982, "raw_average_key_size": 24, "raw_value_size": 7855237, "raw_average_value_size": 2123, "num_data_blocks": 788, "num_entries": 3700, "num_filter_entries": 3700, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760152715, "oldest_key_time": 0, "file_creation_time": 1760153603, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc2c00b6-74ab-4bd1-957a-6c6a75fb61ca", "db_session_id": "RJ9TM4FJNNQ2AWDFT4YB", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Oct 10 23:33:23 np0005480824 ceph-mon[74326]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 23:33:23 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:33:23.236012) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 7954351 bytes
Oct 10 23:33:23 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:33:23.239450) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 206.2 rd, 168.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 6.0 +0.0 blob) out(7.6 +0.0 blob), read-write-amplify(5.1) write-amplify(2.3) OK, records in: 4214, records dropped: 514 output_compression: NoCompression
Oct 10 23:33:23 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:33:23.239489) EVENT_LOG_v1 {"time_micros": 1760153603239471, "job": 10, "event": "compaction_finished", "compaction_time_micros": 47193, "compaction_time_cpu_micros": 16624, "output_level": 6, "num_output_files": 1, "total_output_size": 7954351, "num_input_records": 4214, "num_output_records": 3700, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 23:33:23 np0005480824 ceph-mon[74326]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 23:33:23 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760153603240941, "job": 10, "event": "table_file_deletion", "file_number": 28}
Oct 10 23:33:23 np0005480824 ceph-mon[74326]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 23:33:23 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760153603243008, "job": 10, "event": "table_file_deletion", "file_number": 26}
Oct 10 23:33:23 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:33:23.188446) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:33:23 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:33:23.243065) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:33:23 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:33:23.243070) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:33:23 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:33:23.243071) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:33:23 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:33:23.243073) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:33:23 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:33:23.243075) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:33:23 np0005480824 kernel: SELinux:  Converting 2767 SID table entries...
Oct 10 23:33:23 np0005480824 kernel: SELinux:  policy capability network_peer_controls=1
Oct 10 23:33:23 np0005480824 kernel: SELinux:  policy capability open_perms=1
Oct 10 23:33:23 np0005480824 kernel: SELinux:  policy capability extended_socket_class=1
Oct 10 23:33:23 np0005480824 kernel: SELinux:  policy capability always_check_network=0
Oct 10 23:33:23 np0005480824 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 10 23:33:23 np0005480824 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 10 23:33:23 np0005480824 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 10 23:33:23 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:33:24 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v532: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:33:24 np0005480824 dbus-broker-launch[738]: Noticed file-system modification, trigger reload.
Oct 10 23:33:24 np0005480824 dbus-broker-launch[770]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Oct 10 23:33:24 np0005480824 dbus-broker-launch[738]: Noticed file-system modification, trigger reload.
Oct 10 23:33:26 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v533: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:33:26 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:33:26 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:33:26 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 10 23:33:26 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:33:26 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 10 23:33:26 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:33:26 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev eb306cdd-02d2-4627-9c7b-ba621887a0a6 does not exist
Oct 10 23:33:26 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 05683408-3a26-4cce-b71c-5831879c4a07 does not exist
Oct 10 23:33:26 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 6a65d5c6-b286-4c2d-a1b1-d3931551e359 does not exist
Oct 10 23:33:26 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 10 23:33:26 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 23:33:26 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 10 23:33:26 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:33:26 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:33:26 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:33:27 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:33:27 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:33:27 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:33:27 np0005480824 podman[188206]: 2025-10-11 03:33:27.402531382 +0000 UTC m=+0.054777879 container create 4b54fbf4c1800cdde6955a3879d4d5bf8b2938fd8772f845484546f5ea10dce0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hypatia, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:33:27 np0005480824 systemd[1]: Started libpod-conmon-4b54fbf4c1800cdde6955a3879d4d5bf8b2938fd8772f845484546f5ea10dce0.scope.
Oct 10 23:33:27 np0005480824 podman[188206]: 2025-10-11 03:33:27.380461732 +0000 UTC m=+0.032708209 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:33:27 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:33:27 np0005480824 podman[188206]: 2025-10-11 03:33:27.50286535 +0000 UTC m=+0.155111817 container init 4b54fbf4c1800cdde6955a3879d4d5bf8b2938fd8772f845484546f5ea10dce0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:33:27 np0005480824 podman[188206]: 2025-10-11 03:33:27.512311642 +0000 UTC m=+0.164558099 container start 4b54fbf4c1800cdde6955a3879d4d5bf8b2938fd8772f845484546f5ea10dce0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hypatia, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 10 23:33:27 np0005480824 festive_hypatia[188222]: 167 167
Oct 10 23:33:27 np0005480824 systemd[1]: libpod-4b54fbf4c1800cdde6955a3879d4d5bf8b2938fd8772f845484546f5ea10dce0.scope: Deactivated successfully.
Oct 10 23:33:27 np0005480824 podman[188206]: 2025-10-11 03:33:27.522547313 +0000 UTC m=+0.174793780 container attach 4b54fbf4c1800cdde6955a3879d4d5bf8b2938fd8772f845484546f5ea10dce0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 10 23:33:27 np0005480824 podman[188206]: 2025-10-11 03:33:27.523132436 +0000 UTC m=+0.175378913 container died 4b54fbf4c1800cdde6955a3879d4d5bf8b2938fd8772f845484546f5ea10dce0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hypatia, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 10 23:33:27 np0005480824 systemd[1]: var-lib-containers-storage-overlay-745b0a269e1a3662321e88ea0aa11862b2ec53eec1b1373ea7f8a6e106e9097b-merged.mount: Deactivated successfully.
Oct 10 23:33:27 np0005480824 podman[188206]: 2025-10-11 03:33:27.60201088 +0000 UTC m=+0.254257337 container remove 4b54fbf4c1800cdde6955a3879d4d5bf8b2938fd8772f845484546f5ea10dce0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hypatia, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:33:27 np0005480824 systemd[1]: libpod-conmon-4b54fbf4c1800cdde6955a3879d4d5bf8b2938fd8772f845484546f5ea10dce0.scope: Deactivated successfully.
Oct 10 23:33:27 np0005480824 podman[188247]: 2025-10-11 03:33:27.829360045 +0000 UTC m=+0.071515832 container create 5ee06b0ef4566569fae98f43022e8b6d43c84a8381d53e14689e664ac7428459 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_bhaskara, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 10 23:33:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Optimize plan auto_2025-10-11_03:33:27
Oct 10 23:33:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 23:33:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] do_upmap
Oct 10 23:33:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'images', 'vms', '.mgr', 'backups', '.rgw.root', 'volumes', 'default.rgw.meta']
Oct 10 23:33:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] prepared 0/10 changes
Oct 10 23:33:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:33:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:33:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:33:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:33:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:33:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:33:27 np0005480824 systemd[1]: Started libpod-conmon-5ee06b0ef4566569fae98f43022e8b6d43c84a8381d53e14689e664ac7428459.scope.
Oct 10 23:33:27 np0005480824 podman[188247]: 2025-10-11 03:33:27.800085467 +0000 UTC m=+0.042241264 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:33:27 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:33:27 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0903f76db634e48d649e6a85949d0b17e7476d10363039f61084b12e67e3d070/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:33:27 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0903f76db634e48d649e6a85949d0b17e7476d10363039f61084b12e67e3d070/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:33:27 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0903f76db634e48d649e6a85949d0b17e7476d10363039f61084b12e67e3d070/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:33:27 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0903f76db634e48d649e6a85949d0b17e7476d10363039f61084b12e67e3d070/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:33:27 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0903f76db634e48d649e6a85949d0b17e7476d10363039f61084b12e67e3d070/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 23:33:27 np0005480824 podman[188247]: 2025-10-11 03:33:27.944150003 +0000 UTC m=+0.186305830 container init 5ee06b0ef4566569fae98f43022e8b6d43c84a8381d53e14689e664ac7428459 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 10 23:33:27 np0005480824 podman[188247]: 2025-10-11 03:33:27.958097881 +0000 UTC m=+0.200253698 container start 5ee06b0ef4566569fae98f43022e8b6d43c84a8381d53e14689e664ac7428459 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_bhaskara, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 10 23:33:27 np0005480824 podman[188247]: 2025-10-11 03:33:27.967671036 +0000 UTC m=+0.209826823 container attach 5ee06b0ef4566569fae98f43022e8b6d43c84a8381d53e14689e664ac7428459 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 10 23:33:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 23:33:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:33:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 23:33:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:33:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:33:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:33:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:33:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:33:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:33:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:33:28 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v534: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:33:28 np0005480824 cranky_bhaskara[188264]: --> passed data devices: 0 physical, 3 LVM
Oct 10 23:33:28 np0005480824 cranky_bhaskara[188264]: --> relative data size: 1.0
Oct 10 23:33:28 np0005480824 cranky_bhaskara[188264]: --> All data devices are unavailable
Oct 10 23:33:28 np0005480824 systemd[1]: libpod-5ee06b0ef4566569fae98f43022e8b6d43c84a8381d53e14689e664ac7428459.scope: Deactivated successfully.
Oct 10 23:33:28 np0005480824 podman[188247]: 2025-10-11 03:33:28.983941704 +0000 UTC m=+1.226097501 container died 5ee06b0ef4566569fae98f43022e8b6d43c84a8381d53e14689e664ac7428459 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 10 23:33:28 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:33:29 np0005480824 systemd[1]: var-lib-containers-storage-overlay-0903f76db634e48d649e6a85949d0b17e7476d10363039f61084b12e67e3d070-merged.mount: Deactivated successfully.
Oct 10 23:33:29 np0005480824 podman[188247]: 2025-10-11 03:33:29.089008654 +0000 UTC m=+1.331164431 container remove 5ee06b0ef4566569fae98f43022e8b6d43c84a8381d53e14689e664ac7428459 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_bhaskara, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 10 23:33:29 np0005480824 systemd[1]: libpod-conmon-5ee06b0ef4566569fae98f43022e8b6d43c84a8381d53e14689e664ac7428459.scope: Deactivated successfully.
Oct 10 23:33:29 np0005480824 podman[188622]: 2025-10-11 03:33:29.747044112 +0000 UTC m=+0.038259640 container create e7a505a2a195a3b88da0a1625bff03bd8049289826cf8dc81f625af4d182f154 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_heisenberg, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:33:29 np0005480824 systemd[1]: Started libpod-conmon-e7a505a2a195a3b88da0a1625bff03bd8049289826cf8dc81f625af4d182f154.scope.
Oct 10 23:33:29 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:33:29 np0005480824 podman[188622]: 2025-10-11 03:33:29.730852132 +0000 UTC m=+0.022067680 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:33:29 np0005480824 podman[188622]: 2025-10-11 03:33:29.832741007 +0000 UTC m=+0.123956595 container init e7a505a2a195a3b88da0a1625bff03bd8049289826cf8dc81f625af4d182f154 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_heisenberg, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 10 23:33:29 np0005480824 podman[188622]: 2025-10-11 03:33:29.841656497 +0000 UTC m=+0.132872045 container start e7a505a2a195a3b88da0a1625bff03bd8049289826cf8dc81f625af4d182f154 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Oct 10 23:33:29 np0005480824 nostalgic_heisenberg[188638]: 167 167
Oct 10 23:33:29 np0005480824 systemd[1]: libpod-e7a505a2a195a3b88da0a1625bff03bd8049289826cf8dc81f625af4d182f154.scope: Deactivated successfully.
Oct 10 23:33:29 np0005480824 podman[188622]: 2025-10-11 03:33:29.871446637 +0000 UTC m=+0.162662215 container attach e7a505a2a195a3b88da0a1625bff03bd8049289826cf8dc81f625af4d182f154 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_heisenberg, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:33:29 np0005480824 podman[188622]: 2025-10-11 03:33:29.872778698 +0000 UTC m=+0.163994246 container died e7a505a2a195a3b88da0a1625bff03bd8049289826cf8dc81f625af4d182f154 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_heisenberg, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:33:29 np0005480824 systemd[1]: var-lib-containers-storage-overlay-ded1bece4a1dd49fa579d2c0b1d758d9d023a675c7a2207365bcfb60fe80eff6-merged.mount: Deactivated successfully.
Oct 10 23:33:29 np0005480824 podman[188622]: 2025-10-11 03:33:29.973576847 +0000 UTC m=+0.264792415 container remove e7a505a2a195a3b88da0a1625bff03bd8049289826cf8dc81f625af4d182f154 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_heisenberg, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:33:29 np0005480824 systemd[1]: libpod-conmon-e7a505a2a195a3b88da0a1625bff03bd8049289826cf8dc81f625af4d182f154.scope: Deactivated successfully.
Oct 10 23:33:30 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v535: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:33:30 np0005480824 podman[188661]: 2025-10-11 03:33:30.137804608 +0000 UTC m=+0.042647254 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:33:30 np0005480824 podman[188661]: 2025-10-11 03:33:30.259427736 +0000 UTC m=+0.164270352 container create dae6ff38e36c5973ea8b37698ea7c79c82eea735158f6b3974d486290e221bb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_johnson, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:33:30 np0005480824 systemd[1]: Started libpod-conmon-dae6ff38e36c5973ea8b37698ea7c79c82eea735158f6b3974d486290e221bb6.scope.
Oct 10 23:33:30 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:33:30 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80a8ac7ad6961dfbffa33711fbac660032157120919a768a513ae13e21656a52/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:33:30 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80a8ac7ad6961dfbffa33711fbac660032157120919a768a513ae13e21656a52/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:33:30 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80a8ac7ad6961dfbffa33711fbac660032157120919a768a513ae13e21656a52/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:33:30 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80a8ac7ad6961dfbffa33711fbac660032157120919a768a513ae13e21656a52/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:33:30 np0005480824 podman[188661]: 2025-10-11 03:33:30.378756902 +0000 UTC m=+0.283599598 container init dae6ff38e36c5973ea8b37698ea7c79c82eea735158f6b3974d486290e221bb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_johnson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 10 23:33:30 np0005480824 podman[188661]: 2025-10-11 03:33:30.386938044 +0000 UTC m=+0.291780700 container start dae6ff38e36c5973ea8b37698ea7c79c82eea735158f6b3974d486290e221bb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 10 23:33:30 np0005480824 podman[188661]: 2025-10-11 03:33:30.391893341 +0000 UTC m=+0.296735977 container attach dae6ff38e36c5973ea8b37698ea7c79c82eea735158f6b3974d486290e221bb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]: {
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:    "0": [
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:        {
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:            "devices": [
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:                "/dev/loop3"
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:            ],
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:            "lv_name": "ceph_lv0",
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:            "lv_size": "21470642176",
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0d82ce-20ea-470d-959e-f67202028a60,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:            "lv_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:            "name": "ceph_lv0",
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:            "tags": {
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:                "ceph.block_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:                "ceph.cluster_name": "ceph",
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:                "ceph.crush_device_class": "",
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:                "ceph.encrypted": "0",
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:                "ceph.osd_fsid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:                "ceph.osd_id": "0",
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:                "ceph.type": "block",
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:                "ceph.vdo": "0"
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:            },
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:            "type": "block",
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:            "vg_name": "ceph_vg0"
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:        }
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:    ],
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:    "1": [
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:        {
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:            "devices": [
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:                "/dev/loop4"
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:            ],
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:            "lv_name": "ceph_lv1",
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:            "lv_size": "21470642176",
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6875119e-c210-4ad1-aca9-6a8084a5ecc8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:            "lv_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:            "name": "ceph_lv1",
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:            "tags": {
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:                "ceph.block_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:                "ceph.cluster_name": "ceph",
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:                "ceph.crush_device_class": "",
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:                "ceph.encrypted": "0",
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:                "ceph.osd_fsid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:                "ceph.osd_id": "1",
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:                "ceph.type": "block",
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:                "ceph.vdo": "0"
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:            },
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:            "type": "block",
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:            "vg_name": "ceph_vg1"
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:        }
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:    ],
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:    "2": [
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:        {
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:            "devices": [
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:                "/dev/loop5"
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:            ],
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:            "lv_name": "ceph_lv2",
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:            "lv_size": "21470642176",
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e86945e8-6909-4584-9098-cee0dfe9add4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:            "lv_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:            "name": "ceph_lv2",
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:            "tags": {
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:                "ceph.block_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:                "ceph.cluster_name": "ceph",
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:                "ceph.crush_device_class": "",
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:                "ceph.encrypted": "0",
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:                "ceph.osd_fsid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:                "ceph.osd_id": "2",
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:                "ceph.type": "block",
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:                "ceph.vdo": "0"
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:            },
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:            "type": "block",
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:            "vg_name": "ceph_vg2"
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:        }
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]:    ]
Oct 10 23:33:31 np0005480824 jovial_johnson[188677]: }
Oct 10 23:33:31 np0005480824 systemd[1]: libpod-dae6ff38e36c5973ea8b37698ea7c79c82eea735158f6b3974d486290e221bb6.scope: Deactivated successfully.
Oct 10 23:33:31 np0005480824 podman[188661]: 2025-10-11 03:33:31.136897323 +0000 UTC m=+1.041739979 container died dae6ff38e36c5973ea8b37698ea7c79c82eea735158f6b3974d486290e221bb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_johnson, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:33:31 np0005480824 systemd[1]: var-lib-containers-storage-overlay-80a8ac7ad6961dfbffa33711fbac660032157120919a768a513ae13e21656a52-merged.mount: Deactivated successfully.
Oct 10 23:33:31 np0005480824 podman[188661]: 2025-10-11 03:33:31.198750097 +0000 UTC m=+1.103592713 container remove dae6ff38e36c5973ea8b37698ea7c79c82eea735158f6b3974d486290e221bb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_johnson, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 10 23:33:31 np0005480824 systemd[1]: libpod-conmon-dae6ff38e36c5973ea8b37698ea7c79c82eea735158f6b3974d486290e221bb6.scope: Deactivated successfully.
Oct 10 23:33:31 np0005480824 podman[189116]: 2025-10-11 03:33:31.803780159 +0000 UTC m=+0.039640323 container create c56e78197f746bae16a4e61106a3713ca0a887eea952c3dc3d5435100fbb14e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:33:31 np0005480824 systemd[1]: Started libpod-conmon-c56e78197f746bae16a4e61106a3713ca0a887eea952c3dc3d5435100fbb14e9.scope.
Oct 10 23:33:31 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:33:31 np0005480824 podman[189116]: 2025-10-11 03:33:31.786674617 +0000 UTC m=+0.022534791 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:33:31 np0005480824 podman[189116]: 2025-10-11 03:33:31.886848032 +0000 UTC m=+0.122708226 container init c56e78197f746bae16a4e61106a3713ca0a887eea952c3dc3d5435100fbb14e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_aryabhata, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:33:31 np0005480824 podman[189116]: 2025-10-11 03:33:31.895816612 +0000 UTC m=+0.131676766 container start c56e78197f746bae16a4e61106a3713ca0a887eea952c3dc3d5435100fbb14e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_aryabhata, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 10 23:33:31 np0005480824 podman[189116]: 2025-10-11 03:33:31.898846823 +0000 UTC m=+0.134706967 container attach c56e78197f746bae16a4e61106a3713ca0a887eea952c3dc3d5435100fbb14e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 10 23:33:31 np0005480824 upbeat_aryabhata[189195]: 167 167
Oct 10 23:33:31 np0005480824 systemd[1]: libpod-c56e78197f746bae16a4e61106a3713ca0a887eea952c3dc3d5435100fbb14e9.scope: Deactivated successfully.
Oct 10 23:33:31 np0005480824 podman[189116]: 2025-10-11 03:33:31.901410714 +0000 UTC m=+0.137270878 container died c56e78197f746bae16a4e61106a3713ca0a887eea952c3dc3d5435100fbb14e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:33:31 np0005480824 systemd[1]: var-lib-containers-storage-overlay-389295b4e2f0bca00fc284dad57e185b85cd8a781dc0650192a0bc2c55b4e1cc-merged.mount: Deactivated successfully.
Oct 10 23:33:31 np0005480824 podman[189116]: 2025-10-11 03:33:31.938480136 +0000 UTC m=+0.174340290 container remove c56e78197f746bae16a4e61106a3713ca0a887eea952c3dc3d5435100fbb14e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_aryabhata, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:33:31 np0005480824 systemd[1]: libpod-conmon-c56e78197f746bae16a4e61106a3713ca0a887eea952c3dc3d5435100fbb14e9.scope: Deactivated successfully.
Oct 10 23:33:32 np0005480824 podman[189348]: 2025-10-11 03:33:32.08378487 +0000 UTC m=+0.038583047 container create cf1f88023234471d6a59353b823dc51b3a8546ce9bccaf773f1feee55a23c6b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_liskov, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:33:32 np0005480824 systemd[1]: Started libpod-conmon-cf1f88023234471d6a59353b823dc51b3a8546ce9bccaf773f1feee55a23c6b9.scope.
Oct 10 23:33:32 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v536: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:33:32 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:33:32 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9870229d32d2d8d6dc0662f6a47b9be6b1e5edce2393b02fc63d6b037950075/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:33:32 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9870229d32d2d8d6dc0662f6a47b9be6b1e5edce2393b02fc63d6b037950075/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:33:32 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9870229d32d2d8d6dc0662f6a47b9be6b1e5edce2393b02fc63d6b037950075/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:33:32 np0005480824 podman[189348]: 2025-10-11 03:33:32.067046017 +0000 UTC m=+0.021844224 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:33:32 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9870229d32d2d8d6dc0662f6a47b9be6b1e5edce2393b02fc63d6b037950075/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:33:32 np0005480824 podman[189348]: 2025-10-11 03:33:32.177503373 +0000 UTC m=+0.132301570 container init cf1f88023234471d6a59353b823dc51b3a8546ce9bccaf773f1feee55a23c6b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_liskov, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:33:32 np0005480824 podman[189348]: 2025-10-11 03:33:32.185906061 +0000 UTC m=+0.140704238 container start cf1f88023234471d6a59353b823dc51b3a8546ce9bccaf773f1feee55a23c6b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 10 23:33:32 np0005480824 podman[189348]: 2025-10-11 03:33:32.189155768 +0000 UTC m=+0.143953965 container attach cf1f88023234471d6a59353b823dc51b3a8546ce9bccaf773f1feee55a23c6b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_liskov, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:33:32 np0005480824 systemd[1]: Stopping OpenSSH server daemon...
Oct 10 23:33:32 np0005480824 systemd[1]: sshd.service: Deactivated successfully.
Oct 10 23:33:32 np0005480824 systemd[1]: Stopped OpenSSH server daemon.
Oct 10 23:33:32 np0005480824 systemd[1]: sshd.service: Consumed 3.063s CPU time, no IO.
Oct 10 23:33:32 np0005480824 systemd[1]: Stopped target sshd-keygen.target.
Oct 10 23:33:32 np0005480824 systemd[1]: Stopping sshd-keygen.target...
Oct 10 23:33:32 np0005480824 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 10 23:33:32 np0005480824 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 10 23:33:32 np0005480824 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 10 23:33:32 np0005480824 systemd[1]: Reached target sshd-keygen.target.
Oct 10 23:33:32 np0005480824 systemd[1]: Starting OpenSSH server daemon...
Oct 10 23:33:32 np0005480824 systemd[1]: Started OpenSSH server daemon.
Oct 10 23:33:33 np0005480824 gallant_liskov[189419]: {
Oct 10 23:33:33 np0005480824 gallant_liskov[189419]:    "1d0d82ce-20ea-470d-959e-f67202028a60": {
Oct 10 23:33:33 np0005480824 gallant_liskov[189419]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:33:33 np0005480824 gallant_liskov[189419]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 10 23:33:33 np0005480824 gallant_liskov[189419]:        "osd_id": 0,
Oct 10 23:33:33 np0005480824 gallant_liskov[189419]:        "osd_uuid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:33:33 np0005480824 gallant_liskov[189419]:        "type": "bluestore"
Oct 10 23:33:33 np0005480824 gallant_liskov[189419]:    },
Oct 10 23:33:33 np0005480824 gallant_liskov[189419]:    "6875119e-c210-4ad1-aca9-6a8084a5ecc8": {
Oct 10 23:33:33 np0005480824 gallant_liskov[189419]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:33:33 np0005480824 gallant_liskov[189419]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 10 23:33:33 np0005480824 gallant_liskov[189419]:        "osd_id": 1,
Oct 10 23:33:33 np0005480824 gallant_liskov[189419]:        "osd_uuid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:33:33 np0005480824 gallant_liskov[189419]:        "type": "bluestore"
Oct 10 23:33:33 np0005480824 gallant_liskov[189419]:    },
Oct 10 23:33:33 np0005480824 gallant_liskov[189419]:    "e86945e8-6909-4584-9098-cee0dfe9add4": {
Oct 10 23:33:33 np0005480824 gallant_liskov[189419]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:33:33 np0005480824 gallant_liskov[189419]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 10 23:33:33 np0005480824 gallant_liskov[189419]:        "osd_id": 2,
Oct 10 23:33:33 np0005480824 gallant_liskov[189419]:        "osd_uuid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:33:33 np0005480824 gallant_liskov[189419]:        "type": "bluestore"
Oct 10 23:33:33 np0005480824 gallant_liskov[189419]:    }
Oct 10 23:33:33 np0005480824 gallant_liskov[189419]: }
Oct 10 23:33:33 np0005480824 podman[189348]: 2025-10-11 03:33:33.130909755 +0000 UTC m=+1.085707932 container died cf1f88023234471d6a59353b823dc51b3a8546ce9bccaf773f1feee55a23c6b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_liskov, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 10 23:33:33 np0005480824 systemd[1]: libpod-cf1f88023234471d6a59353b823dc51b3a8546ce9bccaf773f1feee55a23c6b9.scope: Deactivated successfully.
Oct 10 23:33:33 np0005480824 systemd[1]: var-lib-containers-storage-overlay-f9870229d32d2d8d6dc0662f6a47b9be6b1e5edce2393b02fc63d6b037950075-merged.mount: Deactivated successfully.
Oct 10 23:33:33 np0005480824 podman[189348]: 2025-10-11 03:33:33.189617335 +0000 UTC m=+1.144415502 container remove cf1f88023234471d6a59353b823dc51b3a8546ce9bccaf773f1feee55a23c6b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_liskov, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:33:33 np0005480824 systemd[1]: libpod-conmon-cf1f88023234471d6a59353b823dc51b3a8546ce9bccaf773f1feee55a23c6b9.scope: Deactivated successfully.
Oct 10 23:33:33 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 10 23:33:33 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:33:33 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 10 23:33:33 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:33:33 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev c0fef53f-dc5a-4fca-bde4-9d6d4a0ac933 does not exist
Oct 10 23:33:33 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 5168957b-2b03-4811-97b8-7fcfe70154e8 does not exist
Oct 10 23:33:33 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:33:34 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v537: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:33:34 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:33:34 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:33:34 np0005480824 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 10 23:33:34 np0005480824 systemd[1]: Starting man-db-cache-update.service...
Oct 10 23:33:34 np0005480824 systemd[1]: Reloading.
Oct 10 23:33:34 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:33:34 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:33:34 np0005480824 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 10 23:33:36 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v538: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:33:36 np0005480824 systemd[1]: Starting PackageKit Daemon...
Oct 10 23:33:36 np0005480824 systemd[1]: Started PackageKit Daemon.
Oct 10 23:33:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 23:33:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:33:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 23:33:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:33:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:33:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:33:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:33:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:33:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:33:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:33:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:33:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:33:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 23:33:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:33:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:33:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:33:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 10 23:33:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:33:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 23:33:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:33:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:33:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:33:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 10 23:33:38 np0005480824 podman[193154]: 2025-10-11 03:33:38.068804957 +0000 UTC m=+0.121182430 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 10 23:33:38 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v539: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:33:38 np0005480824 python3.9[193369]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 10 23:33:38 np0005480824 systemd[1]: Reloading.
Oct 10 23:33:38 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:33:38 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:33:38 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:33:39 np0005480824 auditd[700]: Audit daemon rotating log files
Oct 10 23:33:39 np0005480824 python3.9[194521]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 10 23:33:39 np0005480824 systemd[1]: Reloading.
Oct 10 23:33:39 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:33:39 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:33:40 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v540: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:33:40 np0005480824 python3.9[195690]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 10 23:33:40 np0005480824 systemd[1]: Reloading.
Oct 10 23:33:40 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:33:40 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:33:41 np0005480824 python3.9[196985]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 10 23:33:41 np0005480824 systemd[1]: Reloading.
Oct 10 23:33:42 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:33:42 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:33:42 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v541: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:33:42 np0005480824 podman[197756]: 2025-10-11 03:33:42.306281334 +0000 UTC m=+0.059149891 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 10 23:33:43 np0005480824 python3.9[198466]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 23:33:43 np0005480824 systemd[1]: Reloading.
Oct 10 23:33:43 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:33:43 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:33:43 np0005480824 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 10 23:33:43 np0005480824 systemd[1]: Finished man-db-cache-update.service.
Oct 10 23:33:43 np0005480824 systemd[1]: man-db-cache-update.service: Consumed 11.162s CPU time.
Oct 10 23:33:43 np0005480824 systemd[1]: run-r3675a7d18bc54877a9145d87089fda10.service: Deactivated successfully.
Oct 10 23:33:43 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:33:44 np0005480824 python3.9[199257]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 23:33:44 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v542: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:33:44 np0005480824 systemd[1]: Reloading.
Oct 10 23:33:44 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:33:44 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:33:45 np0005480824 python3.9[199447]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 23:33:45 np0005480824 systemd[1]: Reloading.
Oct 10 23:33:45 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:33:45 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:33:46 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v543: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:33:46 np0005480824 python3.9[199636]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 23:33:47 np0005480824 python3.9[199791]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 23:33:47 np0005480824 systemd[1]: Reloading.
Oct 10 23:33:47 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:33:47 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:33:48 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v544: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:33:48 np0005480824 python3.9[199981]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 10 23:33:48 np0005480824 systemd[1]: Reloading.
Oct 10 23:33:48 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:33:48 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:33:48 np0005480824 systemd[1]: Listening on libvirt proxy daemon socket.
Oct 10 23:33:48 np0005480824 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Oct 10 23:33:48 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:33:49 np0005480824 python3.9[200174]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 23:33:50 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v545: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:33:50 np0005480824 python3.9[200329]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 23:33:51 np0005480824 python3.9[200484]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 23:33:52 np0005480824 python3.9[200639]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 23:33:52 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v546: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:33:52 np0005480824 python3.9[200794]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 23:33:53 np0005480824 python3.9[200949]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 23:33:53 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:33:54 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v547: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:33:54 np0005480824 python3.9[201104]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 23:33:55 np0005480824 python3.9[201259]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 23:33:56 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v548: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:33:56 np0005480824 python3.9[201414]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 23:33:57 np0005480824 python3.9[201569]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 23:33:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:33:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:33:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:33:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:33:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:33:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:33:58 np0005480824 python3.9[201724]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 23:33:58 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v549: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:33:58 np0005480824 python3.9[201879]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 23:33:58 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:33:59 np0005480824 python3.9[202034]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 23:34:00 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v550: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:34:00 np0005480824 python3.9[202189]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 23:34:01 np0005480824 python3.9[202344]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:34:02 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v551: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:34:02 np0005480824 python3.9[202496]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:34:03 np0005480824 python3.9[202648]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:34:03 np0005480824 python3.9[202800]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:34:03 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:34:04 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v552: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:34:04 np0005480824 python3.9[202952]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:34:05 np0005480824 python3.9[203104]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:34:05 np0005480824 python3.9[203256]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:34:06 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v553: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:34:06 np0005480824 python3.9[203381]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760153645.2442722-554-138020342791180/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:34:07 np0005480824 python3.9[203533]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:34:08 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v554: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:34:08 np0005480824 python3.9[203658]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760153646.9908683-554-90970568139861/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:34:08 np0005480824 podman[203659]: 2025-10-11 03:34:08.322578335 +0000 UTC m=+0.095607469 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 10 23:34:08 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:34:09 np0005480824 python3.9[203835]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:34:09 np0005480824 python3.9[203960]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760153648.38214-554-225839812589707/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:34:10 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v555: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:34:10 np0005480824 python3.9[204112]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:34:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:34:10.471 162245 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:34:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:34:10.471 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:34:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:34:10.471 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:34:10 np0005480824 python3.9[204237]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760153649.8090782-554-267932531620180/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:34:11 np0005480824 python3.9[204389]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:34:12 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v556: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:34:12 np0005480824 python3.9[204514]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760153651.1162584-554-237793847075975/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:34:12 np0005480824 podman[204638]: 2025-10-11 03:34:12.678861065 +0000 UTC m=+0.066668429 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct 10 23:34:12 np0005480824 python3.9[204685]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:34:13 np0005480824 python3.9[204810]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760153652.3450916-554-63167014032944/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:34:13 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:34:14 np0005480824 python3.9[204962]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:34:14 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v557: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:34:14 np0005480824 python3.9[205085]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760153653.6231306-554-140638446842275/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:34:15 np0005480824 python3.9[205237]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:34:15 np0005480824 python3.9[205362]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760153654.7968726-554-11083745851590/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:34:16 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v558: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:34:16 np0005480824 python3.9[205514]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Oct 10 23:34:17 np0005480824 python3.9[205667]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:34:17 np0005480824 python3.9[205819]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:34:18 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v559: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:34:18 np0005480824 python3.9[205971]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:34:18 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:34:19 np0005480824 python3.9[206123]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:34:19 np0005480824 python3.9[206275]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:34:20 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v560: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:34:20 np0005480824 python3.9[206427]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:34:21 np0005480824 python3.9[206579]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:34:21 np0005480824 python3.9[206731]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:34:22 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v561: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:34:22 np0005480824 python3.9[206883]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:34:23 np0005480824 python3.9[207035]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:34:23 np0005480824 python3.9[207187]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:34:23 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:34:24 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v562: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:34:24 np0005480824 python3.9[207339]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:34:25 np0005480824 python3.9[207491]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:34:25 np0005480824 python3.9[207643]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:34:26 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v563: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:34:26 np0005480824 python3.9[207795]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:34:26 np0005480824 python3.9[207918]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760153665.846631-775-134786921292884/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:34:27 np0005480824 python3.9[208070]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:34:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Optimize plan auto_2025-10-11_03:34:27
Oct 10 23:34:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 23:34:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] do_upmap
Oct 10 23:34:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] pools ['.rgw.root', 'volumes', 'vms', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.control', 'default.rgw.meta', 'backups', 'default.rgw.log', 'images', 'cephfs.cephfs.data']
Oct 10 23:34:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] prepared 0/10 changes
Oct 10 23:34:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:34:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:34:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:34:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:34:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:34:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:34:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 23:34:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:34:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 23:34:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:34:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:34:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:34:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:34:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:34:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:34:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:34:28 np0005480824 python3.9[208193]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760153667.156866-775-44558711385882/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:34:28 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v564: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:34:28 np0005480824 python3.9[208345]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:34:28 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:34:29 np0005480824 python3.9[208468]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760153668.2851086-775-50155416057403/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:34:29 np0005480824 python3.9[208620]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:34:30 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v565: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:34:30 np0005480824 python3.9[208743]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760153669.4803014-775-139672000687912/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:34:31 np0005480824 python3.9[208895]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:34:31 np0005480824 python3.9[209018]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760153670.7825172-775-16789094029587/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:34:32 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v566: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:34:32 np0005480824 python3.9[209170]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:34:33 np0005480824 python3.9[209293]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760153672.0301013-775-264348020395836/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:34:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:34:34 np0005480824 python3.9[209445]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:34:34 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v567: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:34:34 np0005480824 python3.9[209683]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760153673.4078317-775-145638008851406/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:34:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:34:34 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:34:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 10 23:34:34 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:34:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 10 23:34:34 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:34:34 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 180830be-d97d-4d28-87a2-65b6c0d3cd9d does not exist
Oct 10 23:34:34 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 290d0290-c0a8-49d7-a2c8-44fd63e690dc does not exist
Oct 10 23:34:34 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 0e30ec69-713e-4081-b1c8-0e24e2aa1887 does not exist
Oct 10 23:34:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 10 23:34:34 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 23:34:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 10 23:34:34 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:34:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:34:34 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:34:35 np0005480824 python3.9[209952]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:34:35 np0005480824 podman[209994]: 2025-10-11 03:34:35.339892204 +0000 UTC m=+0.041394565 container create 7e8acf709d15f28693b82a46390e55f1929972daac667a6ca7302b1211827cb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:34:35 np0005480824 systemd[1]: Started libpod-conmon-7e8acf709d15f28693b82a46390e55f1929972daac667a6ca7302b1211827cb7.scope.
Oct 10 23:34:35 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:34:35 np0005480824 podman[209994]: 2025-10-11 03:34:35.319735199 +0000 UTC m=+0.021237550 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:34:35 np0005480824 podman[209994]: 2025-10-11 03:34:35.416287389 +0000 UTC m=+0.117789770 container init 7e8acf709d15f28693b82a46390e55f1929972daac667a6ca7302b1211827cb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_mayer, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:34:35 np0005480824 podman[209994]: 2025-10-11 03:34:35.425950526 +0000 UTC m=+0.127452857 container start 7e8acf709d15f28693b82a46390e55f1929972daac667a6ca7302b1211827cb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_mayer, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 10 23:34:35 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:34:35 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:34:35 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:34:35 np0005480824 podman[209994]: 2025-10-11 03:34:35.433014212 +0000 UTC m=+0.134516583 container attach 7e8acf709d15f28693b82a46390e55f1929972daac667a6ca7302b1211827cb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_mayer, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:34:35 np0005480824 suspicious_mayer[210034]: 167 167
Oct 10 23:34:35 np0005480824 systemd[1]: libpod-7e8acf709d15f28693b82a46390e55f1929972daac667a6ca7302b1211827cb7.scope: Deactivated successfully.
Oct 10 23:34:35 np0005480824 podman[209994]: 2025-10-11 03:34:35.435342487 +0000 UTC m=+0.136844828 container died 7e8acf709d15f28693b82a46390e55f1929972daac667a6ca7302b1211827cb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_mayer, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:34:35 np0005480824 systemd[1]: var-lib-containers-storage-overlay-3fcff3e13ae4ac3dcb1072ada251943bfc5d46082785a088ea3aa3fbdebc1c20-merged.mount: Deactivated successfully.
Oct 10 23:34:35 np0005480824 podman[209994]: 2025-10-11 03:34:35.469875039 +0000 UTC m=+0.171377380 container remove 7e8acf709d15f28693b82a46390e55f1929972daac667a6ca7302b1211827cb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_mayer, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:34:35 np0005480824 systemd[1]: libpod-conmon-7e8acf709d15f28693b82a46390e55f1929972daac667a6ca7302b1211827cb7.scope: Deactivated successfully.
Oct 10 23:34:35 np0005480824 podman[210128]: 2025-10-11 03:34:35.642256091 +0000 UTC m=+0.044595259 container create 85c13df2e9f17eaff89e8639549c4f187cfeebdf1497f10bd68aad89e852d6e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_sanderson, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 10 23:34:35 np0005480824 systemd[1]: Started libpod-conmon-85c13df2e9f17eaff89e8639549c4f187cfeebdf1497f10bd68aad89e852d6e2.scope.
Oct 10 23:34:35 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:34:35 np0005480824 podman[210128]: 2025-10-11 03:34:35.624417222 +0000 UTC m=+0.026756400 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:34:35 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40e3108e666376f39336d4e3d43e4f0871562bfd6bd2e7f19f509d8d4be99f39/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:34:35 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40e3108e666376f39336d4e3d43e4f0871562bfd6bd2e7f19f509d8d4be99f39/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:34:35 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40e3108e666376f39336d4e3d43e4f0871562bfd6bd2e7f19f509d8d4be99f39/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:34:35 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40e3108e666376f39336d4e3d43e4f0871562bfd6bd2e7f19f509d8d4be99f39/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:34:35 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40e3108e666376f39336d4e3d43e4f0871562bfd6bd2e7f19f509d8d4be99f39/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 23:34:35 np0005480824 podman[210128]: 2025-10-11 03:34:35.737745476 +0000 UTC m=+0.140084704 container init 85c13df2e9f17eaff89e8639549c4f187cfeebdf1497f10bd68aad89e852d6e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 10 23:34:35 np0005480824 podman[210128]: 2025-10-11 03:34:35.744647118 +0000 UTC m=+0.146986276 container start 85c13df2e9f17eaff89e8639549c4f187cfeebdf1497f10bd68aad89e852d6e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:34:35 np0005480824 podman[210128]: 2025-10-11 03:34:35.74769337 +0000 UTC m=+0.150032608 container attach 85c13df2e9f17eaff89e8639549c4f187cfeebdf1497f10bd68aad89e852d6e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_sanderson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:34:35 np0005480824 python3.9[210176]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760153674.7879958-775-206955983515293/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:34:36 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v568: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:34:36 np0005480824 python3.9[210338]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:34:36 np0005480824 beautiful_sanderson[210172]: --> passed data devices: 0 physical, 3 LVM
Oct 10 23:34:36 np0005480824 beautiful_sanderson[210172]: --> relative data size: 1.0
Oct 10 23:34:36 np0005480824 beautiful_sanderson[210172]: --> All data devices are unavailable
Oct 10 23:34:36 np0005480824 systemd[1]: libpod-85c13df2e9f17eaff89e8639549c4f187cfeebdf1497f10bd68aad89e852d6e2.scope: Deactivated successfully.
Oct 10 23:34:36 np0005480824 podman[210128]: 2025-10-11 03:34:36.762888073 +0000 UTC m=+1.165227231 container died 85c13df2e9f17eaff89e8639549c4f187cfeebdf1497f10bd68aad89e852d6e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:34:36 np0005480824 systemd[1]: var-lib-containers-storage-overlay-40e3108e666376f39336d4e3d43e4f0871562bfd6bd2e7f19f509d8d4be99f39-merged.mount: Deactivated successfully.
Oct 10 23:34:36 np0005480824 podman[210128]: 2025-10-11 03:34:36.82228875 +0000 UTC m=+1.224627908 container remove 85c13df2e9f17eaff89e8639549c4f187cfeebdf1497f10bd68aad89e852d6e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_sanderson, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 10 23:34:36 np0005480824 systemd[1]: libpod-conmon-85c13df2e9f17eaff89e8639549c4f187cfeebdf1497f10bd68aad89e852d6e2.scope: Deactivated successfully.
Oct 10 23:34:37 np0005480824 python3.9[210560]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760153676.129397-775-133449712751981/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:34:37 np0005480824 podman[210676]: 2025-10-11 03:34:37.574999583 +0000 UTC m=+0.065789758 container create d12d3c5a7888cee54c4daf3736bdcc919ca8dca63b092cb4bfd77bd4d49115a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_leakey, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:34:37 np0005480824 systemd[1]: Started libpod-conmon-d12d3c5a7888cee54c4daf3736bdcc919ca8dca63b092cb4bfd77bd4d49115a8.scope.
Oct 10 23:34:37 np0005480824 podman[210676]: 2025-10-11 03:34:37.548940181 +0000 UTC m=+0.039730416 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:34:37 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:34:37 np0005480824 podman[210676]: 2025-10-11 03:34:37.665734276 +0000 UTC m=+0.156524441 container init d12d3c5a7888cee54c4daf3736bdcc919ca8dca63b092cb4bfd77bd4d49115a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 10 23:34:37 np0005480824 podman[210676]: 2025-10-11 03:34:37.678029875 +0000 UTC m=+0.168820020 container start d12d3c5a7888cee54c4daf3736bdcc919ca8dca63b092cb4bfd77bd4d49115a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_leakey, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:34:37 np0005480824 sleepy_leakey[210746]: 167 167
Oct 10 23:34:37 np0005480824 systemd[1]: libpod-d12d3c5a7888cee54c4daf3736bdcc919ca8dca63b092cb4bfd77bd4d49115a8.scope: Deactivated successfully.
Oct 10 23:34:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 23:34:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:34:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 23:34:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:34:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:34:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:34:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:34:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:34:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:34:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:34:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:34:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:34:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 23:34:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:34:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:34:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:34:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 10 23:34:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:34:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 23:34:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:34:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:34:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:34:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 10 23:34:37 np0005480824 podman[210676]: 2025-10-11 03:34:37.694970983 +0000 UTC m=+0.185761158 container attach d12d3c5a7888cee54c4daf3736bdcc919ca8dca63b092cb4bfd77bd4d49115a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_leakey, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:34:37 np0005480824 podman[210676]: 2025-10-11 03:34:37.695297191 +0000 UTC m=+0.186087346 container died d12d3c5a7888cee54c4daf3736bdcc919ca8dca63b092cb4bfd77bd4d49115a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:34:37 np0005480824 systemd[1]: var-lib-containers-storage-overlay-918308a77394419de6f9ac48f2d2bbe6ff4f142f75e7b129ba8d9b2c42cfe62d-merged.mount: Deactivated successfully.
Oct 10 23:34:37 np0005480824 podman[210676]: 2025-10-11 03:34:37.729116456 +0000 UTC m=+0.219906601 container remove d12d3c5a7888cee54c4daf3736bdcc919ca8dca63b092cb4bfd77bd4d49115a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 10 23:34:37 np0005480824 systemd[1]: libpod-conmon-d12d3c5a7888cee54c4daf3736bdcc919ca8dca63b092cb4bfd77bd4d49115a8.scope: Deactivated successfully.
Oct 10 23:34:37 np0005480824 podman[210823]: 2025-10-11 03:34:37.923522606 +0000 UTC m=+0.059039329 container create 18be08b6044c2d24b66639a8a7ccd6e972a38fefa2eb47852bf0b8e532dfbc88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_burnell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 10 23:34:37 np0005480824 systemd[1]: Started libpod-conmon-18be08b6044c2d24b66639a8a7ccd6e972a38fefa2eb47852bf0b8e532dfbc88.scope.
Oct 10 23:34:37 np0005480824 podman[210823]: 2025-10-11 03:34:37.892337943 +0000 UTC m=+0.027854716 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:34:37 np0005480824 python3.9[210817]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:34:38 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:34:38 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ab5779928445cad4577aefd2455af7895dea67d7cbc0630d5d6c6ea324c2aca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:34:38 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ab5779928445cad4577aefd2455af7895dea67d7cbc0630d5d6c6ea324c2aca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:34:38 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ab5779928445cad4577aefd2455af7895dea67d7cbc0630d5d6c6ea324c2aca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:34:38 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ab5779928445cad4577aefd2455af7895dea67d7cbc0630d5d6c6ea324c2aca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:34:38 np0005480824 podman[210823]: 2025-10-11 03:34:38.048633676 +0000 UTC m=+0.184150369 container init 18be08b6044c2d24b66639a8a7ccd6e972a38fefa2eb47852bf0b8e532dfbc88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Oct 10 23:34:38 np0005480824 podman[210823]: 2025-10-11 03:34:38.062468771 +0000 UTC m=+0.197985494 container start 18be08b6044c2d24b66639a8a7ccd6e972a38fefa2eb47852bf0b8e532dfbc88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_burnell, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:34:38 np0005480824 podman[210823]: 2025-10-11 03:34:38.066927527 +0000 UTC m=+0.202444240 container attach 18be08b6044c2d24b66639a8a7ccd6e972a38fefa2eb47852bf0b8e532dfbc88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_burnell, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:34:38 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v569: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:34:38 np0005480824 podman[210966]: 2025-10-11 03:34:38.536491194 +0000 UTC m=+0.142368267 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 10 23:34:38 np0005480824 python3.9[210967]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760153677.497501-775-122679118814431/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]: {
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:    "0": [
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:        {
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:            "devices": [
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:                "/dev/loop3"
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:            ],
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:            "lv_name": "ceph_lv0",
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:            "lv_size": "21470642176",
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0d82ce-20ea-470d-959e-f67202028a60,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:            "lv_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:            "name": "ceph_lv0",
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:            "tags": {
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:                "ceph.block_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:                "ceph.cluster_name": "ceph",
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:                "ceph.crush_device_class": "",
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:                "ceph.encrypted": "0",
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:                "ceph.osd_fsid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:                "ceph.osd_id": "0",
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:                "ceph.type": "block",
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:                "ceph.vdo": "0"
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:            },
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:            "type": "block",
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:            "vg_name": "ceph_vg0"
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:        }
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:    ],
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:    "1": [
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:        {
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:            "devices": [
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:                "/dev/loop4"
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:            ],
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:            "lv_name": "ceph_lv1",
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:            "lv_size": "21470642176",
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6875119e-c210-4ad1-aca9-6a8084a5ecc8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:            "lv_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:            "name": "ceph_lv1",
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:            "tags": {
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:                "ceph.block_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:                "ceph.cluster_name": "ceph",
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:                "ceph.crush_device_class": "",
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:                "ceph.encrypted": "0",
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:                "ceph.osd_fsid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:                "ceph.osd_id": "1",
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:                "ceph.type": "block",
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:                "ceph.vdo": "0"
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:            },
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:            "type": "block",
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:            "vg_name": "ceph_vg1"
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:        }
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:    ],
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:    "2": [
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:        {
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:            "devices": [
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:                "/dev/loop5"
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:            ],
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:            "lv_name": "ceph_lv2",
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:            "lv_size": "21470642176",
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e86945e8-6909-4584-9098-cee0dfe9add4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:            "lv_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:            "name": "ceph_lv2",
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:            "tags": {
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:                "ceph.block_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:                "ceph.cluster_name": "ceph",
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:                "ceph.crush_device_class": "",
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:                "ceph.encrypted": "0",
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:                "ceph.osd_fsid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:                "ceph.osd_id": "2",
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:                "ceph.type": "block",
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:                "ceph.vdo": "0"
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:            },
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:            "type": "block",
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:            "vg_name": "ceph_vg2"
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:        }
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]:    ]
Oct 10 23:34:38 np0005480824 lucid_burnell[210839]: }
Oct 10 23:34:38 np0005480824 systemd[1]: libpod-18be08b6044c2d24b66639a8a7ccd6e972a38fefa2eb47852bf0b8e532dfbc88.scope: Deactivated successfully.
Oct 10 23:34:38 np0005480824 podman[210823]: 2025-10-11 03:34:38.855770409 +0000 UTC m=+0.991287132 container died 18be08b6044c2d24b66639a8a7ccd6e972a38fefa2eb47852bf0b8e532dfbc88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_burnell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 10 23:34:38 np0005480824 systemd[1]: var-lib-containers-storage-overlay-9ab5779928445cad4577aefd2455af7895dea67d7cbc0630d5d6c6ea324c2aca-merged.mount: Deactivated successfully.
Oct 10 23:34:38 np0005480824 podman[210823]: 2025-10-11 03:34:38.917808197 +0000 UTC m=+1.053324880 container remove 18be08b6044c2d24b66639a8a7ccd6e972a38fefa2eb47852bf0b8e532dfbc88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_burnell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 10 23:34:38 np0005480824 systemd[1]: libpod-conmon-18be08b6044c2d24b66639a8a7ccd6e972a38fefa2eb47852bf0b8e532dfbc88.scope: Deactivated successfully.
Oct 10 23:34:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:34:39 np0005480824 python3.9[211186]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:34:39 np0005480824 podman[211396]: 2025-10-11 03:34:39.605140744 +0000 UTC m=+0.071200125 container create 38924c09c5d0af4f9086bf2837c223cdc0231013228a2e721ba08ed2a3bc739f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_lumiere, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 10 23:34:39 np0005480824 systemd[1]: Started libpod-conmon-38924c09c5d0af4f9086bf2837c223cdc0231013228a2e721ba08ed2a3bc739f.scope.
Oct 10 23:34:39 np0005480824 podman[211396]: 2025-10-11 03:34:39.559621925 +0000 UTC m=+0.025681336 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:34:39 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:34:39 np0005480824 podman[211396]: 2025-10-11 03:34:39.679055662 +0000 UTC m=+0.145115073 container init 38924c09c5d0af4f9086bf2837c223cdc0231013228a2e721ba08ed2a3bc739f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_lumiere, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 10 23:34:39 np0005480824 podman[211396]: 2025-10-11 03:34:39.686354813 +0000 UTC m=+0.152414194 container start 38924c09c5d0af4f9086bf2837c223cdc0231013228a2e721ba08ed2a3bc739f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 10 23:34:39 np0005480824 podman[211396]: 2025-10-11 03:34:39.69004542 +0000 UTC m=+0.156104801 container attach 38924c09c5d0af4f9086bf2837c223cdc0231013228a2e721ba08ed2a3bc739f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:34:39 np0005480824 upbeat_lumiere[211441]: 167 167
Oct 10 23:34:39 np0005480824 systemd[1]: libpod-38924c09c5d0af4f9086bf2837c223cdc0231013228a2e721ba08ed2a3bc739f.scope: Deactivated successfully.
Oct 10 23:34:39 np0005480824 podman[211396]: 2025-10-11 03:34:39.691935174 +0000 UTC m=+0.157994555 container died 38924c09c5d0af4f9086bf2837c223cdc0231013228a2e721ba08ed2a3bc739f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_lumiere, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 10 23:34:39 np0005480824 systemd[1]: var-lib-containers-storage-overlay-e5dfa17fe91a23ca4dd6304f4f9f2891ed39515f152863bdd48988afa9f04a73-merged.mount: Deactivated successfully.
Oct 10 23:34:39 np0005480824 podman[211396]: 2025-10-11 03:34:39.725378821 +0000 UTC m=+0.191438202 container remove 38924c09c5d0af4f9086bf2837c223cdc0231013228a2e721ba08ed2a3bc739f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:34:39 np0005480824 systemd[1]: libpod-conmon-38924c09c5d0af4f9086bf2837c223cdc0231013228a2e721ba08ed2a3bc739f.scope: Deactivated successfully.
Oct 10 23:34:39 np0005480824 python3.9[211438]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760153678.7509694-775-126420378017483/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:34:39 np0005480824 podman[211470]: 2025-10-11 03:34:39.925314131 +0000 UTC m=+0.066223728 container create de1346da6f97d58a88eaba3e88588103b7830d631c8131637fc9d363f0dd9598 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_visvesvaraya, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 10 23:34:39 np0005480824 systemd[1]: Started libpod-conmon-de1346da6f97d58a88eaba3e88588103b7830d631c8131637fc9d363f0dd9598.scope.
Oct 10 23:34:39 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:34:39 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78f20319b8df89e3eb74b1286a53fc7f2df6828a00e7d44bc7d19244b73a262c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:34:39 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78f20319b8df89e3eb74b1286a53fc7f2df6828a00e7d44bc7d19244b73a262c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:34:39 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78f20319b8df89e3eb74b1286a53fc7f2df6828a00e7d44bc7d19244b73a262c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:34:39 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78f20319b8df89e3eb74b1286a53fc7f2df6828a00e7d44bc7d19244b73a262c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:34:39 np0005480824 podman[211470]: 2025-10-11 03:34:39.901574603 +0000 UTC m=+0.042484240 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:34:40 np0005480824 podman[211470]: 2025-10-11 03:34:40.004153434 +0000 UTC m=+0.145063061 container init de1346da6f97d58a88eaba3e88588103b7830d631c8131637fc9d363f0dd9598 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_visvesvaraya, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 10 23:34:40 np0005480824 podman[211470]: 2025-10-11 03:34:40.015253175 +0000 UTC m=+0.156162802 container start de1346da6f97d58a88eaba3e88588103b7830d631c8131637fc9d363f0dd9598 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_visvesvaraya, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 10 23:34:40 np0005480824 podman[211470]: 2025-10-11 03:34:40.019374742 +0000 UTC m=+0.160284339 container attach de1346da6f97d58a88eaba3e88588103b7830d631c8131637fc9d363f0dd9598 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_visvesvaraya, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:34:40 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v570: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:34:40 np0005480824 python3.9[211637]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:34:41 np0005480824 python3.9[211774]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760153679.9556234-775-132839564410248/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:34:41 np0005480824 exciting_visvesvaraya[211528]: {
Oct 10 23:34:41 np0005480824 exciting_visvesvaraya[211528]:    "1d0d82ce-20ea-470d-959e-f67202028a60": {
Oct 10 23:34:41 np0005480824 exciting_visvesvaraya[211528]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:34:41 np0005480824 exciting_visvesvaraya[211528]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 10 23:34:41 np0005480824 exciting_visvesvaraya[211528]:        "osd_id": 0,
Oct 10 23:34:41 np0005480824 exciting_visvesvaraya[211528]:        "osd_uuid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:34:41 np0005480824 exciting_visvesvaraya[211528]:        "type": "bluestore"
Oct 10 23:34:41 np0005480824 exciting_visvesvaraya[211528]:    },
Oct 10 23:34:41 np0005480824 exciting_visvesvaraya[211528]:    "6875119e-c210-4ad1-aca9-6a8084a5ecc8": {
Oct 10 23:34:41 np0005480824 exciting_visvesvaraya[211528]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:34:41 np0005480824 exciting_visvesvaraya[211528]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 10 23:34:41 np0005480824 exciting_visvesvaraya[211528]:        "osd_id": 1,
Oct 10 23:34:41 np0005480824 exciting_visvesvaraya[211528]:        "osd_uuid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:34:41 np0005480824 exciting_visvesvaraya[211528]:        "type": "bluestore"
Oct 10 23:34:41 np0005480824 exciting_visvesvaraya[211528]:    },
Oct 10 23:34:41 np0005480824 exciting_visvesvaraya[211528]:    "e86945e8-6909-4584-9098-cee0dfe9add4": {
Oct 10 23:34:41 np0005480824 exciting_visvesvaraya[211528]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:34:41 np0005480824 exciting_visvesvaraya[211528]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 10 23:34:41 np0005480824 exciting_visvesvaraya[211528]:        "osd_id": 2,
Oct 10 23:34:41 np0005480824 exciting_visvesvaraya[211528]:        "osd_uuid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:34:41 np0005480824 exciting_visvesvaraya[211528]:        "type": "bluestore"
Oct 10 23:34:41 np0005480824 exciting_visvesvaraya[211528]:    }
Oct 10 23:34:41 np0005480824 exciting_visvesvaraya[211528]: }
Oct 10 23:34:41 np0005480824 systemd[1]: libpod-de1346da6f97d58a88eaba3e88588103b7830d631c8131637fc9d363f0dd9598.scope: Deactivated successfully.
Oct 10 23:34:41 np0005480824 systemd[1]: libpod-de1346da6f97d58a88eaba3e88588103b7830d631c8131637fc9d363f0dd9598.scope: Consumed 1.071s CPU time.
Oct 10 23:34:41 np0005480824 podman[211470]: 2025-10-11 03:34:41.084302525 +0000 UTC m=+1.225212142 container died de1346da6f97d58a88eaba3e88588103b7830d631c8131637fc9d363f0dd9598 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_visvesvaraya, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 10 23:34:41 np0005480824 systemd[1]: var-lib-containers-storage-overlay-78f20319b8df89e3eb74b1286a53fc7f2df6828a00e7d44bc7d19244b73a262c-merged.mount: Deactivated successfully.
Oct 10 23:34:41 np0005480824 podman[211470]: 2025-10-11 03:34:41.155076678 +0000 UTC m=+1.295986285 container remove de1346da6f97d58a88eaba3e88588103b7830d631c8131637fc9d363f0dd9598 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_visvesvaraya, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:34:41 np0005480824 systemd[1]: libpod-conmon-de1346da6f97d58a88eaba3e88588103b7830d631c8131637fc9d363f0dd9598.scope: Deactivated successfully.
Oct 10 23:34:41 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 10 23:34:41 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:34:41 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 10 23:34:41 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:34:41 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 7e53ec5e-6a8a-437b-befe-d0ebc0fd5459 does not exist
Oct 10 23:34:41 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 0f06f9f7-14b5-4864-b19c-8f080789f25f does not exist
Oct 10 23:34:41 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:34:41 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:34:41 np0005480824 python3.9[212003]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:34:42 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v571: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:34:42 np0005480824 python3.9[212126]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760153681.2098927-775-202797147017370/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:34:43 np0005480824 podman[212250]: 2025-10-11 03:34:43.033697308 +0000 UTC m=+0.095869505 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, 
org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 10 23:34:43 np0005480824 python3.9[212292]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:34:43 np0005480824 python3.9[212421]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760153682.6280622-775-62694021663519/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:34:44 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:34:44 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v572: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:34:44 np0005480824 python3.9[212571]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:34:45 np0005480824 python3.9[212726]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Oct 10 23:34:46 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v573: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:34:47 np0005480824 dbus-broker-launch[770]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Oct 10 23:34:47 np0005480824 python3.9[212882]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:34:47 np0005480824 python3.9[213034]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:34:48 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v574: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:34:48 np0005480824 python3.9[213186]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:34:49 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:34:49 np0005480824 python3.9[213338]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:34:49 np0005480824 python3.9[213490]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:34:50 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v575: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:34:50 np0005480824 python3.9[213642]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:34:51 np0005480824 python3.9[213794]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:34:51 np0005480824 python3.9[213946]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:34:52 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v576: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:34:52 np0005480824 python3.9[214098]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:34:53 np0005480824 python3.9[214250]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:34:54 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:34:54 np0005480824 python3.9[214402]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 10 23:34:54 np0005480824 systemd[1]: Reloading.
Oct 10 23:34:54 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v577: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:34:54 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:34:54 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:34:54 np0005480824 systemd[1]: Starting libvirt logging daemon socket...
Oct 10 23:34:54 np0005480824 systemd[1]: Listening on libvirt logging daemon socket.
Oct 10 23:34:54 np0005480824 systemd[1]: Starting libvirt logging daemon admin socket...
Oct 10 23:34:54 np0005480824 systemd[1]: Listening on libvirt logging daemon admin socket.
Oct 10 23:34:54 np0005480824 systemd[1]: Starting libvirt logging daemon...
Oct 10 23:34:54 np0005480824 systemd[1]: Started libvirt logging daemon.
Oct 10 23:34:55 np0005480824 python3.9[214596]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 10 23:34:55 np0005480824 systemd[1]: Reloading.
Oct 10 23:34:55 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:34:55 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:34:55 np0005480824 systemd[1]: Starting libvirt nodedev daemon socket...
Oct 10 23:34:55 np0005480824 systemd[1]: Listening on libvirt nodedev daemon socket.
Oct 10 23:34:55 np0005480824 systemd[1]: Starting libvirt nodedev daemon admin socket...
Oct 10 23:34:55 np0005480824 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Oct 10 23:34:55 np0005480824 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Oct 10 23:34:55 np0005480824 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Oct 10 23:34:55 np0005480824 systemd[1]: Starting libvirt nodedev daemon...
Oct 10 23:34:56 np0005480824 systemd[1]: Started libvirt nodedev daemon.
Oct 10 23:34:56 np0005480824 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Oct 10 23:34:56 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v578: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:34:56 np0005480824 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Oct 10 23:34:56 np0005480824 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Oct 10 23:34:56 np0005480824 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Oct 10 23:34:56 np0005480824 python3.9[214819]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 10 23:34:56 np0005480824 systemd[1]: Reloading.
Oct 10 23:34:56 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:34:56 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:34:57 np0005480824 systemd[1]: Starting libvirt proxy daemon admin socket...
Oct 10 23:34:57 np0005480824 systemd[1]: Starting libvirt proxy daemon read-only socket...
Oct 10 23:34:57 np0005480824 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Oct 10 23:34:57 np0005480824 systemd[1]: Listening on libvirt proxy daemon admin socket.
Oct 10 23:34:57 np0005480824 systemd[1]: Starting libvirt proxy daemon...
Oct 10 23:34:57 np0005480824 systemd[1]: Started libvirt proxy daemon.
Oct 10 23:34:57 np0005480824 setroubleshoot[214658]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 12fc0b4e-8fad-483e-8f18-22e647dadaca
Oct 10 23:34:57 np0005480824 setroubleshoot[214658]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Oct 10 23:34:57 np0005480824 setroubleshoot[214658]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 12fc0b4e-8fad-483e-8f18-22e647dadaca
Oct 10 23:34:57 np0005480824 setroubleshoot[214658]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Oct 10 23:34:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:34:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:34:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:34:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:34:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:34:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:34:58 np0005480824 python3.9[215031]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 10 23:34:58 np0005480824 systemd[1]: Reloading.
Oct 10 23:34:58 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:34:58 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:34:58 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v579: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:34:58 np0005480824 systemd[1]: Listening on libvirt locking daemon socket.
Oct 10 23:34:58 np0005480824 systemd[1]: Starting libvirt QEMU daemon socket...
Oct 10 23:34:58 np0005480824 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 10 23:34:58 np0005480824 systemd[1]: Starting Virtual Machine and Container Registration Service...
Oct 10 23:34:58 np0005480824 systemd[1]: Listening on libvirt QEMU daemon socket.
Oct 10 23:34:58 np0005480824 systemd[1]: Starting libvirt QEMU daemon admin socket...
Oct 10 23:34:58 np0005480824 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Oct 10 23:34:58 np0005480824 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Oct 10 23:34:58 np0005480824 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Oct 10 23:34:58 np0005480824 systemd[1]: Started Virtual Machine and Container Registration Service.
Oct 10 23:34:58 np0005480824 systemd[1]: Starting libvirt QEMU daemon...
Oct 10 23:34:58 np0005480824 systemd[1]: Started libvirt QEMU daemon.
Oct 10 23:34:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:34:59 np0005480824 python3.9[215244]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 10 23:34:59 np0005480824 systemd[1]: Reloading.
Oct 10 23:34:59 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:34:59 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:34:59 np0005480824 systemd[1]: Starting libvirt secret daemon socket...
Oct 10 23:34:59 np0005480824 systemd[1]: Listening on libvirt secret daemon socket.
Oct 10 23:34:59 np0005480824 systemd[1]: Starting libvirt secret daemon admin socket...
Oct 10 23:34:59 np0005480824 systemd[1]: Starting libvirt secret daemon read-only socket...
Oct 10 23:34:59 np0005480824 systemd[1]: Listening on libvirt secret daemon read-only socket.
Oct 10 23:34:59 np0005480824 systemd[1]: Listening on libvirt secret daemon admin socket.
Oct 10 23:34:59 np0005480824 systemd[1]: Starting libvirt secret daemon...
Oct 10 23:34:59 np0005480824 systemd[1]: Started libvirt secret daemon.
Oct 10 23:35:00 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v580: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:35:00 np0005480824 python3.9[215454]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:35:01 np0005480824 python3.9[215606]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 10 23:35:02 np0005480824 python3.9[215758]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:35:02 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v581: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:35:02 np0005480824 python3.9[215912]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 10 23:35:03 np0005480824 python3.9[216062]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:35:04 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:35:04 np0005480824 python3.9[216183]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760153703.118434-1133-146480987090071/.source.xml follow=False _original_basename=secret.xml.j2 checksum=4f4e6b5441adda5281fbd72dfd731e6192618b0e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:35:04 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v582: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:35:04 np0005480824 python3.9[216335]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 92cfe4d4-4917-5be1-9d00-73758793a62b#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:35:05 np0005480824 python3.9[216497]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:35:06 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v583: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:35:07 np0005480824 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Oct 10 23:35:07 np0005480824 systemd[1]: setroubleshootd.service: Deactivated successfully.
Oct 10 23:35:08 np0005480824 python3.9[216960]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:35:08 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v584: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:35:08 np0005480824 podman[217084]: 2025-10-11 03:35:08.792111185 +0000 UTC m=+0.147583021 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Oct 10 23:35:08 np0005480824 python3.9[217128]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:35:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:35:09 np0005480824 python3.9[217260]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1760153708.3453922-1188-22623476891592/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:35:10 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v585: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:35:10 np0005480824 python3.9[217412]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:35:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:35:10.471 162245 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:35:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:35:10.472 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:35:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:35:10.472 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:35:11 np0005480824 python3.9[217564]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:35:11 np0005480824 python3.9[217642]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:35:12 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v586: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:35:12 np0005480824 python3.9[217794]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:35:13 np0005480824 python3.9[217872]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.a3kroasl recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:35:13 np0005480824 podman[217996]: 2025-10-11 03:35:13.850961089 +0000 UTC m=+0.059600495 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 10 23:35:14 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:35:14 np0005480824 python3.9[218043]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:35:14 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v587: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:35:14 np0005480824 python3.9[218121]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:35:15 np0005480824 python3.9[218273]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:35:16 np0005480824 python3[218426]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 10 23:35:16 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v588: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:35:16 np0005480824 python3.9[218578]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:35:17 np0005480824 python3.9[218656]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:35:18 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v589: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:35:18 np0005480824 python3.9[218808]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:35:18 np0005480824 python3.9[218886]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:35:19 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:35:19 np0005480824 python3.9[219038]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:35:20 np0005480824 python3.9[219116]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:35:20 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v590: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:35:20 np0005480824 python3.9[219268]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:35:22 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v591: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:35:22 np0005480824 python3.9[219346]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:35:23 np0005480824 python3.9[219498]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:35:23 np0005480824 python3.9[219623]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760153722.4598196-1313-95717356798375/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:35:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:35:24 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v592: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:35:24 np0005480824 python3.9[219775]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:35:25 np0005480824 python3.9[219927]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:35:25 np0005480824 python3.9[220082]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:35:26 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v593: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:35:26 np0005480824 python3.9[220234]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:35:27 np0005480824 python3.9[220387]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 23:35:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Optimize plan auto_2025-10-11_03:35:27
Oct 10 23:35:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 23:35:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] do_upmap
Oct 10 23:35:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] pools ['volumes', 'default.rgw.log', '.rgw.root', 'vms', 'default.rgw.meta', 'backups', 'images', 'cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.control']
Oct 10 23:35:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] prepared 0/10 changes
Oct 10 23:35:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:35:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:35:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:35:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:35:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:35:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:35:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 23:35:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:35:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 23:35:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:35:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:35:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:35:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:35:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:35:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:35:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:35:28 np0005480824 python3.9[220541]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:35:28 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v594: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:35:28 np0005480824 python3.9[220696]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:35:29 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:35:29 np0005480824 python3.9[220848]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:35:29 np0005480824 python3.9[220971]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760153728.915867-1385-111002799805605/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:35:30 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v595: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:35:30 np0005480824 python3.9[221123]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:35:31 np0005480824 python3.9[221246]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760153730.1147046-1400-166614914575148/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:35:31 np0005480824 python3.9[221398]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:35:32 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v596: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:35:32 np0005480824 python3.9[221521]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760153731.2399132-1415-8425469943381/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:35:33 np0005480824 python3.9[221673]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 23:35:33 np0005480824 systemd[1]: Reloading.
Oct 10 23:35:33 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:35:33 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:35:33 np0005480824 systemd[1]: Reached target edpm_libvirt.target.
Oct 10 23:35:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:35:34 np0005480824 python3.9[221864]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct 10 23:35:34 np0005480824 systemd[1]: Reloading.
Oct 10 23:35:34 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v597: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:35:34 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:35:34 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:35:34 np0005480824 systemd[1]: Reloading.
Oct 10 23:35:34 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:35:34 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:35:35 np0005480824 systemd[1]: session-49.scope: Deactivated successfully.
Oct 10 23:35:35 np0005480824 systemd[1]: session-49.scope: Consumed 3min 35.658s CPU time.
Oct 10 23:35:35 np0005480824 systemd-logind[782]: Session 49 logged out. Waiting for processes to exit.
Oct 10 23:35:35 np0005480824 systemd-logind[782]: Removed session 49.
Oct 10 23:35:36 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v598: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:35:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 23:35:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:35:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 23:35:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:35:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:35:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:35:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:35:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:35:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:35:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:35:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:35:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:35:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 23:35:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:35:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:35:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:35:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 10 23:35:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:35:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 23:35:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:35:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:35:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:35:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 10 23:35:38 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v599: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:35:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:35:39 np0005480824 podman[221961]: 2025-10-11 03:35:39.024648965 +0000 UTC m=+0.087786312 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 10 23:35:40 np0005480824 systemd-logind[782]: New session 50 of user zuul.
Oct 10 23:35:40 np0005480824 systemd[1]: Started Session 50 of User zuul.
Oct 10 23:35:40 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v600: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:35:41 np0005480824 python3.9[222141]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 23:35:42 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:35:42 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:35:42 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 10 23:35:42 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:35:42 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 10 23:35:42 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:35:42 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 8bd0f074-6f4e-4ec7-9181-f334f51ed81b does not exist
Oct 10 23:35:42 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 4b69943b-a72c-4eff-8ef0-a32b89c03ab2 does not exist
Oct 10 23:35:42 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev cc7b8f3d-e927-4c01-bd22-d6fc3db733a5 does not exist
Oct 10 23:35:42 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 10 23:35:42 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 23:35:42 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 10 23:35:42 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:35:42 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:35:42 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:35:42 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v601: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:35:42 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:35:42 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:35:42 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:35:42 np0005480824 python3.9[222474]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:35:42 np0005480824 podman[222646]: 2025-10-11 03:35:42.693956288 +0000 UTC m=+0.036843924 container create de4ee4443d591edb95cebc124554e155ff6c449b6f552a995c678a5c63f56857 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:35:42 np0005480824 systemd[1]: Started libpod-conmon-de4ee4443d591edb95cebc124554e155ff6c449b6f552a995c678a5c63f56857.scope.
Oct 10 23:35:42 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:35:42 np0005480824 podman[222646]: 2025-10-11 03:35:42.677480857 +0000 UTC m=+0.020368523 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:35:42 np0005480824 podman[222646]: 2025-10-11 03:35:42.774235032 +0000 UTC m=+0.117122688 container init de4ee4443d591edb95cebc124554e155ff6c449b6f552a995c678a5c63f56857 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_dubinsky, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 10 23:35:42 np0005480824 podman[222646]: 2025-10-11 03:35:42.78636287 +0000 UTC m=+0.129250506 container start de4ee4443d591edb95cebc124554e155ff6c449b6f552a995c678a5c63f56857 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_dubinsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:35:42 np0005480824 podman[222646]: 2025-10-11 03:35:42.789737449 +0000 UTC m=+0.132625085 container attach de4ee4443d591edb95cebc124554e155ff6c449b6f552a995c678a5c63f56857 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_dubinsky, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:35:42 np0005480824 dreamy_dubinsky[222686]: 167 167
Oct 10 23:35:42 np0005480824 systemd[1]: libpod-de4ee4443d591edb95cebc124554e155ff6c449b6f552a995c678a5c63f56857.scope: Deactivated successfully.
Oct 10 23:35:42 np0005480824 podman[222646]: 2025-10-11 03:35:42.792239759 +0000 UTC m=+0.135127405 container died de4ee4443d591edb95cebc124554e155ff6c449b6f552a995c678a5c63f56857 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 10 23:35:42 np0005480824 systemd[1]: var-lib-containers-storage-overlay-a79181d6276a57a81758121ab1e90bfe570603e2f81314dc0e5816260b146b16-merged.mount: Deactivated successfully.
Oct 10 23:35:42 np0005480824 podman[222646]: 2025-10-11 03:35:42.850213124 +0000 UTC m=+0.193100760 container remove de4ee4443d591edb95cebc124554e155ff6c449b6f552a995c678a5c63f56857 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_dubinsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:35:42 np0005480824 systemd[1]: libpod-conmon-de4ee4443d591edb95cebc124554e155ff6c449b6f552a995c678a5c63f56857.scope: Deactivated successfully.
Oct 10 23:35:43 np0005480824 podman[222765]: 2025-10-11 03:35:43.042037533 +0000 UTC m=+0.062497843 container create 0abe74f59b903cd00e972ed18b35f5b73b98c4b2a2d14dcc246759f4d356beaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Oct 10 23:35:43 np0005480824 python3.9[222759]: ansible-ansible.builtin.file Invoked with path=/etc/target setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:35:43 np0005480824 systemd[1]: Started libpod-conmon-0abe74f59b903cd00e972ed18b35f5b73b98c4b2a2d14dcc246759f4d356beaa.scope.
Oct 10 23:35:43 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:35:43 np0005480824 podman[222765]: 2025-10-11 03:35:43.013308061 +0000 UTC m=+0.033768421 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:35:43 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d431cd9754636ed507a65be1b8981960cefe9d730a7751e1a78c729d9b45546/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:35:43 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d431cd9754636ed507a65be1b8981960cefe9d730a7751e1a78c729d9b45546/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:35:43 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d431cd9754636ed507a65be1b8981960cefe9d730a7751e1a78c729d9b45546/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:35:43 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d431cd9754636ed507a65be1b8981960cefe9d730a7751e1a78c729d9b45546/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:35:43 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d431cd9754636ed507a65be1b8981960cefe9d730a7751e1a78c729d9b45546/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 23:35:43 np0005480824 podman[222765]: 2025-10-11 03:35:43.132383455 +0000 UTC m=+0.152843745 container init 0abe74f59b903cd00e972ed18b35f5b73b98c4b2a2d14dcc246759f4d356beaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_joliot, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 10 23:35:43 np0005480824 podman[222765]: 2025-10-11 03:35:43.139017782 +0000 UTC m=+0.159478062 container start 0abe74f59b903cd00e972ed18b35f5b73b98c4b2a2d14dcc246759f4d356beaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_joliot, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:35:43 np0005480824 podman[222765]: 2025-10-11 03:35:43.153838604 +0000 UTC m=+0.174298894 container attach 0abe74f59b903cd00e972ed18b35f5b73b98c4b2a2d14dcc246759f4d356beaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_joliot, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:35:43 np0005480824 python3.9[222938]: ansible-ansible.builtin.file Invoked with path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:35:44 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:35:44 np0005480824 podman[223022]: 2025-10-11 03:35:44.016822408 +0000 UTC m=+0.068042814 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 10 23:35:44 np0005480824 nice_joliot[222782]: --> passed data devices: 0 physical, 3 LVM
Oct 10 23:35:44 np0005480824 nice_joliot[222782]: --> relative data size: 1.0
Oct 10 23:35:44 np0005480824 nice_joliot[222782]: --> All data devices are unavailable
Oct 10 23:35:44 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v602: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:35:44 np0005480824 systemd[1]: libpod-0abe74f59b903cd00e972ed18b35f5b73b98c4b2a2d14dcc246759f4d356beaa.scope: Deactivated successfully.
Oct 10 23:35:44 np0005480824 podman[222765]: 2025-10-11 03:35:44.215896649 +0000 UTC m=+1.236356939 container died 0abe74f59b903cd00e972ed18b35f5b73b98c4b2a2d14dcc246759f4d356beaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 10 23:35:44 np0005480824 systemd[1]: var-lib-containers-storage-overlay-8d431cd9754636ed507a65be1b8981960cefe9d730a7751e1a78c729d9b45546-merged.mount: Deactivated successfully.
Oct 10 23:35:44 np0005480824 podman[222765]: 2025-10-11 03:35:44.269671114 +0000 UTC m=+1.290131384 container remove 0abe74f59b903cd00e972ed18b35f5b73b98c4b2a2d14dcc246759f4d356beaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:35:44 np0005480824 systemd[1]: libpod-conmon-0abe74f59b903cd00e972ed18b35f5b73b98c4b2a2d14dcc246759f4d356beaa.scope: Deactivated successfully.
Oct 10 23:35:44 np0005480824 python3.9[223129]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 10 23:35:44 np0005480824 podman[223440]: 2025-10-11 03:35:44.846852912 +0000 UTC m=+0.040625725 container create 52526a8e6ab096b0664db5714c3eb5e1776e1d30d4630729238c7b3c74e1b6f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:35:44 np0005480824 systemd[1]: Started libpod-conmon-52526a8e6ab096b0664db5714c3eb5e1776e1d30d4630729238c7b3c74e1b6f6.scope.
Oct 10 23:35:44 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:35:44 np0005480824 podman[223440]: 2025-10-11 03:35:44.908922323 +0000 UTC m=+0.102695156 container init 52526a8e6ab096b0664db5714c3eb5e1776e1d30d4630729238c7b3c74e1b6f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 10 23:35:44 np0005480824 podman[223440]: 2025-10-11 03:35:44.918376398 +0000 UTC m=+0.112149211 container start 52526a8e6ab096b0664db5714c3eb5e1776e1d30d4630729238c7b3c74e1b6f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_wilson, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 10 23:35:44 np0005480824 podman[223440]: 2025-10-11 03:35:44.921781148 +0000 UTC m=+0.115553981 container attach 52526a8e6ab096b0664db5714c3eb5e1776e1d30d4630729238c7b3c74e1b6f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_wilson, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:35:44 np0005480824 inspiring_wilson[223457]: 167 167
Oct 10 23:35:44 np0005480824 systemd[1]: libpod-52526a8e6ab096b0664db5714c3eb5e1776e1d30d4630729238c7b3c74e1b6f6.scope: Deactivated successfully.
Oct 10 23:35:44 np0005480824 podman[223440]: 2025-10-11 03:35:44.92563854 +0000 UTC m=+0.119411393 container died 52526a8e6ab096b0664db5714c3eb5e1776e1d30d4630729238c7b3c74e1b6f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_wilson, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 10 23:35:44 np0005480824 podman[223440]: 2025-10-11 03:35:44.830278798 +0000 UTC m=+0.024051631 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:35:44 np0005480824 systemd[1]: var-lib-containers-storage-overlay-296efdbe781628119e1d4911ad94cfafdd5f455b487f18727b07a9de831514b8-merged.mount: Deactivated successfully.
Oct 10 23:35:44 np0005480824 python3.9[223439]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data/ansible-generated/iscsid setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:35:44 np0005480824 podman[223440]: 2025-10-11 03:35:44.985374316 +0000 UTC m=+0.179147169 container remove 52526a8e6ab096b0664db5714c3eb5e1776e1d30d4630729238c7b3c74e1b6f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:35:45 np0005480824 systemd[1]: libpod-conmon-52526a8e6ab096b0664db5714c3eb5e1776e1d30d4630729238c7b3c74e1b6f6.scope: Deactivated successfully.
Oct 10 23:35:45 np0005480824 podman[223508]: 2025-10-11 03:35:45.189118937 +0000 UTC m=+0.039809264 container create 31025c436bf85869ba7f2ef0e2483a9d867155191835086ad013179ca35a9547 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_hawking, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:35:45 np0005480824 systemd[1]: Started libpod-conmon-31025c436bf85869ba7f2ef0e2483a9d867155191835086ad013179ca35a9547.scope.
Oct 10 23:35:45 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:35:45 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/465ecdea4cf069d50a61f6296010c90a5a3641d545dce39f00f997f7e46a5c00/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:35:45 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/465ecdea4cf069d50a61f6296010c90a5a3641d545dce39f00f997f7e46a5c00/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:35:45 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/465ecdea4cf069d50a61f6296010c90a5a3641d545dce39f00f997f7e46a5c00/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:35:45 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/465ecdea4cf069d50a61f6296010c90a5a3641d545dce39f00f997f7e46a5c00/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:35:45 np0005480824 podman[223508]: 2025-10-11 03:35:45.173370564 +0000 UTC m=+0.024060901 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:35:45 np0005480824 podman[223508]: 2025-10-11 03:35:45.277961085 +0000 UTC m=+0.128651412 container init 31025c436bf85869ba7f2ef0e2483a9d867155191835086ad013179ca35a9547 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_hawking, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:35:45 np0005480824 podman[223508]: 2025-10-11 03:35:45.292425157 +0000 UTC m=+0.143115484 container start 31025c436bf85869ba7f2ef0e2483a9d867155191835086ad013179ca35a9547 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_hawking, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 10 23:35:45 np0005480824 podman[223508]: 2025-10-11 03:35:45.300415087 +0000 UTC m=+0.151105414 container attach 31025c436bf85869ba7f2ef0e2483a9d867155191835086ad013179ca35a9547 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_hawking, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:35:45 np0005480824 python3.9[223657]: ansible-ansible.builtin.stat Invoked with path=/lib/systemd/system/iscsid.socket follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]: {
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:    "0": [
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:        {
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:            "devices": [
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:                "/dev/loop3"
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:            ],
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:            "lv_name": "ceph_lv0",
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:            "lv_size": "21470642176",
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0d82ce-20ea-470d-959e-f67202028a60,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:            "lv_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:            "name": "ceph_lv0",
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:            "tags": {
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:                "ceph.block_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:                "ceph.cluster_name": "ceph",
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:                "ceph.crush_device_class": "",
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:                "ceph.encrypted": "0",
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:                "ceph.osd_fsid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:                "ceph.osd_id": "0",
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:                "ceph.type": "block",
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:                "ceph.vdo": "0"
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:            },
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:            "type": "block",
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:            "vg_name": "ceph_vg0"
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:        }
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:    ],
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:    "1": [
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:        {
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:            "devices": [
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:                "/dev/loop4"
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:            ],
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:            "lv_name": "ceph_lv1",
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:            "lv_size": "21470642176",
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6875119e-c210-4ad1-aca9-6a8084a5ecc8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:            "lv_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:            "name": "ceph_lv1",
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:            "tags": {
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:                "ceph.block_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:                "ceph.cluster_name": "ceph",
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:                "ceph.crush_device_class": "",
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:                "ceph.encrypted": "0",
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:                "ceph.osd_fsid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:                "ceph.osd_id": "1",
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:                "ceph.type": "block",
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:                "ceph.vdo": "0"
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:            },
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:            "type": "block",
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:            "vg_name": "ceph_vg1"
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:        }
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:    ],
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:    "2": [
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:        {
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:            "devices": [
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:                "/dev/loop5"
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:            ],
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:            "lv_name": "ceph_lv2",
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:            "lv_size": "21470642176",
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e86945e8-6909-4584-9098-cee0dfe9add4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:            "lv_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:            "name": "ceph_lv2",
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:            "tags": {
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:                "ceph.block_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:                "ceph.cluster_name": "ceph",
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:                "ceph.crush_device_class": "",
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:                "ceph.encrypted": "0",
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:                "ceph.osd_fsid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:                "ceph.osd_id": "2",
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:                "ceph.type": "block",
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:                "ceph.vdo": "0"
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:            },
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:            "type": "block",
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:            "vg_name": "ceph_vg2"
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:        }
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]:    ]
Oct 10 23:35:46 np0005480824 gracious_hawking[223555]: }
Oct 10 23:35:46 np0005480824 systemd[1]: libpod-31025c436bf85869ba7f2ef0e2483a9d867155191835086ad013179ca35a9547.scope: Deactivated successfully.
Oct 10 23:35:46 np0005480824 podman[223508]: 2025-10-11 03:35:46.075737392 +0000 UTC m=+0.926427729 container died 31025c436bf85869ba7f2ef0e2483a9d867155191835086ad013179ca35a9547 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 10 23:35:46 np0005480824 systemd[1]: var-lib-containers-storage-overlay-465ecdea4cf069d50a61f6296010c90a5a3641d545dce39f00f997f7e46a5c00-merged.mount: Deactivated successfully.
Oct 10 23:35:46 np0005480824 podman[223508]: 2025-10-11 03:35:46.135693194 +0000 UTC m=+0.986383521 container remove 31025c436bf85869ba7f2ef0e2483a9d867155191835086ad013179ca35a9547 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_hawking, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 10 23:35:46 np0005480824 systemd[1]: libpod-conmon-31025c436bf85869ba7f2ef0e2483a9d867155191835086ad013179ca35a9547.scope: Deactivated successfully.
Oct 10 23:35:46 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v603: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:35:46 np0005480824 podman[223938]: 2025-10-11 03:35:46.685762848 +0000 UTC m=+0.039277022 container create 536e9c2d365b25558474a635cc97082187822fd0d15fb46978784c1abd12808e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_khorana, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:35:46 np0005480824 systemd[1]: Started libpod-conmon-536e9c2d365b25558474a635cc97082187822fd0d15fb46978784c1abd12808e.scope.
Oct 10 23:35:46 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:35:46 np0005480824 podman[223938]: 2025-10-11 03:35:46.753573537 +0000 UTC m=+0.107087741 container init 536e9c2d365b25558474a635cc97082187822fd0d15fb46978784c1abd12808e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_khorana, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 10 23:35:46 np0005480824 podman[223938]: 2025-10-11 03:35:46.760152652 +0000 UTC m=+0.113666826 container start 536e9c2d365b25558474a635cc97082187822fd0d15fb46978784c1abd12808e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_khorana, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True)
Oct 10 23:35:46 np0005480824 competent_khorana[223982]: 167 167
Oct 10 23:35:46 np0005480824 podman[223938]: 2025-10-11 03:35:46.763254196 +0000 UTC m=+0.116768390 container attach 536e9c2d365b25558474a635cc97082187822fd0d15fb46978784c1abd12808e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_khorana, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 10 23:35:46 np0005480824 systemd[1]: libpod-536e9c2d365b25558474a635cc97082187822fd0d15fb46978784c1abd12808e.scope: Deactivated successfully.
Oct 10 23:35:46 np0005480824 podman[223938]: 2025-10-11 03:35:46.764224209 +0000 UTC m=+0.117738373 container died 536e9c2d365b25558474a635cc97082187822fd0d15fb46978784c1abd12808e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_khorana, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:35:46 np0005480824 podman[223938]: 2025-10-11 03:35:46.669715158 +0000 UTC m=+0.023229352 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:35:46 np0005480824 systemd[1]: var-lib-containers-storage-overlay-f006ec7e8072b3d0326f933c933225619c5e72da9edae42eb926aed99aee7d20-merged.mount: Deactivated successfully.
Oct 10 23:35:46 np0005480824 podman[223938]: 2025-10-11 03:35:46.814375258 +0000 UTC m=+0.167889432 container remove 536e9c2d365b25558474a635cc97082187822fd0d15fb46978784c1abd12808e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_khorana, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True)
Oct 10 23:35:46 np0005480824 systemd[1]: libpod-conmon-536e9c2d365b25558474a635cc97082187822fd0d15fb46978784c1abd12808e.scope: Deactivated successfully.
Oct 10 23:35:46 np0005480824 podman[224008]: 2025-10-11 03:35:46.958318312 +0000 UTC m=+0.042981381 container create 200246b6f2e5fa078c7f8a4c86f446a2001c8870cd07c43973ba207cd8485173 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Oct 10 23:35:46 np0005480824 systemd[1]: Started libpod-conmon-200246b6f2e5fa078c7f8a4c86f446a2001c8870cd07c43973ba207cd8485173.scope.
Oct 10 23:35:46 np0005480824 python3.9[223979]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsid.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 23:35:47 np0005480824 podman[224008]: 2025-10-11 03:35:46.934220271 +0000 UTC m=+0.018883360 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:35:47 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:35:47 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c575651829ecf2ac6ce6ab90b308c03b84923f77b6e8cd17d0a6aa499c2ffbe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:35:47 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c575651829ecf2ac6ce6ab90b308c03b84923f77b6e8cd17d0a6aa499c2ffbe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:35:47 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c575651829ecf2ac6ce6ab90b308c03b84923f77b6e8cd17d0a6aa499c2ffbe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:35:47 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c575651829ecf2ac6ce6ab90b308c03b84923f77b6e8cd17d0a6aa499c2ffbe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:35:47 np0005480824 podman[224008]: 2025-10-11 03:35:47.055519236 +0000 UTC m=+0.140182305 container init 200246b6f2e5fa078c7f8a4c86f446a2001c8870cd07c43973ba207cd8485173 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:35:47 np0005480824 podman[224008]: 2025-10-11 03:35:47.061669703 +0000 UTC m=+0.146332772 container start 200246b6f2e5fa078c7f8a4c86f446a2001c8870cd07c43973ba207cd8485173 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 10 23:35:47 np0005480824 podman[224008]: 2025-10-11 03:35:47.064687784 +0000 UTC m=+0.149350853 container attach 200246b6f2e5fa078c7f8a4c86f446a2001c8870cd07c43973ba207cd8485173 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_feistel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 10 23:35:47 np0005480824 systemd[1]: Reloading.
Oct 10 23:35:47 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:35:47 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:35:47 np0005480824 serene_feistel[224025]: {
Oct 10 23:35:47 np0005480824 serene_feistel[224025]:    "1d0d82ce-20ea-470d-959e-f67202028a60": {
Oct 10 23:35:47 np0005480824 serene_feistel[224025]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:35:47 np0005480824 serene_feistel[224025]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 10 23:35:47 np0005480824 serene_feistel[224025]:        "osd_id": 0,
Oct 10 23:35:47 np0005480824 serene_feistel[224025]:        "osd_uuid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:35:47 np0005480824 serene_feistel[224025]:        "type": "bluestore"
Oct 10 23:35:47 np0005480824 serene_feistel[224025]:    },
Oct 10 23:35:47 np0005480824 serene_feistel[224025]:    "6875119e-c210-4ad1-aca9-6a8084a5ecc8": {
Oct 10 23:35:47 np0005480824 serene_feistel[224025]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:35:47 np0005480824 serene_feistel[224025]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 10 23:35:47 np0005480824 serene_feistel[224025]:        "osd_id": 1,
Oct 10 23:35:47 np0005480824 serene_feistel[224025]:        "osd_uuid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:35:47 np0005480824 serene_feistel[224025]:        "type": "bluestore"
Oct 10 23:35:47 np0005480824 serene_feistel[224025]:    },
Oct 10 23:35:47 np0005480824 serene_feistel[224025]:    "e86945e8-6909-4584-9098-cee0dfe9add4": {
Oct 10 23:35:47 np0005480824 serene_feistel[224025]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:35:47 np0005480824 serene_feistel[224025]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 10 23:35:47 np0005480824 serene_feistel[224025]:        "osd_id": 2,
Oct 10 23:35:47 np0005480824 serene_feistel[224025]:        "osd_uuid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:35:47 np0005480824 serene_feistel[224025]:        "type": "bluestore"
Oct 10 23:35:47 np0005480824 serene_feistel[224025]:    }
Oct 10 23:35:47 np0005480824 serene_feistel[224025]: }
Oct 10 23:35:48 np0005480824 systemd[1]: libpod-200246b6f2e5fa078c7f8a4c86f446a2001c8870cd07c43973ba207cd8485173.scope: Deactivated successfully.
Oct 10 23:35:48 np0005480824 podman[224008]: 2025-10-11 03:35:48.002982165 +0000 UTC m=+1.087645244 container died 200246b6f2e5fa078c7f8a4c86f446a2001c8870cd07c43973ba207cd8485173 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_feistel, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:35:48 np0005480824 systemd[1]: var-lib-containers-storage-overlay-1c575651829ecf2ac6ce6ab90b308c03b84923f77b6e8cd17d0a6aa499c2ffbe-merged.mount: Deactivated successfully.
Oct 10 23:35:48 np0005480824 podman[224008]: 2025-10-11 03:35:48.071809496 +0000 UTC m=+1.156472565 container remove 200246b6f2e5fa078c7f8a4c86f446a2001c8870cd07c43973ba207cd8485173 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_feistel, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:35:48 np0005480824 systemd[1]: libpod-conmon-200246b6f2e5fa078c7f8a4c86f446a2001c8870cd07c43973ba207cd8485173.scope: Deactivated successfully.
Oct 10 23:35:48 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 10 23:35:48 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:35:48 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 10 23:35:48 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:35:48 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 887826c3-2fb8-434b-9a57-1e8eb3558294 does not exist
Oct 10 23:35:48 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 2f44d57d-0dd3-4f4d-8e4e-9fb718e90ae5 does not exist
Oct 10 23:35:48 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v604: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:35:48 np0005480824 python3.9[224260]: ansible-ansible.builtin.service_facts Invoked
Oct 10 23:35:48 np0005480824 network[224327]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 10 23:35:48 np0005480824 network[224328]: 'network-scripts' will be removed from distribution in near future.
Oct 10 23:35:48 np0005480824 network[224329]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 10 23:35:49 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:35:49 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:35:49 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:35:50 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v605: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:35:52 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v606: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:35:53 np0005480824 python3.9[224603]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsi-starter.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 23:35:53 np0005480824 systemd[1]: Reloading.
Oct 10 23:35:53 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:35:53 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:35:54 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:35:54 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v607: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:35:54 np0005480824 python3.9[224790]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 23:35:55 np0005480824 python3.9[224942]: ansible-containers.podman.podman_container Invoked with command=/usr/sbin/iscsi-iname detach=False image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified name=iscsid_config rm=True tty=True executable=podman state=started debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 10 23:35:56 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v608: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:35:56 np0005480824 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 10 23:35:56 np0005480824 podman[224955]: 2025-10-11 03:35:56.855308414 +0000 UTC m=+1.233645004 image pull 5773abc4300b61c01f3353a0b9239f9a404bb272790b280574e4c56f72edaa72 quay.io/podified-antelope-centos9/openstack-iscsid:current-podified
Oct 10 23:35:57 np0005480824 podman[225014]: 2025-10-11 03:35:57.009890291 +0000 UTC m=+0.040490812 container create ff3f506a240ba64cb18f3dbcc06be70153f38ff19d5fdc689a0b145b3cd6ed48 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 10 23:35:57 np0005480824 NetworkManager[44969]: <info>  [1760153757.0323] manager: (podman0): new Bridge device (/org/freedesktop/NetworkManager/Devices/21)
Oct 10 23:35:57 np0005480824 kernel: podman0: port 1(veth0) entered blocking state
Oct 10 23:35:57 np0005480824 kernel: podman0: port 1(veth0) entered disabled state
Oct 10 23:35:57 np0005480824 kernel: veth0: entered allmulticast mode
Oct 10 23:35:57 np0005480824 kernel: veth0: entered promiscuous mode
Oct 10 23:35:57 np0005480824 NetworkManager[44969]: <info>  [1760153757.0451] manager: (veth0): new Veth device (/org/freedesktop/NetworkManager/Devices/22)
Oct 10 23:35:57 np0005480824 kernel: podman0: port 1(veth0) entered blocking state
Oct 10 23:35:57 np0005480824 kernel: podman0: port 1(veth0) entered forwarding state
Oct 10 23:35:57 np0005480824 NetworkManager[44969]: <info>  [1760153757.0480] device (veth0): carrier: link connected
Oct 10 23:35:57 np0005480824 NetworkManager[44969]: <info>  [1760153757.0483] device (podman0): carrier: link connected
Oct 10 23:35:57 np0005480824 systemd-udevd[225049]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 23:35:57 np0005480824 systemd-udevd[225051]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 23:35:57 np0005480824 podman[225014]: 2025-10-11 03:35:56.992571279 +0000 UTC m=+0.023171820 image pull 5773abc4300b61c01f3353a0b9239f9a404bb272790b280574e4c56f72edaa72 quay.io/podified-antelope-centos9/openstack-iscsid:current-podified
Oct 10 23:35:57 np0005480824 NetworkManager[44969]: <info>  [1760153757.0922] device (podman0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 10 23:35:57 np0005480824 NetworkManager[44969]: <info>  [1760153757.0930] device (podman0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 10 23:35:57 np0005480824 NetworkManager[44969]: <info>  [1760153757.0939] device (podman0): Activation: starting connection 'podman0' (25af5214-a561-4d52-acfe-2cef9239d9cb)
Oct 10 23:35:57 np0005480824 NetworkManager[44969]: <info>  [1760153757.0940] device (podman0): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 10 23:35:57 np0005480824 NetworkManager[44969]: <info>  [1760153757.0943] device (podman0): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 10 23:35:57 np0005480824 NetworkManager[44969]: <info>  [1760153757.0946] device (podman0): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 10 23:35:57 np0005480824 NetworkManager[44969]: <info>  [1760153757.0948] device (podman0): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 10 23:35:57 np0005480824 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 10 23:35:57 np0005480824 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 10 23:35:57 np0005480824 NetworkManager[44969]: <info>  [1760153757.1224] device (podman0): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 10 23:35:57 np0005480824 NetworkManager[44969]: <info>  [1760153757.1226] device (podman0): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 10 23:35:57 np0005480824 NetworkManager[44969]: <info>  [1760153757.1233] device (podman0): Activation: successful, device activated.
Oct 10 23:35:57 np0005480824 systemd[1]: iscsi.service: Unit cannot be reloaded because it is inactive.
Oct 10 23:35:57 np0005480824 systemd[1]: Started libpod-conmon-ff3f506a240ba64cb18f3dbcc06be70153f38ff19d5fdc689a0b145b3cd6ed48.scope.
Oct 10 23:35:57 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:35:57 np0005480824 podman[225014]: 2025-10-11 03:35:57.352521945 +0000 UTC m=+0.383122486 container init ff3f506a240ba64cb18f3dbcc06be70153f38ff19d5fdc689a0b145b3cd6ed48 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 10 23:35:57 np0005480824 podman[225014]: 2025-10-11 03:35:57.359409058 +0000 UTC m=+0.390009579 container start ff3f506a240ba64cb18f3dbcc06be70153f38ff19d5fdc689a0b145b3cd6ed48 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 10 23:35:57 np0005480824 podman[225014]: 2025-10-11 03:35:57.363085856 +0000 UTC m=+0.393686387 container attach ff3f506a240ba64cb18f3dbcc06be70153f38ff19d5fdc689a0b145b3cd6ed48 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0)
Oct 10 23:35:57 np0005480824 iscsid_config[225172]: iqn.1994-05.com.redhat:d5d671ddab5a#015
Oct 10 23:35:57 np0005480824 systemd[1]: libpod-ff3f506a240ba64cb18f3dbcc06be70153f38ff19d5fdc689a0b145b3cd6ed48.scope: Deactivated successfully.
Oct 10 23:35:57 np0005480824 podman[225014]: 2025-10-11 03:35:57.365149524 +0000 UTC m=+0.395750045 container died ff3f506a240ba64cb18f3dbcc06be70153f38ff19d5fdc689a0b145b3cd6ed48 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:35:57 np0005480824 kernel: podman0: port 1(veth0) entered disabled state
Oct 10 23:35:57 np0005480824 kernel: veth0 (unregistering): left allmulticast mode
Oct 10 23:35:57 np0005480824 kernel: veth0 (unregistering): left promiscuous mode
Oct 10 23:35:57 np0005480824 kernel: podman0: port 1(veth0) entered disabled state
Oct 10 23:35:57 np0005480824 NetworkManager[44969]: <info>  [1760153757.4357] device (podman0): state change: activated -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 10 23:35:57 np0005480824 systemd[1]: run-netns-netns\x2d3883a286\x2de24a\x2d2ccd\x2dfdea\x2d88c3207113f9.mount: Deactivated successfully.
Oct 10 23:35:57 np0005480824 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ff3f506a240ba64cb18f3dbcc06be70153f38ff19d5fdc689a0b145b3cd6ed48-userdata-shm.mount: Deactivated successfully.
Oct 10 23:35:57 np0005480824 systemd[1]: var-lib-containers-storage-overlay-363f926c2e5a0555f049a4884341246c802f6bafad4b8ab4e13e5759b4cb7983-merged.mount: Deactivated successfully.
Oct 10 23:35:57 np0005480824 podman[225014]: 2025-10-11 03:35:57.779197073 +0000 UTC m=+0.809797634 container remove ff3f506a240ba64cb18f3dbcc06be70153f38ff19d5fdc689a0b145b3cd6ed48 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 10 23:35:57 np0005480824 python3.9[224942]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman run --name iscsid_config --detach=False --rm --tty=True quay.io/podified-antelope-centos9/openstack-iscsid:current-podified /usr/sbin/iscsi-iname
Oct 10 23:35:57 np0005480824 systemd[1]: libpod-conmon-ff3f506a240ba64cb18f3dbcc06be70153f38ff19d5fdc689a0b145b3cd6ed48.scope: Deactivated successfully.
Oct 10 23:35:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:35:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:35:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:35:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:35:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:35:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:35:57 np0005480824 python3.9[224942]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: Error generating systemd: #012DEPRECATED command:#012It is recommended to use Quadlets for running containers and pods under systemd.#012#012Please refer to podman-systemd.unit(5) for details.#012Error: iscsid_config does not refer to a container or pod: no pod with name or ID iscsid_config found: no such pod: no container with name or ID "iscsid_config" found: no such container
Oct 10 23:35:58 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v609: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:35:58 np0005480824 python3.9[225414]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:35:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:35:59 np0005480824 python3.9[225537]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760153758.110895-119-242426144976964/.source.iscsi _original_basename=.ao29vawb follow=False checksum=06727d9308530475d46f5279034a92c3433c0f68 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:35:59 np0005480824 python3.9[225689]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:36:00 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v610: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:36:00 np0005480824 python3.9[225839]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/iscsid.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 23:36:01 np0005480824 python3.9[225993]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:36:02 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v611: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:36:02 np0005480824 python3.9[226147]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:36:03 np0005480824 python3.9[226299]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:36:03 np0005480824 python3.9[226377]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:36:04 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:36:04 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v612: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:36:04 np0005480824 python3.9[226529]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:36:04 np0005480824 python3.9[226607]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:36:05 np0005480824 python3.9[226759]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:36:06 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v613: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:36:06 np0005480824 python3.9[226911]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:36:06 np0005480824 python3.9[226989]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:36:07 np0005480824 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 10 23:36:07 np0005480824 python3.9[227141]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:36:08 np0005480824 python3.9[227219]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:36:08 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v614: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:36:09 np0005480824 python3.9[227371]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 23:36:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:36:09 np0005480824 systemd[1]: Reloading.
Oct 10 23:36:09 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:36:09 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:36:09 np0005480824 podman[227409]: 2025-10-11 03:36:09.495017718 +0000 UTC m=+0.113477732 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Oct 10 23:36:10 np0005480824 python3.9[227587]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:36:10 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v615: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:36:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:36:10.472 162245 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:36:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:36:10.472 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:36:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:36:10.472 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:36:10 np0005480824 python3.9[227665]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:36:11 np0005480824 python3.9[227817]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:36:11 np0005480824 python3.9[227895]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:36:12 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v616: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:36:12 np0005480824 python3.9[228047]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 23:36:12 np0005480824 systemd[1]: Reloading.
Oct 10 23:36:13 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:36:13 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:36:13 np0005480824 systemd[1]: Starting Create netns directory...
Oct 10 23:36:13 np0005480824 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 10 23:36:13 np0005480824 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 10 23:36:13 np0005480824 systemd[1]: Finished Create netns directory.
Oct 10 23:36:14 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:36:14 np0005480824 python3.9[228240]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:36:14 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v617: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:36:14 np0005480824 podman[228241]: 2025-10-11 03:36:14.293458526 +0000 UTC m=+0.060418253 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 10 23:36:14 np0005480824 python3.9[228411]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/iscsid/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:36:15 np0005480824 python3.9[228534]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/iscsid/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760153774.430931-273-103683712295216/.source _original_basename=healthcheck follow=False checksum=2e1237e7fe015c809b173c52e24cfb87132f4344 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:36:16 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v618: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:36:16 np0005480824 python3.9[228686]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:36:17 np0005480824 python3.9[228838]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/iscsid.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:36:17 np0005480824 python3.9[228961]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/iscsid.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760153776.643674-298-11436149450715/.source.json _original_basename=.m_hmc24z follow=False checksum=80e4f97460718c7e5c66b21ef8b846eba0e0dbc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:36:18 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v619: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:36:18 np0005480824 python3.9[229113]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/iscsid state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:36:19 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:36:20 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v620: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:36:21 np0005480824 python3.9[229540]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/iscsid config_pattern=*.json debug=False
Oct 10 23:36:22 np0005480824 python3.9[229692]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 10 23:36:22 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v621: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:36:23 np0005480824 python3.9[229845]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 10 23:36:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:36:24 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v622: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:36:24 np0005480824 python3[230024]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/iscsid config_id=iscsid config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 10 23:36:25 np0005480824 podman[230061]: 2025-10-11 03:36:25.027560229 +0000 UTC m=+0.049617497 container create f2b19cad22d0fbdd185b264c3c8b6443d09a02319dc9fb0585dc69a0f24a758d (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Oct 10 23:36:25 np0005480824 podman[230061]: 2025-10-11 03:36:24.998249434 +0000 UTC m=+0.020306722 image pull 5773abc4300b61c01f3353a0b9239f9a404bb272790b280574e4c56f72edaa72 quay.io/podified-antelope-centos9/openstack-iscsid:current-podified
Oct 10 23:36:25 np0005480824 python3[230024]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name iscsid --conmon-pidfile /run/iscsid.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=iscsid --label container_name=iscsid --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run:/run --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:z --volume /etc/target:/etc/target:z --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /var/lib/openstack/healthchecks/iscsid:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-iscsid:current-podified
Oct 10 23:36:25 np0005480824 python3.9[230251]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 23:36:26 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v623: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:36:26 np0005480824 python3.9[230405]: ansible-file Invoked with path=/etc/systemd/system/edpm_iscsid.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:36:27 np0005480824 python3.9[230481]: ansible-stat Invoked with path=/etc/systemd/system/edpm_iscsid_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 23:36:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Optimize plan auto_2025-10-11_03:36:27
Oct 10 23:36:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 23:36:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] do_upmap
Oct 10 23:36:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] pools ['volumes', 'backups', 'default.rgw.control', 'images', '.rgw.root', 'default.rgw.meta', 'default.rgw.log', '.mgr', 'cephfs.cephfs.data', 'vms', 'cephfs.cephfs.meta']
Oct 10 23:36:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] prepared 0/10 changes
Oct 10 23:36:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:36:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:36:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:36:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:36:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:36:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:36:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 23:36:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:36:27 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 23:36:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:36:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:36:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:36:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:36:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:36:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:36:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:36:28 np0005480824 python3.9[230632]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760153787.3608966-386-251987012740863/source dest=/etc/systemd/system/edpm_iscsid.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:36:28 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v624: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:36:28 np0005480824 python3.9[230708]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 10 23:36:28 np0005480824 systemd[1]: Reloading.
Oct 10 23:36:28 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:36:28 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:36:29 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:36:29 np0005480824 python3.9[230820]: ansible-systemd Invoked with state=restarted name=edpm_iscsid.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 23:36:29 np0005480824 systemd[1]: Reloading.
Oct 10 23:36:29 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:36:29 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:36:30 np0005480824 systemd[1]: Starting iscsid container...
Oct 10 23:36:30 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:36:30 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acba10e39aee7f7b2979802f8bcc6e90e8eb3ccaf2f6e3ef02735f2c95ae27fc/merged/etc/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 10 23:36:30 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acba10e39aee7f7b2979802f8bcc6e90e8eb3ccaf2f6e3ef02735f2c95ae27fc/merged/etc/target supports timestamps until 2038 (0x7fffffff)
Oct 10 23:36:30 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acba10e39aee7f7b2979802f8bcc6e90e8eb3ccaf2f6e3ef02735f2c95ae27fc/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 10 23:36:30 np0005480824 systemd[1]: Started /usr/bin/podman healthcheck run f2b19cad22d0fbdd185b264c3c8b6443d09a02319dc9fb0585dc69a0f24a758d.
Oct 10 23:36:30 np0005480824 podman[230861]: 2025-10-11 03:36:30.200158181 +0000 UTC m=+0.152407466 container init f2b19cad22d0fbdd185b264c3c8b6443d09a02319dc9fb0585dc69a0f24a758d (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=iscsid, org.label-schema.license=GPLv2, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 10 23:36:30 np0005480824 iscsid[230877]: + sudo -E kolla_set_configs
Oct 10 23:36:30 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v625: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:36:30 np0005480824 podman[230861]: 2025-10-11 03:36:30.232069487 +0000 UTC m=+0.184318712 container start f2b19cad22d0fbdd185b264c3c8b6443d09a02319dc9fb0585dc69a0f24a758d (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=iscsid, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid, io.buildah.version=1.41.3)
Oct 10 23:36:30 np0005480824 podman[230861]: iscsid
Oct 10 23:36:30 np0005480824 systemd[1]: Started iscsid container.
Oct 10 23:36:30 np0005480824 podman[230883]: 2025-10-11 03:36:30.326962847 +0000 UTC m=+0.085020746 container health_status f2b19cad22d0fbdd185b264c3c8b6443d09a02319dc9fb0585dc69a0f24a758d (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 10 23:36:30 np0005480824 systemd[1]: f2b19cad22d0fbdd185b264c3c8b6443d09a02319dc9fb0585dc69a0f24a758d-38b9144b98f6cb45.service: Main process exited, code=exited, status=1/FAILURE
Oct 10 23:36:30 np0005480824 systemd[1]: f2b19cad22d0fbdd185b264c3c8b6443d09a02319dc9fb0585dc69a0f24a758d-38b9144b98f6cb45.service: Failed with result 'exit-code'.
Oct 10 23:36:30 np0005480824 systemd[1]: Created slice User Slice of UID 0.
Oct 10 23:36:30 np0005480824 systemd[1]: Starting User Runtime Directory /run/user/0...
Oct 10 23:36:30 np0005480824 systemd[1]: Finished User Runtime Directory /run/user/0.
Oct 10 23:36:30 np0005480824 systemd[1]: Starting User Manager for UID 0...
Oct 10 23:36:30 np0005480824 systemd[230931]: Queued start job for default target Main User Target.
Oct 10 23:36:30 np0005480824 systemd[230931]: Created slice User Application Slice.
Oct 10 23:36:30 np0005480824 systemd[230931]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct 10 23:36:30 np0005480824 systemd[230931]: Started Daily Cleanup of User's Temporary Directories.
Oct 10 23:36:30 np0005480824 systemd[230931]: Reached target Paths.
Oct 10 23:36:30 np0005480824 systemd[230931]: Reached target Timers.
Oct 10 23:36:30 np0005480824 systemd[230931]: Starting D-Bus User Message Bus Socket...
Oct 10 23:36:30 np0005480824 systemd[230931]: Starting Create User's Volatile Files and Directories...
Oct 10 23:36:30 np0005480824 systemd[230931]: Listening on D-Bus User Message Bus Socket.
Oct 10 23:36:30 np0005480824 systemd[230931]: Reached target Sockets.
Oct 10 23:36:30 np0005480824 systemd[230931]: Finished Create User's Volatile Files and Directories.
Oct 10 23:36:30 np0005480824 systemd[230931]: Reached target Basic System.
Oct 10 23:36:30 np0005480824 systemd[230931]: Reached target Main User Target.
Oct 10 23:36:30 np0005480824 systemd[230931]: Startup finished in 167ms.
Oct 10 23:36:30 np0005480824 systemd[1]: Started User Manager for UID 0.
Oct 10 23:36:30 np0005480824 systemd[1]: Started Session c3 of User root.
Oct 10 23:36:30 np0005480824 iscsid[230877]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 10 23:36:30 np0005480824 iscsid[230877]: INFO:__main__:Validating config file
Oct 10 23:36:30 np0005480824 iscsid[230877]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 10 23:36:30 np0005480824 iscsid[230877]: INFO:__main__:Writing out command to execute
Oct 10 23:36:30 np0005480824 systemd[1]: session-c3.scope: Deactivated successfully.
Oct 10 23:36:30 np0005480824 iscsid[230877]: ++ cat /run_command
Oct 10 23:36:30 np0005480824 iscsid[230877]: + CMD='/usr/sbin/iscsid -f'
Oct 10 23:36:30 np0005480824 iscsid[230877]: + ARGS=
Oct 10 23:36:30 np0005480824 iscsid[230877]: + sudo kolla_copy_cacerts
Oct 10 23:36:30 np0005480824 systemd[1]: Started Session c4 of User root.
Oct 10 23:36:30 np0005480824 systemd[1]: session-c4.scope: Deactivated successfully.
Oct 10 23:36:30 np0005480824 iscsid[230877]: + [[ ! -n '' ]]
Oct 10 23:36:30 np0005480824 iscsid[230877]: + . kolla_extend_start
Oct 10 23:36:30 np0005480824 iscsid[230877]: ++ [[ ! -f /etc/iscsi/initiatorname.iscsi ]]
Oct 10 23:36:30 np0005480824 iscsid[230877]: + echo 'Running command: '\''/usr/sbin/iscsid -f'\'''
Oct 10 23:36:30 np0005480824 iscsid[230877]: Running command: '/usr/sbin/iscsid -f'
Oct 10 23:36:30 np0005480824 iscsid[230877]: + umask 0022
Oct 10 23:36:30 np0005480824 iscsid[230877]: + exec /usr/sbin/iscsid -f
Oct 10 23:36:30 np0005480824 kernel: Loading iSCSI transport class v2.0-870.
Oct 10 23:36:31 np0005480824 python3.9[231075]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.iscsid_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 23:36:31 np0005480824 python3.9[231237]: ansible-ansible.builtin.file Invoked with path=/etc/iscsi/.iscsid_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:36:32 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v626: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:36:32 np0005480824 python3.9[231389]: ansible-ansible.builtin.service_facts Invoked
Oct 10 23:36:32 np0005480824 network[231406]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 10 23:36:32 np0005480824 network[231407]: 'network-scripts' will be removed from distribution in near future.
Oct 10 23:36:32 np0005480824 network[231408]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 10 23:36:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:36:34 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v627: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:36:36 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v628: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:36:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 23:36:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:36:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 23:36:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:36:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:36:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:36:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:36:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:36:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:36:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:36:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:36:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:36:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 23:36:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:36:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:36:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:36:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 10 23:36:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:36:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 23:36:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:36:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:36:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:36:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 10 23:36:38 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v629: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:36:38 np0005480824 python3.9[231683]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 10 23:36:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:36:39 np0005480824 ceph-mon[74326]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Oct 10 23:36:39 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:36:39.029410) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 23:36:39 np0005480824 ceph-mon[74326]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Oct 10 23:36:39 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760153799029477, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1723, "num_deletes": 250, "total_data_size": 2890273, "memory_usage": 2932440, "flush_reason": "Manual Compaction"}
Oct 10 23:36:39 np0005480824 ceph-mon[74326]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Oct 10 23:36:39 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760153799041063, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1634126, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11755, "largest_seqno": 13477, "table_properties": {"data_size": 1628432, "index_size": 2833, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14196, "raw_average_key_size": 20, "raw_value_size": 1615925, "raw_average_value_size": 2288, "num_data_blocks": 131, "num_entries": 706, "num_filter_entries": 706, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760153604, "oldest_key_time": 1760153604, "file_creation_time": 1760153799, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc2c00b6-74ab-4bd1-957a-6c6a75fb61ca", "db_session_id": "RJ9TM4FJNNQ2AWDFT4YB", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Oct 10 23:36:39 np0005480824 ceph-mon[74326]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 11700 microseconds, and 5023 cpu microseconds.
Oct 10 23:36:39 np0005480824 ceph-mon[74326]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 23:36:39 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:36:39.041118) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1634126 bytes OK
Oct 10 23:36:39 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:36:39.041139) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Oct 10 23:36:39 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:36:39.043371) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Oct 10 23:36:39 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:36:39.043385) EVENT_LOG_v1 {"time_micros": 1760153799043380, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 23:36:39 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:36:39.043404) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 23:36:39 np0005480824 ceph-mon[74326]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 2882930, prev total WAL file size 2882930, number of live WAL files 2.
Oct 10 23:36:39 np0005480824 ceph-mon[74326]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 23:36:39 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:36:39.044360) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323531' seq:72057594037927935, type:22 .. '6D67727374617400353032' seq:0, type:0; will stop at (end)
Oct 10 23:36:39 np0005480824 ceph-mon[74326]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 23:36:39 np0005480824 ceph-mon[74326]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1595KB)], [29(7767KB)]
Oct 10 23:36:39 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760153799044424, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 9588477, "oldest_snapshot_seqno": -1}
Oct 10 23:36:39 np0005480824 python3.9[231835]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Oct 10 23:36:39 np0005480824 ceph-mon[74326]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 3986 keys, 7498234 bytes, temperature: kUnknown
Oct 10 23:36:39 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760153799107081, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 7498234, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7469773, "index_size": 17407, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9989, "raw_key_size": 95033, "raw_average_key_size": 23, "raw_value_size": 7396052, "raw_average_value_size": 1855, "num_data_blocks": 759, "num_entries": 3986, "num_filter_entries": 3986, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760152715, "oldest_key_time": 0, "file_creation_time": 1760153799, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc2c00b6-74ab-4bd1-957a-6c6a75fb61ca", "db_session_id": "RJ9TM4FJNNQ2AWDFT4YB", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Oct 10 23:36:39 np0005480824 ceph-mon[74326]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 23:36:39 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:36:39.107350) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 7498234 bytes
Oct 10 23:36:39 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:36:39.109866) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 152.8 rd, 119.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 7.6 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(10.5) write-amplify(4.6) OK, records in: 4406, records dropped: 420 output_compression: NoCompression
Oct 10 23:36:39 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:36:39.109888) EVENT_LOG_v1 {"time_micros": 1760153799109878, "job": 12, "event": "compaction_finished", "compaction_time_micros": 62749, "compaction_time_cpu_micros": 35023, "output_level": 6, "num_output_files": 1, "total_output_size": 7498234, "num_input_records": 4406, "num_output_records": 3986, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 23:36:39 np0005480824 ceph-mon[74326]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 23:36:39 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760153799110277, "job": 12, "event": "table_file_deletion", "file_number": 31}
Oct 10 23:36:39 np0005480824 ceph-mon[74326]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 23:36:39 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760153799111871, "job": 12, "event": "table_file_deletion", "file_number": 29}
Oct 10 23:36:39 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:36:39.044215) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:36:39 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:36:39.111926) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:36:39 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:36:39.111931) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:36:39 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:36:39.111932) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:36:39 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:36:39.111934) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:36:39 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:36:39.111935) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:36:39 np0005480824 podman[231963]: 2025-10-11 03:36:39.668525429 +0000 UTC m=+0.081819481 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible)
Oct 10 23:36:39 np0005480824 python3.9[232009]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:36:40 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v630: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:36:40 np0005480824 python3.9[232139]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760153799.3095822-460-261198782214221/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:36:40 np0005480824 systemd[1]: Stopping User Manager for UID 0...
Oct 10 23:36:40 np0005480824 systemd[230931]: Activating special unit Exit the Session...
Oct 10 23:36:40 np0005480824 systemd[230931]: Stopped target Main User Target.
Oct 10 23:36:40 np0005480824 systemd[230931]: Stopped target Basic System.
Oct 10 23:36:40 np0005480824 systemd[230931]: Stopped target Paths.
Oct 10 23:36:40 np0005480824 systemd[230931]: Stopped target Sockets.
Oct 10 23:36:40 np0005480824 systemd[230931]: Stopped target Timers.
Oct 10 23:36:40 np0005480824 systemd[230931]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 10 23:36:40 np0005480824 systemd[230931]: Closed D-Bus User Message Bus Socket.
Oct 10 23:36:40 np0005480824 systemd[230931]: Stopped Create User's Volatile Files and Directories.
Oct 10 23:36:40 np0005480824 systemd[230931]: Removed slice User Application Slice.
Oct 10 23:36:40 np0005480824 systemd[230931]: Reached target Shutdown.
Oct 10 23:36:40 np0005480824 systemd[230931]: Finished Exit the Session.
Oct 10 23:36:40 np0005480824 systemd[230931]: Reached target Exit the Session.
Oct 10 23:36:40 np0005480824 systemd[1]: user@0.service: Deactivated successfully.
Oct 10 23:36:40 np0005480824 systemd[1]: Stopped User Manager for UID 0.
Oct 10 23:36:40 np0005480824 systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct 10 23:36:40 np0005480824 systemd[1]: run-user-0.mount: Deactivated successfully.
Oct 10 23:36:40 np0005480824 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct 10 23:36:40 np0005480824 systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct 10 23:36:40 np0005480824 systemd[1]: Removed slice User Slice of UID 0.
Oct 10 23:36:41 np0005480824 python3.9[232293]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:36:42 np0005480824 python3.9[232445]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 10 23:36:42 np0005480824 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 10 23:36:42 np0005480824 systemd[1]: Stopped Load Kernel Modules.
Oct 10 23:36:42 np0005480824 systemd[1]: Stopping Load Kernel Modules...
Oct 10 23:36:42 np0005480824 systemd[1]: Starting Load Kernel Modules...
Oct 10 23:36:42 np0005480824 systemd[1]: Finished Load Kernel Modules.
Oct 10 23:36:42 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v631: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:36:42 np0005480824 python3.9[232601]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:36:43 np0005480824 python3.9[232753]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 23:36:44 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:36:44 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v632: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:36:44 np0005480824 python3.9[232905]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 23:36:44 np0005480824 podman[233029]: 2025-10-11 03:36:44.914830888 +0000 UTC m=+0.061571042 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:36:45 np0005480824 python3.9[233075]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:36:45 np0005480824 python3.9[233198]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760153804.5548441-518-129450519187000/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:36:46 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v633: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:36:46 np0005480824 python3.9[233350]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:36:47 np0005480824 python3.9[233503]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:36:48 np0005480824 python3.9[233655]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:36:48 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v634: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:36:48 np0005480824 python3.9[233907]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:36:48 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:36:48 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:36:48 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 10 23:36:48 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:36:48 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 10 23:36:48 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:36:48 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 0be9d6ea-d0ef-4bc8-b094-57231f5110b8 does not exist
Oct 10 23:36:48 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev edb20788-90d6-4a09-96ac-a4f67163f0b9 does not exist
Oct 10 23:36:48 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 2be4de99-a913-41e4-9f7d-cdb82c22b43d does not exist
Oct 10 23:36:48 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 10 23:36:48 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 23:36:48 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 10 23:36:48 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:36:48 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:36:48 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:36:49 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:36:49 np0005480824 python3.9[234190]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:36:49 np0005480824 podman[234238]: 2025-10-11 03:36:49.461320391 +0000 UTC m=+0.041335911 container create c5e129ec37475417ff2655f92347b3df6824b2080038ea6b83a5177b6250fe6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_williamson, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 10 23:36:49 np0005480824 systemd[1]: Started libpod-conmon-c5e129ec37475417ff2655f92347b3df6824b2080038ea6b83a5177b6250fe6a.scope.
Oct 10 23:36:49 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:36:49 np0005480824 podman[234238]: 2025-10-11 03:36:49.442136537 +0000 UTC m=+0.022151987 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:36:49 np0005480824 podman[234238]: 2025-10-11 03:36:49.545759844 +0000 UTC m=+0.125775324 container init c5e129ec37475417ff2655f92347b3df6824b2080038ea6b83a5177b6250fe6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:36:49 np0005480824 podman[234238]: 2025-10-11 03:36:49.55275285 +0000 UTC m=+0.132768260 container start c5e129ec37475417ff2655f92347b3df6824b2080038ea6b83a5177b6250fe6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_williamson, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:36:49 np0005480824 podman[234238]: 2025-10-11 03:36:49.556112299 +0000 UTC m=+0.136127739 container attach c5e129ec37475417ff2655f92347b3df6824b2080038ea6b83a5177b6250fe6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_williamson, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:36:49 np0005480824 practical_williamson[234294]: 167 167
Oct 10 23:36:49 np0005480824 systemd[1]: libpod-c5e129ec37475417ff2655f92347b3df6824b2080038ea6b83a5177b6250fe6a.scope: Deactivated successfully.
Oct 10 23:36:49 np0005480824 podman[234238]: 2025-10-11 03:36:49.558280471 +0000 UTC m=+0.138295891 container died c5e129ec37475417ff2655f92347b3df6824b2080038ea6b83a5177b6250fe6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 10 23:36:49 np0005480824 systemd[1]: var-lib-containers-storage-overlay-6a92a0aa30f936babbe571acc91c75179151210d32c91d050b538c22e50419a1-merged.mount: Deactivated successfully.
Oct 10 23:36:49 np0005480824 podman[234238]: 2025-10-11 03:36:49.594831757 +0000 UTC m=+0.174847177 container remove c5e129ec37475417ff2655f92347b3df6824b2080038ea6b83a5177b6250fe6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_williamson, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 10 23:36:49 np0005480824 systemd[1]: libpod-conmon-c5e129ec37475417ff2655f92347b3df6824b2080038ea6b83a5177b6250fe6a.scope: Deactivated successfully.
Oct 10 23:36:49 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:36:49 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:36:49 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:36:49 np0005480824 podman[234392]: 2025-10-11 03:36:49.746471273 +0000 UTC m=+0.044517627 container create 3a1b151775bccee63813c423f4f3ba52e26837616d2df331ded5c71f63c56c81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:36:49 np0005480824 systemd[1]: Started libpod-conmon-3a1b151775bccee63813c423f4f3ba52e26837616d2df331ded5c71f63c56c81.scope.
Oct 10 23:36:49 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:36:49 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a641bad620640607c39b1c8a20a50e8c7c3eca561226d0593c43b55649b5c69/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:36:49 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a641bad620640607c39b1c8a20a50e8c7c3eca561226d0593c43b55649b5c69/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:36:49 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a641bad620640607c39b1c8a20a50e8c7c3eca561226d0593c43b55649b5c69/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:36:49 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a641bad620640607c39b1c8a20a50e8c7c3eca561226d0593c43b55649b5c69/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:36:49 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a641bad620640607c39b1c8a20a50e8c7c3eca561226d0593c43b55649b5c69/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 23:36:49 np0005480824 podman[234392]: 2025-10-11 03:36:49.730616917 +0000 UTC m=+0.028663271 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:36:49 np0005480824 podman[234392]: 2025-10-11 03:36:49.824867122 +0000 UTC m=+0.122913506 container init 3a1b151775bccee63813c423f4f3ba52e26837616d2df331ded5c71f63c56c81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_satoshi, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 10 23:36:49 np0005480824 podman[234392]: 2025-10-11 03:36:49.832842932 +0000 UTC m=+0.130889276 container start 3a1b151775bccee63813c423f4f3ba52e26837616d2df331ded5c71f63c56c81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_satoshi, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:36:49 np0005480824 podman[234392]: 2025-10-11 03:36:49.835950985 +0000 UTC m=+0.133997339 container attach 3a1b151775bccee63813c423f4f3ba52e26837616d2df331ded5c71f63c56c81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 10 23:36:49 np0005480824 python3.9[234438]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:36:50 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v635: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:36:50 np0005480824 python3.9[234593]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:36:50 np0005480824 unruffled_satoshi[234436]: --> passed data devices: 0 physical, 3 LVM
Oct 10 23:36:50 np0005480824 unruffled_satoshi[234436]: --> relative data size: 1.0
Oct 10 23:36:50 np0005480824 unruffled_satoshi[234436]: --> All data devices are unavailable
Oct 10 23:36:50 np0005480824 systemd[1]: libpod-3a1b151775bccee63813c423f4f3ba52e26837616d2df331ded5c71f63c56c81.scope: Deactivated successfully.
Oct 10 23:36:50 np0005480824 podman[234392]: 2025-10-11 03:36:50.879580253 +0000 UTC m=+1.177626607 container died 3a1b151775bccee63813c423f4f3ba52e26837616d2df331ded5c71f63c56c81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 10 23:36:50 np0005480824 systemd[1]: var-lib-containers-storage-overlay-0a641bad620640607c39b1c8a20a50e8c7c3eca561226d0593c43b55649b5c69-merged.mount: Deactivated successfully.
Oct 10 23:36:50 np0005480824 podman[234392]: 2025-10-11 03:36:50.936624166 +0000 UTC m=+1.234670520 container remove 3a1b151775bccee63813c423f4f3ba52e26837616d2df331ded5c71f63c56c81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 10 23:36:50 np0005480824 systemd[1]: libpod-conmon-3a1b151775bccee63813c423f4f3ba52e26837616d2df331ded5c71f63c56c81.scope: Deactivated successfully.
Oct 10 23:36:51 np0005480824 python3.9[234807]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:36:51 np0005480824 podman[235046]: 2025-10-11 03:36:51.751487839 +0000 UTC m=+0.066781815 container create 1f0a8de5c77b915fa19ac98d71a7c9c75df9ecc5993fa8102a93da1cd62c235a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True)
Oct 10 23:36:51 np0005480824 systemd[1]: Started libpod-conmon-1f0a8de5c77b915fa19ac98d71a7c9c75df9ecc5993fa8102a93da1cd62c235a.scope.
Oct 10 23:36:51 np0005480824 podman[235046]: 2025-10-11 03:36:51.725679458 +0000 UTC m=+0.040973514 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:36:51 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:36:51 np0005480824 podman[235046]: 2025-10-11 03:36:51.853509749 +0000 UTC m=+0.168803795 container init 1f0a8de5c77b915fa19ac98d71a7c9c75df9ecc5993fa8102a93da1cd62c235a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 10 23:36:51 np0005480824 podman[235046]: 2025-10-11 03:36:51.862257866 +0000 UTC m=+0.177551872 container start 1f0a8de5c77b915fa19ac98d71a7c9c75df9ecc5993fa8102a93da1cd62c235a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_ellis, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 10 23:36:51 np0005480824 podman[235046]: 2025-10-11 03:36:51.871655379 +0000 UTC m=+0.186949455 container attach 1f0a8de5c77b915fa19ac98d71a7c9c75df9ecc5993fa8102a93da1cd62c235a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_ellis, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:36:51 np0005480824 condescending_ellis[235096]: 167 167
Oct 10 23:36:51 np0005480824 systemd[1]: libpod-1f0a8de5c77b915fa19ac98d71a7c9c75df9ecc5993fa8102a93da1cd62c235a.scope: Deactivated successfully.
Oct 10 23:36:51 np0005480824 conmon[235096]: conmon 1f0a8de5c77b915fa19a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1f0a8de5c77b915fa19ac98d71a7c9c75df9ecc5993fa8102a93da1cd62c235a.scope/container/memory.events
Oct 10 23:36:51 np0005480824 podman[235046]: 2025-10-11 03:36:51.876816341 +0000 UTC m=+0.192110337 container died 1f0a8de5c77b915fa19ac98d71a7c9c75df9ecc5993fa8102a93da1cd62c235a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_ellis, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:36:51 np0005480824 systemd[1]: var-lib-containers-storage-overlay-1842f6abb6d64b19e64368b598b3c72d5d59fa45c9b3cf7a33edced53d5ff6e3-merged.mount: Deactivated successfully.
Oct 10 23:36:51 np0005480824 podman[235046]: 2025-10-11 03:36:51.928583808 +0000 UTC m=+0.243877804 container remove 1f0a8de5c77b915fa19ac98d71a7c9c75df9ecc5993fa8102a93da1cd62c235a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_ellis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:36:51 np0005480824 systemd[1]: libpod-conmon-1f0a8de5c77b915fa19ac98d71a7c9c75df9ecc5993fa8102a93da1cd62c235a.scope: Deactivated successfully.
Oct 10 23:36:51 np0005480824 python3.9[235093]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 23:36:52 np0005480824 podman[235121]: 2025-10-11 03:36:52.129108354 +0000 UTC m=+0.048674175 container create 431582112d7ff5ceab2a65822ea2828cef91372858ad31a9e57815f019ae3d33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_rhodes, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:36:52 np0005480824 systemd[1]: Started libpod-conmon-431582112d7ff5ceab2a65822ea2828cef91372858ad31a9e57815f019ae3d33.scope.
Oct 10 23:36:52 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:36:52 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e15673a344a006f0d7d8d903d9b8758d944056b571c851f3e6716ce338a8b93/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:36:52 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e15673a344a006f0d7d8d903d9b8758d944056b571c851f3e6716ce338a8b93/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:36:52 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e15673a344a006f0d7d8d903d9b8758d944056b571c851f3e6716ce338a8b93/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:36:52 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e15673a344a006f0d7d8d903d9b8758d944056b571c851f3e6716ce338a8b93/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:36:52 np0005480824 podman[235121]: 2025-10-11 03:36:52.110660336 +0000 UTC m=+0.030226167 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:36:52 np0005480824 podman[235121]: 2025-10-11 03:36:52.228393778 +0000 UTC m=+0.147959599 container init 431582112d7ff5ceab2a65822ea2828cef91372858ad31a9e57815f019ae3d33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 10 23:36:52 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v636: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:36:52 np0005480824 podman[235121]: 2025-10-11 03:36:52.235129377 +0000 UTC m=+0.154695158 container start 431582112d7ff5ceab2a65822ea2828cef91372858ad31a9e57815f019ae3d33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 10 23:36:52 np0005480824 podman[235121]: 2025-10-11 03:36:52.238297003 +0000 UTC m=+0.157862794 container attach 431582112d7ff5ceab2a65822ea2828cef91372858ad31a9e57815f019ae3d33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_rhodes, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:36:52 np0005480824 python3.9[235293]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]: {
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:    "0": [
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:        {
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:            "devices": [
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:                "/dev/loop3"
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:            ],
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:            "lv_name": "ceph_lv0",
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:            "lv_size": "21470642176",
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0d82ce-20ea-470d-959e-f67202028a60,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:            "lv_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:            "name": "ceph_lv0",
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:            "tags": {
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:                "ceph.block_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:                "ceph.cluster_name": "ceph",
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:                "ceph.crush_device_class": "",
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:                "ceph.encrypted": "0",
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:                "ceph.osd_fsid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:                "ceph.osd_id": "0",
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:                "ceph.type": "block",
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:                "ceph.vdo": "0"
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:            },
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:            "type": "block",
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:            "vg_name": "ceph_vg0"
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:        }
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:    ],
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:    "1": [
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:        {
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:            "devices": [
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:                "/dev/loop4"
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:            ],
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:            "lv_name": "ceph_lv1",
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:            "lv_size": "21470642176",
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6875119e-c210-4ad1-aca9-6a8084a5ecc8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:            "lv_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:            "name": "ceph_lv1",
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:            "tags": {
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:                "ceph.block_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:                "ceph.cluster_name": "ceph",
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:                "ceph.crush_device_class": "",
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:                "ceph.encrypted": "0",
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:                "ceph.osd_fsid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:                "ceph.osd_id": "1",
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:                "ceph.type": "block",
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:                "ceph.vdo": "0"
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:            },
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:            "type": "block",
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:            "vg_name": "ceph_vg1"
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:        }
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:    ],
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:    "2": [
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:        {
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:            "devices": [
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:                "/dev/loop5"
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:            ],
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:            "lv_name": "ceph_lv2",
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:            "lv_size": "21470642176",
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e86945e8-6909-4584-9098-cee0dfe9add4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:            "lv_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:            "name": "ceph_lv2",
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:            "tags": {
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:                "ceph.block_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:                "ceph.cluster_name": "ceph",
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:                "ceph.crush_device_class": "",
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:                "ceph.encrypted": "0",
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:                "ceph.osd_fsid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:                "ceph.osd_id": "2",
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:                "ceph.type": "block",
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:                "ceph.vdo": "0"
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:            },
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:            "type": "block",
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:            "vg_name": "ceph_vg2"
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:        }
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]:    ]
Oct 10 23:36:52 np0005480824 vigilant_rhodes[235162]: }
Oct 10 23:36:52 np0005480824 systemd[1]: libpod-431582112d7ff5ceab2a65822ea2828cef91372858ad31a9e57815f019ae3d33.scope: Deactivated successfully.
Oct 10 23:36:53 np0005480824 podman[235343]: 2025-10-11 03:36:53.066308418 +0000 UTC m=+0.045160931 container died 431582112d7ff5ceab2a65822ea2828cef91372858ad31a9e57815f019ae3d33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:36:53 np0005480824 systemd[1]: var-lib-containers-storage-overlay-4e15673a344a006f0d7d8d903d9b8758d944056b571c851f3e6716ce338a8b93-merged.mount: Deactivated successfully.
Oct 10 23:36:53 np0005480824 podman[235343]: 2025-10-11 03:36:53.140659791 +0000 UTC m=+0.119512214 container remove 431582112d7ff5ceab2a65822ea2828cef91372858ad31a9e57815f019ae3d33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:36:53 np0005480824 systemd[1]: libpod-conmon-431582112d7ff5ceab2a65822ea2828cef91372858ad31a9e57815f019ae3d33.scope: Deactivated successfully.
Oct 10 23:36:53 np0005480824 python3.9[235533]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:36:53 np0005480824 podman[235628]: 2025-10-11 03:36:53.845148397 +0000 UTC m=+0.044603959 container create c2882c25bb99b87ec97db6b1bf4e2351fbfe8a81767297c4e408ee419d9de6bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:36:53 np0005480824 systemd[1]: Started libpod-conmon-c2882c25bb99b87ec97db6b1bf4e2351fbfe8a81767297c4e408ee419d9de6bc.scope.
Oct 10 23:36:53 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:36:53 np0005480824 podman[235628]: 2025-10-11 03:36:53.827653933 +0000 UTC m=+0.027109495 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:36:53 np0005480824 podman[235628]: 2025-10-11 03:36:53.9354941 +0000 UTC m=+0.134949692 container init c2882c25bb99b87ec97db6b1bf4e2351fbfe8a81767297c4e408ee419d9de6bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_jennings, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 10 23:36:53 np0005480824 podman[235628]: 2025-10-11 03:36:53.949957433 +0000 UTC m=+0.149413015 container start c2882c25bb99b87ec97db6b1bf4e2351fbfe8a81767297c4e408ee419d9de6bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_jennings, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:36:53 np0005480824 podman[235628]: 2025-10-11 03:36:53.953871526 +0000 UTC m=+0.153327098 container attach c2882c25bb99b87ec97db6b1bf4e2351fbfe8a81767297c4e408ee419d9de6bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_jennings, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:36:53 np0005480824 blissful_jennings[235689]: 167 167
Oct 10 23:36:53 np0005480824 systemd[1]: libpod-c2882c25bb99b87ec97db6b1bf4e2351fbfe8a81767297c4e408ee419d9de6bc.scope: Deactivated successfully.
Oct 10 23:36:53 np0005480824 podman[235628]: 2025-10-11 03:36:53.955732309 +0000 UTC m=+0.155187861 container died c2882c25bb99b87ec97db6b1bf4e2351fbfe8a81767297c4e408ee419d9de6bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_jennings, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 10 23:36:53 np0005480824 systemd[1]: var-lib-containers-storage-overlay-4042654446eb34d68a7a7fc340663ab7a742c082223d5d9ec905567a640477a4-merged.mount: Deactivated successfully.
Oct 10 23:36:54 np0005480824 podman[235628]: 2025-10-11 03:36:54.009766051 +0000 UTC m=+0.209221603 container remove c2882c25bb99b87ec97db6b1bf4e2351fbfe8a81767297c4e408ee419d9de6bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 10 23:36:54 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:36:54 np0005480824 systemd[1]: libpod-conmon-c2882c25bb99b87ec97db6b1bf4e2351fbfe8a81767297c4e408ee419d9de6bc.scope: Deactivated successfully.
Oct 10 23:36:54 np0005480824 podman[235793]: 2025-10-11 03:36:54.203632268 +0000 UTC m=+0.048029050 container create 2a5c73b063992ec5907359f24102f2a5591173b5648c4e29e812f585e2c51a13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:36:54 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v637: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:36:54 np0005480824 systemd[1]: Started libpod-conmon-2a5c73b063992ec5907359f24102f2a5591173b5648c4e29e812f585e2c51a13.scope.
Oct 10 23:36:54 np0005480824 podman[235793]: 2025-10-11 03:36:54.182480296 +0000 UTC m=+0.026877168 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:36:54 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:36:54 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fe46be714b0a19b28062743dcd5bf16a799096e429465985e78c9660f896b9d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:36:54 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fe46be714b0a19b28062743dcd5bf16a799096e429465985e78c9660f896b9d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:36:54 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fe46be714b0a19b28062743dcd5bf16a799096e429465985e78c9660f896b9d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:36:54 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fe46be714b0a19b28062743dcd5bf16a799096e429465985e78c9660f896b9d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:36:54 np0005480824 podman[235793]: 2025-10-11 03:36:54.328529871 +0000 UTC m=+0.172926733 container init 2a5c73b063992ec5907359f24102f2a5591173b5648c4e29e812f585e2c51a13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_proskuriakova, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:36:54 np0005480824 podman[235793]: 2025-10-11 03:36:54.336976181 +0000 UTC m=+0.181372973 container start 2a5c73b063992ec5907359f24102f2a5591173b5648c4e29e812f585e2c51a13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_proskuriakova, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 10 23:36:54 np0005480824 podman[235793]: 2025-10-11 03:36:54.340136965 +0000 UTC m=+0.184533837 container attach 2a5c73b063992ec5907359f24102f2a5591173b5648c4e29e812f585e2c51a13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:36:54 np0005480824 python3.9[235805]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:36:54 np0005480824 python3.9[235893]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:36:55 np0005480824 nostalgic_proskuriakova[235811]: {
Oct 10 23:36:55 np0005480824 nostalgic_proskuriakova[235811]:    "1d0d82ce-20ea-470d-959e-f67202028a60": {
Oct 10 23:36:55 np0005480824 nostalgic_proskuriakova[235811]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:36:55 np0005480824 nostalgic_proskuriakova[235811]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 10 23:36:55 np0005480824 nostalgic_proskuriakova[235811]:        "osd_id": 0,
Oct 10 23:36:55 np0005480824 nostalgic_proskuriakova[235811]:        "osd_uuid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:36:55 np0005480824 nostalgic_proskuriakova[235811]:        "type": "bluestore"
Oct 10 23:36:55 np0005480824 nostalgic_proskuriakova[235811]:    },
Oct 10 23:36:55 np0005480824 nostalgic_proskuriakova[235811]:    "6875119e-c210-4ad1-aca9-6a8084a5ecc8": {
Oct 10 23:36:55 np0005480824 nostalgic_proskuriakova[235811]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:36:55 np0005480824 nostalgic_proskuriakova[235811]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 10 23:36:55 np0005480824 nostalgic_proskuriakova[235811]:        "osd_id": 1,
Oct 10 23:36:55 np0005480824 nostalgic_proskuriakova[235811]:        "osd_uuid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:36:55 np0005480824 nostalgic_proskuriakova[235811]:        "type": "bluestore"
Oct 10 23:36:55 np0005480824 nostalgic_proskuriakova[235811]:    },
Oct 10 23:36:55 np0005480824 nostalgic_proskuriakova[235811]:    "e86945e8-6909-4584-9098-cee0dfe9add4": {
Oct 10 23:36:55 np0005480824 nostalgic_proskuriakova[235811]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:36:55 np0005480824 nostalgic_proskuriakova[235811]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 10 23:36:55 np0005480824 nostalgic_proskuriakova[235811]:        "osd_id": 2,
Oct 10 23:36:55 np0005480824 nostalgic_proskuriakova[235811]:        "osd_uuid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:36:55 np0005480824 nostalgic_proskuriakova[235811]:        "type": "bluestore"
Oct 10 23:36:55 np0005480824 nostalgic_proskuriakova[235811]:    }
Oct 10 23:36:55 np0005480824 nostalgic_proskuriakova[235811]: }
Oct 10 23:36:55 np0005480824 systemd[1]: libpod-2a5c73b063992ec5907359f24102f2a5591173b5648c4e29e812f585e2c51a13.scope: Deactivated successfully.
Oct 10 23:36:55 np0005480824 podman[235793]: 2025-10-11 03:36:55.306657705 +0000 UTC m=+1.151054517 container died 2a5c73b063992ec5907359f24102f2a5591173b5648c4e29e812f585e2c51a13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_proskuriakova, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 10 23:36:55 np0005480824 systemd[1]: var-lib-containers-storage-overlay-2fe46be714b0a19b28062743dcd5bf16a799096e429465985e78c9660f896b9d-merged.mount: Deactivated successfully.
Oct 10 23:36:55 np0005480824 podman[235793]: 2025-10-11 03:36:55.356202219 +0000 UTC m=+1.200599011 container remove 2a5c73b063992ec5907359f24102f2a5591173b5648c4e29e812f585e2c51a13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_proskuriakova, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:36:55 np0005480824 systemd[1]: libpod-conmon-2a5c73b063992ec5907359f24102f2a5591173b5648c4e29e812f585e2c51a13.scope: Deactivated successfully.
Oct 10 23:36:55 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 10 23:36:55 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:36:55 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 10 23:36:55 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:36:55 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 59aac5d4-14af-4d4a-b9fa-6bd295a844cf does not exist
Oct 10 23:36:55 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 1f934e06-aad0-4ca1-9702-3531139ba912 does not exist
Oct 10 23:36:55 np0005480824 python3.9[236087]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:36:55 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:36:55 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:36:55 np0005480824 python3.9[236215]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:36:56 np0005480824 systemd[1]: virtnodedevd.service: Deactivated successfully.
Oct 10 23:36:56 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v638: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:36:56 np0005480824 python3.9[236368]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:36:57 np0005480824 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct 10 23:36:57 np0005480824 python3.9[236521]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:36:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:36:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:36:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:36:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:36:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:36:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:36:58 np0005480824 python3.9[236599]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:36:58 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v639: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:36:58 np0005480824 python3.9[236751]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:36:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:36:59 np0005480824 python3.9[236829]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:37:00 np0005480824 python3.9[236981]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 23:37:00 np0005480824 systemd[1]: Reloading.
Oct 10 23:37:00 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v640: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:37:00 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:37:00 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:37:00 np0005480824 podman[237018]: 2025-10-11 03:37:00.572368421 +0000 UTC m=+0.079999192 container health_status f2b19cad22d0fbdd185b264c3c8b6443d09a02319dc9fb0585dc69a0f24a758d (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251009, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct 10 23:37:01 np0005480824 python3.9[237188]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:37:01 np0005480824 python3.9[237266]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:37:02 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v641: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:37:02 np0005480824 python3.9[237418]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:37:02 np0005480824 python3.9[237496]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:37:03 np0005480824 python3.9[237648]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 23:37:03 np0005480824 systemd[1]: Reloading.
Oct 10 23:37:03 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:37:03 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:37:04 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:37:04 np0005480824 systemd[1]: Starting Create netns directory...
Oct 10 23:37:04 np0005480824 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 10 23:37:04 np0005480824 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 10 23:37:04 np0005480824 systemd[1]: Finished Create netns directory.
Oct 10 23:37:04 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v642: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:37:05 np0005480824 python3.9[237842]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:37:05 np0005480824 python3.9[237994]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:37:06 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v643: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:37:06 np0005480824 python3.9[238117]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760153825.2381048-725-36837008359538/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:37:07 np0005480824 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct 10 23:37:07 np0005480824 systemd[1]: virtqemud.service: Deactivated successfully.
Oct 10 23:37:07 np0005480824 python3.9[238269]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:37:08 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v644: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:37:08 np0005480824 python3.9[238423]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:37:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:37:09 np0005480824 python3.9[238546]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760153827.7655911-750-247997606883199/.source.json _original_basename=.r6wfmgsx follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:37:09 np0005480824 python3.9[238698]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:37:10 np0005480824 podman[238702]: 2025-10-11 03:37:10.048278884 +0000 UTC m=+0.096562561 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 10 23:37:10 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v645: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:37:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:37:10.474 162245 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:37:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:37:10.474 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:37:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:37:10.474 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:37:12 np0005480824 python3.9[239153]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Oct 10 23:37:12 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v646: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:37:12 np0005480824 python3.9[239305]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 10 23:37:13 np0005480824 python3.9[239457]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 10 23:37:14 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:37:14 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v647: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:37:15 np0005480824 python3[239635]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 10 23:37:16 np0005480824 podman[239662]: 2025-10-11 03:37:16.00952388 +0000 UTC m=+0.066956835 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 10 23:37:16 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v648: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:37:16 np0005480824 podman[239649]: 2025-10-11 03:37:16.679571593 +0000 UTC m=+1.381675444 image pull afce23cfe475a7c4b16d233ab936a7b07069ccb13842b1c95ba43e4b3f92adfb quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Oct 10 23:37:16 np0005480824 podman[239728]: 2025-10-11 03:37:16.885381151 +0000 UTC m=+0.085108722 container create 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 10 23:37:16 np0005480824 podman[239728]: 2025-10-11 03:37:16.846158329 +0000 UTC m=+0.045885940 image pull afce23cfe475a7c4b16d233ab936a7b07069ccb13842b1c95ba43e4b3f92adfb quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Oct 10 23:37:16 np0005480824 python3[239635]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume 
/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Oct 10 23:37:17 np0005480824 python3.9[239918]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 23:37:18 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v649: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:37:18 np0005480824 python3.9[240072]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:37:19 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:37:19 np0005480824 python3.9[240148]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 23:37:19 np0005480824 python3.9[240299]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760153839.3376217-838-101643496142735/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:37:20 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v650: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:37:20 np0005480824 python3.9[240375]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 10 23:37:20 np0005480824 systemd[1]: Reloading.
Oct 10 23:37:20 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:37:20 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:37:21 np0005480824 python3.9[240487]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 23:37:21 np0005480824 systemd[1]: Reloading.
Oct 10 23:37:21 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:37:21 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:37:22 np0005480824 systemd[1]: Starting multipathd container...
Oct 10 23:37:22 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:37:22 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daf1984d7f448edcb8e44e3fc0fa0ef8458bf44e8ae8bf80b781f24b3a332d6b/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 10 23:37:22 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daf1984d7f448edcb8e44e3fc0fa0ef8458bf44e8ae8bf80b781f24b3a332d6b/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 10 23:37:22 np0005480824 systemd[1]: Started /usr/bin/podman healthcheck run 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8.
Oct 10 23:37:22 np0005480824 podman[240527]: 2025-10-11 03:37:22.187865979 +0000 UTC m=+0.138746463 container init 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Oct 10 23:37:22 np0005480824 multipathd[240543]: + sudo -E kolla_set_configs
Oct 10 23:37:22 np0005480824 podman[240527]: 2025-10-11 03:37:22.221385527 +0000 UTC m=+0.172265971 container start 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true)
Oct 10 23:37:22 np0005480824 podman[240527]: multipathd
Oct 10 23:37:22 np0005480824 systemd[1]: Started multipathd container.
Oct 10 23:37:22 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v651: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:37:22 np0005480824 multipathd[240543]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 10 23:37:22 np0005480824 multipathd[240543]: INFO:__main__:Validating config file
Oct 10 23:37:22 np0005480824 multipathd[240543]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 10 23:37:22 np0005480824 multipathd[240543]: INFO:__main__:Writing out command to execute
Oct 10 23:37:22 np0005480824 multipathd[240543]: ++ cat /run_command
Oct 10 23:37:22 np0005480824 multipathd[240543]: + CMD='/usr/sbin/multipathd -d'
Oct 10 23:37:22 np0005480824 multipathd[240543]: + ARGS=
Oct 10 23:37:22 np0005480824 multipathd[240543]: + sudo kolla_copy_cacerts
Oct 10 23:37:22 np0005480824 multipathd[240543]: + [[ ! -n '' ]]
Oct 10 23:37:22 np0005480824 multipathd[240543]: + . kolla_extend_start
Oct 10 23:37:22 np0005480824 multipathd[240543]: Running command: '/usr/sbin/multipathd -d'
Oct 10 23:37:22 np0005480824 multipathd[240543]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct 10 23:37:22 np0005480824 multipathd[240543]: + umask 0022
Oct 10 23:37:22 np0005480824 multipathd[240543]: + exec /usr/sbin/multipathd -d
Oct 10 23:37:22 np0005480824 podman[240550]: 2025-10-11 03:37:22.314735232 +0000 UTC m=+0.080967805 container health_status 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 10 23:37:22 np0005480824 systemd[1]: 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8-23c203ccd824abcd.service: Main process exited, code=exited, status=1/FAILURE
Oct 10 23:37:22 np0005480824 systemd[1]: 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8-23c203ccd824abcd.service: Failed with result 'exit-code'.
Oct 10 23:37:22 np0005480824 multipathd[240543]: 3310.037710 | --------start up--------
Oct 10 23:37:22 np0005480824 multipathd[240543]: 3310.037725 | read /etc/multipath.conf
Oct 10 23:37:22 np0005480824 multipathd[240543]: 3310.044747 | path checkers start up
Oct 10 23:37:23 np0005480824 python3.9[240732]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 23:37:23 np0005480824 python3.9[240886]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:37:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:37:24 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v652: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:37:24 np0005480824 python3.9[241050]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 10 23:37:24 np0005480824 systemd[1]: Stopping multipathd container...
Oct 10 23:37:24 np0005480824 multipathd[240543]: 3312.452428 | exit (signal)
Oct 10 23:37:24 np0005480824 multipathd[240543]: 3312.452884 | --------shut down-------
Oct 10 23:37:24 np0005480824 systemd[1]: libpod-8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8.scope: Deactivated successfully.
Oct 10 23:37:24 np0005480824 podman[241054]: 2025-10-11 03:37:24.780762987 +0000 UTC m=+0.091049742 container died 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:37:24 np0005480824 systemd[1]: 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8-23c203ccd824abcd.timer: Deactivated successfully.
Oct 10 23:37:24 np0005480824 systemd[1]: Stopped /usr/bin/podman healthcheck run 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8.
Oct 10 23:37:24 np0005480824 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8-userdata-shm.mount: Deactivated successfully.
Oct 10 23:37:24 np0005480824 systemd[1]: var-lib-containers-storage-overlay-daf1984d7f448edcb8e44e3fc0fa0ef8458bf44e8ae8bf80b781f24b3a332d6b-merged.mount: Deactivated successfully.
Oct 10 23:37:24 np0005480824 podman[241054]: 2025-10-11 03:37:24.916413736 +0000 UTC m=+0.226700501 container cleanup 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=multipathd)
Oct 10 23:37:24 np0005480824 podman[241054]: multipathd
Oct 10 23:37:24 np0005480824 podman[241083]: multipathd
Oct 10 23:37:24 np0005480824 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Oct 10 23:37:24 np0005480824 systemd[1]: Stopped multipathd container.
Oct 10 23:37:25 np0005480824 systemd[1]: Starting multipathd container...
Oct 10 23:37:25 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:37:25 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daf1984d7f448edcb8e44e3fc0fa0ef8458bf44e8ae8bf80b781f24b3a332d6b/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 10 23:37:25 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daf1984d7f448edcb8e44e3fc0fa0ef8458bf44e8ae8bf80b781f24b3a332d6b/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 10 23:37:25 np0005480824 systemd[1]: Started /usr/bin/podman healthcheck run 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8.
Oct 10 23:37:25 np0005480824 podman[241096]: 2025-10-11 03:37:25.14332905 +0000 UTC m=+0.126193637 container init 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 10 23:37:25 np0005480824 multipathd[241112]: + sudo -E kolla_set_configs
Oct 10 23:37:25 np0005480824 podman[241096]: 2025-10-11 03:37:25.178413675 +0000 UTC m=+0.161278222 container start 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 10 23:37:25 np0005480824 podman[241096]: multipathd
Oct 10 23:37:25 np0005480824 systemd[1]: Started multipathd container.
Oct 10 23:37:25 np0005480824 multipathd[241112]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 10 23:37:25 np0005480824 multipathd[241112]: INFO:__main__:Validating config file
Oct 10 23:37:25 np0005480824 multipathd[241112]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 10 23:37:25 np0005480824 multipathd[241112]: INFO:__main__:Writing out command to execute
Oct 10 23:37:25 np0005480824 multipathd[241112]: ++ cat /run_command
Oct 10 23:37:25 np0005480824 multipathd[241112]: + CMD='/usr/sbin/multipathd -d'
Oct 10 23:37:25 np0005480824 multipathd[241112]: + ARGS=
Oct 10 23:37:25 np0005480824 multipathd[241112]: + sudo kolla_copy_cacerts
Oct 10 23:37:25 np0005480824 podman[241119]: 2025-10-11 03:37:25.265528233 +0000 UTC m=+0.078022946 container health_status 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 10 23:37:25 np0005480824 multipathd[241112]: + [[ ! -n '' ]]
Oct 10 23:37:25 np0005480824 multipathd[241112]: + . kolla_extend_start
Oct 10 23:37:25 np0005480824 multipathd[241112]: Running command: '/usr/sbin/multipathd -d'
Oct 10 23:37:25 np0005480824 systemd[1]: 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8-5d762caefe58de62.service: Main process exited, code=exited, status=1/FAILURE
Oct 10 23:37:25 np0005480824 multipathd[241112]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct 10 23:37:25 np0005480824 multipathd[241112]: + umask 0022
Oct 10 23:37:25 np0005480824 multipathd[241112]: + exec /usr/sbin/multipathd -d
Oct 10 23:37:25 np0005480824 systemd[1]: 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8-5d762caefe58de62.service: Failed with result 'exit-code'.
Oct 10 23:37:25 np0005480824 multipathd[241112]: 3313.000685 | --------start up--------
Oct 10 23:37:25 np0005480824 multipathd[241112]: 3313.000704 | read /etc/multipath.conf
Oct 10 23:37:25 np0005480824 multipathd[241112]: 3313.007056 | path checkers start up
Oct 10 23:37:25 np0005480824 python3.9[241303]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:37:26 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v653: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:37:26 np0005480824 python3.9[241455]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 10 23:37:27 np0005480824 python3.9[241607]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Oct 10 23:37:27 np0005480824 kernel: Key type psk registered
Oct 10 23:37:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Optimize plan auto_2025-10-11_03:37:27
Oct 10 23:37:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 23:37:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] do_upmap
Oct 10 23:37:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] pools ['default.rgw.log', 'vms', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.meta', 'volumes', '.mgr', 'images', 'backups', 'cephfs.cephfs.meta']
Oct 10 23:37:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] prepared 0/10 changes
Oct 10 23:37:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:37:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:37:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:37:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:37:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:37:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:37:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 23:37:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:37:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 23:37:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:37:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:37:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:37:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:37:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:37:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:37:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:37:28 np0005480824 python3.9[241768]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:37:28 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v654: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:37:28 np0005480824 python3.9[241891]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760153847.6897795-918-276805669227360/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:37:29 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:37:29 np0005480824 python3.9[242043]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:37:30 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v655: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:37:30 np0005480824 python3.9[242195]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 10 23:37:30 np0005480824 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 10 23:37:30 np0005480824 systemd[1]: Stopped Load Kernel Modules.
Oct 10 23:37:30 np0005480824 systemd[1]: Stopping Load Kernel Modules...
Oct 10 23:37:30 np0005480824 systemd[1]: Starting Load Kernel Modules...
Oct 10 23:37:30 np0005480824 systemd[1]: Finished Load Kernel Modules.
Oct 10 23:37:31 np0005480824 podman[242299]: 2025-10-11 03:37:31.015069852 +0000 UTC m=+0.070204973 container health_status f2b19cad22d0fbdd185b264c3c8b6443d09a02319dc9fb0585dc69a0f24a758d (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251009)
Oct 10 23:37:31 np0005480824 python3.9[242371]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 23:37:32 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v656: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:37:32 np0005480824 python3.9[242455]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 23:37:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:37:34 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v657: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:37:36 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v658: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:37:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 23:37:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:37:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 23:37:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:37:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:37:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:37:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:37:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:37:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:37:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:37:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:37:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:37:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 23:37:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:37:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:37:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:37:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 10 23:37:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:37:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 23:37:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:37:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:37:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:37:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 10 23:37:38 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v659: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:37:38 np0005480824 systemd[1]: Reloading.
Oct 10 23:37:38 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:37:38 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:37:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:37:39 np0005480824 systemd[1]: Reloading.
Oct 10 23:37:39 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:37:39 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:37:39 np0005480824 systemd-logind[782]: Watching system buttons on /dev/input/event0 (Power Button)
Oct 10 23:37:39 np0005480824 systemd-logind[782]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct 10 23:37:39 np0005480824 lvm[242570]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct 10 23:37:39 np0005480824 lvm[242570]: VG ceph_vg2 finished
Oct 10 23:37:39 np0005480824 lvm[242569]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct 10 23:37:39 np0005480824 lvm[242569]: VG ceph_vg1 finished
Oct 10 23:37:39 np0005480824 lvm[242572]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 23:37:39 np0005480824 lvm[242572]: VG ceph_vg0 finished
Oct 10 23:37:39 np0005480824 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 10 23:37:39 np0005480824 systemd[1]: Starting man-db-cache-update.service...
Oct 10 23:37:39 np0005480824 systemd[1]: Reloading.
Oct 10 23:37:39 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:37:39 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:37:40 np0005480824 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 10 23:37:40 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v660: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:37:40 np0005480824 podman[242779]: 2025-10-11 03:37:40.287184645 +0000 UTC m=+0.115427735 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 10 23:37:41 np0005480824 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 10 23:37:41 np0005480824 systemd[1]: Finished man-db-cache-update.service.
Oct 10 23:37:41 np0005480824 systemd[1]: man-db-cache-update.service: Consumed 1.650s CPU time.
Oct 10 23:37:41 np0005480824 systemd[1]: run-rbabdbc761b5e42abaf7bc459dd166d4f.service: Deactivated successfully.
Oct 10 23:37:41 np0005480824 python3.9[243940]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.iscsid_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:37:42 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v661: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:37:42 np0005480824 python3.9[244091]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 23:37:43 np0005480824 python3.9[244247]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:37:44 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:37:44 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v662: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:37:44 np0005480824 python3.9[244399]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 10 23:37:44 np0005480824 systemd[1]: Reloading.
Oct 10 23:37:45 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:37:45 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:37:46 np0005480824 python3.9[244583]: ansible-ansible.builtin.service_facts Invoked
Oct 10 23:37:46 np0005480824 network[244600]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 10 23:37:46 np0005480824 network[244601]: 'network-scripts' will be removed from distribution in near future.
Oct 10 23:37:46 np0005480824 network[244602]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 10 23:37:46 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v663: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:37:46 np0005480824 podman[244607]: 2025-10-11 03:37:46.292467747 +0000 UTC m=+0.073714134 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 10 23:37:48 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v664: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:37:49 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:37:50 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v665: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:37:52 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v666: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:37:52 np0005480824 python3.9[244901]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 23:37:53 np0005480824 python3.9[245054]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 23:37:54 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:37:54 np0005480824 python3.9[245207]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 23:37:54 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v667: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:37:55 np0005480824 python3.9[245360]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 23:37:55 np0005480824 podman[245508]: 2025-10-11 03:37:55.776780108 +0000 UTC m=+0.136765436 container health_status 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2)
Oct 10 23:37:56 np0005480824 python3.9[245580]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 23:37:56 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v668: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:37:56 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:37:56 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:37:56 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 10 23:37:56 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:37:56 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 10 23:37:56 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:37:56 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 02c9bd6e-cb07-422a-8541-0fd2d0e45d3c does not exist
Oct 10 23:37:56 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev ee0350e7-b62b-44cb-9366-895f06eb7d4c does not exist
Oct 10 23:37:56 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev fd75ed83-96c4-4f86-b9cb-b22da775b82a does not exist
Oct 10 23:37:56 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 10 23:37:56 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 23:37:56 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 10 23:37:56 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:37:56 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:37:56 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:37:56 np0005480824 python3.9[245829]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 23:37:57 np0005480824 podman[245960]: 2025-10-11 03:37:57.161775058 +0000 UTC m=+0.044841065 container create f4d147ae98c404813773d18644fb8b574b318c4d7ec9783918fdc4dc845d6dda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 10 23:37:57 np0005480824 systemd[1]: Started libpod-conmon-f4d147ae98c404813773d18644fb8b574b318c4d7ec9783918fdc4dc845d6dda.scope.
Oct 10 23:37:57 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:37:57 np0005480824 podman[245960]: 2025-10-11 03:37:57.143857827 +0000 UTC m=+0.026923854 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:37:57 np0005480824 podman[245960]: 2025-10-11 03:37:57.25967265 +0000 UTC m=+0.142738757 container init f4d147ae98c404813773d18644fb8b574b318c4d7ec9783918fdc4dc845d6dda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_curran, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 10 23:37:57 np0005480824 podman[245960]: 2025-10-11 03:37:57.272419479 +0000 UTC m=+0.155485526 container start f4d147ae98c404813773d18644fb8b574b318c4d7ec9783918fdc4dc845d6dda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_curran, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:37:57 np0005480824 podman[245960]: 2025-10-11 03:37:57.278004111 +0000 UTC m=+0.161070318 container attach f4d147ae98c404813773d18644fb8b574b318c4d7ec9783918fdc4dc845d6dda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_curran, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 10 23:37:57 np0005480824 beautiful_curran[245976]: 167 167
Oct 10 23:37:57 np0005480824 systemd[1]: libpod-f4d147ae98c404813773d18644fb8b574b318c4d7ec9783918fdc4dc845d6dda.scope: Deactivated successfully.
Oct 10 23:37:57 np0005480824 podman[245960]: 2025-10-11 03:37:57.280163071 +0000 UTC m=+0.163229108 container died f4d147ae98c404813773d18644fb8b574b318c4d7ec9783918fdc4dc845d6dda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_curran, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 10 23:37:57 np0005480824 systemd[1]: var-lib-containers-storage-overlay-d03715aa559f795143d422973bce20f05c19295287625758da5ec60ea05b6765-merged.mount: Deactivated successfully.
Oct 10 23:37:57 np0005480824 podman[245960]: 2025-10-11 03:37:57.328224212 +0000 UTC m=+0.211290229 container remove f4d147ae98c404813773d18644fb8b574b318c4d7ec9783918fdc4dc845d6dda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 10 23:37:57 np0005480824 systemd[1]: libpod-conmon-f4d147ae98c404813773d18644fb8b574b318c4d7ec9783918fdc4dc845d6dda.scope: Deactivated successfully.
Oct 10 23:37:57 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:37:57 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:37:57 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:37:57 np0005480824 podman[246000]: 2025-10-11 03:37:57.562706124 +0000 UTC m=+0.057897542 container create 15bc856294328389f321427eb11d4e214cee771a64aad00662e25c202b70e043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:37:57 np0005480824 systemd[1]: Started libpod-conmon-15bc856294328389f321427eb11d4e214cee771a64aad00662e25c202b70e043.scope.
Oct 10 23:37:57 np0005480824 podman[246000]: 2025-10-11 03:37:57.534536732 +0000 UTC m=+0.029728210 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:37:57 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:37:57 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5fc56a35109b61b35f090a37872269b5a5ee062cec3da95f235e4ab0214428d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:37:57 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5fc56a35109b61b35f090a37872269b5a5ee062cec3da95f235e4ab0214428d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:37:57 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5fc56a35109b61b35f090a37872269b5a5ee062cec3da95f235e4ab0214428d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:37:57 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5fc56a35109b61b35f090a37872269b5a5ee062cec3da95f235e4ab0214428d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:37:57 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5fc56a35109b61b35f090a37872269b5a5ee062cec3da95f235e4ab0214428d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 23:37:57 np0005480824 podman[246000]: 2025-10-11 03:37:57.693650743 +0000 UTC m=+0.188842181 container init 15bc856294328389f321427eb11d4e214cee771a64aad00662e25c202b70e043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 10 23:37:57 np0005480824 podman[246000]: 2025-10-11 03:37:57.706095655 +0000 UTC m=+0.201287063 container start 15bc856294328389f321427eb11d4e214cee771a64aad00662e25c202b70e043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_varahamihira, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef)
Oct 10 23:37:57 np0005480824 podman[246000]: 2025-10-11 03:37:57.710769485 +0000 UTC m=+0.205960903 container attach 15bc856294328389f321427eb11d4e214cee771a64aad00662e25c202b70e043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_varahamihira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:37:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:37:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:37:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:37:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:37:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:37:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:37:58 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v669: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:37:58 np0005480824 python3.9[246173]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 23:37:58 np0005480824 wonderful_varahamihira[246017]: --> passed data devices: 0 physical, 3 LVM
Oct 10 23:37:58 np0005480824 wonderful_varahamihira[246017]: --> relative data size: 1.0
Oct 10 23:37:58 np0005480824 wonderful_varahamihira[246017]: --> All data devices are unavailable
Oct 10 23:37:58 np0005480824 systemd[1]: libpod-15bc856294328389f321427eb11d4e214cee771a64aad00662e25c202b70e043.scope: Deactivated successfully.
Oct 10 23:37:58 np0005480824 systemd[1]: libpod-15bc856294328389f321427eb11d4e214cee771a64aad00662e25c202b70e043.scope: Consumed 1.078s CPU time.
Oct 10 23:37:58 np0005480824 podman[246000]: 2025-10-11 03:37:58.855504577 +0000 UTC m=+1.350695965 container died 15bc856294328389f321427eb11d4e214cee771a64aad00662e25c202b70e043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_varahamihira, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 10 23:37:58 np0005480824 systemd[1]: var-lib-containers-storage-overlay-c5fc56a35109b61b35f090a37872269b5a5ee062cec3da95f235e4ab0214428d-merged.mount: Deactivated successfully.
Oct 10 23:37:58 np0005480824 podman[246000]: 2025-10-11 03:37:58.925873351 +0000 UTC m=+1.421064739 container remove 15bc856294328389f321427eb11d4e214cee771a64aad00662e25c202b70e043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_varahamihira, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True)
Oct 10 23:37:58 np0005480824 systemd[1]: libpod-conmon-15bc856294328389f321427eb11d4e214cee771a64aad00662e25c202b70e043.scope: Deactivated successfully.
Oct 10 23:37:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:37:59 np0005480824 podman[246505]: 2025-10-11 03:37:59.686166735 +0000 UTC m=+0.045246304 container create 0e0d73ce44b05e155a6af2f2ab1a1e5641ca74c23970eca2d8a2cb8fd21cd4fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_gates, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:37:59 np0005480824 systemd[1]: Started libpod-conmon-0e0d73ce44b05e155a6af2f2ab1a1e5641ca74c23970eca2d8a2cb8fd21cd4fb.scope.
Oct 10 23:37:59 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:37:59 np0005480824 python3.9[246465]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 23:37:59 np0005480824 podman[246505]: 2025-10-11 03:37:59.751458581 +0000 UTC m=+0.110538220 container init 0e0d73ce44b05e155a6af2f2ab1a1e5641ca74c23970eca2d8a2cb8fd21cd4fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_gates, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 10 23:37:59 np0005480824 podman[246505]: 2025-10-11 03:37:59.763615616 +0000 UTC m=+0.122695215 container start 0e0d73ce44b05e155a6af2f2ab1a1e5641ca74c23970eca2d8a2cb8fd21cd4fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 10 23:37:59 np0005480824 podman[246505]: 2025-10-11 03:37:59.668034619 +0000 UTC m=+0.027114218 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:37:59 np0005480824 podman[246505]: 2025-10-11 03:37:59.767407685 +0000 UTC m=+0.126487294 container attach 0e0d73ce44b05e155a6af2f2ab1a1e5641ca74c23970eca2d8a2cb8fd21cd4fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_gates, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 10 23:37:59 np0005480824 upbeat_gates[246521]: 167 167
Oct 10 23:37:59 np0005480824 systemd[1]: libpod-0e0d73ce44b05e155a6af2f2ab1a1e5641ca74c23970eca2d8a2cb8fd21cd4fb.scope: Deactivated successfully.
Oct 10 23:37:59 np0005480824 podman[246505]: 2025-10-11 03:37:59.771481721 +0000 UTC m=+0.130561280 container died 0e0d73ce44b05e155a6af2f2ab1a1e5641ca74c23970eca2d8a2cb8fd21cd4fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_gates, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:37:59 np0005480824 systemd[1]: var-lib-containers-storage-overlay-8c776993087325b1c98928f50e99244a2f5e4d1ac0708146eb58e3f2136c7750-merged.mount: Deactivated successfully.
Oct 10 23:37:59 np0005480824 podman[246505]: 2025-10-11 03:37:59.805307146 +0000 UTC m=+0.164386705 container remove 0e0d73ce44b05e155a6af2f2ab1a1e5641ca74c23970eca2d8a2cb8fd21cd4fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_gates, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Oct 10 23:37:59 np0005480824 systemd[1]: libpod-conmon-0e0d73ce44b05e155a6af2f2ab1a1e5641ca74c23970eca2d8a2cb8fd21cd4fb.scope: Deactivated successfully.
Oct 10 23:38:00 np0005480824 podman[246569]: 2025-10-11 03:38:00.026463005 +0000 UTC m=+0.073482219 container create 25a3bfc66f15500f6d8c970b71c35a4904e0539e3152f0ccfebf6f4e27dd2b66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mendel, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 10 23:38:00 np0005480824 systemd[1]: Started libpod-conmon-25a3bfc66f15500f6d8c970b71c35a4904e0539e3152f0ccfebf6f4e27dd2b66.scope.
Oct 10 23:38:00 np0005480824 podman[246569]: 2025-10-11 03:37:59.997528835 +0000 UTC m=+0.044548109 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:38:00 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:38:00 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c19408e26d1f32df0f14399858c6166e92c2a4b52bfce9730dd2034eb5ad32f5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:38:00 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c19408e26d1f32df0f14399858c6166e92c2a4b52bfce9730dd2034eb5ad32f5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:38:00 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c19408e26d1f32df0f14399858c6166e92c2a4b52bfce9730dd2034eb5ad32f5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:38:00 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c19408e26d1f32df0f14399858c6166e92c2a4b52bfce9730dd2034eb5ad32f5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:38:00 np0005480824 podman[246569]: 2025-10-11 03:38:00.158224623 +0000 UTC m=+0.205243817 container init 25a3bfc66f15500f6d8c970b71c35a4904e0539e3152f0ccfebf6f4e27dd2b66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:38:00 np0005480824 podman[246569]: 2025-10-11 03:38:00.169756794 +0000 UTC m=+0.216775998 container start 25a3bfc66f15500f6d8c970b71c35a4904e0539e3152f0ccfebf6f4e27dd2b66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mendel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:38:00 np0005480824 podman[246569]: 2025-10-11 03:38:00.175892349 +0000 UTC m=+0.222911563 container attach 25a3bfc66f15500f6d8c970b71c35a4904e0539e3152f0ccfebf6f4e27dd2b66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mendel, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:38:00 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v670: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:38:00 np0005480824 python3.9[246717]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]: {
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:    "0": [
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:        {
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:            "devices": [
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:                "/dev/loop3"
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:            ],
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:            "lv_name": "ceph_lv0",
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:            "lv_size": "21470642176",
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0d82ce-20ea-470d-959e-f67202028a60,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:            "lv_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:            "name": "ceph_lv0",
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:            "tags": {
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:                "ceph.block_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:                "ceph.cluster_name": "ceph",
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:                "ceph.crush_device_class": "",
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:                "ceph.encrypted": "0",
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:                "ceph.osd_fsid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:                "ceph.osd_id": "0",
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:                "ceph.type": "block",
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:                "ceph.vdo": "0"
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:            },
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:            "type": "block",
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:            "vg_name": "ceph_vg0"
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:        }
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:    ],
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:    "1": [
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:        {
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:            "devices": [
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:                "/dev/loop4"
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:            ],
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:            "lv_name": "ceph_lv1",
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:            "lv_size": "21470642176",
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6875119e-c210-4ad1-aca9-6a8084a5ecc8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:            "lv_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:            "name": "ceph_lv1",
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:            "tags": {
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:                "ceph.block_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:                "ceph.cluster_name": "ceph",
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:                "ceph.crush_device_class": "",
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:                "ceph.encrypted": "0",
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:                "ceph.osd_fsid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:                "ceph.osd_id": "1",
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:                "ceph.type": "block",
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:                "ceph.vdo": "0"
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:            },
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:            "type": "block",
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:            "vg_name": "ceph_vg1"
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:        }
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:    ],
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:    "2": [
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:        {
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:            "devices": [
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:                "/dev/loop5"
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:            ],
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:            "lv_name": "ceph_lv2",
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:            "lv_size": "21470642176",
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e86945e8-6909-4584-9098-cee0dfe9add4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:            "lv_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:            "name": "ceph_lv2",
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:            "tags": {
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:                "ceph.block_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:                "ceph.cluster_name": "ceph",
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:                "ceph.crush_device_class": "",
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:                "ceph.encrypted": "0",
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:                "ceph.osd_fsid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:                "ceph.osd_id": "2",
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:                "ceph.type": "block",
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:                "ceph.vdo": "0"
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:            },
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:            "type": "block",
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:            "vg_name": "ceph_vg2"
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:        }
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]:    ]
Oct 10 23:38:00 np0005480824 dreamy_mendel[246608]: }
Oct 10 23:38:00 np0005480824 systemd[1]: libpod-25a3bfc66f15500f6d8c970b71c35a4904e0539e3152f0ccfebf6f4e27dd2b66.scope: Deactivated successfully.
Oct 10 23:38:00 np0005480824 podman[246569]: 2025-10-11 03:38:00.968874801 +0000 UTC m=+1.015894015 container died 25a3bfc66f15500f6d8c970b71c35a4904e0539e3152f0ccfebf6f4e27dd2b66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:38:01 np0005480824 systemd[1]: var-lib-containers-storage-overlay-c19408e26d1f32df0f14399858c6166e92c2a4b52bfce9730dd2034eb5ad32f5-merged.mount: Deactivated successfully.
Oct 10 23:38:01 np0005480824 podman[246569]: 2025-10-11 03:38:01.040140196 +0000 UTC m=+1.087159380 container remove 25a3bfc66f15500f6d8c970b71c35a4904e0539e3152f0ccfebf6f4e27dd2b66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mendel, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Oct 10 23:38:01 np0005480824 systemd[1]: libpod-conmon-25a3bfc66f15500f6d8c970b71c35a4904e0539e3152f0ccfebf6f4e27dd2b66.scope: Deactivated successfully.
Oct 10 23:38:01 np0005480824 podman[246880]: 2025-10-11 03:38:01.172388015 +0000 UTC m=+0.081488846 container health_status f2b19cad22d0fbdd185b264c3c8b6443d09a02319dc9fb0585dc69a0f24a758d (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=iscsid, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 10 23:38:01 np0005480824 python3.9[246897]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:38:01 np0005480824 podman[247186]: 2025-10-11 03:38:01.830412355 +0000 UTC m=+0.043319760 container create bffffc6b405c9abb06cfd795a9ba3a0132c0aa7a2661aeebcc15e87bd0cdc10f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_williamson, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True)
Oct 10 23:38:01 np0005480824 systemd[1]: Started libpod-conmon-bffffc6b405c9abb06cfd795a9ba3a0132c0aa7a2661aeebcc15e87bd0cdc10f.scope.
Oct 10 23:38:01 np0005480824 podman[247186]: 2025-10-11 03:38:01.81401283 +0000 UTC m=+0.026920245 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:38:01 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:38:01 np0005480824 podman[247186]: 2025-10-11 03:38:01.940930483 +0000 UTC m=+0.153837898 container init bffffc6b405c9abb06cfd795a9ba3a0132c0aa7a2661aeebcc15e87bd0cdc10f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 10 23:38:01 np0005480824 podman[247186]: 2025-10-11 03:38:01.951525883 +0000 UTC m=+0.164433318 container start bffffc6b405c9abb06cfd795a9ba3a0132c0aa7a2661aeebcc15e87bd0cdc10f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_williamson, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 10 23:38:01 np0005480824 podman[247186]: 2025-10-11 03:38:01.955947536 +0000 UTC m=+0.168854941 container attach bffffc6b405c9abb06cfd795a9ba3a0132c0aa7a2661aeebcc15e87bd0cdc10f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 10 23:38:01 np0005480824 quizzical_williamson[247214]: 167 167
Oct 10 23:38:01 np0005480824 podman[247186]: 2025-10-11 03:38:01.960878963 +0000 UTC m=+0.173786398 container died bffffc6b405c9abb06cfd795a9ba3a0132c0aa7a2661aeebcc15e87bd0cdc10f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 10 23:38:01 np0005480824 systemd[1]: libpod-bffffc6b405c9abb06cfd795a9ba3a0132c0aa7a2661aeebcc15e87bd0cdc10f.scope: Deactivated successfully.
Oct 10 23:38:01 np0005480824 systemd[1]: var-lib-containers-storage-overlay-040476d00181300b0d1fa586e2a5a89f1b1a4212f4728b6de6afff33f23967fe-merged.mount: Deactivated successfully.
Oct 10 23:38:02 np0005480824 podman[247186]: 2025-10-11 03:38:02.016203003 +0000 UTC m=+0.229110438 container remove bffffc6b405c9abb06cfd795a9ba3a0132c0aa7a2661aeebcc15e87bd0cdc10f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_williamson, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:38:02 np0005480824 systemd[1]: libpod-conmon-bffffc6b405c9abb06cfd795a9ba3a0132c0aa7a2661aeebcc15e87bd0cdc10f.scope: Deactivated successfully.
Oct 10 23:38:02 np0005480824 python3.9[247211]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:38:02 np0005480824 podman[247263]: 2025-10-11 03:38:02.196239366 +0000 UTC m=+0.054783270 container create dbec0feaa29e8154e20c97581459f5c307da20a0a161ee7a66c330be2123ac2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_banach, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:38:02 np0005480824 systemd[1]: Started libpod-conmon-dbec0feaa29e8154e20c97581459f5c307da20a0a161ee7a66c330be2123ac2e.scope.
Oct 10 23:38:02 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v671: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:38:02 np0005480824 podman[247263]: 2025-10-11 03:38:02.169055376 +0000 UTC m=+0.027599370 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:38:02 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:38:02 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65f67a9c36d7acbcab87e99bf7885747aac37ab3e33be4566028bba2e10efdf1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:38:02 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65f67a9c36d7acbcab87e99bf7885747aac37ab3e33be4566028bba2e10efdf1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:38:02 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65f67a9c36d7acbcab87e99bf7885747aac37ab3e33be4566028bba2e10efdf1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:38:02 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65f67a9c36d7acbcab87e99bf7885747aac37ab3e33be4566028bba2e10efdf1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:38:02 np0005480824 podman[247263]: 2025-10-11 03:38:02.288811332 +0000 UTC m=+0.147355256 container init dbec0feaa29e8154e20c97581459f5c307da20a0a161ee7a66c330be2123ac2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 10 23:38:02 np0005480824 podman[247263]: 2025-10-11 03:38:02.302323559 +0000 UTC m=+0.160867473 container start dbec0feaa29e8154e20c97581459f5c307da20a0a161ee7a66c330be2123ac2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_banach, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 10 23:38:02 np0005480824 podman[247263]: 2025-10-11 03:38:02.306050547 +0000 UTC m=+0.164594471 container attach dbec0feaa29e8154e20c97581459f5c307da20a0a161ee7a66c330be2123ac2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:38:02 np0005480824 python3.9[247411]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:38:03 np0005480824 python3.9[247574]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:38:03 np0005480824 suspicious_banach[247319]: {
Oct 10 23:38:03 np0005480824 suspicious_banach[247319]:    "1d0d82ce-20ea-470d-959e-f67202028a60": {
Oct 10 23:38:03 np0005480824 suspicious_banach[247319]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:38:03 np0005480824 suspicious_banach[247319]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 10 23:38:03 np0005480824 suspicious_banach[247319]:        "osd_id": 0,
Oct 10 23:38:03 np0005480824 suspicious_banach[247319]:        "osd_uuid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:38:03 np0005480824 suspicious_banach[247319]:        "type": "bluestore"
Oct 10 23:38:03 np0005480824 suspicious_banach[247319]:    },
Oct 10 23:38:03 np0005480824 suspicious_banach[247319]:    "6875119e-c210-4ad1-aca9-6a8084a5ecc8": {
Oct 10 23:38:03 np0005480824 suspicious_banach[247319]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:38:03 np0005480824 suspicious_banach[247319]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 10 23:38:03 np0005480824 suspicious_banach[247319]:        "osd_id": 1,
Oct 10 23:38:03 np0005480824 suspicious_banach[247319]:        "osd_uuid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:38:03 np0005480824 suspicious_banach[247319]:        "type": "bluestore"
Oct 10 23:38:03 np0005480824 suspicious_banach[247319]:    },
Oct 10 23:38:03 np0005480824 suspicious_banach[247319]:    "e86945e8-6909-4584-9098-cee0dfe9add4": {
Oct 10 23:38:03 np0005480824 suspicious_banach[247319]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:38:03 np0005480824 suspicious_banach[247319]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 10 23:38:03 np0005480824 suspicious_banach[247319]:        "osd_id": 2,
Oct 10 23:38:03 np0005480824 suspicious_banach[247319]:        "osd_uuid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:38:03 np0005480824 suspicious_banach[247319]:        "type": "bluestore"
Oct 10 23:38:03 np0005480824 suspicious_banach[247319]:    }
Oct 10 23:38:03 np0005480824 suspicious_banach[247319]: }
Oct 10 23:38:03 np0005480824 systemd[1]: libpod-dbec0feaa29e8154e20c97581459f5c307da20a0a161ee7a66c330be2123ac2e.scope: Deactivated successfully.
Oct 10 23:38:03 np0005480824 systemd[1]: libpod-dbec0feaa29e8154e20c97581459f5c307da20a0a161ee7a66c330be2123ac2e.scope: Consumed 1.031s CPU time.
Oct 10 23:38:03 np0005480824 podman[247263]: 2025-10-11 03:38:03.374806933 +0000 UTC m=+1.233350897 container died dbec0feaa29e8154e20c97581459f5c307da20a0a161ee7a66c330be2123ac2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_banach, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:38:03 np0005480824 systemd[1]: var-lib-containers-storage-overlay-65f67a9c36d7acbcab87e99bf7885747aac37ab3e33be4566028bba2e10efdf1-merged.mount: Deactivated successfully.
Oct 10 23:38:03 np0005480824 podman[247263]: 2025-10-11 03:38:03.448191358 +0000 UTC m=+1.306735262 container remove dbec0feaa29e8154e20c97581459f5c307da20a0a161ee7a66c330be2123ac2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_banach, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:38:03 np0005480824 systemd[1]: libpod-conmon-dbec0feaa29e8154e20c97581459f5c307da20a0a161ee7a66c330be2123ac2e.scope: Deactivated successfully.
Oct 10 23:38:03 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 10 23:38:03 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:38:03 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 10 23:38:03 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:38:03 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 81e9eb31-11e6-41f5-96b2-ce21be774116 does not exist
Oct 10 23:38:03 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 4ae47749-58a1-43f6-b674-913613cff2b2 does not exist
Oct 10 23:38:04 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:38:04 np0005480824 python3.9[247805]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:38:04 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v672: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:38:04 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:38:04 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:38:04 np0005480824 python3.9[247957]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:38:05 np0005480824 python3.9[248109]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:38:06 np0005480824 python3.9[248261]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:38:06 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v673: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:38:06 np0005480824 python3.9[248413]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:38:07 np0005480824 python3.9[248565]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:38:08 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v674: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:38:08 np0005480824 python3.9[248717]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:38:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:38:09 np0005480824 python3.9[248869]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:38:09 np0005480824 python3.9[249021]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:38:10 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v675: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:38:10 np0005480824 podman[249173]: 2025-10-11 03:38:10.453356095 +0000 UTC m=+0.095117687 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2)
Oct 10 23:38:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:38:10.474 162245 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:38:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:38:10.475 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:38:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:38:10.475 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:38:10 np0005480824 python3.9[249174]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:38:11 np0005480824 python3.9[249348]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:38:12 np0005480824 python3.9[249500]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:38:12 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v676: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:38:12 np0005480824 python3.9[249652]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 10 23:38:13 np0005480824 python3.9[249804]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 10 23:38:13 np0005480824 systemd[1]: Reloading.
Oct 10 23:38:14 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:38:14 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:38:14 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:38:14 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v677: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:38:15 np0005480824 python3.9[249991]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:38:15 np0005480824 python3.9[250144]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:38:16 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v678: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:38:16 np0005480824 python3.9[250297]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:38:17 np0005480824 podman[250450]: 2025-10-11 03:38:17.188689499 +0000 UTC m=+0.054291907 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 10 23:38:17 np0005480824 python3.9[250451]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:38:17 np0005480824 python3.9[250622]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:38:18 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v679: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:38:18 np0005480824 python3.9[250775]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:38:19 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:38:19 np0005480824 python3.9[250928]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:38:19 np0005480824 python3.9[251081]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 23:38:20 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v680: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:38:21 np0005480824 python3.9[251234]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:38:21 np0005480824 python3.9[251386]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:38:22 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v681: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:38:22 np0005480824 python3.9[251538]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:38:23 np0005480824 python3.9[251690]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:38:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:38:24 np0005480824 python3.9[251842]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:38:24 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v682: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:38:24 np0005480824 python3.9[251994]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:38:25 np0005480824 python3.9[252148]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:38:26 np0005480824 podman[252248]: 2025-10-11 03:38:26.027164378 +0000 UTC m=+0.075720141 container health_status 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.schema-version=1.0)
Oct 10 23:38:26 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v683: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:38:26 np0005480824 python3.9[252323]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:38:26 np0005480824 python3.9[252475]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:38:27 np0005480824 python3.9[252627]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:38:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Optimize plan auto_2025-10-11_03:38:27
Oct 10 23:38:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 23:38:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] do_upmap
Oct 10 23:38:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] pools ['vms', 'backups', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.control', '.mgr', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.data', 'images', 'volumes']
Oct 10 23:38:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] prepared 0/10 changes
Oct 10 23:38:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:38:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:38:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:38:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:38:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:38:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:38:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 23:38:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:38:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 23:38:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:38:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:38:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:38:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:38:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:38:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:38:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:38:28 np0005480824 python3.9[252779]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:38:28 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v684: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:38:28 np0005480824 python3.9[252932]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:38:29 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:38:30 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v685: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:38:32 np0005480824 podman[252957]: 2025-10-11 03:38:32.011335843 +0000 UTC m=+0.066303739 container health_status f2b19cad22d0fbdd185b264c3c8b6443d09a02319dc9fb0585dc69a0f24a758d (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=iscsid)
Oct 10 23:38:32 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v686: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:38:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:38:34 np0005480824 python3.9[253106]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Oct 10 23:38:34 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v687: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:38:35 np0005480824 python3.9[253259]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 10 23:38:36 np0005480824 python3.9[253419]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 10 23:38:36 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v688: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:38:36 np0005480824 systemd-logind[782]: New session 52 of user zuul.
Oct 10 23:38:36 np0005480824 systemd[1]: Started Session 52 of User zuul.
Oct 10 23:38:37 np0005480824 systemd[1]: session-52.scope: Deactivated successfully.
Oct 10 23:38:37 np0005480824 systemd-logind[782]: Session 52 logged out. Waiting for processes to exit.
Oct 10 23:38:37 np0005480824 systemd-logind[782]: Removed session 52.
Oct 10 23:38:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 23:38:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:38:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 23:38:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:38:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:38:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:38:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:38:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:38:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:38:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:38:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:38:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:38:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 23:38:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:38:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:38:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:38:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 10 23:38:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:38:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 23:38:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:38:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:38:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:38:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 10 23:38:37 np0005480824 python3.9[253606]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:38:38 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v689: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:38:38 np0005480824 ceph-mon[74326]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 10 23:38:38 np0005480824 ceph-mon[74326]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 3286 writes, 14K keys, 3286 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 3286 writes, 3286 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1276 writes, 5548 keys, 1276 commit groups, 1.0 writes per commit group, ingest: 8.48 MB, 0.01 MB/s#012Interval WAL: 1276 writes, 1276 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    118.0      0.12              0.04         6    0.020       0      0       0.0       0.0#012  L6      1/0    7.15 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.4    159.1    132.4      0.26              0.14         5    0.051     19K   2180       0.0       0.0#012 Sum      1/0    7.15 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.4    108.8    127.8      0.38              0.18        11    0.034     19K   2180       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.6    131.4    134.8      0.20              0.10         6    0.033     12K   1448       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    159.1    132.4      0.26              0.14         5    0.051     19K   2180       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    122.0      0.11              0.04         5    0.023       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.2      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.014, interval 0.006#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.05 GB write, 0.04 MB/s write, 0.04 GB read, 0.03 MB/s read, 0.4 seconds#012Interval compaction: 0.03 GB write, 0.04 MB/s write, 0.03 GB read, 0.04 MB/s read, 0.2 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5617dbc851f0#2 capacity: 308.00 MB usage: 1.28 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 4.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(83,1.09 MB,0.353855%) FilterBlock(12,63.48 KB,0.0201287%) IndexBlock(12,126.39 KB,0.0400741%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct 10 23:38:38 np0005480824 python3.9[253728]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760153917.3228843-1555-276963221493127/.source.json follow=False _original_basename=config.json.j2 checksum=2c2474b5f24ef7c9ed37f49680082593e0d1100b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:38:39 np0005480824 python3.9[253878]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:38:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:38:39 np0005480824 python3.9[253954]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:38:40 np0005480824 python3.9[254104]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:38:40 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v690: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:38:40 np0005480824 podman[254200]: 2025-10-11 03:38:40.707529205 +0000 UTC m=+0.136050409 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible)
Oct 10 23:38:40 np0005480824 python3.9[254241]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760153919.6435153-1555-223685997619250/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:38:41 np0005480824 python3.9[254400]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:38:41 np0005480824 python3.9[254521]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760153920.9564693-1555-121821279530815/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:38:42 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v691: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:38:42 np0005480824 python3.9[254671]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:38:43 np0005480824 python3.9[254792]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760153922.1486657-1555-205460380356615/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:38:44 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:38:44 np0005480824 python3.9[254944]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:38:44 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v692: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:38:44 np0005480824 python3.9[255096]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:38:45 np0005480824 python3.9[255250]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 23:38:46 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v693: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:38:46 np0005480824 python3.9[255403]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:38:47 np0005480824 python3.9[255526]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1760153925.8853908-1648-107508739864532/.source _original_basename=.468qye2o follow=False checksum=d14f0c56f96bb170334a2db2cd77357190693b9f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Oct 10 23:38:47 np0005480824 podman[255652]: 2025-10-11 03:38:47.746849696 +0000 UTC m=+0.058950138 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, 
tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Oct 10 23:38:47 np0005480824 python3.9[255688]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 23:38:48 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v694: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:38:48 np0005480824 python3.9[255847]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:38:49 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:38:49 np0005480824 python3.9[255968]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760153928.1182616-1674-32017006487443/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=f022386746472553146d29f689b545df70fa8a60 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:38:49 np0005480824 python3.9[256118]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 23:38:50 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v695: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:38:50 np0005480824 python3.9[256239]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760153929.4359453-1689-189382081509948/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 23:38:51 np0005480824 python3.9[256392]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Oct 10 23:38:52 np0005480824 python3.9[256544]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 10 23:38:52 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v696: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:38:53 np0005480824 python3[256696]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Oct 10 23:38:54 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:38:54 np0005480824 ceph-mon[74326]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Oct 10 23:38:54 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:38:54.057894) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 23:38:54 np0005480824 ceph-mon[74326]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Oct 10 23:38:54 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760153934057970, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1526, "num_deletes": 505, "total_data_size": 1989922, "memory_usage": 2029368, "flush_reason": "Manual Compaction"}
Oct 10 23:38:54 np0005480824 ceph-mon[74326]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Oct 10 23:38:54 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760153934076501, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 1960249, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13478, "largest_seqno": 15003, "table_properties": {"data_size": 1953578, "index_size": 3362, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 16144, "raw_average_key_size": 18, "raw_value_size": 1938312, "raw_average_value_size": 2180, "num_data_blocks": 154, "num_entries": 889, "num_filter_entries": 889, "num_deletions": 505, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760153799, "oldest_key_time": 1760153799, "file_creation_time": 1760153934, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc2c00b6-74ab-4bd1-957a-6c6a75fb61ca", "db_session_id": "RJ9TM4FJNNQ2AWDFT4YB", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Oct 10 23:38:54 np0005480824 ceph-mon[74326]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 18731 microseconds, and 10175 cpu microseconds.
Oct 10 23:38:54 np0005480824 ceph-mon[74326]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 23:38:54 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:38:54.076581) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 1960249 bytes OK
Oct 10 23:38:54 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:38:54.076662) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Oct 10 23:38:54 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:38:54.078713) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Oct 10 23:38:54 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:38:54.078743) EVENT_LOG_v1 {"time_micros": 1760153934078733, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 23:38:54 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:38:54.078770) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 23:38:54 np0005480824 ceph-mon[74326]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 1982191, prev total WAL file size 1982191, number of live WAL files 2.
Oct 10 23:38:54 np0005480824 ceph-mon[74326]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 23:38:54 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:38:54.080083) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323531' seq:0, type:0; will stop at (end)
Oct 10 23:38:54 np0005480824 ceph-mon[74326]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 23:38:54 np0005480824 ceph-mon[74326]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(1914KB)], [32(7322KB)]
Oct 10 23:38:54 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760153934080161, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 9458483, "oldest_snapshot_seqno": -1}
Oct 10 23:38:54 np0005480824 ceph-mon[74326]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 3852 keys, 7491284 bytes, temperature: kUnknown
Oct 10 23:38:54 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760153934138111, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 7491284, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7463315, "index_size": 17235, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9669, "raw_key_size": 94264, "raw_average_key_size": 24, "raw_value_size": 7391326, "raw_average_value_size": 1918, "num_data_blocks": 732, "num_entries": 3852, "num_filter_entries": 3852, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760152715, "oldest_key_time": 0, "file_creation_time": 1760153934, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc2c00b6-74ab-4bd1-957a-6c6a75fb61ca", "db_session_id": "RJ9TM4FJNNQ2AWDFT4YB", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct 10 23:38:54 np0005480824 ceph-mon[74326]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 23:38:54 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:38:54.138454) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 7491284 bytes
Oct 10 23:38:54 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:38:54.139890) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 162.9 rd, 129.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 7.2 +0.0 blob) out(7.1 +0.0 blob), read-write-amplify(8.6) write-amplify(3.8) OK, records in: 4875, records dropped: 1023 output_compression: NoCompression
Oct 10 23:38:54 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:38:54.139928) EVENT_LOG_v1 {"time_micros": 1760153934139904, "job": 14, "event": "compaction_finished", "compaction_time_micros": 58061, "compaction_time_cpu_micros": 33448, "output_level": 6, "num_output_files": 1, "total_output_size": 7491284, "num_input_records": 4875, "num_output_records": 3852, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 23:38:54 np0005480824 ceph-mon[74326]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 23:38:54 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760153934140542, "job": 14, "event": "table_file_deletion", "file_number": 34}
Oct 10 23:38:54 np0005480824 ceph-mon[74326]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 23:38:54 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760153934142914, "job": 14, "event": "table_file_deletion", "file_number": 32}
Oct 10 23:38:54 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:38:54.079991) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:38:54 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:38:54.143116) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:38:54 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:38:54.143125) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:38:54 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:38:54.143128) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:38:54 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:38:54.143131) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:38:54 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:38:54.143134) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:38:54 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v697: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:38:56 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v698: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:38:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:38:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:38:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:38:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:38:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:38:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:38:58 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v699: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:38:58 np0005480824 podman[256751]: 2025-10-11 03:38:58.680635963 +0000 UTC m=+1.736335852 container health_status 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 10 23:38:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:39:00 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v700: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:39:02 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v701: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:39:02 np0005480824 podman[256710]: 2025-10-11 03:39:02.777631541 +0000 UTC m=+9.465536961 image pull 95311272d2962a6b8537a6d19b94bc44c5c3621a6e21a2e983fd64d147646bc9 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Oct 10 23:39:02 np0005480824 podman[256789]: 2025-10-11 03:39:02.821679796 +0000 UTC m=+0.094459881 container health_status f2b19cad22d0fbdd185b264c3c8b6443d09a02319dc9fb0585dc69a0f24a758d (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 10 23:39:03 np0005480824 podman[256831]: 2025-10-11 03:39:03.017142822 +0000 UTC m=+0.076158042 container create 496cf2c6a410baa100fe0ea9fd6c8bb42fe073ff3b6903246928d7a1d9d82d47 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=nova_compute_init, org.label-schema.build-date=20251009)
Oct 10 23:39:03 np0005480824 podman[256831]: 2025-10-11 03:39:02.978071693 +0000 UTC m=+0.037086993 image pull 95311272d2962a6b8537a6d19b94bc44c5c3621a6e21a2e983fd64d147646bc9 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Oct 10 23:39:03 np0005480824 python3[256696]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Oct 10 23:39:04 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:39:04 np0005480824 python3.9[257093]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 23:39:04 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v702: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:39:04 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:39:04 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:39:04 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 10 23:39:04 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:39:04 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 10 23:39:04 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:39:04 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 3ff41892-cb08-48f1-87ca-be1a0c4eb0d4 does not exist
Oct 10 23:39:04 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 6d8f3263-cb2d-43ef-897e-d38bd5684bd7 does not exist
Oct 10 23:39:04 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 190c1147-ab8d-4ffb-9ca9-b125363fa3f2 does not exist
Oct 10 23:39:04 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 10 23:39:04 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 23:39:04 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 10 23:39:04 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:39:04 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:39:04 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:39:04 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:39:04 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:39:04 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:39:05 np0005480824 python3.9[257405]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Oct 10 23:39:05 np0005480824 podman[257480]: 2025-10-11 03:39:05.459571242 +0000 UTC m=+0.079847529 container create b35397d834ad5af6b4b3b3bcc475729e568720f2dfba41f31acb1a3c17d17b85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 10 23:39:05 np0005480824 systemd[1]: Started libpod-conmon-b35397d834ad5af6b4b3b3bcc475729e568720f2dfba41f31acb1a3c17d17b85.scope.
Oct 10 23:39:05 np0005480824 podman[257480]: 2025-10-11 03:39:05.424976908 +0000 UTC m=+0.045253255 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:39:05 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:39:05 np0005480824 podman[257480]: 2025-10-11 03:39:05.559674695 +0000 UTC m=+0.179950992 container init b35397d834ad5af6b4b3b3bcc475729e568720f2dfba41f31acb1a3c17d17b85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_zhukovsky, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:39:05 np0005480824 podman[257480]: 2025-10-11 03:39:05.570959951 +0000 UTC m=+0.191236248 container start b35397d834ad5af6b4b3b3bcc475729e568720f2dfba41f31acb1a3c17d17b85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 10 23:39:05 np0005480824 podman[257480]: 2025-10-11 03:39:05.57773931 +0000 UTC m=+0.198015607 container attach b35397d834ad5af6b4b3b3bcc475729e568720f2dfba41f31acb1a3c17d17b85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_zhukovsky, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:39:05 np0005480824 reverent_zhukovsky[257541]: 167 167
Oct 10 23:39:05 np0005480824 systemd[1]: libpod-b35397d834ad5af6b4b3b3bcc475729e568720f2dfba41f31acb1a3c17d17b85.scope: Deactivated successfully.
Oct 10 23:39:05 np0005480824 podman[257480]: 2025-10-11 03:39:05.584200592 +0000 UTC m=+0.204476879 container died b35397d834ad5af6b4b3b3bcc475729e568720f2dfba41f31acb1a3c17d17b85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Oct 10 23:39:05 np0005480824 systemd[1]: var-lib-containers-storage-overlay-08c7ca7e0966896f16ffebd88f4005486b88015e3d7e7d42de10a480e1c91ee6-merged.mount: Deactivated successfully.
Oct 10 23:39:05 np0005480824 podman[257480]: 2025-10-11 03:39:05.642911022 +0000 UTC m=+0.263187319 container remove b35397d834ad5af6b4b3b3bcc475729e568720f2dfba41f31acb1a3c17d17b85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_zhukovsky, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 10 23:39:05 np0005480824 systemd[1]: libpod-conmon-b35397d834ad5af6b4b3b3bcc475729e568720f2dfba41f31acb1a3c17d17b85.scope: Deactivated successfully.
Oct 10 23:39:05 np0005480824 podman[257636]: 2025-10-11 03:39:05.90145804 +0000 UTC m=+0.082728975 container create 1eb844434a9587f8e2446c42f216b1ebce2b4dd1edc251790b51f8b786053f76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_mendel, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:39:05 np0005480824 systemd[1]: Started libpod-conmon-1eb844434a9587f8e2446c42f216b1ebce2b4dd1edc251790b51f8b786053f76.scope.
Oct 10 23:39:05 np0005480824 podman[257636]: 2025-10-11 03:39:05.867687607 +0000 UTC m=+0.048958612 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:39:05 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:39:05 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a24ac6853ead4de9f664e719ae999ff6c84b0c5793f19f0641fc92d5e63e9c6c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:39:06 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a24ac6853ead4de9f664e719ae999ff6c84b0c5793f19f0641fc92d5e63e9c6c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:39:06 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a24ac6853ead4de9f664e719ae999ff6c84b0c5793f19f0641fc92d5e63e9c6c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:39:06 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a24ac6853ead4de9f664e719ae999ff6c84b0c5793f19f0641fc92d5e63e9c6c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:39:06 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a24ac6853ead4de9f664e719ae999ff6c84b0c5793f19f0641fc92d5e63e9c6c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 23:39:06 np0005480824 podman[257636]: 2025-10-11 03:39:06.038277067 +0000 UTC m=+0.219548072 container init 1eb844434a9587f8e2446c42f216b1ebce2b4dd1edc251790b51f8b786053f76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_mendel, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 10 23:39:06 np0005480824 podman[257636]: 2025-10-11 03:39:06.05333127 +0000 UTC m=+0.234602215 container start 1eb844434a9587f8e2446c42f216b1ebce2b4dd1edc251790b51f8b786053f76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_mendel, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 10 23:39:06 np0005480824 podman[257636]: 2025-10-11 03:39:06.058186745 +0000 UTC m=+0.239457690 container attach 1eb844434a9587f8e2446c42f216b1ebce2b4dd1edc251790b51f8b786053f76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_mendel, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:39:06 np0005480824 python3.9[257648]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 10 23:39:06 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v703: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:39:07 np0005480824 python3[257822]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Oct 10 23:39:07 np0005480824 hungry_mendel[257658]: --> passed data devices: 0 physical, 3 LVM
Oct 10 23:39:07 np0005480824 hungry_mendel[257658]: --> relative data size: 1.0
Oct 10 23:39:07 np0005480824 hungry_mendel[257658]: --> All data devices are unavailable
Oct 10 23:39:07 np0005480824 podman[257636]: 2025-10-11 03:39:07.294625613 +0000 UTC m=+1.475896608 container died 1eb844434a9587f8e2446c42f216b1ebce2b4dd1edc251790b51f8b786053f76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_mendel, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 10 23:39:07 np0005480824 systemd[1]: libpod-1eb844434a9587f8e2446c42f216b1ebce2b4dd1edc251790b51f8b786053f76.scope: Deactivated successfully.
Oct 10 23:39:07 np0005480824 systemd[1]: libpod-1eb844434a9587f8e2446c42f216b1ebce2b4dd1edc251790b51f8b786053f76.scope: Consumed 1.198s CPU time.
Oct 10 23:39:07 np0005480824 systemd[1]: var-lib-containers-storage-overlay-a24ac6853ead4de9f664e719ae999ff6c84b0c5793f19f0641fc92d5e63e9c6c-merged.mount: Deactivated successfully.
Oct 10 23:39:07 np0005480824 podman[257636]: 2025-10-11 03:39:07.367282851 +0000 UTC m=+1.548553766 container remove 1eb844434a9587f8e2446c42f216b1ebce2b4dd1edc251790b51f8b786053f76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_mendel, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:39:07 np0005480824 systemd[1]: libpod-conmon-1eb844434a9587f8e2446c42f216b1ebce2b4dd1edc251790b51f8b786053f76.scope: Deactivated successfully.
Oct 10 23:39:07 np0005480824 podman[257874]: 2025-10-11 03:39:07.406045752 +0000 UTC m=+0.082498000 container create 26619da4fa972b2b2b8df272a799dceac616417728cb0ea1160a898d8a7167a1 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=nova_compute, io.buildah.version=1.41.3, config_id=edpm)
Oct 10 23:39:07 np0005480824 podman[257874]: 2025-10-11 03:39:07.362037208 +0000 UTC m=+0.038489526 image pull 95311272d2962a6b8537a6d19b94bc44c5c3621a6e21a2e983fd64d147646bc9 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Oct 10 23:39:07 np0005480824 python3[257822]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Oct 10 23:39:08 np0005480824 podman[258162]: 2025-10-11 03:39:08.120687224 +0000 UTC m=+0.039648604 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:39:08 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v704: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:39:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:39:10 np0005480824 podman[258162]: 2025-10-11 03:39:10.059893389 +0000 UTC m=+1.978854709 container create d342b6d35d38f51c99630a17d1a955dd943d08d3f047c08f57cc271a0956c32f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_black, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:39:10 np0005480824 systemd[1]: Started libpod-conmon-d342b6d35d38f51c99630a17d1a955dd943d08d3f047c08f57cc271a0956c32f.scope.
Oct 10 23:39:10 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v705: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:39:10 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:39:10 np0005480824 podman[258162]: 2025-10-11 03:39:10.345882476 +0000 UTC m=+2.264843856 container init d342b6d35d38f51c99630a17d1a955dd943d08d3f047c08f57cc271a0956c32f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_black, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 10 23:39:10 np0005480824 podman[258162]: 2025-10-11 03:39:10.357645393 +0000 UTC m=+2.276606733 container start d342b6d35d38f51c99630a17d1a955dd943d08d3f047c08f57cc271a0956c32f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_black, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:39:10 np0005480824 podman[258162]: 2025-10-11 03:39:10.361468083 +0000 UTC m=+2.280429423 container attach d342b6d35d38f51c99630a17d1a955dd943d08d3f047c08f57cc271a0956c32f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_black, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 10 23:39:10 np0005480824 epic_black[258231]: 167 167
Oct 10 23:39:10 np0005480824 systemd[1]: libpod-d342b6d35d38f51c99630a17d1a955dd943d08d3f047c08f57cc271a0956c32f.scope: Deactivated successfully.
Oct 10 23:39:10 np0005480824 podman[258162]: 2025-10-11 03:39:10.371764117 +0000 UTC m=+2.290725447 container died d342b6d35d38f51c99630a17d1a955dd943d08d3f047c08f57cc271a0956c32f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_black, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:39:10 np0005480824 systemd[1]: var-lib-containers-storage-overlay-b186f549e6cfb70ec771c7a5de1c550c0b6341beddfec56ebab4ed4cd1e5e452-merged.mount: Deactivated successfully.
Oct 10 23:39:10 np0005480824 python3.9[258227]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 23:39:10 np0005480824 podman[258162]: 2025-10-11 03:39:10.422296109 +0000 UTC m=+2.341257409 container remove d342b6d35d38f51c99630a17d1a955dd943d08d3f047c08f57cc271a0956c32f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 10 23:39:10 np0005480824 systemd[1]: libpod-conmon-d342b6d35d38f51c99630a17d1a955dd943d08d3f047c08f57cc271a0956c32f.scope: Deactivated successfully.
Oct 10 23:39:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:39:10.475 162245 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:39:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:39:10.477 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:39:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:39:10.478 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:39:10 np0005480824 podman[258281]: 2025-10-11 03:39:10.660230863 +0000 UTC m=+0.052902029 container create e512574305f7dd611fa16e86104d4beaf6c0f4d09c5d9bf56413f475c5af2e62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_sutherland, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:39:10 np0005480824 systemd[1]: Started libpod-conmon-e512574305f7dd611fa16e86104d4beaf6c0f4d09c5d9bf56413f475c5af2e62.scope.
Oct 10 23:39:10 np0005480824 podman[258281]: 2025-10-11 03:39:10.634997198 +0000 UTC m=+0.027668394 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:39:10 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:39:10 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd8e2452dfa493c58a22749e7b14574117adcfb27fe799fb4694c72a0a74018c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:39:10 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd8e2452dfa493c58a22749e7b14574117adcfb27fe799fb4694c72a0a74018c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:39:10 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd8e2452dfa493c58a22749e7b14574117adcfb27fe799fb4694c72a0a74018c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:39:10 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd8e2452dfa493c58a22749e7b14574117adcfb27fe799fb4694c72a0a74018c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:39:10 np0005480824 podman[258281]: 2025-10-11 03:39:10.769325217 +0000 UTC m=+0.161996373 container init e512574305f7dd611fa16e86104d4beaf6c0f4d09c5d9bf56413f475c5af2e62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:39:10 np0005480824 podman[258281]: 2025-10-11 03:39:10.777301635 +0000 UTC m=+0.169972771 container start e512574305f7dd611fa16e86104d4beaf6c0f4d09c5d9bf56413f475c5af2e62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 10 23:39:10 np0005480824 podman[258281]: 2025-10-11 03:39:10.781247728 +0000 UTC m=+0.173918864 container attach e512574305f7dd611fa16e86104d4beaf6c0f4d09c5d9bf56413f475c5af2e62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 10 23:39:10 np0005480824 podman[258303]: 2025-10-11 03:39:10.92673045 +0000 UTC m=+0.156886122 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 10 23:39:11 np0005480824 python3.9[258456]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]: {
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:    "0": [
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:        {
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:            "devices": [
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:                "/dev/loop3"
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:            ],
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:            "lv_name": "ceph_lv0",
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:            "lv_size": "21470642176",
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0d82ce-20ea-470d-959e-f67202028a60,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:            "lv_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:            "name": "ceph_lv0",
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:            "tags": {
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:                "ceph.block_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:                "ceph.cluster_name": "ceph",
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:                "ceph.crush_device_class": "",
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:                "ceph.encrypted": "0",
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:                "ceph.osd_fsid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:                "ceph.osd_id": "0",
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:                "ceph.type": "block",
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:                "ceph.vdo": "0"
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:            },
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:            "type": "block",
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:            "vg_name": "ceph_vg0"
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:        }
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:    ],
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:    "1": [
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:        {
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:            "devices": [
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:                "/dev/loop4"
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:            ],
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:            "lv_name": "ceph_lv1",
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:            "lv_size": "21470642176",
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6875119e-c210-4ad1-aca9-6a8084a5ecc8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:            "lv_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:            "name": "ceph_lv1",
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:            "tags": {
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:                "ceph.block_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:                "ceph.cluster_name": "ceph",
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:                "ceph.crush_device_class": "",
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:                "ceph.encrypted": "0",
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:                "ceph.osd_fsid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:                "ceph.osd_id": "1",
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:                "ceph.type": "block",
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:                "ceph.vdo": "0"
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:            },
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:            "type": "block",
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:            "vg_name": "ceph_vg1"
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:        }
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:    ],
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:    "2": [
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:        {
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:            "devices": [
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:                "/dev/loop5"
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:            ],
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:            "lv_name": "ceph_lv2",
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:            "lv_size": "21470642176",
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e86945e8-6909-4584-9098-cee0dfe9add4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:            "lv_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:            "name": "ceph_lv2",
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:            "tags": {
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:                "ceph.block_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:                "ceph.cluster_name": "ceph",
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:                "ceph.crush_device_class": "",
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:                "ceph.encrypted": "0",
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:                "ceph.osd_fsid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:                "ceph.osd_id": "2",
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:                "ceph.type": "block",
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:                "ceph.vdo": "0"
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:            },
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:            "type": "block",
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:            "vg_name": "ceph_vg2"
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:        }
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]:    ]
Oct 10 23:39:11 np0005480824 practical_sutherland[258298]: }
Oct 10 23:39:11 np0005480824 systemd[1]: libpod-e512574305f7dd611fa16e86104d4beaf6c0f4d09c5d9bf56413f475c5af2e62.scope: Deactivated successfully.
Oct 10 23:39:11 np0005480824 podman[258281]: 2025-10-11 03:39:11.626171292 +0000 UTC m=+1.018842468 container died e512574305f7dd611fa16e86104d4beaf6c0f4d09c5d9bf56413f475c5af2e62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_sutherland, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 10 23:39:11 np0005480824 systemd[1]: var-lib-containers-storage-overlay-bd8e2452dfa493c58a22749e7b14574117adcfb27fe799fb4694c72a0a74018c-merged.mount: Deactivated successfully.
Oct 10 23:39:11 np0005480824 podman[258281]: 2025-10-11 03:39:11.713143305 +0000 UTC m=+1.105814441 container remove e512574305f7dd611fa16e86104d4beaf6c0f4d09c5d9bf56413f475c5af2e62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_sutherland, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:39:11 np0005480824 systemd[1]: libpod-conmon-e512574305f7dd611fa16e86104d4beaf6c0f4d09c5d9bf56413f475c5af2e62.scope: Deactivated successfully.
Oct 10 23:39:12 np0005480824 python3.9[258684]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760153951.432959-1781-115854377119462/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 23:39:12 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v706: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:39:12 np0005480824 podman[258795]: 2025-10-11 03:39:12.515339872 +0000 UTC m=+0.060502369 container create e2bb4f33aac70bfbde3fffec58c0f1e68674d7e770e0da3ae27594a5141e5c9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:39:12 np0005480824 systemd[1]: Started libpod-conmon-e2bb4f33aac70bfbde3fffec58c0f1e68674d7e770e0da3ae27594a5141e5c9d.scope.
Oct 10 23:39:12 np0005480824 podman[258795]: 2025-10-11 03:39:12.49448276 +0000 UTC m=+0.039645347 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:39:12 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:39:12 np0005480824 podman[258795]: 2025-10-11 03:39:12.622982051 +0000 UTC m=+0.168144648 container init e2bb4f33aac70bfbde3fffec58c0f1e68674d7e770e0da3ae27594a5141e5c9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_hugle, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:39:12 np0005480824 podman[258795]: 2025-10-11 03:39:12.635254491 +0000 UTC m=+0.180416988 container start e2bb4f33aac70bfbde3fffec58c0f1e68674d7e770e0da3ae27594a5141e5c9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_hugle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Oct 10 23:39:12 np0005480824 podman[258795]: 2025-10-11 03:39:12.64030342 +0000 UTC m=+0.185465967 container attach e2bb4f33aac70bfbde3fffec58c0f1e68674d7e770e0da3ae27594a5141e5c9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_hugle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 10 23:39:12 np0005480824 suspicious_hugle[258851]: 167 167
Oct 10 23:39:12 np0005480824 systemd[1]: libpod-e2bb4f33aac70bfbde3fffec58c0f1e68674d7e770e0da3ae27594a5141e5c9d.scope: Deactivated successfully.
Oct 10 23:39:12 np0005480824 podman[258795]: 2025-10-11 03:39:12.645318698 +0000 UTC m=+0.190481195 container died e2bb4f33aac70bfbde3fffec58c0f1e68674d7e770e0da3ae27594a5141e5c9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_hugle, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 10 23:39:12 np0005480824 systemd[1]: var-lib-containers-storage-overlay-467772c1ea832808c135290ca1659302a4850a0ffcee2f6a1dc954081a2d671f-merged.mount: Deactivated successfully.
Oct 10 23:39:12 np0005480824 podman[258795]: 2025-10-11 03:39:12.696043206 +0000 UTC m=+0.241205713 container remove e2bb4f33aac70bfbde3fffec58c0f1e68674d7e770e0da3ae27594a5141e5c9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:39:12 np0005480824 systemd[1]: libpod-conmon-e2bb4f33aac70bfbde3fffec58c0f1e68674d7e770e0da3ae27594a5141e5c9d.scope: Deactivated successfully.
Oct 10 23:39:12 np0005480824 python3.9[258855]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 10 23:39:12 np0005480824 podman[258876]: 2025-10-11 03:39:12.926004391 +0000 UTC m=+0.061073002 container create bcd19f1740d7c7c0815970ed7a2e7490ec7e8bf66fe021bcf497f6a1f07167a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lewin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:39:12 np0005480824 systemd[1]: Reloading.
Oct 10 23:39:12 np0005480824 podman[258876]: 2025-10-11 03:39:12.895732567 +0000 UTC m=+0.030801208 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:39:13 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:39:13 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:39:13 np0005480824 systemd[1]: Started libpod-conmon-bcd19f1740d7c7c0815970ed7a2e7490ec7e8bf66fe021bcf497f6a1f07167a2.scope.
Oct 10 23:39:13 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:39:13 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5142b2aa9687c0f53748f2d8b40a176a7357c8f676cb71f0ad2d90b9b33b1928/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:39:13 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5142b2aa9687c0f53748f2d8b40a176a7357c8f676cb71f0ad2d90b9b33b1928/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:39:13 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5142b2aa9687c0f53748f2d8b40a176a7357c8f676cb71f0ad2d90b9b33b1928/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:39:13 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5142b2aa9687c0f53748f2d8b40a176a7357c8f676cb71f0ad2d90b9b33b1928/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:39:13 np0005480824 podman[258876]: 2025-10-11 03:39:13.396527883 +0000 UTC m=+0.531596504 container init bcd19f1740d7c7c0815970ed7a2e7490ec7e8bf66fe021bcf497f6a1f07167a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 10 23:39:13 np0005480824 podman[258876]: 2025-10-11 03:39:13.410775178 +0000 UTC m=+0.545843799 container start bcd19f1740d7c7c0815970ed7a2e7490ec7e8bf66fe021bcf497f6a1f07167a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lewin, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:39:13 np0005480824 podman[258876]: 2025-10-11 03:39:13.416104994 +0000 UTC m=+0.551173675 container attach bcd19f1740d7c7c0815970ed7a2e7490ec7e8bf66fe021bcf497f6a1f07167a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lewin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Oct 10 23:39:14 np0005480824 python3.9[259006]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 23:39:14 np0005480824 systemd[1]: Reloading.
Oct 10 23:39:14 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v707: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:39:14 np0005480824 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 23:39:14 np0005480824 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 23:39:14 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:39:14 np0005480824 cranky_lewin[258926]: {
Oct 10 23:39:14 np0005480824 cranky_lewin[258926]:    "1d0d82ce-20ea-470d-959e-f67202028a60": {
Oct 10 23:39:14 np0005480824 cranky_lewin[258926]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:39:14 np0005480824 cranky_lewin[258926]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 10 23:39:14 np0005480824 cranky_lewin[258926]:        "osd_id": 0,
Oct 10 23:39:14 np0005480824 cranky_lewin[258926]:        "osd_uuid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:39:14 np0005480824 cranky_lewin[258926]:        "type": "bluestore"
Oct 10 23:39:14 np0005480824 cranky_lewin[258926]:    },
Oct 10 23:39:14 np0005480824 cranky_lewin[258926]:    "6875119e-c210-4ad1-aca9-6a8084a5ecc8": {
Oct 10 23:39:14 np0005480824 cranky_lewin[258926]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:39:14 np0005480824 cranky_lewin[258926]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 10 23:39:14 np0005480824 cranky_lewin[258926]:        "osd_id": 1,
Oct 10 23:39:14 np0005480824 cranky_lewin[258926]:        "osd_uuid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:39:14 np0005480824 cranky_lewin[258926]:        "type": "bluestore"
Oct 10 23:39:14 np0005480824 cranky_lewin[258926]:    },
Oct 10 23:39:14 np0005480824 cranky_lewin[258926]:    "e86945e8-6909-4584-9098-cee0dfe9add4": {
Oct 10 23:39:14 np0005480824 cranky_lewin[258926]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:39:14 np0005480824 cranky_lewin[258926]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 10 23:39:14 np0005480824 cranky_lewin[258926]:        "osd_id": 2,
Oct 10 23:39:14 np0005480824 cranky_lewin[258926]:        "osd_uuid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:39:14 np0005480824 cranky_lewin[258926]:        "type": "bluestore"
Oct 10 23:39:14 np0005480824 cranky_lewin[258926]:    }
Oct 10 23:39:14 np0005480824 cranky_lewin[258926]: }
Oct 10 23:39:14 np0005480824 systemd[1]: libpod-bcd19f1740d7c7c0815970ed7a2e7490ec7e8bf66fe021bcf497f6a1f07167a2.scope: Deactivated successfully.
Oct 10 23:39:14 np0005480824 podman[258876]: 2025-10-11 03:39:14.600053248 +0000 UTC m=+1.735121829 container died bcd19f1740d7c7c0815970ed7a2e7490ec7e8bf66fe021bcf497f6a1f07167a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lewin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 10 23:39:14 np0005480824 systemd[1]: libpod-bcd19f1740d7c7c0815970ed7a2e7490ec7e8bf66fe021bcf497f6a1f07167a2.scope: Consumed 1.157s CPU time.
Oct 10 23:39:14 np0005480824 systemd[1]: Starting nova_compute container...
Oct 10 23:39:14 np0005480824 systemd[1]: var-lib-containers-storage-overlay-5142b2aa9687c0f53748f2d8b40a176a7357c8f676cb71f0ad2d90b9b33b1928-merged.mount: Deactivated successfully.
Oct 10 23:39:14 np0005480824 podman[258876]: 2025-10-11 03:39:14.664610222 +0000 UTC m=+1.799678803 container remove bcd19f1740d7c7c0815970ed7a2e7490ec7e8bf66fe021bcf497f6a1f07167a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lewin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 10 23:39:14 np0005480824 systemd[1]: libpod-conmon-bcd19f1740d7c7c0815970ed7a2e7490ec7e8bf66fe021bcf497f6a1f07167a2.scope: Deactivated successfully.
Oct 10 23:39:14 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 10 23:39:14 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:39:14 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 10 23:39:14 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:39:14 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 446a1e79-b954-4c2d-aa03-db4776f5a68b does not exist
Oct 10 23:39:14 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev e50f4c26-d039-448b-9f2b-c454a1b804de does not exist
Oct 10 23:39:14 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:39:14 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c42a58e981976c6d5b8580b48c1b06d786bf18347af19da33bdd3fab46f24c94/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 10 23:39:14 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c42a58e981976c6d5b8580b48c1b06d786bf18347af19da33bdd3fab46f24c94/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct 10 23:39:14 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c42a58e981976c6d5b8580b48c1b06d786bf18347af19da33bdd3fab46f24c94/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 10 23:39:14 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c42a58e981976c6d5b8580b48c1b06d786bf18347af19da33bdd3fab46f24c94/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 10 23:39:14 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c42a58e981976c6d5b8580b48c1b06d786bf18347af19da33bdd3fab46f24c94/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 10 23:39:14 np0005480824 podman[259075]: 2025-10-11 03:39:14.786888477 +0000 UTC m=+0.141347777 container init 26619da4fa972b2b2b8df272a799dceac616417728cb0ea1160a898d8a7167a1 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=edpm)
Oct 10 23:39:14 np0005480824 podman[259075]: 2025-10-11 03:39:14.797819765 +0000 UTC m=+0.152279025 container start 26619da4fa972b2b2b8df272a799dceac616417728cb0ea1160a898d8a7167a1 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20251009, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 10 23:39:14 np0005480824 podman[259075]: nova_compute
Oct 10 23:39:14 np0005480824 nova_compute[259103]: + sudo -E kolla_set_configs
Oct 10 23:39:14 np0005480824 systemd[1]: Started nova_compute container.
Oct 10 23:39:14 np0005480824 nova_compute[259103]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 10 23:39:14 np0005480824 nova_compute[259103]: INFO:__main__:Validating config file
Oct 10 23:39:14 np0005480824 nova_compute[259103]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 10 23:39:14 np0005480824 nova_compute[259103]: INFO:__main__:Copying service configuration files
Oct 10 23:39:14 np0005480824 nova_compute[259103]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct 10 23:39:14 np0005480824 nova_compute[259103]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct 10 23:39:14 np0005480824 nova_compute[259103]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct 10 23:39:14 np0005480824 nova_compute[259103]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct 10 23:39:14 np0005480824 nova_compute[259103]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct 10 23:39:14 np0005480824 nova_compute[259103]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 10 23:39:14 np0005480824 nova_compute[259103]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 10 23:39:14 np0005480824 nova_compute[259103]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 10 23:39:14 np0005480824 nova_compute[259103]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 10 23:39:14 np0005480824 nova_compute[259103]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct 10 23:39:14 np0005480824 nova_compute[259103]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct 10 23:39:14 np0005480824 nova_compute[259103]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 10 23:39:14 np0005480824 nova_compute[259103]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 10 23:39:14 np0005480824 nova_compute[259103]: INFO:__main__:Deleting /etc/ceph
Oct 10 23:39:14 np0005480824 nova_compute[259103]: INFO:__main__:Creating directory /etc/ceph
Oct 10 23:39:14 np0005480824 nova_compute[259103]: INFO:__main__:Setting permission for /etc/ceph
Oct 10 23:39:14 np0005480824 nova_compute[259103]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Oct 10 23:39:14 np0005480824 nova_compute[259103]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 10 23:39:14 np0005480824 nova_compute[259103]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Oct 10 23:39:14 np0005480824 nova_compute[259103]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 10 23:39:14 np0005480824 nova_compute[259103]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct 10 23:39:14 np0005480824 nova_compute[259103]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 10 23:39:14 np0005480824 nova_compute[259103]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct 10 23:39:14 np0005480824 nova_compute[259103]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 10 23:39:14 np0005480824 nova_compute[259103]: INFO:__main__:Writing out command to execute
Oct 10 23:39:14 np0005480824 nova_compute[259103]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 10 23:39:14 np0005480824 nova_compute[259103]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 10 23:39:14 np0005480824 nova_compute[259103]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct 10 23:39:14 np0005480824 nova_compute[259103]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 10 23:39:14 np0005480824 nova_compute[259103]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 10 23:39:14 np0005480824 nova_compute[259103]: ++ cat /run_command
Oct 10 23:39:14 np0005480824 nova_compute[259103]: + CMD=nova-compute
Oct 10 23:39:14 np0005480824 nova_compute[259103]: + ARGS=
Oct 10 23:39:14 np0005480824 nova_compute[259103]: + sudo kolla_copy_cacerts
Oct 10 23:39:14 np0005480824 nova_compute[259103]: + [[ ! -n '' ]]
Oct 10 23:39:14 np0005480824 nova_compute[259103]: + . kolla_extend_start
Oct 10 23:39:14 np0005480824 nova_compute[259103]: + echo 'Running command: '\''nova-compute'\'''
Oct 10 23:39:14 np0005480824 nova_compute[259103]: Running command: 'nova-compute'
Oct 10 23:39:14 np0005480824 nova_compute[259103]: + umask 0022
Oct 10 23:39:14 np0005480824 nova_compute[259103]: + exec nova-compute
Oct 10 23:39:15 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:39:15 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:39:15 np0005480824 python3.9[259314]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 23:39:16 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v708: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:39:16 np0005480824 python3.9[259465]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 23:39:17 np0005480824 python3.9[259615]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 23:39:17 np0005480824 nova_compute[259103]: 2025-10-11 03:39:17.428 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct 10 23:39:17 np0005480824 nova_compute[259103]: 2025-10-11 03:39:17.428 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct 10 23:39:17 np0005480824 nova_compute[259103]: 2025-10-11 03:39:17.428 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct 10 23:39:17 np0005480824 nova_compute[259103]: 2025-10-11 03:39:17.428 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Oct 10 23:39:17 np0005480824 nova_compute[259103]: 2025-10-11 03:39:17.588 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:39:17 np0005480824 nova_compute[259103]: 2025-10-11 03:39:17.627 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.040s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:39:18 np0005480824 podman[259723]: 2025-10-11 03:39:18.056220643 +0000 UTC m=+0.103430602 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.201 2 INFO nova.virt.driver [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Oct 10 23:39:18 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v709: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:39:18 np0005480824 python3.9[259791]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.359 2 INFO nova.compute.provider_config [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Oct 10 23:39:18 np0005480824 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 10 23:39:18 np0005480824 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.373 2 DEBUG oslo_concurrency.lockutils [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.373 2 DEBUG oslo_concurrency.lockutils [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.374 2 DEBUG oslo_concurrency.lockutils [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.374 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.374 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.374 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.375 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.375 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.375 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.375 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.375 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.376 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.376 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.376 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.376 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.376 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.377 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.377 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.378 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.378 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.378 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.379 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.379 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.379 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.380 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.380 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.380 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.380 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.380 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.381 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.381 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.381 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.381 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.381 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.382 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.382 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.382 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.382 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.383 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.383 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.383 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.383 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.383 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.384 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.384 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.384 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.384 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.384 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.385 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.385 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.385 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.385 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.386 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.386 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.386 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.386 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.387 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.387 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.387 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.387 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.387 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.388 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.388 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.388 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.388 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.388 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.388 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.389 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.389 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.389 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.389 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.389 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.390 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.390 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.390 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.390 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.390 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.391 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.391 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.391 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.391 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.391 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.392 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.392 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.392 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.392 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.392 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.393 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.393 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.393 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.393 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.393 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.394 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.394 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.394 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.394 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.394 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.394 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.395 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.395 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.395 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.395 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.395 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.396 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.396 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.396 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.396 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.396 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.397 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.397 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.397 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.397 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.397 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.397 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.398 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.398 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.398 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.398 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.398 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.399 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.399 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.399 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.399 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.399 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.399 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.400 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.400 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.400 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.400 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.400 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.401 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.401 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.401 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.401 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.401 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.402 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.402 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.402 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.402 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.402 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.402 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.403 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.403 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.403 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.403 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.403 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.404 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.404 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.404 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.404 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.404 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.405 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.405 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.405 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.405 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.405 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.405 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.406 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.406 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.406 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.406 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.406 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.407 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.407 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.407 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.407 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.407 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.408 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.408 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.408 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.408 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.408 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.409 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.409 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.409 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.409 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.409 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.410 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.410 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.410 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.410 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.411 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.411 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.411 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.411 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.411 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.412 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.412 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.412 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.412 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.412 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.413 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.413 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.413 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.413 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.413 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.414 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.414 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.414 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.414 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.415 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.415 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.415 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.415 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.416 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.416 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.416 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.416 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.417 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.417 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.417 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.417 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.417 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.418 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.418 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.418 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.418 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.418 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.419 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.419 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.419 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.419 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.420 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.420 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.420 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.420 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.420 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.421 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.421 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.421 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.421 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.421 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.422 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.422 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.422 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.422 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.423 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.423 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.423 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.423 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.424 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.424 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.424 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.424 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.425 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.425 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.425 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.425 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.425 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.426 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.426 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.426 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.426 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.426 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.427 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.427 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.427 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.427 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.427 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.428 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.428 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.428 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.428 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.428 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.429 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.429 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.429 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.429 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.429 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.430 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.430 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.430 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.430 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.430 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.431 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.431 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.431 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.431 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.431 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.431 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.432 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.432 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.432 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.432 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.432 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.433 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.433 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.433 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.433 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.433 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.434 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.434 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.434 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.434 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.434 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.434 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.435 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.435 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.435 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.435 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.435 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.436 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.436 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.436 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.436 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.436 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.437 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.437 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.437 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.437 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.437 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.437 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.438 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.438 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.438 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.438 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.438 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.439 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.439 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.439 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.439 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.439 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.439 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.440 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.440 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.440 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.440 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.440 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.441 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.441 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.441 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.441 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.441 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.442 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.442 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.442 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.442 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.442 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.442 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.443 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.443 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.443 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.443 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.443 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.444 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.444 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.444 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.444 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.444 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.445 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.445 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.445 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.445 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.445 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.445 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.446 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.446 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.446 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.446 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.447 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.447 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.447 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.447 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.447 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.448 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.448 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.448 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.448 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.448 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.449 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.449 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.449 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.449 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.449 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.450 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.450 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.450 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.450 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.450 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.451 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.451 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.451 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.451 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.451 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.451 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.452 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.452 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.452 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.452 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.452 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.453 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.453 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.453 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.453 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.453 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.454 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.454 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.454 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.454 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.454 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.455 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.455 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.455 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.455 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.455 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.455 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.456 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.456 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.456 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.456 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.456 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.457 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.457 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.457 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.457 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.457 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.458 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.458 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.458 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.458 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.458 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.458 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.459 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.459 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.459 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.459 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.459 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.460 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.460 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.460 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.460 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.460 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.460 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.461 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.461 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.461 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.461 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.461 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.462 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.462 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.462 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.462 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.462 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.463 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.463 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.463 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.463 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.463 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.464 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.464 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.464 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.464 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.464 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.465 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.465 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.465 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.465 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.465 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.466 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.466 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.466 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.466 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.466 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.467 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.467 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.467 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.467 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.467 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.468 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.468 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.468 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.468 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.468 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.469 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.469 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.469 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.469 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.469 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.470 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.470 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.470 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.470 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.470 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.471 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.471 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.471 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.471 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.471 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.471 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.472 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.472 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.472 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.472 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.472 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.473 2 WARNING oslo_config.cfg [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Oct 10 23:39:18 np0005480824 nova_compute[259103]: live_migration_uri is deprecated for removal in favor of two other options that
Oct 10 23:39:18 np0005480824 nova_compute[259103]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Oct 10 23:39:18 np0005480824 nova_compute[259103]: and ``live_migration_inbound_addr`` respectively.
Oct 10 23:39:18 np0005480824 nova_compute[259103]: ).  Its value may be silently ignored in the future.
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.473 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.473 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.473 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.474 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.474 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.474 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.474 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.474 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.475 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.475 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.475 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.475 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.475 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.476 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.476 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.476 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.476 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.476 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.477 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.rbd_secret_uuid        = 92cfe4d4-4917-5be1-9d00-73758793a62b log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.477 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.477 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.477 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.477 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.477 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.478 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.478 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.478 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.478 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.478 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.479 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.479 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.479 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.479 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.479 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.480 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.480 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.480 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.480 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.480 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.481 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.481 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.481 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.481 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.481 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.481 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.482 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.482 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.482 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.482 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.482 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.483 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.483 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.483 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.483 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.483 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.484 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.484 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.484 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.484 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.484 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.485 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.485 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.485 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.485 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.485 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.485 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.486 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.486 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.486 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.486 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.486 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.487 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.487 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.487 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.487 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.487 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.487 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.488 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.488 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.488 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.488 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.488 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.489 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.489 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.489 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.489 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.489 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.490 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.490 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.490 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.490 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.490 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.490 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.491 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.491 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.491 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.491 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.491 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.492 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.492 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.492 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.492 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.492 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.493 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.493 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.493 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.493 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.493 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.493 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.494 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.494 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.494 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.494 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.495 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.495 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.495 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.495 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.496 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.496 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.496 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.496 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.496 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.497 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.497 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.497 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.497 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.497 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.498 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.498 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.498 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.498 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.499 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.499 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.499 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.499 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.500 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.500 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.500 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.500 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.500 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.501 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.501 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.501 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.501 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.502 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.502 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.502 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.502 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.502 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.503 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.503 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.503 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.503 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.504 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.504 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.504 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.504 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.505 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.505 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.505 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.505 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.506 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.506 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.506 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.506 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.506 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.507 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.507 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.507 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.508 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.508 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.508 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.508 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.509 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.509 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.509 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.509 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.509 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.510 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.510 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.510 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.510 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.511 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.511 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.511 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.511 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.512 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.512 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.512 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.512 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.512 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.512 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.513 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.513 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.513 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.513 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.513 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.514 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.514 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.514 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.514 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.514 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.515 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.515 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.515 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.515 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.516 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.516 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.516 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.516 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.516 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.517 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.517 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.517 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.517 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.517 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.518 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.518 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.518 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.518 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.519 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.519 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.519 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.519 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.519 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.519 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.520 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.520 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.520 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.520 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.520 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.521 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.521 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.521 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.521 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.521 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.521 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.522 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.522 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.522 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.522 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.522 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.523 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.523 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.523 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.523 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.523 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.523 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.524 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.524 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.524 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.524 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.525 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.525 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.525 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.525 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.525 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.526 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.526 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.526 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.526 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.526 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.526 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.527 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.527 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.527 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.527 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.527 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.528 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.528 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.528 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.528 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.528 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.528 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.529 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.529 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.529 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.529 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.529 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.530 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.530 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.530 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.530 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.530 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.531 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.531 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.531 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.531 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.531 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.531 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.532 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.532 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.532 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.532 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.532 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.533 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.533 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.533 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.533 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.533 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.534 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.534 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.534 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.534 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.534 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.535 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.535 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.535 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.535 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.535 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.535 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.536 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.536 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.536 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.536 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.536 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.537 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.537 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.537 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.537 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.537 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.538 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.538 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.538 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.538 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.538 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.538 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.539 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.539 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.539 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.539 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.539 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.540 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.540 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.540 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.540 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.540 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.541 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.541 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.541 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.541 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.541 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.541 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.542 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.542 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.542 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.542 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.542 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.543 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.543 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.543 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.543 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.543 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.544 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.544 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.544 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.544 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.544 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.544 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.545 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.545 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.545 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.545 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.545 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.546 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.546 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.546 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.546 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.546 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.546 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.547 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.547 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.547 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.547 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.548 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.548 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.548 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.548 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.549 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.549 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.549 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.549 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.550 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.550 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.550 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.550 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.550 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.550 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.551 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.551 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.551 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.551 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.551 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.551 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.552 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.552 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.552 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.552 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.553 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.553 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.553 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.553 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.553 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.553 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.554 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.554 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.554 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.554 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.555 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.555 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.555 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.555 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.556 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.556 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.556 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.556 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.557 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.557 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.557 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.557 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.558 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.558 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.558 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.558 2 DEBUG oslo_service.service [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.560 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.576 2 DEBUG nova.virt.libvirt.host [None req-b0a41902-f5a5-40cd-b255-0a85260acc60 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.577 2 DEBUG nova.virt.libvirt.host [None req-b0a41902-f5a5-40cd-b255-0a85260acc60 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.577 2 DEBUG nova.virt.libvirt.host [None req-b0a41902-f5a5-40cd-b255-0a85260acc60 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.578 2 DEBUG nova.virt.libvirt.host [None req-b0a41902-f5a5-40cd-b255-0a85260acc60 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Oct 10 23:39:18 np0005480824 systemd[1]: Starting libvirt QEMU daemon...
Oct 10 23:39:18 np0005480824 systemd[1]: Started libvirt QEMU daemon.
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.722 2 DEBUG nova.virt.libvirt.host [None req-b0a41902-f5a5-40cd-b255-0a85260acc60 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f4b57784130> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.726 2 DEBUG nova.virt.libvirt.host [None req-b0a41902-f5a5-40cd-b255-0a85260acc60 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f4b57784130> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.727 2 INFO nova.virt.libvirt.driver [None req-b0a41902-f5a5-40cd-b255-0a85260acc60 - - - - - -] Connection event '1' reason 'None'#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.745 2 WARNING nova.virt.libvirt.driver [None req-b0a41902-f5a5-40cd-b255-0a85260acc60 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Oct 10 23:39:18 np0005480824 nova_compute[259103]: 2025-10-11 03:39:18.746 2 DEBUG nova.virt.libvirt.volume.mount [None req-b0a41902-f5a5-40cd-b255-0a85260acc60 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Oct 10 23:39:19 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:39:19 np0005480824 python3.9[260019]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 10 23:39:19 np0005480824 systemd[1]: Stopping nova_compute container...
Oct 10 23:39:19 np0005480824 nova_compute[259103]: 2025-10-11 03:39:19.575 2 DEBUG oslo_concurrency.lockutils [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:39:19 np0005480824 nova_compute[259103]: 2025-10-11 03:39:19.576 2 DEBUG oslo_concurrency.lockutils [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:39:19 np0005480824 nova_compute[259103]: 2025-10-11 03:39:19.576 2 DEBUG oslo_concurrency.lockutils [None req-39b528b9-581b-485d-acaf-cf76151abe76 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:39:20 np0005480824 virtqemud[259861]: libvirt version: 10.10.0, package: 15.el9 (builder@centos.org, 2025-08-18-13:22:20, )
Oct 10 23:39:20 np0005480824 systemd[1]: libpod-26619da4fa972b2b2b8df272a799dceac616417728cb0ea1160a898d8a7167a1.scope: Deactivated successfully.
Oct 10 23:39:20 np0005480824 virtqemud[259861]: hostname: compute-0
Oct 10 23:39:20 np0005480824 virtqemud[259861]: End of file while reading data: Input/output error
Oct 10 23:39:20 np0005480824 systemd[1]: libpod-26619da4fa972b2b2b8df272a799dceac616417728cb0ea1160a898d8a7167a1.scope: Consumed 3.148s CPU time.
Oct 10 23:39:20 np0005480824 podman[260031]: 2025-10-11 03:39:20.179846428 +0000 UTC m=+0.662414930 container died 26619da4fa972b2b2b8df272a799dceac616417728cb0ea1160a898d8a7167a1 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=edpm, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20251009)
Oct 10 23:39:20 np0005480824 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-26619da4fa972b2b2b8df272a799dceac616417728cb0ea1160a898d8a7167a1-userdata-shm.mount: Deactivated successfully.
Oct 10 23:39:20 np0005480824 systemd[1]: var-lib-containers-storage-overlay-c42a58e981976c6d5b8580b48c1b06d786bf18347af19da33bdd3fab46f24c94-merged.mount: Deactivated successfully.
Oct 10 23:39:20 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v710: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:39:20 np0005480824 podman[260031]: 2025-10-11 03:39:20.703616225 +0000 UTC m=+1.186184727 container cleanup 26619da4fa972b2b2b8df272a799dceac616417728cb0ea1160a898d8a7167a1 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=nova_compute, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Oct 10 23:39:20 np0005480824 podman[260031]: nova_compute
Oct 10 23:39:20 np0005480824 podman[260061]: nova_compute
Oct 10 23:39:20 np0005480824 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Oct 10 23:39:20 np0005480824 systemd[1]: Stopped nova_compute container.
Oct 10 23:39:20 np0005480824 systemd[1]: Starting nova_compute container...
Oct 10 23:39:20 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:39:20 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c42a58e981976c6d5b8580b48c1b06d786bf18347af19da33bdd3fab46f24c94/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 10 23:39:20 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c42a58e981976c6d5b8580b48c1b06d786bf18347af19da33bdd3fab46f24c94/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct 10 23:39:20 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c42a58e981976c6d5b8580b48c1b06d786bf18347af19da33bdd3fab46f24c94/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 10 23:39:20 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c42a58e981976c6d5b8580b48c1b06d786bf18347af19da33bdd3fab46f24c94/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 10 23:39:20 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c42a58e981976c6d5b8580b48c1b06d786bf18347af19da33bdd3fab46f24c94/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 10 23:39:20 np0005480824 podman[260074]: 2025-10-11 03:39:20.961200333 +0000 UTC m=+0.126947017 container init 26619da4fa972b2b2b8df272a799dceac616417728cb0ea1160a898d8a7167a1 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm)
Oct 10 23:39:20 np0005480824 podman[260074]: 2025-10-11 03:39:20.967549742 +0000 UTC m=+0.133296426 container start 26619da4fa972b2b2b8df272a799dceac616417728cb0ea1160a898d8a7167a1 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 10 23:39:20 np0005480824 podman[260074]: nova_compute
Oct 10 23:39:20 np0005480824 nova_compute[260089]: + sudo -E kolla_set_configs
Oct 10 23:39:20 np0005480824 systemd[1]: Started nova_compute container.
Oct 10 23:39:21 np0005480824 nova_compute[260089]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 10 23:39:21 np0005480824 nova_compute[260089]: INFO:__main__:Validating config file
Oct 10 23:39:21 np0005480824 nova_compute[260089]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 10 23:39:21 np0005480824 nova_compute[260089]: INFO:__main__:Copying service configuration files
Oct 10 23:39:21 np0005480824 nova_compute[260089]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct 10 23:39:21 np0005480824 nova_compute[260089]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct 10 23:39:21 np0005480824 nova_compute[260089]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct 10 23:39:21 np0005480824 nova_compute[260089]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Oct 10 23:39:21 np0005480824 nova_compute[260089]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct 10 23:39:21 np0005480824 nova_compute[260089]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct 10 23:39:21 np0005480824 nova_compute[260089]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 10 23:39:21 np0005480824 nova_compute[260089]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 10 23:39:21 np0005480824 nova_compute[260089]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 10 23:39:21 np0005480824 nova_compute[260089]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 10 23:39:21 np0005480824 nova_compute[260089]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 10 23:39:21 np0005480824 nova_compute[260089]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 10 23:39:21 np0005480824 nova_compute[260089]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Oct 10 23:39:21 np0005480824 nova_compute[260089]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct 10 23:39:21 np0005480824 nova_compute[260089]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct 10 23:39:21 np0005480824 nova_compute[260089]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 10 23:39:21 np0005480824 nova_compute[260089]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 10 23:39:21 np0005480824 nova_compute[260089]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 10 23:39:21 np0005480824 nova_compute[260089]: INFO:__main__:Deleting /etc/ceph
Oct 10 23:39:21 np0005480824 nova_compute[260089]: INFO:__main__:Creating directory /etc/ceph
Oct 10 23:39:21 np0005480824 nova_compute[260089]: INFO:__main__:Setting permission for /etc/ceph
Oct 10 23:39:21 np0005480824 nova_compute[260089]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Oct 10 23:39:21 np0005480824 nova_compute[260089]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 10 23:39:21 np0005480824 nova_compute[260089]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Oct 10 23:39:21 np0005480824 nova_compute[260089]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 10 23:39:21 np0005480824 nova_compute[260089]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Oct 10 23:39:21 np0005480824 nova_compute[260089]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct 10 23:39:21 np0005480824 nova_compute[260089]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 10 23:39:21 np0005480824 nova_compute[260089]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Oct 10 23:39:21 np0005480824 nova_compute[260089]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct 10 23:39:21 np0005480824 nova_compute[260089]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 10 23:39:21 np0005480824 nova_compute[260089]: INFO:__main__:Writing out command to execute
Oct 10 23:39:21 np0005480824 nova_compute[260089]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 10 23:39:21 np0005480824 nova_compute[260089]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 10 23:39:21 np0005480824 nova_compute[260089]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct 10 23:39:21 np0005480824 nova_compute[260089]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 10 23:39:21 np0005480824 nova_compute[260089]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 10 23:39:21 np0005480824 nova_compute[260089]: ++ cat /run_command
Oct 10 23:39:21 np0005480824 nova_compute[260089]: + CMD=nova-compute
Oct 10 23:39:21 np0005480824 nova_compute[260089]: + ARGS=
Oct 10 23:39:21 np0005480824 nova_compute[260089]: + sudo kolla_copy_cacerts
Oct 10 23:39:21 np0005480824 nova_compute[260089]: + [[ ! -n '' ]]
Oct 10 23:39:21 np0005480824 nova_compute[260089]: + . kolla_extend_start
Oct 10 23:39:21 np0005480824 nova_compute[260089]: Running command: 'nova-compute'
Oct 10 23:39:21 np0005480824 nova_compute[260089]: + echo 'Running command: '\''nova-compute'\'''
Oct 10 23:39:21 np0005480824 nova_compute[260089]: + umask 0022
Oct 10 23:39:21 np0005480824 nova_compute[260089]: + exec nova-compute
Oct 10 23:39:21 np0005480824 python3.9[260252]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 10 23:39:22 np0005480824 systemd[1]: Started libpod-conmon-496cf2c6a410baa100fe0ea9fd6c8bb42fe073ff3b6903246928d7a1d9d82d47.scope.
Oct 10 23:39:22 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:39:22 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48b738212742782c58d41060edb72101fa9a61c9a1f1de967e160514197b32fb/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Oct 10 23:39:22 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48b738212742782c58d41060edb72101fa9a61c9a1f1de967e160514197b32fb/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 10 23:39:22 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48b738212742782c58d41060edb72101fa9a61c9a1f1de967e160514197b32fb/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Oct 10 23:39:22 np0005480824 podman[260277]: 2025-10-11 03:39:22.07577285 +0000 UTC m=+0.140414305 container init 496cf2c6a410baa100fe0ea9fd6c8bb42fe073ff3b6903246928d7a1d9d82d47 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=edpm, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:39:22 np0005480824 podman[260277]: 2025-10-11 03:39:22.084977666 +0000 UTC m=+0.149619111 container start 496cf2c6a410baa100fe0ea9fd6c8bb42fe073ff3b6903246928d7a1d9d82d47 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_id=edpm, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 10 23:39:22 np0005480824 python3.9[260252]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Oct 10 23:39:22 np0005480824 nova_compute_init[260299]: INFO:nova_statedir:Applying nova statedir ownership
Oct 10 23:39:22 np0005480824 nova_compute_init[260299]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Oct 10 23:39:22 np0005480824 nova_compute_init[260299]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Oct 10 23:39:22 np0005480824 nova_compute_init[260299]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Oct 10 23:39:22 np0005480824 nova_compute_init[260299]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Oct 10 23:39:22 np0005480824 nova_compute_init[260299]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Oct 10 23:39:22 np0005480824 nova_compute_init[260299]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Oct 10 23:39:22 np0005480824 nova_compute_init[260299]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Oct 10 23:39:22 np0005480824 nova_compute_init[260299]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Oct 10 23:39:22 np0005480824 nova_compute_init[260299]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Oct 10 23:39:22 np0005480824 nova_compute_init[260299]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Oct 10 23:39:22 np0005480824 nova_compute_init[260299]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Oct 10 23:39:22 np0005480824 nova_compute_init[260299]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Oct 10 23:39:22 np0005480824 nova_compute_init[260299]: INFO:nova_statedir:Nova statedir ownership complete
Oct 10 23:39:22 np0005480824 systemd[1]: libpod-496cf2c6a410baa100fe0ea9fd6c8bb42fe073ff3b6903246928d7a1d9d82d47.scope: Deactivated successfully.
Oct 10 23:39:22 np0005480824 podman[260314]: 2025-10-11 03:39:22.193754103 +0000 UTC m=+0.027886159 container died 496cf2c6a410baa100fe0ea9fd6c8bb42fe073ff3b6903246928d7a1d9d82d47 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_managed=true, config_id=edpm, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 10 23:39:22 np0005480824 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-496cf2c6a410baa100fe0ea9fd6c8bb42fe073ff3b6903246928d7a1d9d82d47-userdata-shm.mount: Deactivated successfully.
Oct 10 23:39:22 np0005480824 systemd[1]: var-lib-containers-storage-overlay-48b738212742782c58d41060edb72101fa9a61c9a1f1de967e160514197b32fb-merged.mount: Deactivated successfully.
Oct 10 23:39:22 np0005480824 podman[260314]: 2025-10-11 03:39:22.246623751 +0000 UTC m=+0.080755777 container cleanup 496cf2c6a410baa100fe0ea9fd6c8bb42fe073ff3b6903246928d7a1d9d82d47 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_managed=true, config_id=edpm, container_name=nova_compute_init)
Oct 10 23:39:22 np0005480824 systemd[1]: libpod-conmon-496cf2c6a410baa100fe0ea9fd6c8bb42fe073ff3b6903246928d7a1d9d82d47.scope: Deactivated successfully.
Oct 10 23:39:22 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v711: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:39:22 np0005480824 systemd[1]: session-50.scope: Deactivated successfully.
Oct 10 23:39:22 np0005480824 systemd[1]: session-50.scope: Consumed 3min 1.302s CPU time.
Oct 10 23:39:22 np0005480824 systemd-logind[782]: Session 50 logged out. Waiting for processes to exit.
Oct 10 23:39:22 np0005480824 systemd-logind[782]: Removed session 50.
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.116 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.117 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.117 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.117 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.261 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.287 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.757 2 INFO nova.virt.driver [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.894 2 INFO nova.compute.provider_config [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.912 2 DEBUG oslo_concurrency.lockutils [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.912 2 DEBUG oslo_concurrency.lockutils [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.913 2 DEBUG oslo_concurrency.lockutils [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.913 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.913 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.913 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.914 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.914 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.914 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.914 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.915 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.915 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.915 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.915 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.915 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.916 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.916 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.916 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.916 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.917 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.917 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.917 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.917 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.918 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.918 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.918 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.919 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.919 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.919 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.919 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.919 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.920 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.920 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.920 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.920 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.921 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.921 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.921 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.921 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.921 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.922 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.922 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.922 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.922 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.923 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.923 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.923 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.923 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.924 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.924 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.924 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.925 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.925 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.925 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.925 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.926 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.926 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.926 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.926 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.927 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.927 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.927 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.927 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.927 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.928 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.928 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.928 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.928 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.929 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.929 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.929 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.929 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.930 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.930 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.930 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.930 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.931 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.931 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.931 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.931 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.932 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.932 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.932 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.932 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.933 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.933 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.933 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.933 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.933 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.933 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.933 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.934 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.934 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.934 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.934 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.934 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.934 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.934 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.935 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.935 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.935 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.935 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.935 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.935 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.935 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.936 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.936 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.936 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.936 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.936 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.936 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.936 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.937 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.937 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.937 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.937 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.937 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.937 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.937 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.938 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.938 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.938 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.938 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.938 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.938 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.938 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.939 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.939 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.939 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.939 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.939 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.939 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.939 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.940 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.940 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.940 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.940 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.940 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.941 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.941 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.941 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.941 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.941 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.941 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.941 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.941 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.942 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.942 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.942 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.942 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.942 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.943 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.943 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.943 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.943 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.943 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.943 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.943 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.944 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.944 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.944 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.944 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.944 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.944 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.945 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.945 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.945 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.945 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.945 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.945 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.945 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.946 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.946 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.946 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.946 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.946 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.946 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.946 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.947 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.947 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.947 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.947 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.947 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.947 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.947 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.948 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.948 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.948 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.948 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.948 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.948 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.949 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.949 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.949 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.949 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.949 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.949 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.949 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.950 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.950 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.950 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.950 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.950 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.951 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.951 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.951 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.951 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.951 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.951 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.952 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.952 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.952 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.952 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.952 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.952 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.953 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.953 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.953 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.953 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.953 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.953 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.954 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.954 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.954 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.954 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.955 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.955 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.955 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.955 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.955 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.955 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.956 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.956 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.956 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.956 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.956 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.956 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.957 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.957 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.957 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.957 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.957 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.958 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.958 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.958 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.958 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.958 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.959 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.959 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.959 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.959 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.959 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.960 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.960 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.960 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.960 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.960 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.960 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.961 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.961 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.961 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.961 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.961 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.962 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.962 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.962 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.962 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.962 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.963 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.963 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.963 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.963 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.963 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.964 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.964 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.964 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.964 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.964 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.964 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.965 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.965 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.965 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.965 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.965 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.965 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.966 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.966 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.966 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.966 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.966 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.966 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.966 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.967 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.967 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.967 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.967 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.967 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.967 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.968 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.968 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.968 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.968 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.968 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.968 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.968 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.969 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.969 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.969 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.969 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.969 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.969 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.969 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.970 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.970 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.970 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.970 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.970 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.970 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.970 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.971 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.971 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.971 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.971 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.971 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.971 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.971 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.972 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.972 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.972 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.972 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.972 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.972 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.973 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.973 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.973 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.973 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.973 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.973 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.973 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.974 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.974 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.974 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.974 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.974 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.974 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.974 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.975 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.975 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.975 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.975 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.975 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.975 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.975 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.976 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.976 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.976 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.976 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.976 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.977 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.977 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.977 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.977 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.977 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.977 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.978 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.978 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.978 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.978 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.978 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.979 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.979 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.979 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.979 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.979 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.980 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.980 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.980 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.980 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.980 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.980 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.981 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.981 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.981 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.981 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.981 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.981 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.981 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.982 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.982 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.982 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.982 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.982 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.982 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.982 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.983 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.983 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.983 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.983 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.983 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.983 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.983 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.984 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.984 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.984 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.984 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.984 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.984 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.984 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.985 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.985 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.985 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.985 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.985 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.985 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.985 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.986 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.986 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.986 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.986 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.986 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.986 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.986 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.987 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.987 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.987 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.987 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.987 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.987 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.988 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.988 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.988 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.988 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.988 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.988 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.988 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.989 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.989 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.989 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.989 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.989 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.989 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.989 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.990 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.990 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.990 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.990 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.990 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.990 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.990 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.991 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.991 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.991 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.991 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.991 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.991 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.991 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.992 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.992 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.992 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.992 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.992 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.992 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.992 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.993 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.993 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.993 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.993 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.993 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.993 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.993 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.994 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.994 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.994 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.994 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.994 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.994 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.995 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.995 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.995 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.995 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.995 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.995 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.995 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.996 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.996 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.996 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.996 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.996 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.996 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.996 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.997 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.997 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.997 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.997 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.997 2 WARNING oslo_config.cfg [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Oct 10 23:39:23 np0005480824 nova_compute[260089]: live_migration_uri is deprecated for removal in favor of two other options that
Oct 10 23:39:23 np0005480824 nova_compute[260089]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Oct 10 23:39:23 np0005480824 nova_compute[260089]: and ``live_migration_inbound_addr`` respectively.
Oct 10 23:39:23 np0005480824 nova_compute[260089]: ).  Its value may be silently ignored in the future.#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.997 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.998 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.998 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.998 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.998 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.998 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.998 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.998 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.999 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.999 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.999 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.999 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.999 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:23 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.999 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:23.999 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.000 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.000 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.000 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.000 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.rbd_secret_uuid        = 92cfe4d4-4917-5be1-9d00-73758793a62b log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.000 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.001 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.001 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.001 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.001 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.001 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.001 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.002 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.002 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.002 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.002 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.002 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.002 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.003 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.003 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.003 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.003 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.003 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.003 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.003 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.004 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.004 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.004 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.004 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.004 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.004 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.005 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.005 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.005 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.005 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.005 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.005 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.006 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.006 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.006 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.006 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.006 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.006 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.007 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.007 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.007 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.007 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.007 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.007 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.008 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.008 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.008 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.008 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.008 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.009 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.009 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.009 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.009 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.009 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.009 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.010 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.010 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.010 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.010 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.010 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.010 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.010 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.011 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.011 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.011 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.011 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.011 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.011 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.012 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.012 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.012 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.012 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.012 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.012 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.013 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.013 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.013 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.013 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.013 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.014 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.014 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.014 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.014 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.014 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.015 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.015 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.015 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.015 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.015 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.015 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.015 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.016 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.016 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.016 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.016 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.016 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.016 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.016 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.017 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.017 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.017 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.017 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.017 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.017 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.018 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.018 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.018 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.018 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.018 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.018 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.018 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.019 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.019 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.019 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.019 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.019 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.019 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.019 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.020 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.020 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.020 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.020 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.020 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.021 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.021 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.021 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.021 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.021 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.021 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.021 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.022 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.022 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.022 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.022 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.022 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.022 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.022 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.023 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.023 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.023 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.023 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.023 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.023 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.024 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.024 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.024 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.024 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.024 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.024 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.024 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.025 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.025 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.025 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.025 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.025 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.025 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.025 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.026 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.026 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.026 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.026 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.026 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.026 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.027 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.027 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.027 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.027 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.027 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.027 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.027 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.028 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.028 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.028 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.028 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.028 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.028 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.028 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.029 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.029 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.029 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.029 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.029 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.029 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.030 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.030 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.030 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.030 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.030 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.031 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.031 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.031 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.031 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.031 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.031 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.031 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.032 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.032 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.032 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.032 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.032 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.032 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.032 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.033 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.033 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.033 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.033 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.033 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.033 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.033 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.034 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.034 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.034 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.034 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.034 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.034 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.034 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.035 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.035 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.035 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.035 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.035 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.035 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.035 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.036 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.036 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.036 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.036 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.036 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.036 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.037 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.037 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.037 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.037 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.037 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.037 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.037 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.038 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.038 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.038 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.038 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.038 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.038 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.038 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.039 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.039 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.039 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.039 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.039 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.039 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.039 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.040 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.040 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.040 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.040 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.040 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.040 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.041 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.041 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.041 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.041 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.041 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.041 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.041 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.042 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.042 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.042 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.042 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.042 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.042 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.043 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.043 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.043 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.043 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.043 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.043 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.044 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.044 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.044 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.044 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.044 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.044 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.045 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.045 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.045 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.045 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.045 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.045 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.046 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.046 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.046 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.046 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.046 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.046 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.046 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.047 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.047 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.047 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.047 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.047 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.047 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.047 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.048 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.048 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.048 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.048 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.048 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.048 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.049 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.049 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.049 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.049 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.049 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.049 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.049 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.050 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.050 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.050 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.050 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.050 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.050 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.050 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.051 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.051 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.051 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.051 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.051 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.051 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.051 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.052 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.052 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.052 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.052 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.052 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.052 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.052 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.053 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.053 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.053 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.053 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.053 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.053 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.053 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.054 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.054 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.054 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.054 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.054 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.054 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.054 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.055 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.055 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.055 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.055 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.055 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.055 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.056 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.056 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.056 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.056 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.056 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.056 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.057 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.057 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.057 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.057 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.057 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.057 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.057 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.058 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.058 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.058 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.058 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.058 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.058 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.058 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.059 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.059 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.059 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.059 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.059 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.060 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.060 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.060 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.060 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.060 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.060 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.061 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.061 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.061 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.061 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.061 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.061 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.061 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.062 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.062 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.062 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.062 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.062 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.062 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.062 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.063 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.063 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.063 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.063 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.063 2 DEBUG oslo_service.service [None req-81f610ff-55c4-4330-a4c0-7d42d3a7775b - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.064 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.082 2 DEBUG nova.virt.libvirt.host [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.082 2 DEBUG nova.virt.libvirt.host [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.083 2 DEBUG nova.virt.libvirt.host [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.083 2 DEBUG nova.virt.libvirt.host [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.095 2 DEBUG nova.virt.libvirt.host [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f271c123b20> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.098 2 DEBUG nova.virt.libvirt.host [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f271c123b20> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.099 2 INFO nova.virt.libvirt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Connection event '1' reason 'None'#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.106 2 INFO nova.virt.libvirt.host [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Libvirt host capabilities <capabilities>
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 
Oct 10 23:39:24 np0005480824 nova_compute[260089]:  <host>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <uuid>fb3a2fb1-9efa-43f0-a057-bf422ac6b8d7</uuid>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <cpu>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <arch>x86_64</arch>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model>EPYC-Rome-v4</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <vendor>AMD</vendor>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <microcode version='16777317'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <signature family='23' model='49' stepping='0'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <maxphysaddr mode='emulate' bits='40'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature name='x2apic'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature name='tsc-deadline'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature name='osxsave'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature name='hypervisor'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature name='tsc_adjust'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature name='spec-ctrl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature name='stibp'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature name='arch-capabilities'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature name='ssbd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature name='cmp_legacy'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature name='topoext'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature name='virt-ssbd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature name='lbrv'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature name='tsc-scale'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature name='vmcb-clean'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature name='pause-filter'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature name='pfthreshold'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature name='svme-addr-chk'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature name='rdctl-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature name='skip-l1dfl-vmentry'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature name='mds-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature name='pschange-mc-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <pages unit='KiB' size='4'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <pages unit='KiB' size='2048'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <pages unit='KiB' size='1048576'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </cpu>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <power_management>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <suspend_mem/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </power_management>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <iommu support='no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <migration_features>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <live/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <uri_transports>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <uri_transport>tcp</uri_transport>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <uri_transport>rdma</uri_transport>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </uri_transports>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </migration_features>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <topology>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <cells num='1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <cell id='0'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:          <memory unit='KiB'>7864356</memory>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:          <pages unit='KiB' size='4'>1966089</pages>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:          <pages unit='KiB' size='2048'>0</pages>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:          <pages unit='KiB' size='1048576'>0</pages>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:          <distances>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:            <sibling id='0' value='10'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:          </distances>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:          <cpus num='8'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:          </cpus>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        </cell>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </cells>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </topology>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <cache>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </cache>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <secmodel>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model>selinux</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <doi>0</doi>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </secmodel>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <secmodel>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model>dac</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <doi>0</doi>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <baselabel type='kvm'>+107:+107</baselabel>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <baselabel type='qemu'>+107:+107</baselabel>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </secmodel>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:  </host>
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 
Oct 10 23:39:24 np0005480824 nova_compute[260089]:  <guest>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <os_type>hvm</os_type>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <arch name='i686'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <wordsize>32</wordsize>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <domain type='qemu'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <domain type='kvm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </arch>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <features>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <pae/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <nonpae/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <acpi default='on' toggle='yes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <apic default='on' toggle='no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <cpuselection/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <deviceboot/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <disksnapshot default='on' toggle='no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <externalSnapshot/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </features>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:  </guest>
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 
Oct 10 23:39:24 np0005480824 nova_compute[260089]:  <guest>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <os_type>hvm</os_type>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <arch name='x86_64'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <wordsize>64</wordsize>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <domain type='qemu'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <domain type='kvm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </arch>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <features>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <acpi default='on' toggle='yes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <apic default='on' toggle='no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <cpuselection/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <deviceboot/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <disksnapshot default='on' toggle='no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <externalSnapshot/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </features>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:  </guest>
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 
Oct 10 23:39:24 np0005480824 nova_compute[260089]: </capabilities>
Oct 10 23:39:24 np0005480824 nova_compute[260089]: #033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.112 2 WARNING nova.virt.libvirt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.113 2 DEBUG nova.virt.libvirt.volume.mount [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.119 2 DEBUG nova.virt.libvirt.host [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.156 2 DEBUG nova.virt.libvirt.host [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Oct 10 23:39:24 np0005480824 nova_compute[260089]: <domainCapabilities>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:  <path>/usr/libexec/qemu-kvm</path>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:  <domain>kvm</domain>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:  <machine>pc-q35-rhel9.6.0</machine>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:  <arch>i686</arch>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:  <vcpu max='4096'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:  <iothreads supported='yes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:  <os supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <enum name='firmware'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <loader supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='type'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>rom</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>pflash</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='readonly'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>yes</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>no</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='secure'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>no</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </loader>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:  </os>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:  <cpu>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <mode name='host-passthrough' supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='hostPassthroughMigratable'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>on</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>off</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </mode>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <mode name='maximum' supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='maximumMigratable'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>on</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>off</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </mode>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <mode name='host-model' supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model fallback='forbid'>EPYC-Rome</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <vendor>AMD</vendor>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='x2apic'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='tsc-deadline'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='hypervisor'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='tsc_adjust'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='spec-ctrl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='stibp'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='arch-capabilities'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='ssbd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='cmp_legacy'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='overflow-recov'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='succor'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='ibrs'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='amd-ssbd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='virt-ssbd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='lbrv'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='tsc-scale'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='vmcb-clean'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='flushbyasid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='pause-filter'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='pfthreshold'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='svme-addr-chk'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='lfence-always-serializing'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='rdctl-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='mds-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='pschange-mc-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='gds-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='rfds-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='disable' name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </mode>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <mode name='custom' supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Broadwell'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Broadwell-IBRS'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Broadwell-noTSX'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Broadwell-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Broadwell-v2'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Broadwell-v3'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Broadwell-v4'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Cascadelake-Server'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Cascadelake-Server-noTSX'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Cascadelake-Server-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Cascadelake-Server-v2'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Cascadelake-Server-v3'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Cascadelake-Server-v4'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Cascadelake-Server-v5'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Cooperlake'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='taa-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Cooperlake-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='taa-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Cooperlake-v2'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='taa-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Denverton'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='mpx'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Denverton-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='mpx'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Denverton-v2'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Denverton-v3'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Dhyana-v2'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='EPYC-Genoa'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amd-psfd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='auto-ibrs'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bitalg'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512ifma'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='la57'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='no-nested-data-bp'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='null-sel-clr-base'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='stibp-always-on'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='EPYC-Genoa-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amd-psfd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='auto-ibrs'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bitalg'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512ifma'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='la57'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='no-nested-data-bp'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='null-sel-clr-base'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='stibp-always-on'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='EPYC-Milan'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='EPYC-Milan-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='EPYC-Milan-v2'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amd-psfd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='no-nested-data-bp'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='null-sel-clr-base'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='stibp-always-on'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='EPYC-Rome'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='EPYC-Rome-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='EPYC-Rome-v2'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='EPYC-Rome-v3'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='EPYC-v3'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='EPYC-v4'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='GraniteRapids'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-fp16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-int8'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-tile'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx-vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-fp16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bitalg'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512ifma'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='bus-lock-detect'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fbsdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrc'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrs'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fzrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='la57'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='mcdt-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pbrsb-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='prefetchiti'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='psdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='sbdr-ssdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='serialize'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='taa-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='tsx-ldtrk'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xfd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='GraniteRapids-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-fp16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-int8'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-tile'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx-vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-fp16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bitalg'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512ifma'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='bus-lock-detect'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fbsdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrc'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrs'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fzrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='la57'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='mcdt-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pbrsb-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='prefetchiti'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='psdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='sbdr-ssdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='serialize'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='taa-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='tsx-ldtrk'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xfd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='GraniteRapids-v2'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-fp16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-int8'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-tile'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx-vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx10'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx10-128'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx10-256'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx10-512'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-fp16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bitalg'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512ifma'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='bus-lock-detect'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='cldemote'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fbsdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrc'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrs'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fzrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='la57'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='mcdt-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='movdir64b'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='movdiri'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pbrsb-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='prefetchiti'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='psdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='sbdr-ssdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='serialize'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ss'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='taa-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='tsx-ldtrk'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xfd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Haswell'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Haswell-IBRS'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Haswell-noTSX'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Haswell-noTSX-IBRS'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Haswell-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Haswell-v2'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Haswell-v3'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Haswell-v4'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Icelake-Server'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bitalg'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='la57'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Icelake-Server-noTSX'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bitalg'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='la57'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Icelake-Server-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bitalg'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='la57'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Icelake-Server-v2'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bitalg'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='la57'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Icelake-Server-v3'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bitalg'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='la57'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='taa-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Icelake-Server-v4'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bitalg'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512ifma'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='la57'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='taa-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Icelake-Server-v5'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bitalg'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512ifma'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='la57'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='taa-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Icelake-Server-v6'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bitalg'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512ifma'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='la57'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='taa-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Icelake-Server-v7'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bitalg'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512ifma'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='la57'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='taa-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='IvyBridge'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='IvyBridge-IBRS'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='IvyBridge-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='IvyBridge-v2'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='KnightsMill'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-4fmaps'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-4vnniw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512er'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512pf'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ss'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='KnightsMill-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-4fmaps'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-4vnniw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512er'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512pf'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ss'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Opteron_G4'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fma4'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xop'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Opteron_G4-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fma4'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xop'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Opteron_G5'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fma4'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='tbm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xop'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Opteron_G5-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fma4'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='tbm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xop'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='SapphireRapids'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-int8'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-tile'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx-vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-fp16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bitalg'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512ifma'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='bus-lock-detect'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrc'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrs'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fzrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='la57'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='serialize'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='taa-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='tsx-ldtrk'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xfd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='SapphireRapids-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-int8'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-tile'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx-vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-fp16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bitalg'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512ifma'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='bus-lock-detect'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrc'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrs'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fzrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='la57'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='serialize'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='taa-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='tsx-ldtrk'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xfd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='SapphireRapids-v2'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-int8'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-tile'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx-vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-fp16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bitalg'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512ifma'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='bus-lock-detect'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fbsdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrc'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrs'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fzrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='la57'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='psdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='sbdr-ssdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='serialize'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='taa-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='tsx-ldtrk'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xfd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='SapphireRapids-v3'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-int8'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-tile'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx-vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-fp16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bitalg'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512ifma'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='bus-lock-detect'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='cldemote'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fbsdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrc'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrs'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fzrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='la57'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='movdir64b'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='movdiri'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='psdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='sbdr-ssdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='serialize'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ss'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='taa-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='tsx-ldtrk'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xfd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='SierraForest'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx-ifma'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx-ne-convert'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx-vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx-vnni-int8'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='bus-lock-detect'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='cmpccxadd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fbsdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrs'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='mcdt-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pbrsb-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='psdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='sbdr-ssdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='serialize'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='SierraForest-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx-ifma'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx-ne-convert'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx-vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx-vnni-int8'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='bus-lock-detect'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='cmpccxadd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fbsdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrs'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='mcdt-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pbrsb-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='psdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='sbdr-ssdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='serialize'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Skylake-Client'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Skylake-Client-IBRS'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Skylake-Client-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Skylake-Client-v2'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Skylake-Client-v3'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Skylake-Client-v4'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Skylake-Server'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Skylake-Server-IBRS'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Skylake-Server-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Skylake-Server-v2'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Skylake-Server-v3'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Skylake-Server-v4'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Skylake-Server-v5'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Snowridge'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='cldemote'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='core-capability'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='movdir64b'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='movdiri'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='mpx'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='split-lock-detect'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Snowridge-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='cldemote'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='core-capability'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='movdir64b'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='movdiri'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='mpx'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='split-lock-detect'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Snowridge-v2'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='cldemote'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='core-capability'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='movdir64b'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='movdiri'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='split-lock-detect'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Snowridge-v3'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='cldemote'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='core-capability'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='movdir64b'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='movdiri'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='split-lock-detect'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Snowridge-v4'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='cldemote'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='movdir64b'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='movdiri'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='athlon'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='3dnow'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='3dnowext'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='athlon-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='3dnow'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='3dnowext'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='core2duo'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ss'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='core2duo-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ss'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='coreduo'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ss'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='coreduo-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ss'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='n270'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ss'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='n270-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ss'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='phenom'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='3dnow'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='3dnowext'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='phenom-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='3dnow'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='3dnowext'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </mode>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:  </cpu>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:  <memoryBacking supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <enum name='sourceType'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <value>file</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <value>anonymous</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <value>memfd</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:  </memoryBacking>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:  <devices>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <disk supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='diskDevice'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>disk</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>cdrom</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>floppy</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>lun</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='bus'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>fdc</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>scsi</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>virtio</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>usb</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>sata</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='model'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>virtio</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>virtio-transitional</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>virtio-non-transitional</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </disk>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <graphics supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='type'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>vnc</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>egl-headless</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>dbus</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </graphics>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <video supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='modelType'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>vga</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>cirrus</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>virtio</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>none</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>bochs</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>ramfb</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </video>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <hostdev supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='mode'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>subsystem</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='startupPolicy'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>default</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>mandatory</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>requisite</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>optional</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='subsysType'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>usb</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>pci</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>scsi</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='capsType'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='pciBackend'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </hostdev>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <rng supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='model'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>virtio</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>virtio-transitional</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>virtio-non-transitional</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='backendModel'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>random</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>egd</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>builtin</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </rng>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <filesystem supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='driverType'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>path</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>handle</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>virtiofs</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </filesystem>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <tpm supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='model'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>tpm-tis</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>tpm-crb</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='backendModel'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>emulator</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>external</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='backendVersion'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>2.0</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </tpm>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <redirdev supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='bus'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>usb</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </redirdev>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <channel supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='type'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>pty</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>unix</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </channel>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <crypto supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='model'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='type'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>qemu</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='backendModel'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>builtin</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </crypto>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <interface supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='backendType'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>default</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>passt</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </interface>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <panic supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='model'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>isa</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>hyperv</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </panic>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:  </devices>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:  <features>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <gic supported='no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <vmcoreinfo supported='yes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <genid supported='yes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <backingStoreInput supported='yes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <backup supported='yes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <async-teardown supported='yes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <ps2 supported='yes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <sev supported='no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <sgx supported='no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <hyperv supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='features'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>relaxed</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>vapic</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>spinlocks</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>vpindex</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>runtime</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>synic</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>stimer</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>reset</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>vendor_id</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>frequencies</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>reenlightenment</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>tlbflush</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>ipi</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>avic</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>emsr_bitmap</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>xmm_input</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </hyperv>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <launchSecurity supported='no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:  </features>
Oct 10 23:39:24 np0005480824 nova_compute[260089]: </domainCapabilities>
Oct 10 23:39:24 np0005480824 nova_compute[260089]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.171 2 DEBUG nova.virt.libvirt.host [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Oct 10 23:39:24 np0005480824 nova_compute[260089]: <domainCapabilities>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:  <path>/usr/libexec/qemu-kvm</path>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:  <domain>kvm</domain>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:  <machine>pc-i440fx-rhel7.6.0</machine>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:  <arch>i686</arch>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:  <vcpu max='240'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:  <iothreads supported='yes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:  <os supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <enum name='firmware'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <loader supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='type'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>rom</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>pflash</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='readonly'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>yes</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>no</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='secure'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>no</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </loader>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:  </os>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:  <cpu>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <mode name='host-passthrough' supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='hostPassthroughMigratable'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>on</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>off</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </mode>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <mode name='maximum' supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='maximumMigratable'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>on</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>off</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </mode>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <mode name='host-model' supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model fallback='forbid'>EPYC-Rome</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <vendor>AMD</vendor>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='x2apic'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='tsc-deadline'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='hypervisor'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='tsc_adjust'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='spec-ctrl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='stibp'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='arch-capabilities'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='ssbd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='cmp_legacy'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='overflow-recov'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='succor'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='ibrs'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='amd-ssbd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='virt-ssbd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='lbrv'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='tsc-scale'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='vmcb-clean'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='flushbyasid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='pause-filter'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='pfthreshold'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='svme-addr-chk'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='lfence-always-serializing'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='rdctl-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='mds-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='pschange-mc-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='gds-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='rfds-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='disable' name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </mode>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <mode name='custom' supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Broadwell'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Broadwell-IBRS'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Broadwell-noTSX'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Broadwell-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Broadwell-v2'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Broadwell-v3'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Broadwell-v4'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Cascadelake-Server'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Cascadelake-Server-noTSX'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Cascadelake-Server-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Cascadelake-Server-v2'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Cascadelake-Server-v3'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Cascadelake-Server-v4'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Cascadelake-Server-v5'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Cooperlake'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='taa-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Cooperlake-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='taa-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Cooperlake-v2'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='taa-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Denverton'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='mpx'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Denverton-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='mpx'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Denverton-v2'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Denverton-v3'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Dhyana-v2'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='EPYC-Genoa'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amd-psfd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='auto-ibrs'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bitalg'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512ifma'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='la57'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='no-nested-data-bp'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='null-sel-clr-base'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='stibp-always-on'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='EPYC-Genoa-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amd-psfd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='auto-ibrs'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bitalg'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512ifma'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='la57'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='no-nested-data-bp'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='null-sel-clr-base'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='stibp-always-on'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='EPYC-Milan'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='EPYC-Milan-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='EPYC-Milan-v2'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amd-psfd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='no-nested-data-bp'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='null-sel-clr-base'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='stibp-always-on'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='EPYC-Rome'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='EPYC-Rome-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='EPYC-Rome-v2'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='EPYC-Rome-v3'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='EPYC-v3'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='EPYC-v4'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='GraniteRapids'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-fp16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-int8'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-tile'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx-vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-fp16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bitalg'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512ifma'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='bus-lock-detect'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fbsdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrc'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrs'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fzrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='la57'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='mcdt-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pbrsb-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='prefetchiti'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='psdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='sbdr-ssdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='serialize'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='taa-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='tsx-ldtrk'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xfd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='GraniteRapids-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-fp16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-int8'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-tile'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx-vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-fp16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bitalg'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512ifma'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='bus-lock-detect'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fbsdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrc'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrs'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fzrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='la57'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='mcdt-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pbrsb-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='prefetchiti'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='psdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='sbdr-ssdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='serialize'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='taa-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='tsx-ldtrk'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xfd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='GraniteRapids-v2'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-fp16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-int8'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-tile'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx-vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx10'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx10-128'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx10-256'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx10-512'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-fp16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bitalg'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512ifma'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='bus-lock-detect'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='cldemote'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fbsdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrc'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrs'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fzrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='la57'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='mcdt-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='movdir64b'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='movdiri'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pbrsb-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='prefetchiti'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='psdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='sbdr-ssdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='serialize'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ss'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='taa-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='tsx-ldtrk'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xfd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Haswell'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Haswell-IBRS'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Haswell-noTSX'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Haswell-noTSX-IBRS'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Haswell-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Haswell-v2'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Haswell-v3'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Haswell-v4'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Icelake-Server'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bitalg'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='la57'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Icelake-Server-noTSX'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bitalg'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='la57'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Icelake-Server-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bitalg'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='la57'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Icelake-Server-v2'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bitalg'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='la57'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Icelake-Server-v3'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bitalg'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='la57'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='taa-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Icelake-Server-v4'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bitalg'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512ifma'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='la57'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='taa-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Icelake-Server-v5'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bitalg'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512ifma'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='la57'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='taa-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Icelake-Server-v6'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bitalg'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512ifma'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='la57'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='taa-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Icelake-Server-v7'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bitalg'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512ifma'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='la57'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='taa-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='IvyBridge'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='IvyBridge-IBRS'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='IvyBridge-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='IvyBridge-v2'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='KnightsMill'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-4fmaps'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-4vnniw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512er'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512pf'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ss'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='KnightsMill-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-4fmaps'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-4vnniw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512er'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512pf'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ss'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Opteron_G4'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fma4'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xop'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Opteron_G4-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fma4'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xop'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Opteron_G5'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fma4'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='tbm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xop'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Opteron_G5-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fma4'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='tbm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xop'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='SapphireRapids'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-int8'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-tile'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx-vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-fp16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bitalg'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512ifma'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='bus-lock-detect'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrc'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrs'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fzrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='la57'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='serialize'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='taa-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='tsx-ldtrk'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xfd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='SapphireRapids-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-int8'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-tile'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx-vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-fp16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bitalg'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512ifma'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='bus-lock-detect'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrc'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrs'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fzrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='la57'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='serialize'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='taa-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='tsx-ldtrk'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xfd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='SapphireRapids-v2'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-int8'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-tile'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx-vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-fp16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bitalg'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512ifma'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='bus-lock-detect'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fbsdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrc'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrs'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fzrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='la57'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='psdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='sbdr-ssdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='serialize'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='taa-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='tsx-ldtrk'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xfd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='SapphireRapids-v3'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-int8'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-tile'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx-vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-fp16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bitalg'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512ifma'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='bus-lock-detect'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='cldemote'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fbsdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrc'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrs'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fzrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='la57'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='movdir64b'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='movdiri'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='psdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='sbdr-ssdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='serialize'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ss'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='taa-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='tsx-ldtrk'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xfd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='SierraForest'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx-ifma'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx-ne-convert'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx-vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx-vnni-int8'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='bus-lock-detect'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='cmpccxadd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fbsdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrs'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='mcdt-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pbrsb-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='psdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='sbdr-ssdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='serialize'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='SierraForest-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx-ifma'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx-ne-convert'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx-vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx-vnni-int8'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='bus-lock-detect'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='cmpccxadd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fbsdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrs'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='mcdt-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pbrsb-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='psdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='sbdr-ssdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='serialize'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Skylake-Client'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Skylake-Client-IBRS'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Skylake-Client-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Skylake-Client-v2'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Skylake-Client-v3'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Skylake-Client-v4'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Skylake-Server'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Skylake-Server-IBRS'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Skylake-Server-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Skylake-Server-v2'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Skylake-Server-v3'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Skylake-Server-v4'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Skylake-Server-v5'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Snowridge'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='cldemote'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='core-capability'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='movdir64b'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='movdiri'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='mpx'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='split-lock-detect'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Snowridge-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='cldemote'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='core-capability'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='movdir64b'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='movdiri'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='mpx'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='split-lock-detect'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Snowridge-v2'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='cldemote'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='core-capability'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='movdir64b'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='movdiri'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='split-lock-detect'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Snowridge-v3'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='cldemote'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='core-capability'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='movdir64b'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='movdiri'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='split-lock-detect'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Snowridge-v4'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='cldemote'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='movdir64b'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='movdiri'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='athlon'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='3dnow'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='3dnowext'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='athlon-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='3dnow'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='3dnowext'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='core2duo'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ss'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='core2duo-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ss'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='coreduo'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ss'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='coreduo-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ss'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='n270'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ss'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='n270-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ss'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='phenom'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='3dnow'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='3dnowext'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='phenom-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='3dnow'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='3dnowext'/>
Oct 10 23:39:24 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v712: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </mode>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:  </cpu>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:  <memoryBacking supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <enum name='sourceType'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <value>file</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <value>anonymous</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <value>memfd</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:  </memoryBacking>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:  <devices>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <disk supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='diskDevice'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>disk</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>cdrom</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>floppy</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>lun</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='bus'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>ide</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>fdc</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>scsi</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>virtio</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>usb</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>sata</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='model'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>virtio</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>virtio-transitional</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>virtio-non-transitional</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </disk>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <graphics supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='type'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>vnc</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>egl-headless</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>dbus</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </graphics>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <video supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='modelType'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>vga</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>cirrus</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>virtio</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>none</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>bochs</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>ramfb</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </video>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <hostdev supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='mode'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>subsystem</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='startupPolicy'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>default</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>mandatory</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>requisite</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>optional</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='subsysType'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>usb</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>pci</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>scsi</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='capsType'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='pciBackend'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </hostdev>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <rng supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='model'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>virtio</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>virtio-transitional</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>virtio-non-transitional</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='backendModel'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>random</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>egd</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>builtin</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </rng>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <filesystem supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='driverType'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>path</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>handle</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>virtiofs</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </filesystem>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <tpm supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='model'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>tpm-tis</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>tpm-crb</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='backendModel'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>emulator</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>external</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='backendVersion'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>2.0</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </tpm>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <redirdev supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='bus'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>usb</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </redirdev>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <channel supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='type'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>pty</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>unix</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </channel>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <crypto supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='model'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='type'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>qemu</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='backendModel'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>builtin</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </crypto>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <interface supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='backendType'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>default</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>passt</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </interface>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <panic supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='model'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>isa</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>hyperv</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </panic>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:  </devices>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:  <features>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <gic supported='no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <vmcoreinfo supported='yes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <genid supported='yes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <backingStoreInput supported='yes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <backup supported='yes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <async-teardown supported='yes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <ps2 supported='yes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <sev supported='no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <sgx supported='no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <hyperv supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='features'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>relaxed</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>vapic</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>spinlocks</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>vpindex</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>runtime</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>synic</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>stimer</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>reset</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>vendor_id</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>frequencies</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>reenlightenment</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>tlbflush</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>ipi</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>avic</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>emsr_bitmap</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>xmm_input</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </hyperv>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <launchSecurity supported='no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:  </features>
Oct 10 23:39:24 np0005480824 nova_compute[260089]: </domainCapabilities>
Oct 10 23:39:24 np0005480824 nova_compute[260089]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.223 2 DEBUG nova.virt.libvirt.host [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.228 2 DEBUG nova.virt.libvirt.host [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Oct 10 23:39:24 np0005480824 nova_compute[260089]: <domainCapabilities>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:  <path>/usr/libexec/qemu-kvm</path>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:  <domain>kvm</domain>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:  <machine>pc-q35-rhel9.6.0</machine>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:  <arch>x86_64</arch>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:  <vcpu max='4096'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:  <iothreads supported='yes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:  <os supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <enum name='firmware'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <value>efi</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <loader supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='type'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>rom</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>pflash</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='readonly'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>yes</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>no</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='secure'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>yes</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>no</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </loader>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:  </os>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:  <cpu>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <mode name='host-passthrough' supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='hostPassthroughMigratable'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>on</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>off</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </mode>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <mode name='maximum' supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='maximumMigratable'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>on</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>off</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </mode>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <mode name='host-model' supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model fallback='forbid'>EPYC-Rome</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <vendor>AMD</vendor>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='x2apic'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='tsc-deadline'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='hypervisor'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='tsc_adjust'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='spec-ctrl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='stibp'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='arch-capabilities'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='ssbd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='cmp_legacy'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='overflow-recov'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='succor'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='ibrs'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='amd-ssbd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='virt-ssbd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='lbrv'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='tsc-scale'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='vmcb-clean'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='flushbyasid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='pause-filter'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='pfthreshold'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='svme-addr-chk'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='lfence-always-serializing'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='rdctl-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='mds-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='pschange-mc-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='gds-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='rfds-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='disable' name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </mode>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <mode name='custom' supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Broadwell'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Broadwell-IBRS'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Broadwell-noTSX'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Broadwell-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Broadwell-v2'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Broadwell-v3'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Broadwell-v4'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Cascadelake-Server'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Cascadelake-Server-noTSX'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Cascadelake-Server-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Cascadelake-Server-v2'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Cascadelake-Server-v3'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Cascadelake-Server-v4'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Cascadelake-Server-v5'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Cooperlake'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='taa-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Cooperlake-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='taa-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Cooperlake-v2'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='taa-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Denverton'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='mpx'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Denverton-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='mpx'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Denverton-v2'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Denverton-v3'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Dhyana-v2'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='EPYC-Genoa'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amd-psfd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='auto-ibrs'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bitalg'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512ifma'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='la57'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='no-nested-data-bp'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='null-sel-clr-base'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='stibp-always-on'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='EPYC-Genoa-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amd-psfd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='auto-ibrs'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bitalg'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512ifma'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='la57'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='no-nested-data-bp'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='null-sel-clr-base'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='stibp-always-on'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='EPYC-Milan'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='EPYC-Milan-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='EPYC-Milan-v2'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amd-psfd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='no-nested-data-bp'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='null-sel-clr-base'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='stibp-always-on'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='EPYC-Rome'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='EPYC-Rome-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='EPYC-Rome-v2'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='EPYC-Rome-v3'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='EPYC-v3'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='EPYC-v4'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='GraniteRapids'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-fp16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-int8'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-tile'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx-vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-fp16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bitalg'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512ifma'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='bus-lock-detect'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fbsdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrc'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrs'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fzrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='la57'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='mcdt-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pbrsb-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='prefetchiti'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='psdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='sbdr-ssdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='serialize'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='taa-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='tsx-ldtrk'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xfd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='GraniteRapids-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-fp16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-int8'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-tile'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx-vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-fp16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bitalg'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512ifma'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='bus-lock-detect'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fbsdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrc'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrs'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fzrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='la57'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='mcdt-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pbrsb-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='prefetchiti'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='psdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='sbdr-ssdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='serialize'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='taa-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='tsx-ldtrk'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xfd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='GraniteRapids-v2'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-fp16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-int8'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-tile'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx-vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx10'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx10-128'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx10-256'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx10-512'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-fp16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bitalg'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512ifma'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='bus-lock-detect'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='cldemote'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fbsdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrc'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrs'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fzrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='la57'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='mcdt-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='movdir64b'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='movdiri'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pbrsb-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='prefetchiti'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='psdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='sbdr-ssdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='serialize'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ss'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='taa-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='tsx-ldtrk'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xfd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Haswell'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Haswell-IBRS'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Haswell-noTSX'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Haswell-noTSX-IBRS'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Haswell-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Haswell-v2'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Haswell-v3'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Haswell-v4'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Icelake-Server'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bitalg'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='la57'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Icelake-Server-noTSX'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bitalg'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='la57'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Icelake-Server-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bitalg'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='la57'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Icelake-Server-v2'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bitalg'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='la57'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Icelake-Server-v3'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bitalg'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='la57'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='taa-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Icelake-Server-v4'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bitalg'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512ifma'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='la57'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='taa-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Icelake-Server-v5'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bitalg'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512ifma'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='la57'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='taa-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Icelake-Server-v6'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bitalg'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512ifma'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='la57'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='taa-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Icelake-Server-v7'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bitalg'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512ifma'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='la57'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='taa-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='IvyBridge'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='IvyBridge-IBRS'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='IvyBridge-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='IvyBridge-v2'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='KnightsMill'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-4fmaps'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-4vnniw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512er'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512pf'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ss'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='KnightsMill-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-4fmaps'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-4vnniw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512er'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512pf'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ss'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Opteron_G4'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fma4'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xop'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Opteron_G4-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fma4'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xop'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Opteron_G5'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fma4'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='tbm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xop'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Opteron_G5-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fma4'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='tbm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xop'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='SapphireRapids'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-int8'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-tile'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx-vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-fp16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bitalg'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512ifma'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='bus-lock-detect'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrc'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrs'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fzrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='la57'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='serialize'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='taa-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='tsx-ldtrk'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xfd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='SapphireRapids-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-int8'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-tile'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx-vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-fp16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bitalg'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512ifma'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='bus-lock-detect'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrc'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrs'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fzrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='la57'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='serialize'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='taa-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='tsx-ldtrk'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xfd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='SapphireRapids-v2'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-int8'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-tile'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx-vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-fp16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bitalg'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512ifma'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='bus-lock-detect'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fbsdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrc'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrs'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fzrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='la57'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='psdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='sbdr-ssdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='serialize'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='taa-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='tsx-ldtrk'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xfd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='SapphireRapids-v3'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-int8'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-tile'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx-vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-fp16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bitalg'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512ifma'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='bus-lock-detect'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='cldemote'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fbsdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrc'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrs'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fzrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='la57'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='movdir64b'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='movdiri'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='psdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='sbdr-ssdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='serialize'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ss'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='taa-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='tsx-ldtrk'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xfd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='SierraForest'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx-ifma'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx-ne-convert'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx-vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx-vnni-int8'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='bus-lock-detect'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='cmpccxadd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fbsdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrs'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='mcdt-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pbrsb-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='psdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='sbdr-ssdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='serialize'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='SierraForest-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx-ifma'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx-ne-convert'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx-vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx-vnni-int8'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='bus-lock-detect'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='cmpccxadd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fbsdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrs'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='mcdt-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pbrsb-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='psdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='sbdr-ssdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='serialize'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Skylake-Client'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Skylake-Client-IBRS'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Skylake-Client-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Skylake-Client-v2'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Skylake-Client-v3'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Skylake-Client-v4'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Skylake-Server'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Skylake-Server-IBRS'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Skylake-Server-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Skylake-Server-v2'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Skylake-Server-v3'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Skylake-Server-v4'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Skylake-Server-v5'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Snowridge'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='cldemote'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='core-capability'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='movdir64b'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='movdiri'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='mpx'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='split-lock-detect'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Snowridge-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='cldemote'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='core-capability'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='movdir64b'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='movdiri'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='mpx'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='split-lock-detect'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Snowridge-v2'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='cldemote'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='core-capability'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='movdir64b'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='movdiri'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='split-lock-detect'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Snowridge-v3'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='cldemote'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='core-capability'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='movdir64b'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='movdiri'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='split-lock-detect'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Snowridge-v4'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='cldemote'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='movdir64b'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='movdiri'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='athlon'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='3dnow'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='3dnowext'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='athlon-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='3dnow'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='3dnowext'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='core2duo'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ss'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='core2duo-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ss'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='coreduo'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ss'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='coreduo-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ss'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='n270'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ss'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='n270-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ss'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='phenom'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='3dnow'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='3dnowext'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='phenom-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='3dnow'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='3dnowext'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </mode>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:  </cpu>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:  <memoryBacking supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <enum name='sourceType'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <value>file</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <value>anonymous</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <value>memfd</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:  </memoryBacking>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:  <devices>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <disk supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='diskDevice'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>disk</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>cdrom</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>floppy</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>lun</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='bus'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>fdc</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>scsi</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>virtio</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>usb</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>sata</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='model'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>virtio</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>virtio-transitional</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>virtio-non-transitional</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </disk>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <graphics supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='type'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>vnc</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>egl-headless</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>dbus</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </graphics>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <video supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='modelType'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>vga</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>cirrus</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>virtio</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>none</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>bochs</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>ramfb</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </video>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <hostdev supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='mode'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>subsystem</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='startupPolicy'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>default</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>mandatory</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>requisite</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>optional</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='subsysType'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>usb</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>pci</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>scsi</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='capsType'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='pciBackend'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </hostdev>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <rng supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='model'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>virtio</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>virtio-transitional</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>virtio-non-transitional</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='backendModel'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>random</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>egd</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>builtin</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </rng>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <filesystem supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='driverType'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>path</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>handle</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>virtiofs</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </filesystem>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <tpm supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='model'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>tpm-tis</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>tpm-crb</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='backendModel'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>emulator</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>external</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='backendVersion'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>2.0</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </tpm>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <redirdev supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='bus'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>usb</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </redirdev>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <channel supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='type'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>pty</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>unix</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </channel>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <crypto supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='model'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='type'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>qemu</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='backendModel'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>builtin</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </crypto>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <interface supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='backendType'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>default</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>passt</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </interface>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <panic supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='model'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>isa</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>hyperv</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </panic>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:  </devices>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:  <features>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <gic supported='no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <vmcoreinfo supported='yes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <genid supported='yes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <backingStoreInput supported='yes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <backup supported='yes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <async-teardown supported='yes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <ps2 supported='yes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <sev supported='no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <sgx supported='no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <hyperv supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='features'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>relaxed</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>vapic</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>spinlocks</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>vpindex</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>runtime</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>synic</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>stimer</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>reset</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>vendor_id</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>frequencies</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>reenlightenment</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>tlbflush</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>ipi</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>avic</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>emsr_bitmap</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>xmm_input</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </hyperv>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <launchSecurity supported='no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:  </features>
Oct 10 23:39:24 np0005480824 nova_compute[260089]: </domainCapabilities>
Oct 10 23:39:24 np0005480824 nova_compute[260089]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.295 2 DEBUG nova.virt.libvirt.host [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Oct 10 23:39:24 np0005480824 nova_compute[260089]: <domainCapabilities>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:  <path>/usr/libexec/qemu-kvm</path>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:  <domain>kvm</domain>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:  <machine>pc-i440fx-rhel7.6.0</machine>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:  <arch>x86_64</arch>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:  <vcpu max='240'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:  <iothreads supported='yes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:  <os supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <enum name='firmware'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <loader supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='type'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>rom</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>pflash</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='readonly'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>yes</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>no</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='secure'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>no</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </loader>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:  </os>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:  <cpu>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <mode name='host-passthrough' supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='hostPassthroughMigratable'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>on</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>off</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </mode>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <mode name='maximum' supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='maximumMigratable'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>on</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>off</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </mode>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <mode name='host-model' supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model fallback='forbid'>EPYC-Rome</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <vendor>AMD</vendor>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='x2apic'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='tsc-deadline'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='hypervisor'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='tsc_adjust'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='spec-ctrl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='stibp'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='arch-capabilities'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='ssbd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='cmp_legacy'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='overflow-recov'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='succor'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='ibrs'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='amd-ssbd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='virt-ssbd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='lbrv'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='tsc-scale'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='vmcb-clean'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='flushbyasid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='pause-filter'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='pfthreshold'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='svme-addr-chk'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='lfence-always-serializing'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='rdctl-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='mds-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='pschange-mc-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='gds-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='require' name='rfds-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <feature policy='disable' name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </mode>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <mode name='custom' supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Broadwell'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Broadwell-IBRS'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Broadwell-noTSX'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Broadwell-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Broadwell-v2'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Broadwell-v3'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Broadwell-v4'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Cascadelake-Server'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Cascadelake-Server-noTSX'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Cascadelake-Server-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Cascadelake-Server-v2'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Cascadelake-Server-v3'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Cascadelake-Server-v4'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Cascadelake-Server-v5'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Cooperlake'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='taa-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Cooperlake-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='taa-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Cooperlake-v2'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='taa-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Denverton'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='mpx'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Denverton-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='mpx'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Denverton-v2'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Denverton-v3'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Dhyana-v2'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='EPYC-Genoa'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amd-psfd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='auto-ibrs'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bitalg'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512ifma'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='la57'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='no-nested-data-bp'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='null-sel-clr-base'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='stibp-always-on'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='EPYC-Genoa-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amd-psfd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='auto-ibrs'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bitalg'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512ifma'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='la57'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='no-nested-data-bp'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='null-sel-clr-base'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='stibp-always-on'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='EPYC-Milan'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='EPYC-Milan-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='EPYC-Milan-v2'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amd-psfd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='no-nested-data-bp'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='null-sel-clr-base'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='stibp-always-on'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='EPYC-Rome'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='EPYC-Rome-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='EPYC-Rome-v2'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='EPYC-Rome-v3'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='EPYC-v3'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='EPYC-v4'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='GraniteRapids'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-fp16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-int8'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-tile'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx-vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-fp16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bitalg'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512ifma'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='bus-lock-detect'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fbsdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrc'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrs'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fzrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='la57'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='mcdt-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pbrsb-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='prefetchiti'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='psdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='sbdr-ssdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='serialize'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='taa-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='tsx-ldtrk'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xfd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='GraniteRapids-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-fp16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-int8'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-tile'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx-vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-fp16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bitalg'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512ifma'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='bus-lock-detect'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fbsdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrc'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrs'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fzrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='la57'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='mcdt-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pbrsb-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='prefetchiti'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='psdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='sbdr-ssdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='serialize'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='taa-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='tsx-ldtrk'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xfd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='GraniteRapids-v2'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-fp16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-int8'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-tile'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx-vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx10'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx10-128'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx10-256'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx10-512'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-fp16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bitalg'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512ifma'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='bus-lock-detect'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='cldemote'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fbsdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrc'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrs'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fzrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='la57'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='mcdt-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='movdir64b'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='movdiri'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pbrsb-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='prefetchiti'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='psdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='sbdr-ssdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='serialize'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ss'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='taa-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='tsx-ldtrk'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xfd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Haswell'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Haswell-IBRS'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Haswell-noTSX'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Haswell-noTSX-IBRS'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Haswell-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Haswell-v2'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Haswell-v3'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Haswell-v4'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Icelake-Server'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bitalg'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='la57'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Icelake-Server-noTSX'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bitalg'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='la57'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Icelake-Server-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bitalg'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='la57'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Icelake-Server-v2'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bitalg'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='la57'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Icelake-Server-v3'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bitalg'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='la57'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='taa-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Icelake-Server-v4'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bitalg'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512ifma'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='la57'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='taa-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Icelake-Server-v5'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bitalg'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512ifma'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='la57'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='taa-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Icelake-Server-v6'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bitalg'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512ifma'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='la57'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='taa-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Icelake-Server-v7'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bitalg'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512ifma'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='la57'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='taa-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='IvyBridge'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='IvyBridge-IBRS'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='IvyBridge-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='IvyBridge-v2'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='KnightsMill'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-4fmaps'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-4vnniw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512er'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512pf'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ss'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='KnightsMill-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-4fmaps'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-4vnniw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512er'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512pf'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ss'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Opteron_G4'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fma4'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xop'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Opteron_G4-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fma4'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xop'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Opteron_G5'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fma4'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='tbm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xop'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Opteron_G5-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fma4'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='tbm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xop'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='SapphireRapids'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-int8'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-tile'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx-vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-fp16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bitalg'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512ifma'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='bus-lock-detect'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrc'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrs'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fzrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='la57'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='serialize'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='taa-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='tsx-ldtrk'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xfd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='SapphireRapids-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-int8'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-tile'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx-vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-fp16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bitalg'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512ifma'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='bus-lock-detect'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrc'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrs'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fzrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='la57'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='serialize'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='taa-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='tsx-ldtrk'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xfd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='SapphireRapids-v2'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-int8'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-tile'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx-vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-fp16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bitalg'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512ifma'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='bus-lock-detect'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fbsdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrc'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrs'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fzrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='la57'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='psdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='sbdr-ssdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='serialize'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='taa-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='tsx-ldtrk'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xfd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='SapphireRapids-v3'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-int8'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='amx-tile'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx-vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-bf16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-fp16'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512-vpopcntdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bitalg'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512ifma'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vbmi2'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='bus-lock-detect'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='cldemote'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fbsdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrc'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrs'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fzrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='la57'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='movdir64b'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='movdiri'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='psdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='sbdr-ssdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='serialize'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ss'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='taa-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='tsx-ldtrk'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xfd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='SierraForest'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx-ifma'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx-ne-convert'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx-vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx-vnni-int8'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='bus-lock-detect'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='cmpccxadd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fbsdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrs'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='mcdt-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pbrsb-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='psdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='sbdr-ssdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='serialize'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='SierraForest-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx-ifma'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx-ne-convert'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx-vnni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx-vnni-int8'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='bus-lock-detect'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='cmpccxadd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fbsdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='fsrs'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ibrs-all'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='mcdt-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pbrsb-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='psdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='sbdr-ssdp-no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='serialize'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vaes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='vpclmulqdq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Skylake-Client'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Skylake-Client-IBRS'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Skylake-Client-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Skylake-Client-v2'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Skylake-Client-v3'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Skylake-Client-v4'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Skylake-Server'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Skylake-Server-IBRS'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Skylake-Server-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Skylake-Server-v2'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='hle'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='rtm'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Skylake-Server-v3'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Skylake-Server-v4'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Skylake-Server-v5'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512bw'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512cd'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512dq'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512f'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='avx512vl'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='invpcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pcid'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='pku'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Snowridge'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='cldemote'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='core-capability'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='movdir64b'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='movdiri'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='mpx'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='split-lock-detect'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Snowridge-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='cldemote'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='core-capability'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='movdir64b'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='movdiri'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='mpx'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='split-lock-detect'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Snowridge-v2'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='cldemote'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='core-capability'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='movdir64b'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='movdiri'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='split-lock-detect'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Snowridge-v3'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='cldemote'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='core-capability'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='movdir64b'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='movdiri'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='split-lock-detect'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='Snowridge-v4'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='cldemote'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='erms'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='gfni'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='movdir64b'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='movdiri'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='xsaves'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='athlon'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='3dnow'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='3dnowext'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='athlon-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='3dnow'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='3dnowext'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='core2duo'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ss'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='core2duo-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ss'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='coreduo'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ss'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='coreduo-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ss'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='n270'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ss'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='n270-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='ss'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='phenom'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='3dnow'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='3dnowext'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <blockers model='phenom-v1'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='3dnow'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <feature name='3dnowext'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </blockers>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </mode>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:  </cpu>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:  <memoryBacking supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <enum name='sourceType'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <value>file</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <value>anonymous</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <value>memfd</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:  </memoryBacking>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:  <devices>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <disk supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='diskDevice'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>disk</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>cdrom</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>floppy</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>lun</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='bus'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>ide</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>fdc</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>scsi</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>virtio</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>usb</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>sata</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='model'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>virtio</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>virtio-transitional</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>virtio-non-transitional</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </disk>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <graphics supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='type'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>vnc</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>egl-headless</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>dbus</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </graphics>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <video supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='modelType'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>vga</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>cirrus</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>virtio</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>none</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>bochs</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>ramfb</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </video>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <hostdev supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='mode'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>subsystem</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='startupPolicy'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>default</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>mandatory</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>requisite</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>optional</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='subsysType'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>usb</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>pci</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>scsi</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='capsType'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='pciBackend'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </hostdev>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <rng supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='model'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>virtio</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>virtio-transitional</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>virtio-non-transitional</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='backendModel'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>random</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>egd</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>builtin</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </rng>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <filesystem supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='driverType'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>path</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>handle</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>virtiofs</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </filesystem>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <tpm supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='model'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>tpm-tis</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>tpm-crb</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='backendModel'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>emulator</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>external</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='backendVersion'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>2.0</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </tpm>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <redirdev supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='bus'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>usb</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </redirdev>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <channel supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='type'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>pty</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>unix</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </channel>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <crypto supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='model'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='type'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>qemu</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='backendModel'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>builtin</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </crypto>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <interface supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='backendType'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>default</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>passt</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </interface>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <panic supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='model'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>isa</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>hyperv</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </panic>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:  </devices>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:  <features>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <gic supported='no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <vmcoreinfo supported='yes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <genid supported='yes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <backingStoreInput supported='yes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <backup supported='yes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <async-teardown supported='yes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <ps2 supported='yes'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <sev supported='no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <sgx supported='no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <hyperv supported='yes'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      <enum name='features'>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>relaxed</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>vapic</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>spinlocks</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>vpindex</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>runtime</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>synic</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>stimer</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>reset</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>vendor_id</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>frequencies</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>reenlightenment</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>tlbflush</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>ipi</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>avic</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>emsr_bitmap</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:        <value>xmm_input</value>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:      </enum>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    </hyperv>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:    <launchSecurity supported='no'/>
Oct 10 23:39:24 np0005480824 nova_compute[260089]:  </features>
Oct 10 23:39:24 np0005480824 nova_compute[260089]: </domainCapabilities>
Oct 10 23:39:24 np0005480824 nova_compute[260089]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.348 2 DEBUG nova.virt.libvirt.host [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.349 2 INFO nova.virt.libvirt.host [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Secure Boot support detected#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.351 2 INFO nova.virt.libvirt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.361 2 DEBUG nova.virt.libvirt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.393 2 INFO nova.virt.node [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Determined node identity 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 from /var/lib/nova/compute_id#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.413 2 WARNING nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Compute nodes ['6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.443 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.479 2 WARNING nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.480 2 DEBUG oslo_concurrency.lockutils [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.480 2 DEBUG oslo_concurrency.lockutils [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.480 2 DEBUG oslo_concurrency.lockutils [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.480 2 DEBUG nova.compute.resource_tracker [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.481 2 DEBUG oslo_concurrency.processutils [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:39:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:39:24 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/266026694' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:39:24 np0005480824 nova_compute[260089]: 2025-10-11 03:39:24.902 2 DEBUG oslo_concurrency.processutils [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:39:24 np0005480824 systemd[1]: Starting libvirt nodedev daemon...
Oct 10 23:39:24 np0005480824 systemd[1]: Started libvirt nodedev daemon.
Oct 10 23:39:25 np0005480824 nova_compute[260089]: 2025-10-11 03:39:25.208 2 WARNING nova.virt.libvirt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 10 23:39:25 np0005480824 nova_compute[260089]: 2025-10-11 03:39:25.209 2 DEBUG nova.compute.resource_tracker [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5176MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 10 23:39:25 np0005480824 nova_compute[260089]: 2025-10-11 03:39:25.209 2 DEBUG oslo_concurrency.lockutils [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:39:25 np0005480824 nova_compute[260089]: 2025-10-11 03:39:25.210 2 DEBUG oslo_concurrency.lockutils [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:39:25 np0005480824 nova_compute[260089]: 2025-10-11 03:39:25.223 2 WARNING nova.compute.resource_tracker [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] No compute node record for compute-0.ctlplane.example.com:6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 could not be found.#033[00m
Oct 10 23:39:25 np0005480824 nova_compute[260089]: 2025-10-11 03:39:25.245 2 INFO nova.compute.resource_tracker [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72#033[00m
Oct 10 23:39:25 np0005480824 nova_compute[260089]: 2025-10-11 03:39:25.313 2 DEBUG nova.compute.resource_tracker [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 10 23:39:25 np0005480824 nova_compute[260089]: 2025-10-11 03:39:25.313 2 DEBUG nova.compute.resource_tracker [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 10 23:39:26 np0005480824 nova_compute[260089]: 2025-10-11 03:39:26.231 2 INFO nova.scheduler.client.report [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [req-c3393fb6-ae55-4c5d-ac37-3ff4eaf0ff8f] Created resource provider record via placement API for resource provider with UUID 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 and name compute-0.ctlplane.example.com.#033[00m
Oct 10 23:39:26 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v713: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:39:26 np0005480824 nova_compute[260089]: 2025-10-11 03:39:26.615 2 DEBUG oslo_concurrency.processutils [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:39:27 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:39:27 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/616131226' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:39:27 np0005480824 nova_compute[260089]: 2025-10-11 03:39:27.110 2 DEBUG oslo_concurrency.processutils [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:39:27 np0005480824 nova_compute[260089]: 2025-10-11 03:39:27.117 2 DEBUG nova.virt.libvirt.host [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Oct 10 23:39:27 np0005480824 nova_compute[260089]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m
Oct 10 23:39:27 np0005480824 nova_compute[260089]: 2025-10-11 03:39:27.118 2 INFO nova.virt.libvirt.host [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] kernel doesn't support AMD SEV#033[00m
Oct 10 23:39:27 np0005480824 nova_compute[260089]: 2025-10-11 03:39:27.120 2 DEBUG nova.compute.provider_tree [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Updating inventory in ProviderTree for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 10 23:39:27 np0005480824 nova_compute[260089]: 2025-10-11 03:39:27.120 2 DEBUG nova.virt.libvirt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 10 23:39:27 np0005480824 nova_compute[260089]: 2025-10-11 03:39:27.188 2 DEBUG nova.scheduler.client.report [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Updated inventory for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Oct 10 23:39:27 np0005480824 nova_compute[260089]: 2025-10-11 03:39:27.189 2 DEBUG nova.compute.provider_tree [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Updating resource provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Oct 10 23:39:27 np0005480824 nova_compute[260089]: 2025-10-11 03:39:27.189 2 DEBUG nova.compute.provider_tree [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Updating inventory in ProviderTree for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 10 23:39:27 np0005480824 nova_compute[260089]: 2025-10-11 03:39:27.313 2 DEBUG nova.compute.provider_tree [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Updating resource provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Oct 10 23:39:27 np0005480824 nova_compute[260089]: 2025-10-11 03:39:27.349 2 DEBUG nova.compute.resource_tracker [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 10 23:39:27 np0005480824 nova_compute[260089]: 2025-10-11 03:39:27.350 2 DEBUG oslo_concurrency.lockutils [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.140s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:39:27 np0005480824 nova_compute[260089]: 2025-10-11 03:39:27.350 2 DEBUG nova.service [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m
Oct 10 23:39:27 np0005480824 nova_compute[260089]: 2025-10-11 03:39:27.461 2 DEBUG nova.service [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m
Oct 10 23:39:27 np0005480824 nova_compute[260089]: 2025-10-11 03:39:27.462 2 DEBUG nova.servicegroup.drivers.db [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m
Oct 10 23:39:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Optimize plan auto_2025-10-11_03:39:27
Oct 10 23:39:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 23:39:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] do_upmap
Oct 10 23:39:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', 'vms', 'default.rgw.meta', 'cephfs.cephfs.meta', 'images', '.rgw.root', 'cephfs.cephfs.data', 'backups', 'default.rgw.control', 'volumes']
Oct 10 23:39:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] prepared 0/10 changes
Oct 10 23:39:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:39:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:39:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:39:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:39:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:39:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:39:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 23:39:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:39:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 23:39:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:39:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:39:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:39:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:39:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:39:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:39:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:39:28 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v714: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:39:29 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:39:30 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v715: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:39:32 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v716: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:39:33 np0005480824 podman[260460]: 2025-10-11 03:39:33.06770698 +0000 UTC m=+0.102268533 container health_status f2b19cad22d0fbdd185b264c3c8b6443d09a02319dc9fb0585dc69a0f24a758d (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 10 23:39:33 np0005480824 podman[260459]: 2025-10-11 03:39:33.071522781 +0000 UTC m=+0.106379621 container health_status 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 10 23:39:34 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v717: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:39:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:39:36 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v718: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:39:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 23:39:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:39:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 23:39:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:39:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:39:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:39:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:39:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:39:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:39:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:39:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:39:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:39:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 23:39:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:39:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:39:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:39:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 10 23:39:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:39:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 23:39:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:39:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:39:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:39:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 10 23:39:38 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v719: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:39:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:39:40 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v720: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:39:42 np0005480824 podman[260496]: 2025-10-11 03:39:42.0759923 +0000 UTC m=+0.133841048 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Oct 10 23:39:42 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v721: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:39:44 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v722: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:39:44 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:39:46 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v723: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:39:46 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:39:46 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4023638871' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:39:46 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:39:46 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4023638871' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:39:47 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:39:47 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/541755294' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:39:47 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:39:47 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/541755294' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:39:47 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:39:47 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1467463127' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:39:47 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:39:47 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1467463127' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:39:48 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v724: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:39:49 np0005480824 podman[260522]: 2025-10-11 03:39:49.015083749 +0000 UTC m=+0.064742828 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 10 23:39:49 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:39:50 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v725: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:39:52 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v726: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:39:54 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v727: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:39:54 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:39:56 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v728: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:39:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:39:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:39:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:39:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:39:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:39:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:39:58 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v729: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:39:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:40:00 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v730: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:40:02 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v731: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:40:04 np0005480824 podman[260543]: 2025-10-11 03:40:04.04783868 +0000 UTC m=+0.090685181 container health_status 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 10 23:40:04 np0005480824 podman[260544]: 2025-10-11 03:40:04.048486905 +0000 UTC m=+0.087940996 container health_status f2b19cad22d0fbdd185b264c3c8b6443d09a02319dc9fb0585dc69a0f24a758d (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_managed=true)
Oct 10 23:40:04 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v732: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:40:04 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:40:06 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v733: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:40:08 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v734: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:40:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:40:10 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v735: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:40:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:40:10.478 162245 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:40:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:40:10.479 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:40:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:40:10.479 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:40:12 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v736: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:40:13 np0005480824 podman[260580]: 2025-10-11 03:40:13.062197513 +0000 UTC m=+0.116538420 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=ovn_controller)
Oct 10 23:40:14 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v737: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:40:14 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:40:15 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 10 23:40:15 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:40:15 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 10 23:40:15 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:40:16 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:40:16 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:40:16 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:40:16 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:40:16 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 10 23:40:16 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:40:16 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 10 23:40:16 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:40:16 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 8bdabdf0-abdb-4ad8-8cf3-1485846bad69 does not exist
Oct 10 23:40:16 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 9af04b41-61b1-4477-9841-079f6c55c5d2 does not exist
Oct 10 23:40:16 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev e773b909-f505-4f63-b5ac-cb620109fec4 does not exist
Oct 10 23:40:16 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 10 23:40:16 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 23:40:16 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 10 23:40:16 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:40:16 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:40:16 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:40:16 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v738: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:40:16 np0005480824 podman[261002]: 2025-10-11 03:40:16.710172093 +0000 UTC m=+0.042819071 container create a7e2c8794a3cbfb0cd505a397511e2914b833c452c53d0d5bfde72776816da0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:40:16 np0005480824 systemd[1]: Started libpod-conmon-a7e2c8794a3cbfb0cd505a397511e2914b833c452c53d0d5bfde72776816da0e.scope.
Oct 10 23:40:16 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:40:16 np0005480824 podman[261002]: 2025-10-11 03:40:16.691055592 +0000 UTC m=+0.023702540 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:40:16 np0005480824 podman[261002]: 2025-10-11 03:40:16.799089691 +0000 UTC m=+0.131736689 container init a7e2c8794a3cbfb0cd505a397511e2914b833c452c53d0d5bfde72776816da0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_tesla, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:40:16 np0005480824 podman[261002]: 2025-10-11 03:40:16.811736899 +0000 UTC m=+0.144383837 container start a7e2c8794a3cbfb0cd505a397511e2914b833c452c53d0d5bfde72776816da0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_tesla, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:40:16 np0005480824 podman[261002]: 2025-10-11 03:40:16.81517048 +0000 UTC m=+0.147817468 container attach a7e2c8794a3cbfb0cd505a397511e2914b833c452c53d0d5bfde72776816da0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 10 23:40:16 np0005480824 dreamy_tesla[261019]: 167 167
Oct 10 23:40:16 np0005480824 systemd[1]: libpod-a7e2c8794a3cbfb0cd505a397511e2914b833c452c53d0d5bfde72776816da0e.scope: Deactivated successfully.
Oct 10 23:40:16 np0005480824 podman[261002]: 2025-10-11 03:40:16.817825283 +0000 UTC m=+0.150472231 container died a7e2c8794a3cbfb0cd505a397511e2914b833c452c53d0d5bfde72776816da0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:40:16 np0005480824 systemd[1]: var-lib-containers-storage-overlay-ad8fd663740d04b21996f1ea3acc02c9f852477c5973322e2b609db3c34f2b44-merged.mount: Deactivated successfully.
Oct 10 23:40:16 np0005480824 podman[261002]: 2025-10-11 03:40:16.866853589 +0000 UTC m=+0.199500567 container remove a7e2c8794a3cbfb0cd505a397511e2914b833c452c53d0d5bfde72776816da0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:40:16 np0005480824 systemd[1]: libpod-conmon-a7e2c8794a3cbfb0cd505a397511e2914b833c452c53d0d5bfde72776816da0e.scope: Deactivated successfully.
Oct 10 23:40:17 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:40:17 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:40:17 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:40:17 np0005480824 podman[261042]: 2025-10-11 03:40:17.028359061 +0000 UTC m=+0.038402518 container create 91a6909817154baede702b26a8e9870a81f40c52c75fafb7d0b63ed0d185b6f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hodgkin, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 10 23:40:17 np0005480824 systemd[1]: Started libpod-conmon-91a6909817154baede702b26a8e9870a81f40c52c75fafb7d0b63ed0d185b6f8.scope.
Oct 10 23:40:17 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:40:17 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31d8ceccaae58920e1b7e87070b736e482c248dc4bc5f5a124db53b7f91333b8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:40:17 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31d8ceccaae58920e1b7e87070b736e482c248dc4bc5f5a124db53b7f91333b8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:40:17 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31d8ceccaae58920e1b7e87070b736e482c248dc4bc5f5a124db53b7f91333b8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:40:17 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31d8ceccaae58920e1b7e87070b736e482c248dc4bc5f5a124db53b7f91333b8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:40:17 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31d8ceccaae58920e1b7e87070b736e482c248dc4bc5f5a124db53b7f91333b8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 23:40:17 np0005480824 podman[261042]: 2025-10-11 03:40:17.012123547 +0000 UTC m=+0.022167014 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:40:17 np0005480824 podman[261042]: 2025-10-11 03:40:17.111920301 +0000 UTC m=+0.121963828 container init 91a6909817154baede702b26a8e9870a81f40c52c75fafb7d0b63ed0d185b6f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hodgkin, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Oct 10 23:40:17 np0005480824 podman[261042]: 2025-10-11 03:40:17.117471152 +0000 UTC m=+0.127514609 container start 91a6909817154baede702b26a8e9870a81f40c52c75fafb7d0b63ed0d185b6f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 10 23:40:17 np0005480824 podman[261042]: 2025-10-11 03:40:17.120660628 +0000 UTC m=+0.130704095 container attach 91a6909817154baede702b26a8e9870a81f40c52c75fafb7d0b63ed0d185b6f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hodgkin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:40:17 np0005480824 nova_compute[260089]: 2025-10-11 03:40:17.463 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:40:17 np0005480824 nova_compute[260089]: 2025-10-11 03:40:17.742 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:40:18 np0005480824 crazy_hodgkin[261059]: --> passed data devices: 0 physical, 3 LVM
Oct 10 23:40:18 np0005480824 crazy_hodgkin[261059]: --> relative data size: 1.0
Oct 10 23:40:18 np0005480824 crazy_hodgkin[261059]: --> All data devices are unavailable
Oct 10 23:40:18 np0005480824 systemd[1]: libpod-91a6909817154baede702b26a8e9870a81f40c52c75fafb7d0b63ed0d185b6f8.scope: Deactivated successfully.
Oct 10 23:40:18 np0005480824 podman[261042]: 2025-10-11 03:40:18.232256085 +0000 UTC m=+1.242299542 container died 91a6909817154baede702b26a8e9870a81f40c52c75fafb7d0b63ed0d185b6f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hodgkin, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:40:18 np0005480824 systemd[1]: libpod-91a6909817154baede702b26a8e9870a81f40c52c75fafb7d0b63ed0d185b6f8.scope: Consumed 1.073s CPU time.
Oct 10 23:40:18 np0005480824 systemd[1]: var-lib-containers-storage-overlay-31d8ceccaae58920e1b7e87070b736e482c248dc4bc5f5a124db53b7f91333b8-merged.mount: Deactivated successfully.
Oct 10 23:40:18 np0005480824 podman[261042]: 2025-10-11 03:40:18.306460006 +0000 UTC m=+1.316503453 container remove 91a6909817154baede702b26a8e9870a81f40c52c75fafb7d0b63ed0d185b6f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:40:18 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v739: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:40:18 np0005480824 systemd[1]: libpod-conmon-91a6909817154baede702b26a8e9870a81f40c52c75fafb7d0b63ed0d185b6f8.scope: Deactivated successfully.
Oct 10 23:40:18 np0005480824 podman[261241]: 2025-10-11 03:40:18.981719808 +0000 UTC m=+0.042331940 container create 191e2a18d5885aa68901ae3374bcf30390df0405c718be8b5f397fe12a4056e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_bartik, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:40:19 np0005480824 systemd[1]: Started libpod-conmon-191e2a18d5885aa68901ae3374bcf30390df0405c718be8b5f397fe12a4056e1.scope.
Oct 10 23:40:19 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:40:19 np0005480824 podman[261241]: 2025-10-11 03:40:18.96701103 +0000 UTC m=+0.027623182 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:40:19 np0005480824 podman[261241]: 2025-10-11 03:40:19.063454816 +0000 UTC m=+0.124066978 container init 191e2a18d5885aa68901ae3374bcf30390df0405c718be8b5f397fe12a4056e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_bartik, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:40:19 np0005480824 podman[261241]: 2025-10-11 03:40:19.072732395 +0000 UTC m=+0.133344537 container start 191e2a18d5885aa68901ae3374bcf30390df0405c718be8b5f397fe12a4056e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_bartik, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 10 23:40:19 np0005480824 podman[261241]: 2025-10-11 03:40:19.076396981 +0000 UTC m=+0.137009133 container attach 191e2a18d5885aa68901ae3374bcf30390df0405c718be8b5f397fe12a4056e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_bartik, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:40:19 np0005480824 focused_bartik[261258]: 167 167
Oct 10 23:40:19 np0005480824 systemd[1]: libpod-191e2a18d5885aa68901ae3374bcf30390df0405c718be8b5f397fe12a4056e1.scope: Deactivated successfully.
Oct 10 23:40:19 np0005480824 podman[261241]: 2025-10-11 03:40:19.079065304 +0000 UTC m=+0.139677446 container died 191e2a18d5885aa68901ae3374bcf30390df0405c718be8b5f397fe12a4056e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_bartik, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:40:19 np0005480824 systemd[1]: var-lib-containers-storage-overlay-6a88e96946d58da383279681fa3ad41d1984a51b6fc0de924167eb91f55e5b14-merged.mount: Deactivated successfully.
Oct 10 23:40:19 np0005480824 podman[261241]: 2025-10-11 03:40:19.122817217 +0000 UTC m=+0.183429389 container remove 191e2a18d5885aa68901ae3374bcf30390df0405c718be8b5f397fe12a4056e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_bartik, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 10 23:40:19 np0005480824 systemd[1]: libpod-conmon-191e2a18d5885aa68901ae3374bcf30390df0405c718be8b5f397fe12a4056e1.scope: Deactivated successfully.
Oct 10 23:40:19 np0005480824 podman[261259]: 2025-10-11 03:40:19.161341315 +0000 UTC m=+0.107139238 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 10 23:40:19 np0005480824 podman[261302]: 2025-10-11 03:40:19.331514601 +0000 UTC m=+0.047810230 container create 1dffa0ef82bd5f3852a29e964fadd4ac719b20ba1efa0e90f47d888afc666016 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_stonebraker, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:40:19 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:40:19 np0005480824 systemd[1]: Started libpod-conmon-1dffa0ef82bd5f3852a29e964fadd4ac719b20ba1efa0e90f47d888afc666016.scope.
Oct 10 23:40:19 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:40:19 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/611c7d1b8dfe38e36e916fb091d3c5b9b5b26d3de062d42e02b2de900a9d3967/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:40:19 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/611c7d1b8dfe38e36e916fb091d3c5b9b5b26d3de062d42e02b2de900a9d3967/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:40:19 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/611c7d1b8dfe38e36e916fb091d3c5b9b5b26d3de062d42e02b2de900a9d3967/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:40:19 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/611c7d1b8dfe38e36e916fb091d3c5b9b5b26d3de062d42e02b2de900a9d3967/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:40:19 np0005480824 podman[261302]: 2025-10-11 03:40:19.311348215 +0000 UTC m=+0.027643894 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:40:19 np0005480824 podman[261302]: 2025-10-11 03:40:19.411273723 +0000 UTC m=+0.127569372 container init 1dffa0ef82bd5f3852a29e964fadd4ac719b20ba1efa0e90f47d888afc666016 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_stonebraker, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 10 23:40:19 np0005480824 podman[261302]: 2025-10-11 03:40:19.424612427 +0000 UTC m=+0.140908066 container start 1dffa0ef82bd5f3852a29e964fadd4ac719b20ba1efa0e90f47d888afc666016 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 10 23:40:19 np0005480824 podman[261302]: 2025-10-11 03:40:19.428144451 +0000 UTC m=+0.144440100 container attach 1dffa0ef82bd5f3852a29e964fadd4ac719b20ba1efa0e90f47d888afc666016 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]: {
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:    "0": [
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:        {
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:            "devices": [
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:                "/dev/loop3"
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:            ],
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:            "lv_name": "ceph_lv0",
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:            "lv_size": "21470642176",
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0d82ce-20ea-470d-959e-f67202028a60,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:            "lv_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:            "name": "ceph_lv0",
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:            "tags": {
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:                "ceph.block_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:                "ceph.cluster_name": "ceph",
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:                "ceph.crush_device_class": "",
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:                "ceph.encrypted": "0",
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:                "ceph.osd_fsid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:                "ceph.osd_id": "0",
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:                "ceph.type": "block",
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:                "ceph.vdo": "0"
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:            },
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:            "type": "block",
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:            "vg_name": "ceph_vg0"
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:        }
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:    ],
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:    "1": [
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:        {
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:            "devices": [
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:                "/dev/loop4"
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:            ],
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:            "lv_name": "ceph_lv1",
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:            "lv_size": "21470642176",
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6875119e-c210-4ad1-aca9-6a8084a5ecc8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:            "lv_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:            "name": "ceph_lv1",
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:            "tags": {
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:                "ceph.block_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:                "ceph.cluster_name": "ceph",
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:                "ceph.crush_device_class": "",
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:                "ceph.encrypted": "0",
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:                "ceph.osd_fsid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:                "ceph.osd_id": "1",
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:                "ceph.type": "block",
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:                "ceph.vdo": "0"
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:            },
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:            "type": "block",
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:            "vg_name": "ceph_vg1"
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:        }
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:    ],
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:    "2": [
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:        {
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:            "devices": [
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:                "/dev/loop5"
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:            ],
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:            "lv_name": "ceph_lv2",
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:            "lv_size": "21470642176",
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e86945e8-6909-4584-9098-cee0dfe9add4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:            "lv_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:            "name": "ceph_lv2",
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:            "tags": {
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:                "ceph.block_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:                "ceph.cluster_name": "ceph",
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:                "ceph.crush_device_class": "",
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:                "ceph.encrypted": "0",
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:                "ceph.osd_fsid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:                "ceph.osd_id": "2",
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:                "ceph.type": "block",
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:                "ceph.vdo": "0"
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:            },
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:            "type": "block",
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:            "vg_name": "ceph_vg2"
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:        }
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]:    ]
Oct 10 23:40:20 np0005480824 lucid_stonebraker[261318]: }
Oct 10 23:40:20 np0005480824 systemd[1]: libpod-1dffa0ef82bd5f3852a29e964fadd4ac719b20ba1efa0e90f47d888afc666016.scope: Deactivated successfully.
Oct 10 23:40:20 np0005480824 podman[261302]: 2025-10-11 03:40:20.229219571 +0000 UTC m=+0.945515240 container died 1dffa0ef82bd5f3852a29e964fadd4ac719b20ba1efa0e90f47d888afc666016 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_stonebraker, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 10 23:40:20 np0005480824 systemd[1]: var-lib-containers-storage-overlay-611c7d1b8dfe38e36e916fb091d3c5b9b5b26d3de062d42e02b2de900a9d3967-merged.mount: Deactivated successfully.
Oct 10 23:40:20 np0005480824 podman[261302]: 2025-10-11 03:40:20.3084561 +0000 UTC m=+1.024751749 container remove 1dffa0ef82bd5f3852a29e964fadd4ac719b20ba1efa0e90f47d888afc666016 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_stonebraker, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:40:20 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v740: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:40:20 np0005480824 systemd[1]: libpod-conmon-1dffa0ef82bd5f3852a29e964fadd4ac719b20ba1efa0e90f47d888afc666016.scope: Deactivated successfully.
Oct 10 23:40:20 np0005480824 podman[261481]: 2025-10-11 03:40:20.965226906 +0000 UTC m=+0.049446227 container create 936a9fce344a12b8b202aa4c9d5d7a4168286e878f099a178474a27b8e644604 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 10 23:40:21 np0005480824 systemd[1]: Started libpod-conmon-936a9fce344a12b8b202aa4c9d5d7a4168286e878f099a178474a27b8e644604.scope.
Oct 10 23:40:21 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:40:21 np0005480824 podman[261481]: 2025-10-11 03:40:20.938633098 +0000 UTC m=+0.022852459 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:40:21 np0005480824 podman[261481]: 2025-10-11 03:40:21.045796147 +0000 UTC m=+0.130015468 container init 936a9fce344a12b8b202aa4c9d5d7a4168286e878f099a178474a27b8e644604 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_chatterjee, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 10 23:40:21 np0005480824 podman[261481]: 2025-10-11 03:40:21.053058468 +0000 UTC m=+0.137277749 container start 936a9fce344a12b8b202aa4c9d5d7a4168286e878f099a178474a27b8e644604 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_chatterjee, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:40:21 np0005480824 podman[261481]: 2025-10-11 03:40:21.056538891 +0000 UTC m=+0.140758182 container attach 936a9fce344a12b8b202aa4c9d5d7a4168286e878f099a178474a27b8e644604 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 10 23:40:21 np0005480824 vigilant_chatterjee[261497]: 167 167
Oct 10 23:40:21 np0005480824 systemd[1]: libpod-936a9fce344a12b8b202aa4c9d5d7a4168286e878f099a178474a27b8e644604.scope: Deactivated successfully.
Oct 10 23:40:21 np0005480824 podman[261481]: 2025-10-11 03:40:21.06073729 +0000 UTC m=+0.144956671 container died 936a9fce344a12b8b202aa4c9d5d7a4168286e878f099a178474a27b8e644604 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 10 23:40:21 np0005480824 systemd[1]: var-lib-containers-storage-overlay-2d65f850563096625c7caa71ad6486e3764788e08f9094e1643a62e66214efaa-merged.mount: Deactivated successfully.
Oct 10 23:40:21 np0005480824 podman[261481]: 2025-10-11 03:40:21.101157144 +0000 UTC m=+0.185376435 container remove 936a9fce344a12b8b202aa4c9d5d7a4168286e878f099a178474a27b8e644604 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_chatterjee, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:40:21 np0005480824 systemd[1]: libpod-conmon-936a9fce344a12b8b202aa4c9d5d7a4168286e878f099a178474a27b8e644604.scope: Deactivated successfully.
Oct 10 23:40:21 np0005480824 podman[261522]: 2025-10-11 03:40:21.324506243 +0000 UTC m=+0.064731688 container create c35637ec2ff68fce682c39cec8af460651df68689030669b0e524adba37a1bc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_elbakyan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 10 23:40:21 np0005480824 systemd[1]: Started libpod-conmon-c35637ec2ff68fce682c39cec8af460651df68689030669b0e524adba37a1bc6.scope.
Oct 10 23:40:21 np0005480824 podman[261522]: 2025-10-11 03:40:21.296315628 +0000 UTC m=+0.036541143 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:40:21 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:40:21 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91bed987200941081405739a21e99f9187bea83ecca04f2757e6619a34b6cd79/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:40:21 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91bed987200941081405739a21e99f9187bea83ecca04f2757e6619a34b6cd79/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:40:21 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91bed987200941081405739a21e99f9187bea83ecca04f2757e6619a34b6cd79/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:40:21 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91bed987200941081405739a21e99f9187bea83ecca04f2757e6619a34b6cd79/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:40:21 np0005480824 podman[261522]: 2025-10-11 03:40:21.442940708 +0000 UTC m=+0.183166193 container init c35637ec2ff68fce682c39cec8af460651df68689030669b0e524adba37a1bc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 10 23:40:21 np0005480824 podman[261522]: 2025-10-11 03:40:21.45828814 +0000 UTC m=+0.198513585 container start c35637ec2ff68fce682c39cec8af460651df68689030669b0e524adba37a1bc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_elbakyan, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 10 23:40:21 np0005480824 podman[261522]: 2025-10-11 03:40:21.462478879 +0000 UTC m=+0.202704384 container attach c35637ec2ff68fce682c39cec8af460651df68689030669b0e524adba37a1bc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:40:22 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v741: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:40:22 np0005480824 boring_elbakyan[261538]: {
Oct 10 23:40:22 np0005480824 boring_elbakyan[261538]:    "1d0d82ce-20ea-470d-959e-f67202028a60": {
Oct 10 23:40:22 np0005480824 boring_elbakyan[261538]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:40:22 np0005480824 boring_elbakyan[261538]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 10 23:40:22 np0005480824 boring_elbakyan[261538]:        "osd_id": 0,
Oct 10 23:40:22 np0005480824 boring_elbakyan[261538]:        "osd_uuid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:40:22 np0005480824 boring_elbakyan[261538]:        "type": "bluestore"
Oct 10 23:40:22 np0005480824 boring_elbakyan[261538]:    },
Oct 10 23:40:22 np0005480824 boring_elbakyan[261538]:    "6875119e-c210-4ad1-aca9-6a8084a5ecc8": {
Oct 10 23:40:22 np0005480824 boring_elbakyan[261538]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:40:22 np0005480824 boring_elbakyan[261538]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 10 23:40:22 np0005480824 boring_elbakyan[261538]:        "osd_id": 1,
Oct 10 23:40:22 np0005480824 boring_elbakyan[261538]:        "osd_uuid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:40:22 np0005480824 boring_elbakyan[261538]:        "type": "bluestore"
Oct 10 23:40:22 np0005480824 boring_elbakyan[261538]:    },
Oct 10 23:40:22 np0005480824 boring_elbakyan[261538]:    "e86945e8-6909-4584-9098-cee0dfe9add4": {
Oct 10 23:40:22 np0005480824 boring_elbakyan[261538]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:40:22 np0005480824 boring_elbakyan[261538]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 10 23:40:22 np0005480824 boring_elbakyan[261538]:        "osd_id": 2,
Oct 10 23:40:22 np0005480824 boring_elbakyan[261538]:        "osd_uuid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:40:22 np0005480824 boring_elbakyan[261538]:        "type": "bluestore"
Oct 10 23:40:22 np0005480824 boring_elbakyan[261538]:    }
Oct 10 23:40:22 np0005480824 boring_elbakyan[261538]: }
Oct 10 23:40:22 np0005480824 systemd[1]: libpod-c35637ec2ff68fce682c39cec8af460651df68689030669b0e524adba37a1bc6.scope: Deactivated successfully.
Oct 10 23:40:22 np0005480824 podman[261522]: 2025-10-11 03:40:22.501900122 +0000 UTC m=+1.242125577 container died c35637ec2ff68fce682c39cec8af460651df68689030669b0e524adba37a1bc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_elbakyan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:40:22 np0005480824 systemd[1]: libpod-c35637ec2ff68fce682c39cec8af460651df68689030669b0e524adba37a1bc6.scope: Consumed 1.052s CPU time.
Oct 10 23:40:22 np0005480824 systemd[1]: var-lib-containers-storage-overlay-91bed987200941081405739a21e99f9187bea83ecca04f2757e6619a34b6cd79-merged.mount: Deactivated successfully.
Oct 10 23:40:22 np0005480824 podman[261522]: 2025-10-11 03:40:22.555118338 +0000 UTC m=+1.295343743 container remove c35637ec2ff68fce682c39cec8af460651df68689030669b0e524adba37a1bc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_elbakyan, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 10 23:40:22 np0005480824 systemd[1]: libpod-conmon-c35637ec2ff68fce682c39cec8af460651df68689030669b0e524adba37a1bc6.scope: Deactivated successfully.
Oct 10 23:40:22 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 10 23:40:22 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:40:22 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 10 23:40:22 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:40:22 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 348c89de-5cb4-43c5-a04a-c024bb51c6f6 does not exist
Oct 10 23:40:22 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev cdfe0f9c-58b0-4531-9450-780476bfa7b4 does not exist
Oct 10 23:40:23 np0005480824 nova_compute[260089]: 2025-10-11 03:40:23.298 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:40:23 np0005480824 nova_compute[260089]: 2025-10-11 03:40:23.299 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:40:23 np0005480824 nova_compute[260089]: 2025-10-11 03:40:23.299 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 10 23:40:23 np0005480824 nova_compute[260089]: 2025-10-11 03:40:23.300 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 10 23:40:23 np0005480824 nova_compute[260089]: 2025-10-11 03:40:23.313 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 10 23:40:23 np0005480824 nova_compute[260089]: 2025-10-11 03:40:23.313 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:40:23 np0005480824 nova_compute[260089]: 2025-10-11 03:40:23.313 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:40:23 np0005480824 nova_compute[260089]: 2025-10-11 03:40:23.313 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:40:23 np0005480824 nova_compute[260089]: 2025-10-11 03:40:23.313 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:40:23 np0005480824 nova_compute[260089]: 2025-10-11 03:40:23.314 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:40:23 np0005480824 nova_compute[260089]: 2025-10-11 03:40:23.314 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:40:23 np0005480824 nova_compute[260089]: 2025-10-11 03:40:23.314 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 10 23:40:23 np0005480824 nova_compute[260089]: 2025-10-11 03:40:23.314 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:40:23 np0005480824 nova_compute[260089]: 2025-10-11 03:40:23.341 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:40:23 np0005480824 nova_compute[260089]: 2025-10-11 03:40:23.342 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:40:23 np0005480824 nova_compute[260089]: 2025-10-11 03:40:23.342 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:40:23 np0005480824 nova_compute[260089]: 2025-10-11 03:40:23.342 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 10 23:40:23 np0005480824 nova_compute[260089]: 2025-10-11 03:40:23.342 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:40:23 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:40:23 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:40:23 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:40:23 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2340728333' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:40:23 np0005480824 nova_compute[260089]: 2025-10-11 03:40:23.750 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:40:23 np0005480824 nova_compute[260089]: 2025-10-11 03:40:23.921 2 WARNING nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 10 23:40:23 np0005480824 nova_compute[260089]: 2025-10-11 03:40:23.923 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5113MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 10 23:40:23 np0005480824 nova_compute[260089]: 2025-10-11 03:40:23.923 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:40:23 np0005480824 nova_compute[260089]: 2025-10-11 03:40:23.923 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:40:24 np0005480824 nova_compute[260089]: 2025-10-11 03:40:24.090 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 10 23:40:24 np0005480824 nova_compute[260089]: 2025-10-11 03:40:24.091 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 10 23:40:24 np0005480824 nova_compute[260089]: 2025-10-11 03:40:24.124 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:40:24 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v742: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:40:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:40:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:40:24 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1390902158' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:40:24 np0005480824 nova_compute[260089]: 2025-10-11 03:40:24.596 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:40:24 np0005480824 nova_compute[260089]: 2025-10-11 03:40:24.604 2 DEBUG nova.compute.provider_tree [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 23:40:24 np0005480824 nova_compute[260089]: 2025-10-11 03:40:24.645 2 DEBUG nova.scheduler.client.report [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 23:40:24 np0005480824 nova_compute[260089]: 2025-10-11 03:40:24.649 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 10 23:40:24 np0005480824 nova_compute[260089]: 2025-10-11 03:40:24.650 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.727s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:40:26 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v743: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:40:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Optimize plan auto_2025-10-11_03:40:27
Oct 10 23:40:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 23:40:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] do_upmap
Oct 10 23:40:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data', 'backups', 'default.rgw.meta', 'default.rgw.control', 'images', 'cephfs.cephfs.meta', 'vms']
Oct 10 23:40:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:40:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:40:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] prepared 0/10 changes
Oct 10 23:40:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:40:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:40:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:40:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:40:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 23:40:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:40:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 23:40:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:40:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:40:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:40:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:40:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:40:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:40:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:40:28 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v744: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:40:29 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:40:29 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Oct 10 23:40:29 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/473208361' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct 10 23:40:29 np0005480824 ceph-mgr[74617]: log_channel(audit) log [DBG] : from='client.14347 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct 10 23:40:29 np0005480824 ceph-mgr[74617]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct 10 23:40:29 np0005480824 ceph-mgr[74617]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct 10 23:40:29 np0005480824 ceph-osd[88325]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 10 23:40:29 np0005480824 ceph-osd[88325]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 5636 writes, 23K keys, 5636 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 5636 writes, 884 syncs, 6.38 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s#012Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.033       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.033       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.033       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55b254ea91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55b254ea91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 
0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Oct 10 23:40:30 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v745: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:40:32 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v746: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:40:34 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v747: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:40:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:40:35 np0005480824 podman[261677]: 2025-10-11 03:40:35.048941916 +0000 UTC m=+0.094423938 container health_status 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 10 23:40:35 np0005480824 podman[261678]: 2025-10-11 03:40:35.080408119 +0000 UTC m=+0.119411049 container health_status f2b19cad22d0fbdd185b264c3c8b6443d09a02319dc9fb0585dc69a0f24a758d (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:40:35 np0005480824 ceph-osd[89401]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 10 23:40:35 np0005480824 ceph-osd[89401]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 6905 writes, 28K keys, 6905 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 6905 writes, 1221 syncs, 5.66 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 180 writes, 271 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s#012Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55dbdc4a91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55dbdc4a91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 3 last_copies: 8 last_secs: 5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 
seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowd
Oct 10 23:40:36 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v748: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:40:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 23:40:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:40:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 23:40:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:40:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:40:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:40:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:40:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:40:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:40:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:40:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:40:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:40:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 23:40:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:40:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:40:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:40:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 10 23:40:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:40:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 23:40:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:40:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:40:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:40:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 10 23:40:38 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v749: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:40:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:40:40 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v750: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:40:40 np0005480824 ceph-osd[90443]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 10 23:40:40 np0005480824 ceph-osd[90443]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 5659 writes, 24K keys, 5659 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 5659 writes, 869 syncs, 6.51 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s#012Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5607b0be0dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5607b0be0dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_sl
Oct 10 23:40:42 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v751: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:40:42 np0005480824 ceph-mgr[74617]: [devicehealth INFO root] Check health
Oct 10 23:40:44 np0005480824 podman[261716]: 2025-10-11 03:40:44.086958088 +0000 UTC m=+0.138604572 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Oct 10 23:40:44 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v752: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:40:44 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:40:46 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v753: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:40:47 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Oct 10 23:40:47 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/840276444' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct 10 23:40:47 np0005480824 ceph-mgr[74617]: log_channel(audit) log [DBG] : from='client.14349 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct 10 23:40:47 np0005480824 ceph-mgr[74617]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct 10 23:40:47 np0005480824 ceph-mgr[74617]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct 10 23:40:48 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v754: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:40:49 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:40:49 np0005480824 podman[261742]: 2025-10-11 03:40:49.984271818 +0000 UTC m=+0.047133543 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 10 23:40:50 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v755: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:40:52 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v756: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:40:54 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v757: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:40:54 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:40:56 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v758: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:40:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:40:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:40:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:40:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:40:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:40:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:40:58 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v759: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:40:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:41:00 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v760: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:41:02 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v761: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:41:04 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v762: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:41:04 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:41:06 np0005480824 podman[261763]: 2025-10-11 03:41:06.017530122 +0000 UTC m=+0.065678650 container health_status f2b19cad22d0fbdd185b264c3c8b6443d09a02319dc9fb0585dc69a0f24a758d (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 10 23:41:06 np0005480824 podman[261762]: 2025-10-11 03:41:06.031099342 +0000 UTC m=+0.086510832 container health_status 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 10 23:41:06 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v763: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:41:08 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v764: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:41:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:41:10 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v765: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:41:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:41:10.479 162245 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:41:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:41:10.480 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:41:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:41:10.480 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:41:10 np0005480824 ceph-mon[74326]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Oct 10 23:41:10 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:41:10.838842) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 23:41:10 np0005480824 ceph-mon[74326]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Oct 10 23:41:10 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154070838972, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1337, "num_deletes": 251, "total_data_size": 2095583, "memory_usage": 2130064, "flush_reason": "Manual Compaction"}
Oct 10 23:41:10 np0005480824 ceph-mon[74326]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Oct 10 23:41:10 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154070857392, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 2054624, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15004, "largest_seqno": 16340, "table_properties": {"data_size": 2048282, "index_size": 3545, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13096, "raw_average_key_size": 19, "raw_value_size": 2035629, "raw_average_value_size": 3056, "num_data_blocks": 163, "num_entries": 666, "num_filter_entries": 666, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760153935, "oldest_key_time": 1760153935, "file_creation_time": 1760154070, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc2c00b6-74ab-4bd1-957a-6c6a75fb61ca", "db_session_id": "RJ9TM4FJNNQ2AWDFT4YB", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct 10 23:41:10 np0005480824 ceph-mon[74326]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 18645 microseconds, and 9763 cpu microseconds.
Oct 10 23:41:10 np0005480824 ceph-mon[74326]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 23:41:10 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:41:10.857508) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 2054624 bytes OK
Oct 10 23:41:10 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:41:10.857538) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Oct 10 23:41:10 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:41:10.859890) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Oct 10 23:41:10 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:41:10.859916) EVENT_LOG_v1 {"time_micros": 1760154070859908, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 23:41:10 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:41:10.859941) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 23:41:10 np0005480824 ceph-mon[74326]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 2089597, prev total WAL file size 2089597, number of live WAL files 2.
Oct 10 23:41:10 np0005480824 ceph-mon[74326]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 23:41:10 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:41:10.861279) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Oct 10 23:41:10 np0005480824 ceph-mon[74326]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 23:41:10 np0005480824 ceph-mon[74326]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(2006KB)], [35(7315KB)]
Oct 10 23:41:10 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154070861393, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 9545908, "oldest_snapshot_seqno": -1}
Oct 10 23:41:10 np0005480824 ceph-mon[74326]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4004 keys, 7758778 bytes, temperature: kUnknown
Oct 10 23:41:10 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154070913376, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 7758778, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7729598, "index_size": 18062, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10053, "raw_key_size": 97933, "raw_average_key_size": 24, "raw_value_size": 7654669, "raw_average_value_size": 1911, "num_data_blocks": 766, "num_entries": 4004, "num_filter_entries": 4004, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760152715, "oldest_key_time": 0, "file_creation_time": 1760154070, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc2c00b6-74ab-4bd1-957a-6c6a75fb61ca", "db_session_id": "RJ9TM4FJNNQ2AWDFT4YB", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Oct 10 23:41:10 np0005480824 ceph-mon[74326]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 23:41:10 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:41:10.913824) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 7758778 bytes
Oct 10 23:41:10 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:41:10.916160) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 183.2 rd, 148.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 7.1 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(8.4) write-amplify(3.8) OK, records in: 4518, records dropped: 514 output_compression: NoCompression
Oct 10 23:41:10 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:41:10.916190) EVENT_LOG_v1 {"time_micros": 1760154070916176, "job": 16, "event": "compaction_finished", "compaction_time_micros": 52102, "compaction_time_cpu_micros": 21315, "output_level": 6, "num_output_files": 1, "total_output_size": 7758778, "num_input_records": 4518, "num_output_records": 4004, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 23:41:10 np0005480824 ceph-mon[74326]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 23:41:10 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154070917071, "job": 16, "event": "table_file_deletion", "file_number": 37}
Oct 10 23:41:10 np0005480824 ceph-mon[74326]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 23:41:10 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154070919743, "job": 16, "event": "table_file_deletion", "file_number": 35}
Oct 10 23:41:10 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:41:10.861090) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:41:10 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:41:10.919843) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:41:10 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:41:10.919850) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:41:10 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:41:10.919854) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:41:10 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:41:10.919858) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:41:10 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:41:10.919862) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:41:12 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v766: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:41:14 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v767: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:41:14 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:41:15 np0005480824 podman[261802]: 2025-10-11 03:41:15.041992625 +0000 UTC m=+0.096893097 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 10 23:41:16 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v768: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:41:18 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v769: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:41:19 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:41:20 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v770: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:41:21 np0005480824 podman[261828]: 2025-10-11 03:41:21.020112642 +0000 UTC m=+0.074559013 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team)
Oct 10 23:41:22 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v771: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:41:23 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 10 23:41:23 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 10 23:41:23 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:41:23 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:41:23 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 10 23:41:23 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:41:23 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 10 23:41:23 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:41:23 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 1f307c33-4c4a-45d4-81b2-26f5449519a7 does not exist
Oct 10 23:41:23 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 88cfb591-b6d0-40d4-8159-6cbc82875bb7 does not exist
Oct 10 23:41:23 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev e3463148-8219-4923-bdd3-816975e55a24 does not exist
Oct 10 23:41:23 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 10 23:41:23 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 23:41:23 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 10 23:41:23 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:41:23 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:41:23 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:41:23 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 10 23:41:23 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:41:23 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:41:23 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:41:24 np0005480824 podman[262118]: 2025-10-11 03:41:24.246999727 +0000 UTC m=+0.047272327 container create 09603fa6c00706f129a5516a777e89f1611007ef2b72c5aee39bee0a94786876 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 10 23:41:24 np0005480824 systemd[1]: Started libpod-conmon-09603fa6c00706f129a5516a777e89f1611007ef2b72c5aee39bee0a94786876.scope.
Oct 10 23:41:24 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:41:24 np0005480824 podman[262118]: 2025-10-11 03:41:24.222859707 +0000 UTC m=+0.023132307 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:41:24 np0005480824 podman[262118]: 2025-10-11 03:41:24.328628014 +0000 UTC m=+0.128900594 container init 09603fa6c00706f129a5516a777e89f1611007ef2b72c5aee39bee0a94786876 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_elgamal, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:41:24 np0005480824 podman[262118]: 2025-10-11 03:41:24.339832099 +0000 UTC m=+0.140104689 container start 09603fa6c00706f129a5516a777e89f1611007ef2b72c5aee39bee0a94786876 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_elgamal, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 10 23:41:24 np0005480824 podman[262118]: 2025-10-11 03:41:24.343106847 +0000 UTC m=+0.143379427 container attach 09603fa6c00706f129a5516a777e89f1611007ef2b72c5aee39bee0a94786876 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_elgamal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:41:24 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v772: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:41:24 np0005480824 awesome_elgamal[262134]: 167 167
Oct 10 23:41:24 np0005480824 systemd[1]: libpod-09603fa6c00706f129a5516a777e89f1611007ef2b72c5aee39bee0a94786876.scope: Deactivated successfully.
Oct 10 23:41:24 np0005480824 conmon[262134]: conmon 09603fa6c00706f129a5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-09603fa6c00706f129a5516a777e89f1611007ef2b72c5aee39bee0a94786876.scope/container/memory.events
Oct 10 23:41:24 np0005480824 podman[262118]: 2025-10-11 03:41:24.346906747 +0000 UTC m=+0.147179337 container died 09603fa6c00706f129a5516a777e89f1611007ef2b72c5aee39bee0a94786876 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_elgamal, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 10 23:41:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:41:24 np0005480824 systemd[1]: var-lib-containers-storage-overlay-6c2b8d3829aa9dd0e46a99fc2dbd729fb7377cab05e9d0d4cdb9f8e9bf7255e0-merged.mount: Deactivated successfully.
Oct 10 23:41:24 np0005480824 podman[262118]: 2025-10-11 03:41:24.386829429 +0000 UTC m=+0.187101999 container remove 09603fa6c00706f129a5516a777e89f1611007ef2b72c5aee39bee0a94786876 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_elgamal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 10 23:41:24 np0005480824 systemd[1]: libpod-conmon-09603fa6c00706f129a5516a777e89f1611007ef2b72c5aee39bee0a94786876.scope: Deactivated successfully.
Oct 10 23:41:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:41:24 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/806750222' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:41:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:41:24 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/806750222' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:41:24 np0005480824 podman[262158]: 2025-10-11 03:41:24.576476118 +0000 UTC m=+0.044236706 container create 2d800e302673b61fa44593d7cc130d37c17f9dc4dc209fe3f917fef0c5f21bb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bardeen, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Oct 10 23:41:24 np0005480824 systemd[1]: Started libpod-conmon-2d800e302673b61fa44593d7cc130d37c17f9dc4dc209fe3f917fef0c5f21bb1.scope.
Oct 10 23:41:24 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:41:24 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/295e6e1843a0cb8363fb9901f353abccd2dd29fa6db55d9bf638cb4a8e3251a7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:41:24 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/295e6e1843a0cb8363fb9901f353abccd2dd29fa6db55d9bf638cb4a8e3251a7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:41:24 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/295e6e1843a0cb8363fb9901f353abccd2dd29fa6db55d9bf638cb4a8e3251a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:41:24 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/295e6e1843a0cb8363fb9901f353abccd2dd29fa6db55d9bf638cb4a8e3251a7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:41:24 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/295e6e1843a0cb8363fb9901f353abccd2dd29fa6db55d9bf638cb4a8e3251a7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 23:41:24 np0005480824 nova_compute[260089]: 2025-10-11 03:41:24.643 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:41:24 np0005480824 nova_compute[260089]: 2025-10-11 03:41:24.645 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:41:24 np0005480824 podman[262158]: 2025-10-11 03:41:24.559876646 +0000 UTC m=+0.027637244 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:41:24 np0005480824 podman[262158]: 2025-10-11 03:41:24.661089665 +0000 UTC m=+0.128850313 container init 2d800e302673b61fa44593d7cc130d37c17f9dc4dc209fe3f917fef0c5f21bb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bardeen, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:41:24 np0005480824 nova_compute[260089]: 2025-10-11 03:41:24.666 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:41:24 np0005480824 nova_compute[260089]: 2025-10-11 03:41:24.667 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:41:24 np0005480824 nova_compute[260089]: 2025-10-11 03:41:24.667 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:41:24 np0005480824 nova_compute[260089]: 2025-10-11 03:41:24.667 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:41:24 np0005480824 nova_compute[260089]: 2025-10-11 03:41:24.668 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 10 23:41:24 np0005480824 nova_compute[260089]: 2025-10-11 03:41:24.668 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:41:24 np0005480824 podman[262158]: 2025-10-11 03:41:24.675176678 +0000 UTC m=+0.142937256 container start 2d800e302673b61fa44593d7cc130d37c17f9dc4dc209fe3f917fef0c5f21bb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 10 23:41:24 np0005480824 podman[262158]: 2025-10-11 03:41:24.678789924 +0000 UTC m=+0.146550582 container attach 2d800e302673b61fa44593d7cc130d37c17f9dc4dc209fe3f917fef0c5f21bb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bardeen, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:41:24 np0005480824 nova_compute[260089]: 2025-10-11 03:41:24.690 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:41:24 np0005480824 nova_compute[260089]: 2025-10-11 03:41:24.691 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:41:24 np0005480824 nova_compute[260089]: 2025-10-11 03:41:24.691 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:41:24 np0005480824 nova_compute[260089]: 2025-10-11 03:41:24.691 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 10 23:41:24 np0005480824 nova_compute[260089]: 2025-10-11 03:41:24.691 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:41:25 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:41:25 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4151321302' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:41:25 np0005480824 nova_compute[260089]: 2025-10-11 03:41:25.145 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:41:25 np0005480824 nova_compute[260089]: 2025-10-11 03:41:25.279 2 WARNING nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 10 23:41:25 np0005480824 nova_compute[260089]: 2025-10-11 03:41:25.280 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5122MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 10 23:41:25 np0005480824 nova_compute[260089]: 2025-10-11 03:41:25.280 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 23:41:25 np0005480824 nova_compute[260089]: 2025-10-11 03:41:25.280 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 23:41:25 np0005480824 nova_compute[260089]: 2025-10-11 03:41:25.356 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 10 23:41:25 np0005480824 nova_compute[260089]: 2025-10-11 03:41:25.357 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 10 23:41:25 np0005480824 nova_compute[260089]: 2025-10-11 03:41:25.387 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 23:41:25 np0005480824 laughing_bardeen[262172]: --> passed data devices: 0 physical, 3 LVM
Oct 10 23:41:25 np0005480824 laughing_bardeen[262172]: --> relative data size: 1.0
Oct 10 23:41:25 np0005480824 laughing_bardeen[262172]: --> All data devices are unavailable
Oct 10 23:41:25 np0005480824 systemd[1]: libpod-2d800e302673b61fa44593d7cc130d37c17f9dc4dc209fe3f917fef0c5f21bb1.scope: Deactivated successfully.
Oct 10 23:41:25 np0005480824 systemd[1]: libpod-2d800e302673b61fa44593d7cc130d37c17f9dc4dc209fe3f917fef0c5f21bb1.scope: Consumed 1.023s CPU time.
Oct 10 23:41:25 np0005480824 podman[262243]: 2025-10-11 03:41:25.797377606 +0000 UTC m=+0.022051771 container died 2d800e302673b61fa44593d7cc130d37c17f9dc4dc209fe3f917fef0c5f21bb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bardeen, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 10 23:41:25 np0005480824 systemd[1]: var-lib-containers-storage-overlay-295e6e1843a0cb8363fb9901f353abccd2dd29fa6db55d9bf638cb4a8e3251a7-merged.mount: Deactivated successfully.
Oct 10 23:41:25 np0005480824 podman[262243]: 2025-10-11 03:41:25.846278691 +0000 UTC m=+0.070952846 container remove 2d800e302673b61fa44593d7cc130d37c17f9dc4dc209fe3f917fef0c5f21bb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bardeen, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Oct 10 23:41:25 np0005480824 systemd[1]: libpod-conmon-2d800e302673b61fa44593d7cc130d37c17f9dc4dc209fe3f917fef0c5f21bb1.scope: Deactivated successfully.
Oct 10 23:41:25 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:41:25 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1742566996' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:41:25 np0005480824 nova_compute[260089]: 2025-10-11 03:41:25.895 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 23:41:25 np0005480824 nova_compute[260089]: 2025-10-11 03:41:25.902 2 DEBUG nova.compute.provider_tree [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 23:41:25 np0005480824 nova_compute[260089]: 2025-10-11 03:41:25.917 2 DEBUG nova.scheduler.client.report [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 23:41:25 np0005480824 nova_compute[260089]: 2025-10-11 03:41:25.918 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 10 23:41:25 np0005480824 nova_compute[260089]: 2025-10-11 03:41:25.919 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 23:41:26 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v773: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:41:26 np0005480824 podman[262398]: 2025-10-11 03:41:26.417054419 +0000 UTC m=+0.045714321 container create 38d22ba2d1c4d3119b31757ab2586972a5c9cdc633c1ceb406b9b8796398e310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_blackburn, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 10 23:41:26 np0005480824 systemd[1]: Started libpod-conmon-38d22ba2d1c4d3119b31757ab2586972a5c9cdc633c1ceb406b9b8796398e310.scope.
Oct 10 23:41:26 np0005480824 podman[262398]: 2025-10-11 03:41:26.394285952 +0000 UTC m=+0.022945854 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:41:26 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:41:26 np0005480824 podman[262398]: 2025-10-11 03:41:26.520923232 +0000 UTC m=+0.149583154 container init 38d22ba2d1c4d3119b31757ab2586972a5c9cdc633c1ceb406b9b8796398e310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 10 23:41:26 np0005480824 podman[262398]: 2025-10-11 03:41:26.527565259 +0000 UTC m=+0.156225151 container start 38d22ba2d1c4d3119b31757ab2586972a5c9cdc633c1ceb406b9b8796398e310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:41:26 np0005480824 podman[262398]: 2025-10-11 03:41:26.531486621 +0000 UTC m=+0.160146503 container attach 38d22ba2d1c4d3119b31757ab2586972a5c9cdc633c1ceb406b9b8796398e310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 10 23:41:26 np0005480824 sleepy_blackburn[262414]: 167 167
Oct 10 23:41:26 np0005480824 systemd[1]: libpod-38d22ba2d1c4d3119b31757ab2586972a5c9cdc633c1ceb406b9b8796398e310.scope: Deactivated successfully.
Oct 10 23:41:26 np0005480824 podman[262398]: 2025-10-11 03:41:26.538363384 +0000 UTC m=+0.167023236 container died 38d22ba2d1c4d3119b31757ab2586972a5c9cdc633c1ceb406b9b8796398e310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_blackburn, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:41:26 np0005480824 nova_compute[260089]: 2025-10-11 03:41:26.548 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 23:41:26 np0005480824 nova_compute[260089]: 2025-10-11 03:41:26.549 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 10 23:41:26 np0005480824 nova_compute[260089]: 2025-10-11 03:41:26.549 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 10 23:41:26 np0005480824 nova_compute[260089]: 2025-10-11 03:41:26.562 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 10 23:41:26 np0005480824 nova_compute[260089]: 2025-10-11 03:41:26.562 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 23:41:26 np0005480824 nova_compute[260089]: 2025-10-11 03:41:26.562 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 23:41:26 np0005480824 systemd[1]: var-lib-containers-storage-overlay-3f0671f683b8904efc10ead9879b66e6e7ee7d4eba7f1537211552aa30bb73c9-merged.mount: Deactivated successfully.
Oct 10 23:41:26 np0005480824 podman[262398]: 2025-10-11 03:41:26.587107615 +0000 UTC m=+0.215767467 container remove 38d22ba2d1c4d3119b31757ab2586972a5c9cdc633c1ceb406b9b8796398e310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:41:26 np0005480824 systemd[1]: libpod-conmon-38d22ba2d1c4d3119b31757ab2586972a5c9cdc633c1ceb406b9b8796398e310.scope: Deactivated successfully.
Oct 10 23:41:26 np0005480824 podman[262440]: 2025-10-11 03:41:26.800675217 +0000 UTC m=+0.066330386 container create 451b7d2fd01dc0efd3cac208aa623db2dc95bac996eb5dddb0099aa1cec94d85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:41:26 np0005480824 systemd[1]: Started libpod-conmon-451b7d2fd01dc0efd3cac208aa623db2dc95bac996eb5dddb0099aa1cec94d85.scope.
Oct 10 23:41:26 np0005480824 podman[262440]: 2025-10-11 03:41:26.782722573 +0000 UTC m=+0.048377742 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:41:26 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:41:26 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae6d0888a9dab59aa8351003bc40dcd2db73bae4888bca305ee505d1b5b70874/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:41:26 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae6d0888a9dab59aa8351003bc40dcd2db73bae4888bca305ee505d1b5b70874/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:41:26 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae6d0888a9dab59aa8351003bc40dcd2db73bae4888bca305ee505d1b5b70874/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:41:26 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae6d0888a9dab59aa8351003bc40dcd2db73bae4888bca305ee505d1b5b70874/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:41:26 np0005480824 podman[262440]: 2025-10-11 03:41:26.895561118 +0000 UTC m=+0.161216277 container init 451b7d2fd01dc0efd3cac208aa623db2dc95bac996eb5dddb0099aa1cec94d85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_snyder, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 10 23:41:26 np0005480824 podman[262440]: 2025-10-11 03:41:26.901967449 +0000 UTC m=+0.167622588 container start 451b7d2fd01dc0efd3cac208aa623db2dc95bac996eb5dddb0099aa1cec94d85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 10 23:41:26 np0005480824 podman[262440]: 2025-10-11 03:41:26.90491803 +0000 UTC m=+0.170573199 container attach 451b7d2fd01dc0efd3cac208aa623db2dc95bac996eb5dddb0099aa1cec94d85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True)
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]: {
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:    "0": [
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:        {
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:            "devices": [
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:                "/dev/loop3"
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:            ],
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:            "lv_name": "ceph_lv0",
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:            "lv_size": "21470642176",
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0d82ce-20ea-470d-959e-f67202028a60,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:            "lv_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:            "name": "ceph_lv0",
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:            "tags": {
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:                "ceph.block_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:                "ceph.cluster_name": "ceph",
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:                "ceph.crush_device_class": "",
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:                "ceph.encrypted": "0",
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:                "ceph.osd_fsid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:                "ceph.osd_id": "0",
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:                "ceph.type": "block",
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:                "ceph.vdo": "0"
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:            },
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:            "type": "block",
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:            "vg_name": "ceph_vg0"
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:        }
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:    ],
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:    "1": [
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:        {
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:            "devices": [
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:                "/dev/loop4"
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:            ],
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:            "lv_name": "ceph_lv1",
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:            "lv_size": "21470642176",
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6875119e-c210-4ad1-aca9-6a8084a5ecc8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:            "lv_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:            "name": "ceph_lv1",
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:            "tags": {
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:                "ceph.block_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:                "ceph.cluster_name": "ceph",
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:                "ceph.crush_device_class": "",
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:                "ceph.encrypted": "0",
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:                "ceph.osd_fsid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:                "ceph.osd_id": "1",
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:                "ceph.type": "block",
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:                "ceph.vdo": "0"
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:            },
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:            "type": "block",
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:            "vg_name": "ceph_vg1"
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:        }
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:    ],
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:    "2": [
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:        {
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:            "devices": [
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:                "/dev/loop5"
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:            ],
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:            "lv_name": "ceph_lv2",
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:            "lv_size": "21470642176",
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e86945e8-6909-4584-9098-cee0dfe9add4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:            "lv_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:            "name": "ceph_lv2",
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:            "tags": {
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:                "ceph.block_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:                "ceph.cluster_name": "ceph",
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:                "ceph.crush_device_class": "",
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:                "ceph.encrypted": "0",
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:                "ceph.osd_fsid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:                "ceph.osd_id": "2",
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:                "ceph.type": "block",
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:                "ceph.vdo": "0"
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:            },
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:            "type": "block",
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:            "vg_name": "ceph_vg2"
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:        }
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]:    ]
Oct 10 23:41:27 np0005480824 reverent_snyder[262456]: }
Oct 10 23:41:27 np0005480824 systemd[1]: libpod-451b7d2fd01dc0efd3cac208aa623db2dc95bac996eb5dddb0099aa1cec94d85.scope: Deactivated successfully.
Oct 10 23:41:27 np0005480824 podman[262440]: 2025-10-11 03:41:27.633110514 +0000 UTC m=+0.898765653 container died 451b7d2fd01dc0efd3cac208aa623db2dc95bac996eb5dddb0099aa1cec94d85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_snyder, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 10 23:41:27 np0005480824 systemd[1]: var-lib-containers-storage-overlay-ae6d0888a9dab59aa8351003bc40dcd2db73bae4888bca305ee505d1b5b70874-merged.mount: Deactivated successfully.
Oct 10 23:41:27 np0005480824 podman[262440]: 2025-10-11 03:41:27.684682241 +0000 UTC m=+0.950337380 container remove 451b7d2fd01dc0efd3cac208aa623db2dc95bac996eb5dddb0099aa1cec94d85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_snyder, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:41:27 np0005480824 systemd[1]: libpod-conmon-451b7d2fd01dc0efd3cac208aa623db2dc95bac996eb5dddb0099aa1cec94d85.scope: Deactivated successfully.
Oct 10 23:41:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:41:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:41:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Optimize plan auto_2025-10-11_03:41:27
Oct 10 23:41:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 23:41:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] do_upmap
Oct 10 23:41:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] pools ['images', 'default.rgw.meta', 'volumes', 'default.rgw.log', 'cephfs.cephfs.data', 'backups', 'vms', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.control', '.mgr']
Oct 10 23:41:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:41:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:41:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:41:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:41:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] prepared 0/10 changes
Oct 10 23:41:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 23:41:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:41:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 23:41:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:41:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:41:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:41:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:41:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:41:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:41:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:41:28 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v774: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:41:28 np0005480824 podman[262622]: 2025-10-11 03:41:28.519130145 +0000 UTC m=+0.045118946 container create c25b3e1376051feb05029eb60c23cf328afe5a413dc9999a8a99c72fdc7a95ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 10 23:41:28 np0005480824 systemd[1]: Started libpod-conmon-c25b3e1376051feb05029eb60c23cf328afe5a413dc9999a8a99c72fdc7a95ee.scope.
Oct 10 23:41:28 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:41:28 np0005480824 podman[262622]: 2025-10-11 03:41:28.502499003 +0000 UTC m=+0.028487814 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:41:28 np0005480824 podman[262622]: 2025-10-11 03:41:28.607166434 +0000 UTC m=+0.133155275 container init c25b3e1376051feb05029eb60c23cf328afe5a413dc9999a8a99c72fdc7a95ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_germain, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 10 23:41:28 np0005480824 podman[262622]: 2025-10-11 03:41:28.620210203 +0000 UTC m=+0.146198994 container start c25b3e1376051feb05029eb60c23cf328afe5a413dc9999a8a99c72fdc7a95ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_germain, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:41:28 np0005480824 podman[262622]: 2025-10-11 03:41:28.623336956 +0000 UTC m=+0.149325757 container attach c25b3e1376051feb05029eb60c23cf328afe5a413dc9999a8a99c72fdc7a95ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_germain, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:41:28 np0005480824 keen_germain[262638]: 167 167
Oct 10 23:41:28 np0005480824 systemd[1]: libpod-c25b3e1376051feb05029eb60c23cf328afe5a413dc9999a8a99c72fdc7a95ee.scope: Deactivated successfully.
Oct 10 23:41:28 np0005480824 podman[262622]: 2025-10-11 03:41:28.62645641 +0000 UTC m=+0.152445201 container died c25b3e1376051feb05029eb60c23cf328afe5a413dc9999a8a99c72fdc7a95ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_germain, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:41:28 np0005480824 systemd[1]: var-lib-containers-storage-overlay-8cc30392fc70b456ea8e81ec3173ef38970eeb1a2da9b1f2d516795002bba379-merged.mount: Deactivated successfully.
Oct 10 23:41:28 np0005480824 podman[262622]: 2025-10-11 03:41:28.667614081 +0000 UTC m=+0.193602882 container remove c25b3e1376051feb05029eb60c23cf328afe5a413dc9999a8a99c72fdc7a95ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_germain, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:41:28 np0005480824 systemd[1]: libpod-conmon-c25b3e1376051feb05029eb60c23cf328afe5a413dc9999a8a99c72fdc7a95ee.scope: Deactivated successfully.
Oct 10 23:41:28 np0005480824 podman[262663]: 2025-10-11 03:41:28.909201286 +0000 UTC m=+0.074408317 container create 9ce0e2dd93b93e321c8d8b9a6c8b161cca64ec0498aaa023da64c03e5ca94fba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_goldberg, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:41:28 np0005480824 systemd[1]: Started libpod-conmon-9ce0e2dd93b93e321c8d8b9a6c8b161cca64ec0498aaa023da64c03e5ca94fba.scope.
Oct 10 23:41:28 np0005480824 podman[262663]: 2025-10-11 03:41:28.862810501 +0000 UTC m=+0.028017582 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:41:28 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:41:28 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe73c92f62ebc80e453b17e6ebbf191fca663778be78d72c1841ace2701691d3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:41:28 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe73c92f62ebc80e453b17e6ebbf191fca663778be78d72c1841ace2701691d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:41:28 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe73c92f62ebc80e453b17e6ebbf191fca663778be78d72c1841ace2701691d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:41:28 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe73c92f62ebc80e453b17e6ebbf191fca663778be78d72c1841ace2701691d3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:41:29 np0005480824 podman[262663]: 2025-10-11 03:41:29.007032757 +0000 UTC m=+0.172239808 container init 9ce0e2dd93b93e321c8d8b9a6c8b161cca64ec0498aaa023da64c03e5ca94fba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_goldberg, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:41:29 np0005480824 podman[262663]: 2025-10-11 03:41:29.014350379 +0000 UTC m=+0.179557410 container start 9ce0e2dd93b93e321c8d8b9a6c8b161cca64ec0498aaa023da64c03e5ca94fba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:41:29 np0005480824 podman[262663]: 2025-10-11 03:41:29.017551894 +0000 UTC m=+0.182758965 container attach 9ce0e2dd93b93e321c8d8b9a6c8b161cca64ec0498aaa023da64c03e5ca94fba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_goldberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:41:29 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:41:30 np0005480824 hopeful_goldberg[262680]: {
Oct 10 23:41:30 np0005480824 hopeful_goldberg[262680]:    "1d0d82ce-20ea-470d-959e-f67202028a60": {
Oct 10 23:41:30 np0005480824 hopeful_goldberg[262680]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:41:30 np0005480824 hopeful_goldberg[262680]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 10 23:41:30 np0005480824 hopeful_goldberg[262680]:        "osd_id": 0,
Oct 10 23:41:30 np0005480824 hopeful_goldberg[262680]:        "osd_uuid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:41:30 np0005480824 hopeful_goldberg[262680]:        "type": "bluestore"
Oct 10 23:41:30 np0005480824 hopeful_goldberg[262680]:    },
Oct 10 23:41:30 np0005480824 hopeful_goldberg[262680]:    "6875119e-c210-4ad1-aca9-6a8084a5ecc8": {
Oct 10 23:41:30 np0005480824 hopeful_goldberg[262680]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:41:30 np0005480824 hopeful_goldberg[262680]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 10 23:41:30 np0005480824 hopeful_goldberg[262680]:        "osd_id": 1,
Oct 10 23:41:30 np0005480824 hopeful_goldberg[262680]:        "osd_uuid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:41:30 np0005480824 hopeful_goldberg[262680]:        "type": "bluestore"
Oct 10 23:41:30 np0005480824 hopeful_goldberg[262680]:    },
Oct 10 23:41:30 np0005480824 hopeful_goldberg[262680]:    "e86945e8-6909-4584-9098-cee0dfe9add4": {
Oct 10 23:41:30 np0005480824 hopeful_goldberg[262680]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:41:30 np0005480824 hopeful_goldberg[262680]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 10 23:41:30 np0005480824 hopeful_goldberg[262680]:        "osd_id": 2,
Oct 10 23:41:30 np0005480824 hopeful_goldberg[262680]:        "osd_uuid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:41:30 np0005480824 hopeful_goldberg[262680]:        "type": "bluestore"
Oct 10 23:41:30 np0005480824 hopeful_goldberg[262680]:    }
Oct 10 23:41:30 np0005480824 hopeful_goldberg[262680]: }
Oct 10 23:41:30 np0005480824 systemd[1]: libpod-9ce0e2dd93b93e321c8d8b9a6c8b161cca64ec0498aaa023da64c03e5ca94fba.scope: Deactivated successfully.
Oct 10 23:41:30 np0005480824 podman[262663]: 2025-10-11 03:41:30.092463827 +0000 UTC m=+1.257670858 container died 9ce0e2dd93b93e321c8d8b9a6c8b161cca64ec0498aaa023da64c03e5ca94fba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 10 23:41:30 np0005480824 systemd[1]: libpod-9ce0e2dd93b93e321c8d8b9a6c8b161cca64ec0498aaa023da64c03e5ca94fba.scope: Consumed 1.085s CPU time.
Oct 10 23:41:30 np0005480824 systemd[1]: var-lib-containers-storage-overlay-fe73c92f62ebc80e453b17e6ebbf191fca663778be78d72c1841ace2701691d3-merged.mount: Deactivated successfully.
Oct 10 23:41:30 np0005480824 podman[262663]: 2025-10-11 03:41:30.146853951 +0000 UTC m=+1.312060982 container remove 9ce0e2dd93b93e321c8d8b9a6c8b161cca64ec0498aaa023da64c03e5ca94fba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:41:30 np0005480824 systemd[1]: libpod-conmon-9ce0e2dd93b93e321c8d8b9a6c8b161cca64ec0498aaa023da64c03e5ca94fba.scope: Deactivated successfully.
Oct 10 23:41:30 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 10 23:41:30 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:41:30 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 10 23:41:30 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:41:30 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 9856d4b4-b912-40f7-b224-83ee81b39352 does not exist
Oct 10 23:41:30 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 1ae0ab0a-0124-49e7-b4f9-e47fb560b3c0 does not exist
Oct 10 23:41:30 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v775: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:41:31 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:41:31 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:41:32 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v776: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:41:34 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v777: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:41:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:41:36 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v778: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:41:36 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:41:36.413 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:30:f4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'fe:89:7c:57:3f:71'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 23:41:36 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:41:36.415 162245 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 10 23:41:36 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:41:36.417 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=14b06507-d00b-4e27-a47d-46a5c2644635, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:41:37 np0005480824 podman[262777]: 2025-10-11 03:41:37.068228006 +0000 UTC m=+0.105912551 container health_status f2b19cad22d0fbdd185b264c3c8b6443d09a02319dc9fb0585dc69a0f24a758d (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 10 23:41:37 np0005480824 podman[262776]: 2025-10-11 03:41:37.077915525 +0000 UTC m=+0.116371859 container health_status 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd)
Oct 10 23:41:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 23:41:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:41:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 23:41:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:41:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:41:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:41:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:41:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:41:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:41:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:41:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:41:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:41:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 23:41:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:41:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:41:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:41:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 10 23:41:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:41:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 23:41:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:41:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:41:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:41:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 10 23:41:38 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v779: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:41:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:41:40 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v780: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:41:42 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v781: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:41:44 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v782: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:41:44 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:41:46 np0005480824 podman[262813]: 2025-10-11 03:41:46.042779994 +0000 UTC m=+0.094845781 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 10 23:41:46 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v783: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:41:48 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v784: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:41:49 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:41:50 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v785: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:41:52 np0005480824 podman[262840]: 2025-10-11 03:41:52.016370128 +0000 UTC m=+0.072253566 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 10 23:41:52 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v786: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:41:54 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v787: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:41:54 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:41:56 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v788: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:41:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:41:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:41:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:41:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:41:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:41:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:41:58 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v789: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:41:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:42:00 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v790: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:42:02 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v791: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 38 op/s
Oct 10 23:42:04 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v792: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 57 op/s
Oct 10 23:42:04 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:42:06 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v793: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 57 op/s
Oct 10 23:42:08 np0005480824 podman[262860]: 2025-10-11 03:42:08.035202112 +0000 UTC m=+0.089426402 container health_status 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 10 23:42:08 np0005480824 podman[262861]: 2025-10-11 03:42:08.06007071 +0000 UTC m=+0.109319742 container health_status f2b19cad22d0fbdd185b264c3c8b6443d09a02319dc9fb0585dc69a0f24a758d (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, managed_by=edpm_ansible)
Oct 10 23:42:08 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v794: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 10 23:42:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:42:10 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v795: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 10 23:42:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:42:10.481 162245 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:42:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:42:10.482 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:42:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:42:10.482 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:42:12 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v796: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 10 23:42:14 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v797: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 0 B/s wr, 20 op/s
Oct 10 23:42:14 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:42:16 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v798: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s
Oct 10 23:42:17 np0005480824 podman[262895]: 2025-10-11 03:42:17.083509851 +0000 UTC m=+0.138345978 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 10 23:42:18 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v799: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s
Oct 10 23:42:19 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:42:20 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v800: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:42:22 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v801: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:42:23 np0005480824 podman[262922]: 2025-10-11 03:42:23.023138604 +0000 UTC m=+0.069848020 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 10 23:42:23 np0005480824 nova_compute[260089]: 2025-10-11 03:42:23.297 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:42:23 np0005480824 nova_compute[260089]: 2025-10-11 03:42:23.297 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:42:23 np0005480824 nova_compute[260089]: 2025-10-11 03:42:23.298 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:42:24 np0005480824 nova_compute[260089]: 2025-10-11 03:42:24.293 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:42:24 np0005480824 nova_compute[260089]: 2025-10-11 03:42:24.296 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:42:24 np0005480824 nova_compute[260089]: 2025-10-11 03:42:24.297 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 10 23:42:24 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v802: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:42:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:42:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:42:24 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/718900451' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:42:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:42:24 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/718900451' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:42:25 np0005480824 nova_compute[260089]: 2025-10-11 03:42:25.296 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:42:25 np0005480824 nova_compute[260089]: 2025-10-11 03:42:25.296 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:42:26 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v803: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:42:26 np0005480824 nova_compute[260089]: 2025-10-11 03:42:26.513 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:42:26 np0005480824 nova_compute[260089]: 2025-10-11 03:42:26.513 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:42:26 np0005480824 nova_compute[260089]: 2025-10-11 03:42:26.514 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:42:26 np0005480824 nova_compute[260089]: 2025-10-11 03:42:26.514 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 10 23:42:26 np0005480824 nova_compute[260089]: 2025-10-11 03:42:26.514 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:42:27 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:42:27 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4040095002' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:42:27 np0005480824 nova_compute[260089]: 2025-10-11 03:42:27.027 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:42:27 np0005480824 nova_compute[260089]: 2025-10-11 03:42:27.202 2 WARNING nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 10 23:42:27 np0005480824 nova_compute[260089]: 2025-10-11 03:42:27.204 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5190MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 10 23:42:27 np0005480824 nova_compute[260089]: 2025-10-11 03:42:27.204 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:42:27 np0005480824 nova_compute[260089]: 2025-10-11 03:42:27.204 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:42:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:42:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:42:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:42:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:42:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:42:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:42:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Optimize plan auto_2025-10-11_03:42:27
Oct 10 23:42:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 23:42:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] do_upmap
Oct 10 23:42:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] pools ['default.rgw.log', 'backups', 'cephfs.cephfs.meta', 'vms', 'cephfs.cephfs.data', '.mgr', 'default.rgw.meta', '.rgw.root', 'default.rgw.control', 'volumes', 'images']
Oct 10 23:42:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] prepared 0/10 changes
Oct 10 23:42:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 23:42:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:42:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 23:42:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:42:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:42:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:42:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:42:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:42:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:42:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:42:28 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v804: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:42:29 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:42:30 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v805: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:42:30 np0005480824 nova_compute[260089]: 2025-10-11 03:42:30.477 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 10 23:42:30 np0005480824 nova_compute[260089]: 2025-10-11 03:42:30.477 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 10 23:42:30 np0005480824 nova_compute[260089]: 2025-10-11 03:42:30.492 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:42:30 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:42:30 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2383110290' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:42:30 np0005480824 nova_compute[260089]: 2025-10-11 03:42:30.930 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:42:30 np0005480824 nova_compute[260089]: 2025-10-11 03:42:30.938 2 DEBUG nova.compute.provider_tree [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 23:42:31 np0005480824 podman[263158]: 2025-10-11 03:42:31.084052727 +0000 UTC m=+0.072071473 container exec a848fe58749db588a5a4b8471e0c9916b9e4a1ccc899f04343e6491a43c45c05 (image=quay.io/ceph/ceph:v18, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mon-compute-0, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:42:31 np0005480824 podman[263158]: 2025-10-11 03:42:31.176110221 +0000 UTC m=+0.164128987 container exec_died a848fe58749db588a5a4b8471e0c9916b9e4a1ccc899f04343e6491a43c45c05 (image=quay.io/ceph/ceph:v18, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mon-compute-0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 10 23:42:31 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 10 23:42:31 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:42:31 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 10 23:42:31 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:42:32 np0005480824 nova_compute[260089]: 2025-10-11 03:42:32.013 2 DEBUG nova.scheduler.client.report [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 23:42:32 np0005480824 nova_compute[260089]: 2025-10-11 03:42:32.017 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 10 23:42:32 np0005480824 nova_compute[260089]: 2025-10-11 03:42:32.017 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 4.813s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:42:32 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v806: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:42:32 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:42:32 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:42:32 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 10 23:42:32 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:42:32 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 10 23:42:32 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:42:32 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 93a74e22-8b9e-4f8d-8cdd-8809c7d8839c does not exist
Oct 10 23:42:32 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 11a303a7-49b3-476f-a78d-89421a4e28d0 does not exist
Oct 10 23:42:32 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 92c296d3-dab8-4160-8d82-7cd98a6c7089 does not exist
Oct 10 23:42:32 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 10 23:42:32 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 23:42:32 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 10 23:42:32 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:42:32 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:42:32 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:42:32 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:42:32 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:42:32 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:42:32 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:42:32 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:42:33 np0005480824 nova_compute[260089]: 2025-10-11 03:42:33.019 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:42:33 np0005480824 nova_compute[260089]: 2025-10-11 03:42:33.020 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 10 23:42:33 np0005480824 nova_compute[260089]: 2025-10-11 03:42:33.020 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 10 23:42:33 np0005480824 nova_compute[260089]: 2025-10-11 03:42:33.065 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 10 23:42:33 np0005480824 nova_compute[260089]: 2025-10-11 03:42:33.066 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:42:33 np0005480824 podman[263588]: 2025-10-11 03:42:33.331129837 +0000 UTC m=+0.047460672 container create ba13f0c21597fdd68c4b3e1c6419a08f60421237cb37056e5c11ab6e58d09ea3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:42:33 np0005480824 systemd[1]: Started libpod-conmon-ba13f0c21597fdd68c4b3e1c6419a08f60421237cb37056e5c11ab6e58d09ea3.scope.
Oct 10 23:42:33 np0005480824 podman[263588]: 2025-10-11 03:42:33.309096847 +0000 UTC m=+0.025427662 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:42:33 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:42:33 np0005480824 podman[263588]: 2025-10-11 03:42:33.438562534 +0000 UTC m=+0.154893419 container init ba13f0c21597fdd68c4b3e1c6419a08f60421237cb37056e5c11ab6e58d09ea3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_driscoll, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:42:33 np0005480824 podman[263588]: 2025-10-11 03:42:33.452290189 +0000 UTC m=+0.168620994 container start ba13f0c21597fdd68c4b3e1c6419a08f60421237cb37056e5c11ab6e58d09ea3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:42:33 np0005480824 podman[263588]: 2025-10-11 03:42:33.456096238 +0000 UTC m=+0.172427113 container attach ba13f0c21597fdd68c4b3e1c6419a08f60421237cb37056e5c11ab6e58d09ea3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 10 23:42:33 np0005480824 lucid_driscoll[263604]: 167 167
Oct 10 23:42:33 np0005480824 systemd[1]: libpod-ba13f0c21597fdd68c4b3e1c6419a08f60421237cb37056e5c11ab6e58d09ea3.scope: Deactivated successfully.
Oct 10 23:42:33 np0005480824 podman[263588]: 2025-10-11 03:42:33.460713417 +0000 UTC m=+0.177044212 container died ba13f0c21597fdd68c4b3e1c6419a08f60421237cb37056e5c11ab6e58d09ea3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_driscoll, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:42:33 np0005480824 systemd[1]: var-lib-containers-storage-overlay-6e353e71e8b581ae0d5179e4d8f8a64b858ebbf3f77552364a4b0e18b09e3f4d-merged.mount: Deactivated successfully.
Oct 10 23:42:33 np0005480824 podman[263588]: 2025-10-11 03:42:33.515217575 +0000 UTC m=+0.231548380 container remove ba13f0c21597fdd68c4b3e1c6419a08f60421237cb37056e5c11ab6e58d09ea3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_driscoll, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:42:33 np0005480824 systemd[1]: libpod-conmon-ba13f0c21597fdd68c4b3e1c6419a08f60421237cb37056e5c11ab6e58d09ea3.scope: Deactivated successfully.
Oct 10 23:42:33 np0005480824 podman[263628]: 2025-10-11 03:42:33.742326667 +0000 UTC m=+0.071553330 container create 9ed42e090c027ef89eadb5d885568af659dc60a221541b140cdb6c6710f0e02f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_gagarin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:42:33 np0005480824 systemd[1]: Started libpod-conmon-9ed42e090c027ef89eadb5d885568af659dc60a221541b140cdb6c6710f0e02f.scope.
Oct 10 23:42:33 np0005480824 podman[263628]: 2025-10-11 03:42:33.712064702 +0000 UTC m=+0.041291425 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:42:33 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:42:33 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bd08f058ae986f076853f382895f89c4270fbfa90f46b51ebbf3cc6b637649a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:42:33 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bd08f058ae986f076853f382895f89c4270fbfa90f46b51ebbf3cc6b637649a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:42:33 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bd08f058ae986f076853f382895f89c4270fbfa90f46b51ebbf3cc6b637649a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:42:33 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bd08f058ae986f076853f382895f89c4270fbfa90f46b51ebbf3cc6b637649a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:42:33 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bd08f058ae986f076853f382895f89c4270fbfa90f46b51ebbf3cc6b637649a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 23:42:33 np0005480824 podman[263628]: 2025-10-11 03:42:33.864156764 +0000 UTC m=+0.193383407 container init 9ed42e090c027ef89eadb5d885568af659dc60a221541b140cdb6c6710f0e02f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_gagarin, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:42:33 np0005480824 podman[263628]: 2025-10-11 03:42:33.871735642 +0000 UTC m=+0.200962275 container start 9ed42e090c027ef89eadb5d885568af659dc60a221541b140cdb6c6710f0e02f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_gagarin, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 10 23:42:33 np0005480824 podman[263628]: 2025-10-11 03:42:33.874731843 +0000 UTC m=+0.203958506 container attach 9ed42e090c027ef89eadb5d885568af659dc60a221541b140cdb6c6710f0e02f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_gagarin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 10 23:42:34 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v807: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:42:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:42:34 np0005480824 quizzical_gagarin[263644]: --> passed data devices: 0 physical, 3 LVM
Oct 10 23:42:34 np0005480824 quizzical_gagarin[263644]: --> relative data size: 1.0
Oct 10 23:42:34 np0005480824 quizzical_gagarin[263644]: --> All data devices are unavailable
Oct 10 23:42:34 np0005480824 systemd[1]: libpod-9ed42e090c027ef89eadb5d885568af659dc60a221541b140cdb6c6710f0e02f.scope: Deactivated successfully.
Oct 10 23:42:34 np0005480824 systemd[1]: libpod-9ed42e090c027ef89eadb5d885568af659dc60a221541b140cdb6c6710f0e02f.scope: Consumed 1.051s CPU time.
Oct 10 23:42:35 np0005480824 podman[263673]: 2025-10-11 03:42:35.057762498 +0000 UTC m=+0.044732547 container died 9ed42e090c027ef89eadb5d885568af659dc60a221541b140cdb6c6710f0e02f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_gagarin, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:42:35 np0005480824 systemd[1]: var-lib-containers-storage-overlay-0bd08f058ae986f076853f382895f89c4270fbfa90f46b51ebbf3cc6b637649a-merged.mount: Deactivated successfully.
Oct 10 23:42:35 np0005480824 podman[263673]: 2025-10-11 03:42:35.124947755 +0000 UTC m=+0.111917714 container remove 9ed42e090c027ef89eadb5d885568af659dc60a221541b140cdb6c6710f0e02f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 10 23:42:35 np0005480824 systemd[1]: libpod-conmon-9ed42e090c027ef89eadb5d885568af659dc60a221541b140cdb6c6710f0e02f.scope: Deactivated successfully.
Oct 10 23:42:35 np0005480824 podman[263828]: 2025-10-11 03:42:35.875005216 +0000 UTC m=+0.052371547 container create f8c1bf84f35574c2c83f4be442db16df365416a15327eee0ad38ba8c6853864f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_chebyshev, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 10 23:42:35 np0005480824 systemd[1]: Started libpod-conmon-f8c1bf84f35574c2c83f4be442db16df365416a15327eee0ad38ba8c6853864f.scope.
Oct 10 23:42:35 np0005480824 podman[263828]: 2025-10-11 03:42:35.853913928 +0000 UTC m=+0.031280249 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:42:35 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:42:36 np0005480824 podman[263828]: 2025-10-11 03:42:36.013885055 +0000 UTC m=+0.191251436 container init f8c1bf84f35574c2c83f4be442db16df365416a15327eee0ad38ba8c6853864f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_chebyshev, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 10 23:42:36 np0005480824 podman[263828]: 2025-10-11 03:42:36.027020736 +0000 UTC m=+0.204387047 container start f8c1bf84f35574c2c83f4be442db16df365416a15327eee0ad38ba8c6853864f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_chebyshev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:42:36 np0005480824 podman[263828]: 2025-10-11 03:42:36.030722403 +0000 UTC m=+0.208088734 container attach f8c1bf84f35574c2c83f4be442db16df365416a15327eee0ad38ba8c6853864f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_chebyshev, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:42:36 np0005480824 loving_chebyshev[263845]: 167 167
Oct 10 23:42:36 np0005480824 systemd[1]: libpod-f8c1bf84f35574c2c83f4be442db16df365416a15327eee0ad38ba8c6853864f.scope: Deactivated successfully.
Oct 10 23:42:36 np0005480824 podman[263828]: 2025-10-11 03:42:36.033819686 +0000 UTC m=+0.211186057 container died f8c1bf84f35574c2c83f4be442db16df365416a15327eee0ad38ba8c6853864f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_chebyshev, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:42:36 np0005480824 systemd[1]: var-lib-containers-storage-overlay-f9339915d60fe9e26fb9e3a30a6a99728c7d30b06de8e9a7e88edd509017e6c6-merged.mount: Deactivated successfully.
Oct 10 23:42:36 np0005480824 podman[263828]: 2025-10-11 03:42:36.091886247 +0000 UTC m=+0.269252578 container remove f8c1bf84f35574c2c83f4be442db16df365416a15327eee0ad38ba8c6853864f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_chebyshev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:42:36 np0005480824 systemd[1]: libpod-conmon-f8c1bf84f35574c2c83f4be442db16df365416a15327eee0ad38ba8c6853864f.scope: Deactivated successfully.
Oct 10 23:42:36 np0005480824 podman[263871]: 2025-10-11 03:42:36.287693231 +0000 UTC m=+0.054931149 container create 626e71a42cda9f3c162f6cc8dac66107bca6e2e80e89b6ab87b643778858e4b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 10 23:42:36 np0005480824 systemd[1]: Started libpod-conmon-626e71a42cda9f3c162f6cc8dac66107bca6e2e80e89b6ab87b643778858e4b5.scope.
Oct 10 23:42:36 np0005480824 podman[263871]: 2025-10-11 03:42:36.257767374 +0000 UTC m=+0.025005372 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:42:36 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:42:36 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e13cd78c16710f7150fbe631f128db6ce1015b6e697cbf69000af77411970d0e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:42:36 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e13cd78c16710f7150fbe631f128db6ce1015b6e697cbf69000af77411970d0e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:42:36 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e13cd78c16710f7150fbe631f128db6ce1015b6e697cbf69000af77411970d0e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:42:36 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e13cd78c16710f7150fbe631f128db6ce1015b6e697cbf69000af77411970d0e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:42:36 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v808: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:42:36 np0005480824 podman[263871]: 2025-10-11 03:42:36.378401942 +0000 UTC m=+0.145639870 container init 626e71a42cda9f3c162f6cc8dac66107bca6e2e80e89b6ab87b643778858e4b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_einstein, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 10 23:42:36 np0005480824 podman[263871]: 2025-10-11 03:42:36.385022779 +0000 UTC m=+0.152260697 container start 626e71a42cda9f3c162f6cc8dac66107bca6e2e80e89b6ab87b643778858e4b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_einstein, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:42:36 np0005480824 podman[263871]: 2025-10-11 03:42:36.388610174 +0000 UTC m=+0.155848092 container attach 626e71a42cda9f3c162f6cc8dac66107bca6e2e80e89b6ab87b643778858e4b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]: {
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:    "0": [
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:        {
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:            "devices": [
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:                "/dev/loop3"
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:            ],
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:            "lv_name": "ceph_lv0",
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:            "lv_size": "21470642176",
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0d82ce-20ea-470d-959e-f67202028a60,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:            "lv_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:            "name": "ceph_lv0",
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:            "tags": {
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:                "ceph.block_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:                "ceph.cluster_name": "ceph",
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:                "ceph.crush_device_class": "",
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:                "ceph.encrypted": "0",
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:                "ceph.osd_fsid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:                "ceph.osd_id": "0",
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:                "ceph.type": "block",
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:                "ceph.vdo": "0"
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:            },
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:            "type": "block",
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:            "vg_name": "ceph_vg0"
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:        }
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:    ],
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:    "1": [
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:        {
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:            "devices": [
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:                "/dev/loop4"
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:            ],
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:            "lv_name": "ceph_lv1",
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:            "lv_size": "21470642176",
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6875119e-c210-4ad1-aca9-6a8084a5ecc8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:            "lv_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:            "name": "ceph_lv1",
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:            "tags": {
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:                "ceph.block_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:                "ceph.cluster_name": "ceph",
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:                "ceph.crush_device_class": "",
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:                "ceph.encrypted": "0",
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:                "ceph.osd_fsid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:                "ceph.osd_id": "1",
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:                "ceph.type": "block",
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:                "ceph.vdo": "0"
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:            },
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:            "type": "block",
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:            "vg_name": "ceph_vg1"
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:        }
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:    ],
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:    "2": [
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:        {
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:            "devices": [
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:                "/dev/loop5"
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:            ],
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:            "lv_name": "ceph_lv2",
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:            "lv_size": "21470642176",
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e86945e8-6909-4584-9098-cee0dfe9add4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:            "lv_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:            "name": "ceph_lv2",
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:            "tags": {
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:                "ceph.block_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:                "ceph.cluster_name": "ceph",
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:                "ceph.crush_device_class": "",
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:                "ceph.encrypted": "0",
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:                "ceph.osd_fsid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:                "ceph.osd_id": "2",
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:                "ceph.type": "block",
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:                "ceph.vdo": "0"
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:            },
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:            "type": "block",
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:            "vg_name": "ceph_vg2"
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:        }
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]:    ]
Oct 10 23:42:37 np0005480824 friendly_einstein[263888]: }
Oct 10 23:42:37 np0005480824 systemd[1]: libpod-626e71a42cda9f3c162f6cc8dac66107bca6e2e80e89b6ab87b643778858e4b5.scope: Deactivated successfully.
Oct 10 23:42:37 np0005480824 podman[263871]: 2025-10-11 03:42:37.111421852 +0000 UTC m=+0.878659780 container died 626e71a42cda9f3c162f6cc8dac66107bca6e2e80e89b6ab87b643778858e4b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_einstein, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:42:37 np0005480824 systemd[1]: var-lib-containers-storage-overlay-e13cd78c16710f7150fbe631f128db6ce1015b6e697cbf69000af77411970d0e-merged.mount: Deactivated successfully.
Oct 10 23:42:37 np0005480824 podman[263871]: 2025-10-11 03:42:37.177676846 +0000 UTC m=+0.944914764 container remove 626e71a42cda9f3c162f6cc8dac66107bca6e2e80e89b6ab87b643778858e4b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_einstein, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:42:37 np0005480824 systemd[1]: libpod-conmon-626e71a42cda9f3c162f6cc8dac66107bca6e2e80e89b6ab87b643778858e4b5.scope: Deactivated successfully.
Oct 10 23:42:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 23:42:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:42:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 23:42:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:42:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:42:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:42:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:42:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:42:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:42:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:42:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:42:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:42:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 23:42:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:42:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:42:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:42:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 10 23:42:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:42:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 23:42:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:42:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:42:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:42:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 10 23:42:37 np0005480824 podman[264051]: 2025-10-11 03:42:37.98374631 +0000 UTC m=+0.063902051 container create 5497c6ffe6da1b4c690e7fdd98675bd15b2ff52cb410b102c83aea5f712b5147 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:42:38 np0005480824 systemd[1]: Started libpod-conmon-5497c6ffe6da1b4c690e7fdd98675bd15b2ff52cb410b102c83aea5f712b5147.scope.
Oct 10 23:42:38 np0005480824 podman[264051]: 2025-10-11 03:42:37.960145142 +0000 UTC m=+0.040300923 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:42:38 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:42:38 np0005480824 podman[264051]: 2025-10-11 03:42:38.091807791 +0000 UTC m=+0.171963552 container init 5497c6ffe6da1b4c690e7fdd98675bd15b2ff52cb410b102c83aea5f712b5147 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_edison, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 10 23:42:38 np0005480824 podman[264051]: 2025-10-11 03:42:38.10574954 +0000 UTC m=+0.185905311 container start 5497c6ffe6da1b4c690e7fdd98675bd15b2ff52cb410b102c83aea5f712b5147 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_edison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 10 23:42:38 np0005480824 podman[264051]: 2025-10-11 03:42:38.110939453 +0000 UTC m=+0.191095224 container attach 5497c6ffe6da1b4c690e7fdd98675bd15b2ff52cb410b102c83aea5f712b5147 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_edison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:42:38 np0005480824 busy_edison[264067]: 167 167
Oct 10 23:42:38 np0005480824 systemd[1]: libpod-5497c6ffe6da1b4c690e7fdd98675bd15b2ff52cb410b102c83aea5f712b5147.scope: Deactivated successfully.
Oct 10 23:42:38 np0005480824 podman[264051]: 2025-10-11 03:42:38.118909612 +0000 UTC m=+0.199065353 container died 5497c6ffe6da1b4c690e7fdd98675bd15b2ff52cb410b102c83aea5f712b5147 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_edison, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 10 23:42:38 np0005480824 systemd[1]: var-lib-containers-storage-overlay-a77252b38ae1f50b79bcba9f8afffcb14d9a66f72bda98a604dcde07a3796967-merged.mount: Deactivated successfully.
Oct 10 23:42:38 np0005480824 podman[264051]: 2025-10-11 03:42:38.173027289 +0000 UTC m=+0.253183060 container remove 5497c6ffe6da1b4c690e7fdd98675bd15b2ff52cb410b102c83aea5f712b5147 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_edison, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 10 23:42:38 np0005480824 podman[264071]: 2025-10-11 03:42:38.19551687 +0000 UTC m=+0.094049082 container health_status f2b19cad22d0fbdd185b264c3c8b6443d09a02319dc9fb0585dc69a0f24a758d (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_id=iscsid, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 10 23:42:38 np0005480824 podman[264070]: 2025-10-11 03:42:38.195401457 +0000 UTC m=+0.091165673 container health_status 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 10 23:42:38 np0005480824 systemd[1]: libpod-conmon-5497c6ffe6da1b4c690e7fdd98675bd15b2ff52cb410b102c83aea5f712b5147.scope: Deactivated successfully.
Oct 10 23:42:38 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v809: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:42:38 np0005480824 podman[264129]: 2025-10-11 03:42:38.394004957 +0000 UTC m=+0.065646761 container create 1fbefb1937149b09d22bce21e8f3f94831304c983c6ad6d38cc46bbc3c62e90e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 10 23:42:38 np0005480824 systemd[1]: Started libpod-conmon-1fbefb1937149b09d22bce21e8f3f94831304c983c6ad6d38cc46bbc3c62e90e.scope.
Oct 10 23:42:38 np0005480824 podman[264129]: 2025-10-11 03:42:38.375527731 +0000 UTC m=+0.047169555 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:42:38 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:42:38 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3360b5e7b56eb8948d04cb1b08d510592e8bc64b069cc36c900e2846c2e12ae5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:42:38 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3360b5e7b56eb8948d04cb1b08d510592e8bc64b069cc36c900e2846c2e12ae5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:42:38 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3360b5e7b56eb8948d04cb1b08d510592e8bc64b069cc36c900e2846c2e12ae5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:42:38 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3360b5e7b56eb8948d04cb1b08d510592e8bc64b069cc36c900e2846c2e12ae5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:42:38 np0005480824 podman[264129]: 2025-10-11 03:42:38.540371124 +0000 UTC m=+0.212013018 container init 1fbefb1937149b09d22bce21e8f3f94831304c983c6ad6d38cc46bbc3c62e90e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:42:38 np0005480824 podman[264129]: 2025-10-11 03:42:38.554690622 +0000 UTC m=+0.226332456 container start 1fbefb1937149b09d22bce21e8f3f94831304c983c6ad6d38cc46bbc3c62e90e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_rhodes, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 10 23:42:38 np0005480824 podman[264129]: 2025-10-11 03:42:38.558649215 +0000 UTC m=+0.230291109 container attach 1fbefb1937149b09d22bce21e8f3f94831304c983c6ad6d38cc46bbc3c62e90e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_rhodes, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 10 23:42:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:42:39 np0005480824 cool_rhodes[264146]: {
Oct 10 23:42:39 np0005480824 cool_rhodes[264146]:    "1d0d82ce-20ea-470d-959e-f67202028a60": {
Oct 10 23:42:39 np0005480824 cool_rhodes[264146]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:42:39 np0005480824 cool_rhodes[264146]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 10 23:42:39 np0005480824 cool_rhodes[264146]:        "osd_id": 0,
Oct 10 23:42:39 np0005480824 cool_rhodes[264146]:        "osd_uuid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:42:39 np0005480824 cool_rhodes[264146]:        "type": "bluestore"
Oct 10 23:42:39 np0005480824 cool_rhodes[264146]:    },
Oct 10 23:42:39 np0005480824 cool_rhodes[264146]:    "6875119e-c210-4ad1-aca9-6a8084a5ecc8": {
Oct 10 23:42:39 np0005480824 cool_rhodes[264146]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:42:39 np0005480824 cool_rhodes[264146]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 10 23:42:39 np0005480824 cool_rhodes[264146]:        "osd_id": 1,
Oct 10 23:42:39 np0005480824 cool_rhodes[264146]:        "osd_uuid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:42:39 np0005480824 cool_rhodes[264146]:        "type": "bluestore"
Oct 10 23:42:39 np0005480824 cool_rhodes[264146]:    },
Oct 10 23:42:39 np0005480824 cool_rhodes[264146]:    "e86945e8-6909-4584-9098-cee0dfe9add4": {
Oct 10 23:42:39 np0005480824 cool_rhodes[264146]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:42:39 np0005480824 cool_rhodes[264146]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 10 23:42:39 np0005480824 cool_rhodes[264146]:        "osd_id": 2,
Oct 10 23:42:39 np0005480824 cool_rhodes[264146]:        "osd_uuid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:42:39 np0005480824 cool_rhodes[264146]:        "type": "bluestore"
Oct 10 23:42:39 np0005480824 cool_rhodes[264146]:    }
Oct 10 23:42:39 np0005480824 cool_rhodes[264146]: }
Oct 10 23:42:39 np0005480824 systemd[1]: libpod-1fbefb1937149b09d22bce21e8f3f94831304c983c6ad6d38cc46bbc3c62e90e.scope: Deactivated successfully.
Oct 10 23:42:39 np0005480824 podman[264129]: 2025-10-11 03:42:39.659621292 +0000 UTC m=+1.331263156 container died 1fbefb1937149b09d22bce21e8f3f94831304c983c6ad6d38cc46bbc3c62e90e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 10 23:42:39 np0005480824 systemd[1]: libpod-1fbefb1937149b09d22bce21e8f3f94831304c983c6ad6d38cc46bbc3c62e90e.scope: Consumed 1.118s CPU time.
Oct 10 23:42:39 np0005480824 systemd[1]: var-lib-containers-storage-overlay-3360b5e7b56eb8948d04cb1b08d510592e8bc64b069cc36c900e2846c2e12ae5-merged.mount: Deactivated successfully.
Oct 10 23:42:39 np0005480824 podman[264129]: 2025-10-11 03:42:39.721201696 +0000 UTC m=+1.392843500 container remove 1fbefb1937149b09d22bce21e8f3f94831304c983c6ad6d38cc46bbc3c62e90e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_rhodes, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:42:39 np0005480824 systemd[1]: libpod-conmon-1fbefb1937149b09d22bce21e8f3f94831304c983c6ad6d38cc46bbc3c62e90e.scope: Deactivated successfully.
Oct 10 23:42:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 10 23:42:39 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:42:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 10 23:42:39 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:42:39 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 80ab5bfa-766c-46b2-a7fe-b1f96bc2ce70 does not exist
Oct 10 23:42:39 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 806103df-f4de-46b3-95b4-519567c329c6 does not exist
Oct 10 23:42:39 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:42:39 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:42:40 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v810: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:42:42 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v811: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:42:44 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v812: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:42:44 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:42:46 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v813: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:42:47 np0005480824 systemd[1]: packagekit.service: Deactivated successfully.
Oct 10 23:42:47 np0005480824 podman[264241]: 2025-10-11 03:42:47.496563635 +0000 UTC m=+0.152493032 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 10 23:42:48 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v814: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:42:49 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:42:50 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v815: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:42:52 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v816: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:42:54 np0005480824 podman[264268]: 2025-10-11 03:42:54.039059363 +0000 UTC m=+0.094639497 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 10 23:42:54 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:42:54 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v817: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:42:56 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v818: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:42:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:42:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:42:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:42:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:42:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:42:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:42:58 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v819: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:42:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:43:00 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v820: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:43:02 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v821: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:43:04 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:43:04 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v822: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:43:06 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v823: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:43:08 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v824: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:43:09 np0005480824 podman[264287]: 2025-10-11 03:43:09.044447006 +0000 UTC m=+0.092917724 container health_status 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:43:09 np0005480824 podman[264288]: 2025-10-11 03:43:09.051272788 +0000 UTC m=+0.100453124 container health_status f2b19cad22d0fbdd185b264c3c8b6443d09a02319dc9fb0585dc69a0f24a758d (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid)
Oct 10 23:43:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:43:10 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v825: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:43:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:43:10.482 162245 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:43:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:43:10.483 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:43:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:43:10.483 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:43:12 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v826: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:43:14 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:43:14 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v827: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:43:16 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v828: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:43:18 np0005480824 podman[264328]: 2025-10-11 03:43:18.095541681 +0000 UTC m=+0.133966714 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 10 23:43:18 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v829: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:43:19 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:43:20 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v830: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:43:22 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v831: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:43:23 np0005480824 nova_compute[260089]: 2025-10-11 03:43:23.296 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:43:23 np0005480824 nova_compute[260089]: 2025-10-11 03:43:23.314 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:43:23 np0005480824 nova_compute[260089]: 2025-10-11 03:43:23.314 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:43:24 np0005480824 nova_compute[260089]: 2025-10-11 03:43:24.297 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:43:24 np0005480824 nova_compute[260089]: 2025-10-11 03:43:24.298 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:43:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:43:24 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v832: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:43:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:43:24 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1355278045' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:43:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:43:24 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1355278045' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:43:25 np0005480824 podman[264353]: 2025-10-11 03:43:25.02853826 +0000 UTC m=+0.081286550 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:43:25 np0005480824 nova_compute[260089]: 2025-10-11 03:43:25.297 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:43:25 np0005480824 nova_compute[260089]: 2025-10-11 03:43:25.298 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:43:25 np0005480824 nova_compute[260089]: 2025-10-11 03:43:25.298 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 10 23:43:25 np0005480824 nova_compute[260089]: 2025-10-11 03:43:25.298 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:43:25 np0005480824 nova_compute[260089]: 2025-10-11 03:43:25.331 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:43:25 np0005480824 nova_compute[260089]: 2025-10-11 03:43:25.331 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:43:25 np0005480824 nova_compute[260089]: 2025-10-11 03:43:25.332 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:43:25 np0005480824 nova_compute[260089]: 2025-10-11 03:43:25.332 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 10 23:43:25 np0005480824 nova_compute[260089]: 2025-10-11 03:43:25.333 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:43:25 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:43:25 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/505571482' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:43:25 np0005480824 nova_compute[260089]: 2025-10-11 03:43:25.812 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:43:25 np0005480824 nova_compute[260089]: 2025-10-11 03:43:25.975 2 WARNING nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 10 23:43:25 np0005480824 nova_compute[260089]: 2025-10-11 03:43:25.977 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5191MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 10 23:43:25 np0005480824 nova_compute[260089]: 2025-10-11 03:43:25.977 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 23:43:25 np0005480824 nova_compute[260089]: 2025-10-11 03:43:25.977 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 23:43:26 np0005480824 nova_compute[260089]: 2025-10-11 03:43:26.051 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 10 23:43:26 np0005480824 nova_compute[260089]: 2025-10-11 03:43:26.052 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 10 23:43:26 np0005480824 nova_compute[260089]: 2025-10-11 03:43:26.066 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 23:43:26 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v833: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:43:26 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:43:26 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4068995950' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:43:26 np0005480824 nova_compute[260089]: 2025-10-11 03:43:26.457 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.391s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 23:43:26 np0005480824 nova_compute[260089]: 2025-10-11 03:43:26.463 2 DEBUG nova.compute.provider_tree [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 23:43:26 np0005480824 nova_compute[260089]: 2025-10-11 03:43:26.491 2 DEBUG nova.scheduler.client.report [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 23:43:26 np0005480824 nova_compute[260089]: 2025-10-11 03:43:26.494 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 10 23:43:26 np0005480824 nova_compute[260089]: 2025-10-11 03:43:26.495 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.517s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 23:43:27 np0005480824 nova_compute[260089]: 2025-10-11 03:43:27.496 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 23:43:27 np0005480824 nova_compute[260089]: 2025-10-11 03:43:27.496 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 10 23:43:27 np0005480824 nova_compute[260089]: 2025-10-11 03:43:27.497 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 10 23:43:27 np0005480824 nova_compute[260089]: 2025-10-11 03:43:27.515 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 10 23:43:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:43:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:43:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:43:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:43:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:43:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:43:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Optimize plan auto_2025-10-11_03:43:27
Oct 10 23:43:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 23:43:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] do_upmap
Oct 10 23:43:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] pools ['default.rgw.log', 'vms', '.mgr', 'default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'images', '.rgw.root', 'backups', 'volumes', 'default.rgw.meta']
Oct 10 23:43:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] prepared 0/10 changes
Oct 10 23:43:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 23:43:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:43:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 23:43:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:43:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:43:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:43:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:43:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:43:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:43:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:43:28 np0005480824 nova_compute[260089]: 2025-10-11 03:43:28.296 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 23:43:28 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v834: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:43:29 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:43:30 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v835: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:43:32 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v836: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:43:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:43:34 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v837: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:43:36 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v838: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:43:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 23:43:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:43:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 23:43:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:43:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:43:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:43:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:43:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:43:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:43:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:43:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:43:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:43:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 23:43:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:43:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:43:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:43:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 10 23:43:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:43:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 23:43:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:43:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:43:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:43:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 10 23:43:38 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v839: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:43:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:43:40 np0005480824 podman[264416]: 2025-10-11 03:43:40.032996443 +0000 UTC m=+0.089444413 container health_status 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible)
Oct 10 23:43:40 np0005480824 podman[264417]: 2025-10-11 03:43:40.053577258 +0000 UTC m=+0.095288281 container health_status f2b19cad22d0fbdd185b264c3c8b6443d09a02319dc9fb0585dc69a0f24a758d (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 10 23:43:40 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v840: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:43:40 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:43:40 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:43:40 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 10 23:43:40 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:43:40 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 10 23:43:40 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:43:40 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 00ce44a5-fe63-478b-842d-e3ba259dd507 does not exist
Oct 10 23:43:40 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 83dccaaa-ab71-4152-ae92-66355f7dba58 does not exist
Oct 10 23:43:40 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev b488a0c2-d041-4d77-9725-b17b57db5ae7 does not exist
Oct 10 23:43:40 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 10 23:43:40 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 23:43:40 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 10 23:43:40 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:43:40 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:43:40 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:43:41 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:43:41 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:43:41 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:43:41 np0005480824 podman[264727]: 2025-10-11 03:43:41.840718588 +0000 UTC m=+0.065773924 container create a2e01d26dde3a06c33e357dcdf02c8b3e3fe105d94987a9260834c34a7377cd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 10 23:43:41 np0005480824 systemd[1]: Started libpod-conmon-a2e01d26dde3a06c33e357dcdf02c8b3e3fe105d94987a9260834c34a7377cd6.scope.
Oct 10 23:43:41 np0005480824 podman[264727]: 2025-10-11 03:43:41.808959968 +0000 UTC m=+0.034015364 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:43:41 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:43:41 np0005480824 podman[264727]: 2025-10-11 03:43:41.960885715 +0000 UTC m=+0.185941111 container init a2e01d26dde3a06c33e357dcdf02c8b3e3fe105d94987a9260834c34a7377cd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:43:41 np0005480824 podman[264727]: 2025-10-11 03:43:41.972992471 +0000 UTC m=+0.198047827 container start a2e01d26dde3a06c33e357dcdf02c8b3e3fe105d94987a9260834c34a7377cd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hopper, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 10 23:43:41 np0005480824 podman[264727]: 2025-10-11 03:43:41.977730523 +0000 UTC m=+0.202785929 container attach a2e01d26dde3a06c33e357dcdf02c8b3e3fe105d94987a9260834c34a7377cd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hopper, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 10 23:43:41 np0005480824 pensive_hopper[264743]: 167 167
Oct 10 23:43:41 np0005480824 systemd[1]: libpod-a2e01d26dde3a06c33e357dcdf02c8b3e3fe105d94987a9260834c34a7377cd6.scope: Deactivated successfully.
Oct 10 23:43:41 np0005480824 podman[264727]: 2025-10-11 03:43:41.985500867 +0000 UTC m=+0.210556223 container died a2e01d26dde3a06c33e357dcdf02c8b3e3fe105d94987a9260834c34a7377cd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hopper, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:43:42 np0005480824 systemd[1]: var-lib-containers-storage-overlay-39ae3b195950a28b362079b221ee4eceef53a3746169931f772eca1b989e1107-merged.mount: Deactivated successfully.
Oct 10 23:43:42 np0005480824 podman[264727]: 2025-10-11 03:43:42.046415556 +0000 UTC m=+0.271470922 container remove a2e01d26dde3a06c33e357dcdf02c8b3e3fe105d94987a9260834c34a7377cd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hopper, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 10 23:43:42 np0005480824 systemd[1]: libpod-conmon-a2e01d26dde3a06c33e357dcdf02c8b3e3fe105d94987a9260834c34a7377cd6.scope: Deactivated successfully.
Oct 10 23:43:42 np0005480824 podman[264767]: 2025-10-11 03:43:42.306111777 +0000 UTC m=+0.081682559 container create e959be5e24bab89d7d286d7ef716269ba7ca55085aa19f51cbb7d47049815206 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_cohen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:43:42 np0005480824 systemd[1]: Started libpod-conmon-e959be5e24bab89d7d286d7ef716269ba7ca55085aa19f51cbb7d47049815206.scope.
Oct 10 23:43:42 np0005480824 podman[264767]: 2025-10-11 03:43:42.272738519 +0000 UTC m=+0.048309341 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:43:42 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:43:42 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a58c72fe52795b3159a6ce59d4ea1d5983d5d6e37a5ce2741ef409baf51749e5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:43:42 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a58c72fe52795b3159a6ce59d4ea1d5983d5d6e37a5ce2741ef409baf51749e5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:43:42 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a58c72fe52795b3159a6ce59d4ea1d5983d5d6e37a5ce2741ef409baf51749e5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:43:42 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a58c72fe52795b3159a6ce59d4ea1d5983d5d6e37a5ce2741ef409baf51749e5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:43:42 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a58c72fe52795b3159a6ce59d4ea1d5983d5d6e37a5ce2741ef409baf51749e5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 23:43:42 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v841: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:43:42 np0005480824 podman[264767]: 2025-10-11 03:43:42.422470295 +0000 UTC m=+0.198041047 container init e959be5e24bab89d7d286d7ef716269ba7ca55085aa19f51cbb7d47049815206 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 10 23:43:42 np0005480824 podman[264767]: 2025-10-11 03:43:42.439498937 +0000 UTC m=+0.215069729 container start e959be5e24bab89d7d286d7ef716269ba7ca55085aa19f51cbb7d47049815206 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_cohen, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:43:42 np0005480824 podman[264767]: 2025-10-11 03:43:42.443607264 +0000 UTC m=+0.219178016 container attach e959be5e24bab89d7d286d7ef716269ba7ca55085aa19f51cbb7d47049815206 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:43:43 np0005480824 keen_cohen[264784]: --> passed data devices: 0 physical, 3 LVM
Oct 10 23:43:43 np0005480824 keen_cohen[264784]: --> relative data size: 1.0
Oct 10 23:43:43 np0005480824 keen_cohen[264784]: --> All data devices are unavailable
Oct 10 23:43:43 np0005480824 systemd[1]: libpod-e959be5e24bab89d7d286d7ef716269ba7ca55085aa19f51cbb7d47049815206.scope: Deactivated successfully.
Oct 10 23:43:43 np0005480824 systemd[1]: libpod-e959be5e24bab89d7d286d7ef716269ba7ca55085aa19f51cbb7d47049815206.scope: Consumed 1.293s CPU time.
Oct 10 23:43:43 np0005480824 podman[264767]: 2025-10-11 03:43:43.779367505 +0000 UTC m=+1.554938297 container died e959be5e24bab89d7d286d7ef716269ba7ca55085aa19f51cbb7d47049815206 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_cohen, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 10 23:43:43 np0005480824 systemd[1]: var-lib-containers-storage-overlay-a58c72fe52795b3159a6ce59d4ea1d5983d5d6e37a5ce2741ef409baf51749e5-merged.mount: Deactivated successfully.
Oct 10 23:43:43 np0005480824 podman[264767]: 2025-10-11 03:43:43.856463696 +0000 UTC m=+1.632034448 container remove e959be5e24bab89d7d286d7ef716269ba7ca55085aa19f51cbb7d47049815206 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_cohen, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:43:43 np0005480824 systemd[1]: libpod-conmon-e959be5e24bab89d7d286d7ef716269ba7ca55085aa19f51cbb7d47049815206.scope: Deactivated successfully.
Oct 10 23:43:44 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:43:44 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v842: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:43:44 np0005480824 podman[264968]: 2025-10-11 03:43:44.74852925 +0000 UTC m=+0.069732517 container create 62d40fa52ce613e68a696aab63671000396b313089282d83c3c1f40e322fa6db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:43:44 np0005480824 systemd[1]: Started libpod-conmon-62d40fa52ce613e68a696aab63671000396b313089282d83c3c1f40e322fa6db.scope.
Oct 10 23:43:44 np0005480824 podman[264968]: 2025-10-11 03:43:44.720397596 +0000 UTC m=+0.041600913 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:43:44 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:43:44 np0005480824 podman[264968]: 2025-10-11 03:43:44.858438946 +0000 UTC m=+0.179642263 container init 62d40fa52ce613e68a696aab63671000396b313089282d83c3c1f40e322fa6db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_varahamihira, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 10 23:43:44 np0005480824 podman[264968]: 2025-10-11 03:43:44.866961247 +0000 UTC m=+0.188164514 container start 62d40fa52ce613e68a696aab63671000396b313089282d83c3c1f40e322fa6db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_varahamihira, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:43:44 np0005480824 podman[264968]: 2025-10-11 03:43:44.870778787 +0000 UTC m=+0.191982064 container attach 62d40fa52ce613e68a696aab63671000396b313089282d83c3c1f40e322fa6db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_varahamihira, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:43:44 np0005480824 ecstatic_varahamihira[264984]: 167 167
Oct 10 23:43:44 np0005480824 systemd[1]: libpod-62d40fa52ce613e68a696aab63671000396b313089282d83c3c1f40e322fa6db.scope: Deactivated successfully.
Oct 10 23:43:44 np0005480824 podman[264968]: 2025-10-11 03:43:44.875837556 +0000 UTC m=+0.197040833 container died 62d40fa52ce613e68a696aab63671000396b313089282d83c3c1f40e322fa6db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 10 23:43:44 np0005480824 systemd[1]: var-lib-containers-storage-overlay-e3ddd21d5a261ad8e8b27e23918cbb2db61ed9ff492fea8f40cc2f31815befc7-merged.mount: Deactivated successfully.
Oct 10 23:43:44 np0005480824 podman[264968]: 2025-10-11 03:43:44.934052651 +0000 UTC m=+0.255255918 container remove 62d40fa52ce613e68a696aab63671000396b313089282d83c3c1f40e322fa6db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_varahamihira, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 10 23:43:44 np0005480824 systemd[1]: libpod-conmon-62d40fa52ce613e68a696aab63671000396b313089282d83c3c1f40e322fa6db.scope: Deactivated successfully.
Oct 10 23:43:45 np0005480824 podman[265008]: 2025-10-11 03:43:45.232529539 +0000 UTC m=+0.080245766 container create 9bf89feb14cea0e2382c4afca0455098affc10a051630dddfb86f4402cdbdf0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_beaver, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:43:45 np0005480824 podman[265008]: 2025-10-11 03:43:45.201571748 +0000 UTC m=+0.049288025 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:43:45 np0005480824 systemd[1]: Started libpod-conmon-9bf89feb14cea0e2382c4afca0455098affc10a051630dddfb86f4402cdbdf0d.scope.
Oct 10 23:43:45 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:43:45 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/944c68268cd94ba325106e626dc07b828f7b65b0248df0d95045b632c7c2cef1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:43:45 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/944c68268cd94ba325106e626dc07b828f7b65b0248df0d95045b632c7c2cef1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:43:45 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/944c68268cd94ba325106e626dc07b828f7b65b0248df0d95045b632c7c2cef1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:43:45 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/944c68268cd94ba325106e626dc07b828f7b65b0248df0d95045b632c7c2cef1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:43:45 np0005480824 podman[265008]: 2025-10-11 03:43:45.361398852 +0000 UTC m=+0.209115129 container init 9bf89feb14cea0e2382c4afca0455098affc10a051630dddfb86f4402cdbdf0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_beaver, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:43:45 np0005480824 podman[265008]: 2025-10-11 03:43:45.370183559 +0000 UTC m=+0.217899786 container start 9bf89feb14cea0e2382c4afca0455098affc10a051630dddfb86f4402cdbdf0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_beaver, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:43:45 np0005480824 podman[265008]: 2025-10-11 03:43:45.375750901 +0000 UTC m=+0.223467188 container attach 9bf89feb14cea0e2382c4afca0455098affc10a051630dddfb86f4402cdbdf0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]: {
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:    "0": [
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:        {
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:            "devices": [
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:                "/dev/loop3"
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:            ],
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:            "lv_name": "ceph_lv0",
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:            "lv_size": "21470642176",
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0d82ce-20ea-470d-959e-f67202028a60,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:            "lv_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:            "name": "ceph_lv0",
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:            "tags": {
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:                "ceph.block_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:                "ceph.cluster_name": "ceph",
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:                "ceph.crush_device_class": "",
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:                "ceph.encrypted": "0",
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:                "ceph.osd_fsid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:                "ceph.osd_id": "0",
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:                "ceph.type": "block",
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:                "ceph.vdo": "0"
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:            },
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:            "type": "block",
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:            "vg_name": "ceph_vg0"
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:        }
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:    ],
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:    "1": [
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:        {
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:            "devices": [
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:                "/dev/loop4"
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:            ],
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:            "lv_name": "ceph_lv1",
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:            "lv_size": "21470642176",
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6875119e-c210-4ad1-aca9-6a8084a5ecc8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:            "lv_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:            "name": "ceph_lv1",
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:            "tags": {
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:                "ceph.block_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:                "ceph.cluster_name": "ceph",
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:                "ceph.crush_device_class": "",
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:                "ceph.encrypted": "0",
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:                "ceph.osd_fsid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:                "ceph.osd_id": "1",
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:                "ceph.type": "block",
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:                "ceph.vdo": "0"
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:            },
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:            "type": "block",
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:            "vg_name": "ceph_vg1"
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:        }
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:    ],
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:    "2": [
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:        {
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:            "devices": [
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:                "/dev/loop5"
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:            ],
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:            "lv_name": "ceph_lv2",
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:            "lv_size": "21470642176",
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e86945e8-6909-4584-9098-cee0dfe9add4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:            "lv_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:            "name": "ceph_lv2",
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:            "tags": {
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:                "ceph.block_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:                "ceph.cluster_name": "ceph",
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:                "ceph.crush_device_class": "",
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:                "ceph.encrypted": "0",
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:                "ceph.osd_fsid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:                "ceph.osd_id": "2",
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:                "ceph.type": "block",
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:                "ceph.vdo": "0"
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:            },
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:            "type": "block",
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:            "vg_name": "ceph_vg2"
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:        }
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]:    ]
Oct 10 23:43:46 np0005480824 priceless_beaver[265025]: }
Oct 10 23:43:46 np0005480824 systemd[1]: libpod-9bf89feb14cea0e2382c4afca0455098affc10a051630dddfb86f4402cdbdf0d.scope: Deactivated successfully.
Oct 10 23:43:46 np0005480824 podman[265008]: 2025-10-11 03:43:46.252473874 +0000 UTC m=+1.100190091 container died 9bf89feb14cea0e2382c4afca0455098affc10a051630dddfb86f4402cdbdf0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_beaver, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 10 23:43:46 np0005480824 systemd[1]: var-lib-containers-storage-overlay-944c68268cd94ba325106e626dc07b828f7b65b0248df0d95045b632c7c2cef1-merged.mount: Deactivated successfully.
Oct 10 23:43:46 np0005480824 podman[265008]: 2025-10-11 03:43:46.347448125 +0000 UTC m=+1.195164352 container remove 9bf89feb14cea0e2382c4afca0455098affc10a051630dddfb86f4402cdbdf0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 10 23:43:46 np0005480824 systemd[1]: libpod-conmon-9bf89feb14cea0e2382c4afca0455098affc10a051630dddfb86f4402cdbdf0d.scope: Deactivated successfully.
Oct 10 23:43:46 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v843: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:43:47 np0005480824 podman[265184]: 2025-10-11 03:43:47.28990901 +0000 UTC m=+0.067647288 container create ab0cf19abe9c062d40e905ea3da335a092c3f6842bd51baf0c5315976a000fb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_vaughan, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 10 23:43:47 np0005480824 systemd[1]: Started libpod-conmon-ab0cf19abe9c062d40e905ea3da335a092c3f6842bd51baf0c5315976a000fb4.scope.
Oct 10 23:43:47 np0005480824 podman[265184]: 2025-10-11 03:43:47.268948336 +0000 UTC m=+0.046686644 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:43:47 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:43:47 np0005480824 podman[265184]: 2025-10-11 03:43:47.389915902 +0000 UTC m=+0.167654240 container init ab0cf19abe9c062d40e905ea3da335a092c3f6842bd51baf0c5315976a000fb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 10 23:43:47 np0005480824 podman[265184]: 2025-10-11 03:43:47.398908844 +0000 UTC m=+0.176647122 container start ab0cf19abe9c062d40e905ea3da335a092c3f6842bd51baf0c5315976a000fb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_vaughan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 10 23:43:47 np0005480824 podman[265184]: 2025-10-11 03:43:47.403658806 +0000 UTC m=+0.181397164 container attach ab0cf19abe9c062d40e905ea3da335a092c3f6842bd51baf0c5315976a000fb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_vaughan, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 10 23:43:47 np0005480824 unruffled_vaughan[265200]: 167 167
Oct 10 23:43:47 np0005480824 systemd[1]: libpod-ab0cf19abe9c062d40e905ea3da335a092c3f6842bd51baf0c5315976a000fb4.scope: Deactivated successfully.
Oct 10 23:43:47 np0005480824 podman[265184]: 2025-10-11 03:43:47.405169132 +0000 UTC m=+0.182907410 container died ab0cf19abe9c062d40e905ea3da335a092c3f6842bd51baf0c5315976a000fb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_vaughan, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:43:47 np0005480824 systemd[1]: var-lib-containers-storage-overlay-fd5a659b5c5d89703c8e16fc6e731ce8552fbf68bc71486e620db6a06db30ffe-merged.mount: Deactivated successfully.
Oct 10 23:43:47 np0005480824 podman[265184]: 2025-10-11 03:43:47.450002791 +0000 UTC m=+0.227741089 container remove ab0cf19abe9c062d40e905ea3da335a092c3f6842bd51baf0c5315976a000fb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:43:47 np0005480824 systemd[1]: libpod-conmon-ab0cf19abe9c062d40e905ea3da335a092c3f6842bd51baf0c5315976a000fb4.scope: Deactivated successfully.
Oct 10 23:43:47 np0005480824 podman[265225]: 2025-10-11 03:43:47.714372253 +0000 UTC m=+0.076346994 container create ae9f4482e151c87080789a4ecf33443be094ea162cddc7256b215bc6c7ffc6af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_kare, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:43:47 np0005480824 systemd[1]: Started libpod-conmon-ae9f4482e151c87080789a4ecf33443be094ea162cddc7256b215bc6c7ffc6af.scope.
Oct 10 23:43:47 np0005480824 podman[265225]: 2025-10-11 03:43:47.681970388 +0000 UTC m=+0.043945209 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:43:47 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:43:47 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a10b76e4d5c02c114d6ab5e337e0e800890ea928867784e0af45819b45e3c4b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:43:47 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a10b76e4d5c02c114d6ab5e337e0e800890ea928867784e0af45819b45e3c4b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:43:47 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a10b76e4d5c02c114d6ab5e337e0e800890ea928867784e0af45819b45e3c4b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:43:47 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a10b76e4d5c02c114d6ab5e337e0e800890ea928867784e0af45819b45e3c4b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:43:47 np0005480824 podman[265225]: 2025-10-11 03:43:47.879859151 +0000 UTC m=+0.241833902 container init ae9f4482e151c87080789a4ecf33443be094ea162cddc7256b215bc6c7ffc6af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_kare, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:43:47 np0005480824 podman[265225]: 2025-10-11 03:43:47.891212559 +0000 UTC m=+0.253187290 container start ae9f4482e151c87080789a4ecf33443be094ea162cddc7256b215bc6c7ffc6af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_kare, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:43:47 np0005480824 podman[265225]: 2025-10-11 03:43:47.894528427 +0000 UTC m=+0.256503158 container attach ae9f4482e151c87080789a4ecf33443be094ea162cddc7256b215bc6c7ffc6af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_kare, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Oct 10 23:43:48 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v844: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:43:48 np0005480824 clever_kare[265241]: {
Oct 10 23:43:48 np0005480824 clever_kare[265241]:    "1d0d82ce-20ea-470d-959e-f67202028a60": {
Oct 10 23:43:48 np0005480824 clever_kare[265241]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:43:48 np0005480824 clever_kare[265241]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 10 23:43:48 np0005480824 clever_kare[265241]:        "osd_id": 0,
Oct 10 23:43:48 np0005480824 clever_kare[265241]:        "osd_uuid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:43:48 np0005480824 clever_kare[265241]:        "type": "bluestore"
Oct 10 23:43:48 np0005480824 clever_kare[265241]:    },
Oct 10 23:43:48 np0005480824 clever_kare[265241]:    "6875119e-c210-4ad1-aca9-6a8084a5ecc8": {
Oct 10 23:43:48 np0005480824 clever_kare[265241]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:43:48 np0005480824 clever_kare[265241]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 10 23:43:48 np0005480824 clever_kare[265241]:        "osd_id": 1,
Oct 10 23:43:48 np0005480824 clever_kare[265241]:        "osd_uuid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:43:48 np0005480824 clever_kare[265241]:        "type": "bluestore"
Oct 10 23:43:48 np0005480824 clever_kare[265241]:    },
Oct 10 23:43:48 np0005480824 clever_kare[265241]:    "e86945e8-6909-4584-9098-cee0dfe9add4": {
Oct 10 23:43:48 np0005480824 clever_kare[265241]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:43:48 np0005480824 clever_kare[265241]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 10 23:43:48 np0005480824 clever_kare[265241]:        "osd_id": 2,
Oct 10 23:43:48 np0005480824 clever_kare[265241]:        "osd_uuid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:43:48 np0005480824 clever_kare[265241]:        "type": "bluestore"
Oct 10 23:43:48 np0005480824 clever_kare[265241]:    }
Oct 10 23:43:48 np0005480824 clever_kare[265241]: }
Oct 10 23:43:49 np0005480824 systemd[1]: libpod-ae9f4482e151c87080789a4ecf33443be094ea162cddc7256b215bc6c7ffc6af.scope: Deactivated successfully.
Oct 10 23:43:49 np0005480824 podman[265225]: 2025-10-11 03:43:49.022747888 +0000 UTC m=+1.384722659 container died ae9f4482e151c87080789a4ecf33443be094ea162cddc7256b215bc6c7ffc6af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_kare, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:43:49 np0005480824 systemd[1]: libpod-ae9f4482e151c87080789a4ecf33443be094ea162cddc7256b215bc6c7ffc6af.scope: Consumed 1.143s CPU time.
Oct 10 23:43:49 np0005480824 systemd[1]: var-lib-containers-storage-overlay-a10b76e4d5c02c114d6ab5e337e0e800890ea928867784e0af45819b45e3c4b6-merged.mount: Deactivated successfully.
Oct 10 23:43:49 np0005480824 podman[265269]: 2025-10-11 03:43:49.097231166 +0000 UTC m=+0.159128128 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 10 23:43:49 np0005480824 podman[265225]: 2025-10-11 03:43:49.132020358 +0000 UTC m=+1.493995089 container remove ae9f4482e151c87080789a4ecf33443be094ea162cddc7256b215bc6c7ffc6af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_kare, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:43:49 np0005480824 systemd[1]: libpod-conmon-ae9f4482e151c87080789a4ecf33443be094ea162cddc7256b215bc6c7ffc6af.scope: Deactivated successfully.
Oct 10 23:43:49 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 10 23:43:49 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:43:49 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 10 23:43:49 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:43:49 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 4815b7fb-b138-4c5c-a883-576a1952ce37 does not exist
Oct 10 23:43:49 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 2e762346-742e-46b0-b7c3-7ebb95f799ac does not exist
Oct 10 23:43:49 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:43:49 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:43:49 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:43:50 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v845: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:43:52 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v846: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:43:54 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:43:54 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v847: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:43:56 np0005480824 podman[265362]: 2025-10-11 03:43:55.998968388 +0000 UTC m=+0.059181049 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 10 23:43:56 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v848: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:43:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:43:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:43:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:43:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:43:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:43:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:43:58 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v849: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:43:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:44:00 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v850: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:44:02 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v851: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:44:04 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:44:04 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v852: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:44:06 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v853: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:44:08 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v854: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:44:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:44:10 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v855: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:44:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:44:10.483 162245 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:44:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:44:10.484 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:44:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:44:10.484 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:44:11 np0005480824 podman[265384]: 2025-10-11 03:44:11.058091448 +0000 UTC m=+0.100122954 container health_status f2b19cad22d0fbdd185b264c3c8b6443d09a02319dc9fb0585dc69a0f24a758d (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 10 23:44:11 np0005480824 podman[265383]: 2025-10-11 03:44:11.080721293 +0000 UTC m=+0.127696027 container health_status 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 10 23:44:12 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v856: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:44:14 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:44:14 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v857: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:44:16 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v858: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:44:18 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v859: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:44:19 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:44:20 np0005480824 podman[265423]: 2025-10-11 03:44:20.080561138 +0000 UTC m=+0.130651926 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller)
Oct 10 23:44:20 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v860: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:44:22 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v861: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:44:23 np0005480824 nova_compute[260089]: 2025-10-11 03:44:23.297 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:44:23 np0005480824 nova_compute[260089]: 2025-10-11 03:44:23.297 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:44:23 np0005480824 nova_compute[260089]: 2025-10-11 03:44:23.297 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct 10 23:44:23 np0005480824 nova_compute[260089]: 2025-10-11 03:44:23.313 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct 10 23:44:23 np0005480824 nova_compute[260089]: 2025-10-11 03:44:23.315 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:44:23 np0005480824 nova_compute[260089]: 2025-10-11 03:44:23.316 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct 10 23:44:23 np0005480824 nova_compute[260089]: 2025-10-11 03:44:23.327 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:44:24 np0005480824 nova_compute[260089]: 2025-10-11 03:44:24.341 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:44:24 np0005480824 nova_compute[260089]: 2025-10-11 03:44:24.342 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:44:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:44:24 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v862: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:44:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:44:24 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4016439908' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:44:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:44:24 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4016439908' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:44:25 np0005480824 nova_compute[260089]: 2025-10-11 03:44:25.296 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:44:25 np0005480824 nova_compute[260089]: 2025-10-11 03:44:25.322 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:44:25 np0005480824 nova_compute[260089]: 2025-10-11 03:44:25.323 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:44:25 np0005480824 nova_compute[260089]: 2025-10-11 03:44:25.323 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:44:25 np0005480824 nova_compute[260089]: 2025-10-11 03:44:25.323 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 10 23:44:25 np0005480824 nova_compute[260089]: 2025-10-11 03:44:25.324 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:44:25 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:44:25 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2650045609' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:44:25 np0005480824 nova_compute[260089]: 2025-10-11 03:44:25.811 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:44:26 np0005480824 nova_compute[260089]: 2025-10-11 03:44:26.022 2 WARNING nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 10 23:44:26 np0005480824 nova_compute[260089]: 2025-10-11 03:44:26.025 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5171MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 10 23:44:26 np0005480824 nova_compute[260089]: 2025-10-11 03:44:26.025 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:44:26 np0005480824 nova_compute[260089]: 2025-10-11 03:44:26.025 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:44:26 np0005480824 nova_compute[260089]: 2025-10-11 03:44:26.290 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 10 23:44:26 np0005480824 nova_compute[260089]: 2025-10-11 03:44:26.291 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 10 23:44:26 np0005480824 nova_compute[260089]: 2025-10-11 03:44:26.393 2 DEBUG nova.scheduler.client.report [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Refreshing inventories for resource provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct 10 23:44:26 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v863: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:44:26 np0005480824 nova_compute[260089]: 2025-10-11 03:44:26.507 2 DEBUG nova.scheduler.client.report [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Updating ProviderTree inventory for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct 10 23:44:26 np0005480824 nova_compute[260089]: 2025-10-11 03:44:26.508 2 DEBUG nova.compute.provider_tree [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Updating inventory in ProviderTree for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 10 23:44:26 np0005480824 nova_compute[260089]: 2025-10-11 03:44:26.537 2 DEBUG nova.scheduler.client.report [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Refreshing aggregate associations for resource provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct 10 23:44:26 np0005480824 nova_compute[260089]: 2025-10-11 03:44:26.564 2 DEBUG nova.scheduler.client.report [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Refreshing trait associations for resource provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72, traits: COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE,HW_CPU_X86_SSE41,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_MMX,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_DEVICE_TAGGING,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_AVX,HW_CPU_X86_SHA,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSSE3,HW_CPU_X86_BMI2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_FDC,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_CLMUL,HW_CPU_X86_AVX2,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_PCNET _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct 10 23:44:26 np0005480824 nova_compute[260089]: 2025-10-11 03:44:26.590 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:44:27 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:44:27 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/103702852' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:44:27 np0005480824 podman[265491]: 2025-10-11 03:44:27.031704036 +0000 UTC m=+0.086005172 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 10 23:44:27 np0005480824 nova_compute[260089]: 2025-10-11 03:44:27.048 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:44:27 np0005480824 nova_compute[260089]: 2025-10-11 03:44:27.057 2 DEBUG nova.compute.provider_tree [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 23:44:27 np0005480824 nova_compute[260089]: 2025-10-11 03:44:27.081 2 DEBUG nova.scheduler.client.report [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 23:44:27 np0005480824 nova_compute[260089]: 2025-10-11 03:44:27.085 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 10 23:44:27 np0005480824 nova_compute[260089]: 2025-10-11 03:44:27.085 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.060s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:44:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:44:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:44:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:44:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:44:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:44:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:44:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Optimize plan auto_2025-10-11_03:44:27
Oct 10 23:44:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 23:44:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] do_upmap
Oct 10 23:44:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] pools ['default.rgw.control', 'volumes', 'vms', 'default.rgw.log', '.mgr', 'cephfs.cephfs.meta', 'backups', 'cephfs.cephfs.data', 'images', '.rgw.root', 'default.rgw.meta']
Oct 10 23:44:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] prepared 0/10 changes
Oct 10 23:44:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 23:44:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:44:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 23:44:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:44:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:44:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:44:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:44:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:44:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:44:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:44:28 np0005480824 nova_compute[260089]: 2025-10-11 03:44:28.086 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:44:28 np0005480824 nova_compute[260089]: 2025-10-11 03:44:28.086 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 10 23:44:28 np0005480824 nova_compute[260089]: 2025-10-11 03:44:28.086 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 10 23:44:28 np0005480824 nova_compute[260089]: 2025-10-11 03:44:28.100 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 10 23:44:28 np0005480824 nova_compute[260089]: 2025-10-11 03:44:28.101 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:44:28 np0005480824 nova_compute[260089]: 2025-10-11 03:44:28.101 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:44:28 np0005480824 nova_compute[260089]: 2025-10-11 03:44:28.102 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:44:28 np0005480824 nova_compute[260089]: 2025-10-11 03:44:28.102 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 10 23:44:28 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v864: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:44:29 np0005480824 nova_compute[260089]: 2025-10-11 03:44:29.297 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:44:29 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:44:30 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v865: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:44:32 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v866: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:44:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:44:34 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v867: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:44:36 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v868: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:44:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 23:44:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:44:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 23:44:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:44:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:44:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:44:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:44:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:44:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:44:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:44:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:44:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:44:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 23:44:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:44:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:44:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:44:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 10 23:44:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:44:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 23:44:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:44:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:44:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:44:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 10 23:44:38 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v869: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:44:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:44:40 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v870: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:44:42 np0005480824 podman[265516]: 2025-10-11 03:44:42.045932259 +0000 UTC m=+0.088369827 container health_status f2b19cad22d0fbdd185b264c3c8b6443d09a02319dc9fb0585dc69a0f24a758d (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true, config_id=iscsid)
Oct 10 23:44:42 np0005480824 podman[265515]: 2025-10-11 03:44:42.054212915 +0000 UTC m=+0.063958792 container health_status 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Oct 10 23:44:42 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v871: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:44:44 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:44:44 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v872: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:44:46 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v873: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:44:48 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v874: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:44:49 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:44:50 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:44:50 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:44:50 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 10 23:44:50 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:44:50 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 10 23:44:50 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:44:50 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 8fd4409c-6cfa-43c1-a8b7-83e4c263e616 does not exist
Oct 10 23:44:50 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev cdb8b899-0695-4d14-91a6-63a97b5c53c1 does not exist
Oct 10 23:44:50 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev d8b5e95a-3b31-4b6f-9026-fa71b1d5cb8f does not exist
Oct 10 23:44:50 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 10 23:44:50 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 23:44:50 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 10 23:44:50 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:44:50 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:44:50 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:44:50 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v875: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:44:50 np0005480824 podman[265710]: 2025-10-11 03:44:50.472044304 +0000 UTC m=+0.135989102 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_controller, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 10 23:44:50 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:44:50 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:44:50 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:44:51 np0005480824 podman[265855]: 2025-10-11 03:44:51.013392808 +0000 UTC m=+0.064431543 container create db0f3941eb679897b7f0899d7c3e955a635b44e7349cfd682a8a631fd44a2317 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 10 23:44:51 np0005480824 systemd[1]: Started libpod-conmon-db0f3941eb679897b7f0899d7c3e955a635b44e7349cfd682a8a631fd44a2317.scope.
Oct 10 23:44:51 np0005480824 podman[265855]: 2025-10-11 03:44:50.983318947 +0000 UTC m=+0.034357752 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:44:51 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:44:51 np0005480824 podman[265855]: 2025-10-11 03:44:51.126294643 +0000 UTC m=+0.177333408 container init db0f3941eb679897b7f0899d7c3e955a635b44e7349cfd682a8a631fd44a2317 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 10 23:44:51 np0005480824 podman[265855]: 2025-10-11 03:44:51.141961804 +0000 UTC m=+0.193000579 container start db0f3941eb679897b7f0899d7c3e955a635b44e7349cfd682a8a631fd44a2317 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_proskuriakova, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:44:51 np0005480824 podman[265855]: 2025-10-11 03:44:51.146845609 +0000 UTC m=+0.197884434 container attach db0f3941eb679897b7f0899d7c3e955a635b44e7349cfd682a8a631fd44a2317 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_proskuriakova, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:44:51 np0005480824 keen_proskuriakova[265872]: 167 167
Oct 10 23:44:51 np0005480824 systemd[1]: libpod-db0f3941eb679897b7f0899d7c3e955a635b44e7349cfd682a8a631fd44a2317.scope: Deactivated successfully.
Oct 10 23:44:51 np0005480824 podman[265855]: 2025-10-11 03:44:51.15322374 +0000 UTC m=+0.204262465 container died db0f3941eb679897b7f0899d7c3e955a635b44e7349cfd682a8a631fd44a2317 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:44:51 np0005480824 systemd[1]: var-lib-containers-storage-overlay-c5b6e2cc3615e9a070d4564589f91e490f4cad9a739ae0b8ad29aa59c01e3b1e-merged.mount: Deactivated successfully.
Oct 10 23:44:51 np0005480824 podman[265855]: 2025-10-11 03:44:51.221330587 +0000 UTC m=+0.272369342 container remove db0f3941eb679897b7f0899d7c3e955a635b44e7349cfd682a8a631fd44a2317 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 10 23:44:51 np0005480824 systemd[1]: libpod-conmon-db0f3941eb679897b7f0899d7c3e955a635b44e7349cfd682a8a631fd44a2317.scope: Deactivated successfully.
Oct 10 23:44:51 np0005480824 podman[265896]: 2025-10-11 03:44:51.473267136 +0000 UTC m=+0.065577889 container create 42a1e3d57b4f651c1d1f89c09be2f029175a2fb3e798dd336e38f688d04303e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:44:51 np0005480824 podman[265896]: 2025-10-11 03:44:51.442485649 +0000 UTC m=+0.034796462 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:44:51 np0005480824 systemd[1]: Started libpod-conmon-42a1e3d57b4f651c1d1f89c09be2f029175a2fb3e798dd336e38f688d04303e6.scope.
Oct 10 23:44:51 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:44:51 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6466238e8e8d9d8ba83b8e17527fd793b7337b306e5d1309e3aef8b64ee341cf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:44:51 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6466238e8e8d9d8ba83b8e17527fd793b7337b306e5d1309e3aef8b64ee341cf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:44:51 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6466238e8e8d9d8ba83b8e17527fd793b7337b306e5d1309e3aef8b64ee341cf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:44:51 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6466238e8e8d9d8ba83b8e17527fd793b7337b306e5d1309e3aef8b64ee341cf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:44:51 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6466238e8e8d9d8ba83b8e17527fd793b7337b306e5d1309e3aef8b64ee341cf/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 23:44:51 np0005480824 podman[265896]: 2025-10-11 03:44:51.59496699 +0000 UTC m=+0.187277723 container init 42a1e3d57b4f651c1d1f89c09be2f029175a2fb3e798dd336e38f688d04303e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hypatia, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 10 23:44:51 np0005480824 podman[265896]: 2025-10-11 03:44:51.615733251 +0000 UTC m=+0.208043984 container start 42a1e3d57b4f651c1d1f89c09be2f029175a2fb3e798dd336e38f688d04303e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 10 23:44:51 np0005480824 podman[265896]: 2025-10-11 03:44:51.619005938 +0000 UTC m=+0.211316671 container attach 42a1e3d57b4f651c1d1f89c09be2f029175a2fb3e798dd336e38f688d04303e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:44:52 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v876: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:44:52 np0005480824 admiring_hypatia[265912]: --> passed data devices: 0 physical, 3 LVM
Oct 10 23:44:52 np0005480824 admiring_hypatia[265912]: --> relative data size: 1.0
Oct 10 23:44:52 np0005480824 admiring_hypatia[265912]: --> All data devices are unavailable
Oct 10 23:44:52 np0005480824 systemd[1]: libpod-42a1e3d57b4f651c1d1f89c09be2f029175a2fb3e798dd336e38f688d04303e6.scope: Deactivated successfully.
Oct 10 23:44:52 np0005480824 podman[265896]: 2025-10-11 03:44:52.680571824 +0000 UTC m=+1.272882557 container died 42a1e3d57b4f651c1d1f89c09be2f029175a2fb3e798dd336e38f688d04303e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hypatia, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:44:52 np0005480824 systemd[1]: libpod-42a1e3d57b4f651c1d1f89c09be2f029175a2fb3e798dd336e38f688d04303e6.scope: Consumed 1.014s CPU time.
Oct 10 23:44:52 np0005480824 systemd[1]: var-lib-containers-storage-overlay-6466238e8e8d9d8ba83b8e17527fd793b7337b306e5d1309e3aef8b64ee341cf-merged.mount: Deactivated successfully.
Oct 10 23:44:52 np0005480824 podman[265896]: 2025-10-11 03:44:52.745891597 +0000 UTC m=+1.338202320 container remove 42a1e3d57b4f651c1d1f89c09be2f029175a2fb3e798dd336e38f688d04303e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hypatia, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True)
Oct 10 23:44:52 np0005480824 systemd[1]: libpod-conmon-42a1e3d57b4f651c1d1f89c09be2f029175a2fb3e798dd336e38f688d04303e6.scope: Deactivated successfully.
Oct 10 23:44:53 np0005480824 podman[266096]: 2025-10-11 03:44:53.561042725 +0000 UTC m=+0.058173914 container create baf9d3fb9af44b87fa4c5d73e71e52eb37865d9a512f75cb4eb94fc7070ae117 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lumiere, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:44:53 np0005480824 systemd[1]: Started libpod-conmon-baf9d3fb9af44b87fa4c5d73e71e52eb37865d9a512f75cb4eb94fc7070ae117.scope.
Oct 10 23:44:53 np0005480824 podman[266096]: 2025-10-11 03:44:53.534047688 +0000 UTC m=+0.031178957 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:44:53 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:44:53 np0005480824 podman[266096]: 2025-10-11 03:44:53.658519837 +0000 UTC m=+0.155651106 container init baf9d3fb9af44b87fa4c5d73e71e52eb37865d9a512f75cb4eb94fc7070ae117 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lumiere, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 10 23:44:53 np0005480824 podman[266096]: 2025-10-11 03:44:53.672413665 +0000 UTC m=+0.169544844 container start baf9d3fb9af44b87fa4c5d73e71e52eb37865d9a512f75cb4eb94fc7070ae117 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lumiere, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 10 23:44:53 np0005480824 podman[266096]: 2025-10-11 03:44:53.67685649 +0000 UTC m=+0.173987759 container attach baf9d3fb9af44b87fa4c5d73e71e52eb37865d9a512f75cb4eb94fc7070ae117 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:44:53 np0005480824 elastic_lumiere[266113]: 167 167
Oct 10 23:44:53 np0005480824 systemd[1]: libpod-baf9d3fb9af44b87fa4c5d73e71e52eb37865d9a512f75cb4eb94fc7070ae117.scope: Deactivated successfully.
Oct 10 23:44:53 np0005480824 podman[266096]: 2025-10-11 03:44:53.683803604 +0000 UTC m=+0.180934813 container died baf9d3fb9af44b87fa4c5d73e71e52eb37865d9a512f75cb4eb94fc7070ae117 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 10 23:44:53 np0005480824 systemd[1]: var-lib-containers-storage-overlay-c56b060adc97b4c22e8f5bcc2f75cb90e542818fc901680d22be9c2f3fb538ea-merged.mount: Deactivated successfully.
Oct 10 23:44:53 np0005480824 podman[266096]: 2025-10-11 03:44:53.734162113 +0000 UTC m=+0.231293282 container remove baf9d3fb9af44b87fa4c5d73e71e52eb37865d9a512f75cb4eb94fc7070ae117 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:44:53 np0005480824 systemd[1]: libpod-conmon-baf9d3fb9af44b87fa4c5d73e71e52eb37865d9a512f75cb4eb94fc7070ae117.scope: Deactivated successfully.
Oct 10 23:44:53 np0005480824 podman[266137]: 2025-10-11 03:44:53.924715643 +0000 UTC m=+0.061510744 container create da11c49682196b85c39b143e1f64bdd7ed17ea3956d8b3d0f4ccbb04a060763f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_hellman, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:44:53 np0005480824 systemd[1]: Started libpod-conmon-da11c49682196b85c39b143e1f64bdd7ed17ea3956d8b3d0f4ccbb04a060763f.scope.
Oct 10 23:44:53 np0005480824 podman[266137]: 2025-10-11 03:44:53.895727418 +0000 UTC m=+0.032522589 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:44:54 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:44:54 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0c895b0357be8cba2c89db753400946538b240a3bea5edbe6bb9489ba7a66fc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:44:54 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0c895b0357be8cba2c89db753400946538b240a3bea5edbe6bb9489ba7a66fc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:44:54 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0c895b0357be8cba2c89db753400946538b240a3bea5edbe6bb9489ba7a66fc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:44:54 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0c895b0357be8cba2c89db753400946538b240a3bea5edbe6bb9489ba7a66fc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:44:54 np0005480824 podman[266137]: 2025-10-11 03:44:54.038055249 +0000 UTC m=+0.174850430 container init da11c49682196b85c39b143e1f64bdd7ed17ea3956d8b3d0f4ccbb04a060763f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_hellman, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 10 23:44:54 np0005480824 podman[266137]: 2025-10-11 03:44:54.052282695 +0000 UTC m=+0.189077786 container start da11c49682196b85c39b143e1f64bdd7ed17ea3956d8b3d0f4ccbb04a060763f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 10 23:44:54 np0005480824 podman[266137]: 2025-10-11 03:44:54.055787238 +0000 UTC m=+0.192582369 container attach da11c49682196b85c39b143e1f64bdd7ed17ea3956d8b3d0f4ccbb04a060763f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_hellman, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 10 23:44:54 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:44:54 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v877: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]: {
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:    "0": [
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:        {
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:            "devices": [
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:                "/dev/loop3"
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:            ],
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:            "lv_name": "ceph_lv0",
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:            "lv_size": "21470642176",
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0d82ce-20ea-470d-959e-f67202028a60,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:            "lv_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:            "name": "ceph_lv0",
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:            "tags": {
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:                "ceph.block_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:                "ceph.cluster_name": "ceph",
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:                "ceph.crush_device_class": "",
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:                "ceph.encrypted": "0",
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:                "ceph.osd_fsid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:                "ceph.osd_id": "0",
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:                "ceph.type": "block",
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:                "ceph.vdo": "0"
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:            },
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:            "type": "block",
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:            "vg_name": "ceph_vg0"
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:        }
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:    ],
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:    "1": [
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:        {
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:            "devices": [
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:                "/dev/loop4"
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:            ],
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:            "lv_name": "ceph_lv1",
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:            "lv_size": "21470642176",
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6875119e-c210-4ad1-aca9-6a8084a5ecc8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:            "lv_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:            "name": "ceph_lv1",
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:            "tags": {
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:                "ceph.block_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:                "ceph.cluster_name": "ceph",
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:                "ceph.crush_device_class": "",
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:                "ceph.encrypted": "0",
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:                "ceph.osd_fsid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:                "ceph.osd_id": "1",
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:                "ceph.type": "block",
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:                "ceph.vdo": "0"
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:            },
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:            "type": "block",
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:            "vg_name": "ceph_vg1"
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:        }
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:    ],
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:    "2": [
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:        {
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:            "devices": [
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:                "/dev/loop5"
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:            ],
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:            "lv_name": "ceph_lv2",
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:            "lv_size": "21470642176",
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e86945e8-6909-4584-9098-cee0dfe9add4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:            "lv_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:            "name": "ceph_lv2",
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:            "tags": {
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:                "ceph.block_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:                "ceph.cluster_name": "ceph",
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:                "ceph.crush_device_class": "",
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:                "ceph.encrypted": "0",
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:                "ceph.osd_fsid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:                "ceph.osd_id": "2",
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:                "ceph.type": "block",
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:                "ceph.vdo": "0"
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:            },
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:            "type": "block",
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:            "vg_name": "ceph_vg2"
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:        }
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]:    ]
Oct 10 23:44:54 np0005480824 stupefied_hellman[266154]: }
Oct 10 23:44:54 np0005480824 systemd[1]: libpod-da11c49682196b85c39b143e1f64bdd7ed17ea3956d8b3d0f4ccbb04a060763f.scope: Deactivated successfully.
Oct 10 23:44:54 np0005480824 podman[266137]: 2025-10-11 03:44:54.772234405 +0000 UTC m=+0.909029496 container died da11c49682196b85c39b143e1f64bdd7ed17ea3956d8b3d0f4ccbb04a060763f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_hellman, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:44:54 np0005480824 systemd[1]: var-lib-containers-storage-overlay-c0c895b0357be8cba2c89db753400946538b240a3bea5edbe6bb9489ba7a66fc-merged.mount: Deactivated successfully.
Oct 10 23:44:54 np0005480824 podman[266137]: 2025-10-11 03:44:54.835044808 +0000 UTC m=+0.971839889 container remove da11c49682196b85c39b143e1f64bdd7ed17ea3956d8b3d0f4ccbb04a060763f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_hellman, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:44:54 np0005480824 systemd[1]: libpod-conmon-da11c49682196b85c39b143e1f64bdd7ed17ea3956d8b3d0f4ccbb04a060763f.scope: Deactivated successfully.
Oct 10 23:44:55 np0005480824 podman[266316]: 2025-10-11 03:44:55.605297276 +0000 UTC m=+0.037039945 container create 2a98e8ec341a342caccf8d463ef45b6433fdf662a3d0a90d801f0c248b5688d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_keller, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:44:55 np0005480824 systemd[1]: Started libpod-conmon-2a98e8ec341a342caccf8d463ef45b6433fdf662a3d0a90d801f0c248b5688d1.scope.
Oct 10 23:44:55 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:44:55 np0005480824 podman[266316]: 2025-10-11 03:44:55.589835001 +0000 UTC m=+0.021577690 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:44:55 np0005480824 podman[266316]: 2025-10-11 03:44:55.683526393 +0000 UTC m=+0.115269092 container init 2a98e8ec341a342caccf8d463ef45b6433fdf662a3d0a90d801f0c248b5688d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_keller, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:44:55 np0005480824 podman[266316]: 2025-10-11 03:44:55.697695458 +0000 UTC m=+0.129438127 container start 2a98e8ec341a342caccf8d463ef45b6433fdf662a3d0a90d801f0c248b5688d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 10 23:44:55 np0005480824 podman[266316]: 2025-10-11 03:44:55.700990476 +0000 UTC m=+0.132733175 container attach 2a98e8ec341a342caccf8d463ef45b6433fdf662a3d0a90d801f0c248b5688d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_keller, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:44:55 np0005480824 blissful_keller[266333]: 167 167
Oct 10 23:44:55 np0005480824 systemd[1]: libpod-2a98e8ec341a342caccf8d463ef45b6433fdf662a3d0a90d801f0c248b5688d1.scope: Deactivated successfully.
Oct 10 23:44:55 np0005480824 conmon[266333]: conmon 2a98e8ec341a342caccf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2a98e8ec341a342caccf8d463ef45b6433fdf662a3d0a90d801f0c248b5688d1.scope/container/memory.events
Oct 10 23:44:55 np0005480824 podman[266316]: 2025-10-11 03:44:55.705500793 +0000 UTC m=+0.137243452 container died 2a98e8ec341a342caccf8d463ef45b6433fdf662a3d0a90d801f0c248b5688d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_keller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 10 23:44:55 np0005480824 systemd[1]: var-lib-containers-storage-overlay-326c2800ecae20a544d0ca1169de626cd755be23bf9c84b1e885b8e873890084-merged.mount: Deactivated successfully.
Oct 10 23:44:55 np0005480824 podman[266316]: 2025-10-11 03:44:55.736613577 +0000 UTC m=+0.168356246 container remove 2a98e8ec341a342caccf8d463ef45b6433fdf662a3d0a90d801f0c248b5688d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_keller, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 10 23:44:55 np0005480824 systemd[1]: libpod-conmon-2a98e8ec341a342caccf8d463ef45b6433fdf662a3d0a90d801f0c248b5688d1.scope: Deactivated successfully.
Oct 10 23:44:55 np0005480824 podman[266357]: 2025-10-11 03:44:55.905993047 +0000 UTC m=+0.036498933 container create 85a0f2c73207478d6e97fd87e926eafe53b5e7505abc7728f947746c23543b84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_matsumoto, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 10 23:44:55 np0005480824 systemd[1]: Started libpod-conmon-85a0f2c73207478d6e97fd87e926eafe53b5e7505abc7728f947746c23543b84.scope.
Oct 10 23:44:55 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:44:55 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5299a61e4dba7c8627aedb5f7821c250abdf54c595befcd953a8d6d3f47efdc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:44:55 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5299a61e4dba7c8627aedb5f7821c250abdf54c595befcd953a8d6d3f47efdc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:44:55 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5299a61e4dba7c8627aedb5f7821c250abdf54c595befcd953a8d6d3f47efdc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:44:55 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5299a61e4dba7c8627aedb5f7821c250abdf54c595befcd953a8d6d3f47efdc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:44:55 np0005480824 podman[266357]: 2025-10-11 03:44:55.985048524 +0000 UTC m=+0.115554410 container init 85a0f2c73207478d6e97fd87e926eafe53b5e7505abc7728f947746c23543b84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_matsumoto, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:44:55 np0005480824 podman[266357]: 2025-10-11 03:44:55.889936638 +0000 UTC m=+0.020442534 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:44:55 np0005480824 podman[266357]: 2025-10-11 03:44:55.990972123 +0000 UTC m=+0.121478019 container start 85a0f2c73207478d6e97fd87e926eafe53b5e7505abc7728f947746c23543b84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True)
Oct 10 23:44:55 np0005480824 podman[266357]: 2025-10-11 03:44:55.994040156 +0000 UTC m=+0.124546042 container attach 85a0f2c73207478d6e97fd87e926eafe53b5e7505abc7728f947746c23543b84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_matsumoto, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:44:56 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v878: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:44:57 np0005480824 confident_matsumoto[266374]: {
Oct 10 23:44:57 np0005480824 confident_matsumoto[266374]:    "1d0d82ce-20ea-470d-959e-f67202028a60": {
Oct 10 23:44:57 np0005480824 confident_matsumoto[266374]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:44:57 np0005480824 confident_matsumoto[266374]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 10 23:44:57 np0005480824 confident_matsumoto[266374]:        "osd_id": 0,
Oct 10 23:44:57 np0005480824 confident_matsumoto[266374]:        "osd_uuid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:44:57 np0005480824 confident_matsumoto[266374]:        "type": "bluestore"
Oct 10 23:44:57 np0005480824 confident_matsumoto[266374]:    },
Oct 10 23:44:57 np0005480824 confident_matsumoto[266374]:    "6875119e-c210-4ad1-aca9-6a8084a5ecc8": {
Oct 10 23:44:57 np0005480824 confident_matsumoto[266374]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:44:57 np0005480824 confident_matsumoto[266374]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 10 23:44:57 np0005480824 confident_matsumoto[266374]:        "osd_id": 1,
Oct 10 23:44:57 np0005480824 confident_matsumoto[266374]:        "osd_uuid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:44:57 np0005480824 confident_matsumoto[266374]:        "type": "bluestore"
Oct 10 23:44:57 np0005480824 confident_matsumoto[266374]:    },
Oct 10 23:44:57 np0005480824 confident_matsumoto[266374]:    "e86945e8-6909-4584-9098-cee0dfe9add4": {
Oct 10 23:44:57 np0005480824 confident_matsumoto[266374]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:44:57 np0005480824 confident_matsumoto[266374]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 10 23:44:57 np0005480824 confident_matsumoto[266374]:        "osd_id": 2,
Oct 10 23:44:57 np0005480824 confident_matsumoto[266374]:        "osd_uuid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:44:57 np0005480824 confident_matsumoto[266374]:        "type": "bluestore"
Oct 10 23:44:57 np0005480824 confident_matsumoto[266374]:    }
Oct 10 23:44:57 np0005480824 confident_matsumoto[266374]: }
Oct 10 23:44:57 np0005480824 systemd[1]: libpod-85a0f2c73207478d6e97fd87e926eafe53b5e7505abc7728f947746c23543b84.scope: Deactivated successfully.
Oct 10 23:44:57 np0005480824 podman[266357]: 2025-10-11 03:44:57.061462841 +0000 UTC m=+1.191968737 container died 85a0f2c73207478d6e97fd87e926eafe53b5e7505abc7728f947746c23543b84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 10 23:44:57 np0005480824 systemd[1]: libpod-85a0f2c73207478d6e97fd87e926eafe53b5e7505abc7728f947746c23543b84.scope: Consumed 1.073s CPU time.
Oct 10 23:44:57 np0005480824 systemd[1]: var-lib-containers-storage-overlay-e5299a61e4dba7c8627aedb5f7821c250abdf54c595befcd953a8d6d3f47efdc-merged.mount: Deactivated successfully.
Oct 10 23:44:57 np0005480824 podman[266357]: 2025-10-11 03:44:57.12408397 +0000 UTC m=+1.254589856 container remove 85a0f2c73207478d6e97fd87e926eafe53b5e7505abc7728f947746c23543b84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 10 23:44:57 np0005480824 systemd[1]: libpod-conmon-85a0f2c73207478d6e97fd87e926eafe53b5e7505abc7728f947746c23543b84.scope: Deactivated successfully.
Oct 10 23:44:57 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 10 23:44:57 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:44:57 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 10 23:44:57 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:44:57 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 73a7af1e-e562-4a0c-9f64-84c9b24fc6cf does not exist
Oct 10 23:44:57 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 14ead068-18b2-42bc-b441-34577dc32415 does not exist
Oct 10 23:44:57 np0005480824 podman[266408]: 2025-10-11 03:44:57.210862649 +0000 UTC m=+0.103111476 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 10 23:44:57 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:44:57 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:44:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:44:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:44:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:44:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:44:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:44:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:44:58 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v879: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:44:58 np0005480824 ceph-mon[74326]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Oct 10 23:44:58 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:44:58.830806) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 23:44:58 np0005480824 ceph-mon[74326]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Oct 10 23:44:58 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154298830850, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 2053, "num_deletes": 251, "total_data_size": 3453768, "memory_usage": 3511104, "flush_reason": "Manual Compaction"}
Oct 10 23:44:58 np0005480824 ceph-mon[74326]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Oct 10 23:44:58 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154298848480, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 3378172, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16341, "largest_seqno": 18393, "table_properties": {"data_size": 3368867, "index_size": 5863, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18466, "raw_average_key_size": 19, "raw_value_size": 3350342, "raw_average_value_size": 3598, "num_data_blocks": 266, "num_entries": 931, "num_filter_entries": 931, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760154071, "oldest_key_time": 1760154071, "file_creation_time": 1760154298, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc2c00b6-74ab-4bd1-957a-6c6a75fb61ca", "db_session_id": "RJ9TM4FJNNQ2AWDFT4YB", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Oct 10 23:44:58 np0005480824 ceph-mon[74326]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 17728 microseconds, and 8678 cpu microseconds.
Oct 10 23:44:58 np0005480824 ceph-mon[74326]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 23:44:58 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:44:58.848532) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 3378172 bytes OK
Oct 10 23:44:58 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:44:58.848556) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Oct 10 23:44:58 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:44:58.850303) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Oct 10 23:44:58 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:44:58.850319) EVENT_LOG_v1 {"time_micros": 1760154298850314, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 23:44:58 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:44:58.850344) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 23:44:58 np0005480824 ceph-mon[74326]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 3445170, prev total WAL file size 3445170, number of live WAL files 2.
Oct 10 23:44:58 np0005480824 ceph-mon[74326]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 23:44:58 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:44:58.851500) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Oct 10 23:44:58 np0005480824 ceph-mon[74326]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 23:44:58 np0005480824 ceph-mon[74326]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(3298KB)], [38(7576KB)]
Oct 10 23:44:58 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154298851563, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 11136950, "oldest_snapshot_seqno": -1}
Oct 10 23:44:58 np0005480824 ceph-mon[74326]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4421 keys, 9361609 bytes, temperature: kUnknown
Oct 10 23:44:58 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154298905313, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 9361609, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9328278, "index_size": 21189, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11077, "raw_key_size": 106972, "raw_average_key_size": 24, "raw_value_size": 9244577, "raw_average_value_size": 2091, "num_data_blocks": 901, "num_entries": 4421, "num_filter_entries": 4421, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760152715, "oldest_key_time": 0, "file_creation_time": 1760154298, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc2c00b6-74ab-4bd1-957a-6c6a75fb61ca", "db_session_id": "RJ9TM4FJNNQ2AWDFT4YB", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Oct 10 23:44:58 np0005480824 ceph-mon[74326]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 23:44:58 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:44:58.905573) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 9361609 bytes
Oct 10 23:44:58 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:44:58.907123) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 206.9 rd, 173.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.4 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(6.1) write-amplify(2.8) OK, records in: 4935, records dropped: 514 output_compression: NoCompression
Oct 10 23:44:58 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:44:58.907143) EVENT_LOG_v1 {"time_micros": 1760154298907133, "job": 18, "event": "compaction_finished", "compaction_time_micros": 53831, "compaction_time_cpu_micros": 27537, "output_level": 6, "num_output_files": 1, "total_output_size": 9361609, "num_input_records": 4935, "num_output_records": 4421, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 23:44:58 np0005480824 ceph-mon[74326]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 23:44:58 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154298907870, "job": 18, "event": "table_file_deletion", "file_number": 40}
Oct 10 23:44:58 np0005480824 ceph-mon[74326]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 23:44:58 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154298909311, "job": 18, "event": "table_file_deletion", "file_number": 38}
Oct 10 23:44:58 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:44:58.851377) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:44:58 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:44:58.909485) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:44:58 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:44:58.909492) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:44:58 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:44:58.909493) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:44:58 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:44:58.909495) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:44:58 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:44:58.909496) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:44:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:44:59 np0005480824 ceph-mon[74326]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Oct 10 23:44:59 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:44:59.417789) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 23:44:59 np0005480824 ceph-mon[74326]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Oct 10 23:44:59 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154299417867, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 255, "num_deletes": 250, "total_data_size": 13828, "memory_usage": 19712, "flush_reason": "Manual Compaction"}
Oct 10 23:44:59 np0005480824 ceph-mon[74326]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Oct 10 23:44:59 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154299422714, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 13577, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18394, "largest_seqno": 18648, "table_properties": {"data_size": 11825, "index_size": 49, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 645, "raw_key_size": 4874, "raw_average_key_size": 19, "raw_value_size": 8443, "raw_average_value_size": 33, "num_data_blocks": 2, "num_entries": 255, "num_filter_entries": 255, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760154299, "oldest_key_time": 1760154299, "file_creation_time": 1760154299, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc2c00b6-74ab-4bd1-957a-6c6a75fb61ca", "db_session_id": "RJ9TM4FJNNQ2AWDFT4YB", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Oct 10 23:44:59 np0005480824 ceph-mon[74326]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 4974 microseconds, and 1230 cpu microseconds.
Oct 10 23:44:59 np0005480824 ceph-mon[74326]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 23:44:59 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:44:59.422768) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 13577 bytes OK
Oct 10 23:44:59 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:44:59.422791) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Oct 10 23:44:59 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:44:59.424407) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Oct 10 23:44:59 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:44:59.424434) EVENT_LOG_v1 {"time_micros": 1760154299424425, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 23:44:59 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:44:59.424461) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 23:44:59 np0005480824 ceph-mon[74326]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 11817, prev total WAL file size 11817, number of live WAL files 2.
Oct 10 23:44:59 np0005480824 ceph-mon[74326]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 23:44:59 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:44:59.425057) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353031' seq:72057594037927935, type:22 .. '6D67727374617400373532' seq:0, type:0; will stop at (end)
Oct 10 23:44:59 np0005480824 ceph-mon[74326]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 23:44:59 np0005480824 ceph-mon[74326]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(13KB)], [41(9142KB)]
Oct 10 23:44:59 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154299425108, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 9375186, "oldest_snapshot_seqno": -1}
Oct 10 23:44:59 np0005480824 ceph-mon[74326]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4172 keys, 6087971 bytes, temperature: kUnknown
Oct 10 23:44:59 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154299474964, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 6087971, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6061010, "index_size": 15460, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10437, "raw_key_size": 102240, "raw_average_key_size": 24, "raw_value_size": 5986274, "raw_average_value_size": 1434, "num_data_blocks": 652, "num_entries": 4172, "num_filter_entries": 4172, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760152715, "oldest_key_time": 0, "file_creation_time": 1760154299, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc2c00b6-74ab-4bd1-957a-6c6a75fb61ca", "db_session_id": "RJ9TM4FJNNQ2AWDFT4YB", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Oct 10 23:44:59 np0005480824 ceph-mon[74326]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 23:44:59 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:44:59.475352) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 6087971 bytes
Oct 10 23:44:59 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:44:59.476951) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 187.6 rd, 121.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 8.9 +0.0 blob) out(5.8 +0.0 blob), read-write-amplify(1138.9) write-amplify(448.4) OK, records in: 4676, records dropped: 504 output_compression: NoCompression
Oct 10 23:44:59 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:44:59.476984) EVENT_LOG_v1 {"time_micros": 1760154299476968, "job": 20, "event": "compaction_finished", "compaction_time_micros": 49977, "compaction_time_cpu_micros": 33533, "output_level": 6, "num_output_files": 1, "total_output_size": 6087971, "num_input_records": 4676, "num_output_records": 4172, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 23:44:59 np0005480824 ceph-mon[74326]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 23:44:59 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154299477240, "job": 20, "event": "table_file_deletion", "file_number": 43}
Oct 10 23:44:59 np0005480824 ceph-mon[74326]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 23:44:59 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154299481071, "job": 20, "event": "table_file_deletion", "file_number": 41}
Oct 10 23:44:59 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:44:59.424948) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:44:59 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:44:59.481345) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:44:59 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:44:59.481352) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:44:59 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:44:59.481354) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:44:59 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:44:59.481359) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:44:59 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:44:59.481363) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:45:00 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v880: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:45:00 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Oct 10 23:45:00 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Oct 10 23:45:00 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Oct 10 23:45:01 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Oct 10 23:45:01 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Oct 10 23:45:01 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Oct 10 23:45:02 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v883: 321 pgs: 321 active+clean; 8.4 MiB data, 148 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 1.0 MiB/s wr, 9 op/s
Oct 10 23:45:02 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Oct 10 23:45:02 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Oct 10 23:45:02 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Oct 10 23:45:04 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:45:04 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v885: 321 pgs: 321 active+clean; 8.4 MiB data, 148 MiB used, 60 GiB / 60 GiB avail; 9.7 KiB/s rd, 1.3 MiB/s wr, 13 op/s
Oct 10 23:45:04 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Oct 10 23:45:04 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Oct 10 23:45:04 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Oct 10 23:45:06 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v887: 321 pgs: 321 active+clean; 8.4 MiB data, 148 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.4 MiB/s wr, 14 op/s
Oct 10 23:45:08 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v888: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 6.2 MiB/s wr, 57 op/s
Oct 10 23:45:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:45:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Oct 10 23:45:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Oct 10 23:45:09 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Oct 10 23:45:10 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v890: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 4.3 MiB/s wr, 39 op/s
Oct 10 23:45:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:45:10.484 162245 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:45:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:45:10.485 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:45:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:45:10.485 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:45:12 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v891: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 4.1 MiB/s wr, 37 op/s
Oct 10 23:45:13 np0005480824 podman[266488]: 2025-10-11 03:45:13.058995754 +0000 UTC m=+0.112950219 container health_status f2b19cad22d0fbdd185b264c3c8b6443d09a02319dc9fb0585dc69a0f24a758d (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct 10 23:45:13 np0005480824 podman[266487]: 2025-10-11 03:45:13.065840455 +0000 UTC m=+0.114623607 container health_status 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 10 23:45:14 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:45:14 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v892: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 3.4 MiB/s wr, 31 op/s
Oct 10 23:45:16 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v893: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.3 MiB/s wr, 30 op/s
Oct 10 23:45:18 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v894: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:45:19 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:45:20 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:45:20.069 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:30:f4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'fe:89:7c:57:3f:71'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 23:45:20 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:45:20.070 162245 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 10 23:45:20 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v895: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:45:21 np0005480824 podman[266524]: 2025-10-11 03:45:21.07469063 +0000 UTC m=+0.127246126 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 10 23:45:22 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v896: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:45:22 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Oct 10 23:45:22 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Oct 10 23:45:22 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Oct 10 23:45:23 np0005480824 nova_compute[260089]: 2025-10-11 03:45:23.296 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:45:23 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Oct 10 23:45:23 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Oct 10 23:45:23 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Oct 10 23:45:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:45:24 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v899: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:45:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:45:24 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2581694260' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:45:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:45:24 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2581694260' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:45:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Oct 10 23:45:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Oct 10 23:45:25 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Oct 10 23:45:25 np0005480824 nova_compute[260089]: 2025-10-11 03:45:25.293 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:45:25 np0005480824 nova_compute[260089]: 2025-10-11 03:45:25.293 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:45:26 np0005480824 nova_compute[260089]: 2025-10-11 03:45:26.296 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:45:26 np0005480824 nova_compute[260089]: 2025-10-11 03:45:26.297 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 10 23:45:26 np0005480824 nova_compute[260089]: 2025-10-11 03:45:26.297 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 10 23:45:26 np0005480824 nova_compute[260089]: 2025-10-11 03:45:26.312 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 10 23:45:26 np0005480824 nova_compute[260089]: 2025-10-11 03:45:26.313 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:45:26 np0005480824 nova_compute[260089]: 2025-10-11 03:45:26.314 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:45:26 np0005480824 nova_compute[260089]: 2025-10-11 03:45:26.342 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:45:26 np0005480824 nova_compute[260089]: 2025-10-11 03:45:26.343 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:45:26 np0005480824 nova_compute[260089]: 2025-10-11 03:45:26.343 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:45:26 np0005480824 nova_compute[260089]: 2025-10-11 03:45:26.344 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 10 23:45:26 np0005480824 nova_compute[260089]: 2025-10-11 03:45:26.344 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:45:26 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v901: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:45:26 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:45:26 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2231193753' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:45:26 np0005480824 nova_compute[260089]: 2025-10-11 03:45:26.844 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:45:27 np0005480824 nova_compute[260089]: 2025-10-11 03:45:27.148 2 WARNING nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 10 23:45:27 np0005480824 nova_compute[260089]: 2025-10-11 03:45:27.149 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5181MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 10 23:45:27 np0005480824 nova_compute[260089]: 2025-10-11 03:45:27.150 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:45:27 np0005480824 nova_compute[260089]: 2025-10-11 03:45:27.150 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:45:27 np0005480824 nova_compute[260089]: 2025-10-11 03:45:27.227 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 10 23:45:27 np0005480824 nova_compute[260089]: 2025-10-11 03:45:27.228 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 10 23:45:27 np0005480824 nova_compute[260089]: 2025-10-11 03:45:27.245 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:45:27 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:45:27 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4228733556' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:45:27 np0005480824 nova_compute[260089]: 2025-10-11 03:45:27.716 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:45:27 np0005480824 nova_compute[260089]: 2025-10-11 03:45:27.724 2 DEBUG nova.compute.provider_tree [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 23:45:27 np0005480824 nova_compute[260089]: 2025-10-11 03:45:27.745 2 DEBUG nova.scheduler.client.report [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 23:45:27 np0005480824 nova_compute[260089]: 2025-10-11 03:45:27.746 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 10 23:45:27 np0005480824 nova_compute[260089]: 2025-10-11 03:45:27.747 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:45:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:45:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:45:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:45:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:45:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:45:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:45:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Optimize plan auto_2025-10-11_03:45:27
Oct 10 23:45:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 23:45:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] do_upmap
Oct 10 23:45:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] pools ['cephfs.cephfs.data', 'vms', 'default.rgw.control', '.mgr', 'default.rgw.log', 'volumes', 'cephfs.cephfs.meta', 'backups', 'default.rgw.meta', '.rgw.root', 'images']
Oct 10 23:45:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] prepared 0/10 changes
Oct 10 23:45:28 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Oct 10 23:45:28 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Oct 10 23:45:28 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Oct 10 23:45:28 np0005480824 podman[266595]: 2025-10-11 03:45:28.06826931 +0000 UTC m=+0.110701075 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 10 23:45:28 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:45:28.071 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=14b06507-d00b-4e27-a47d-46a5c2644635, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:45:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 23:45:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 23:45:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:45:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:45:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:45:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:45:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:45:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:45:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:45:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:45:28 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v903: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 1.8 KiB/s wr, 16 op/s
Oct 10 23:45:28 np0005480824 nova_compute[260089]: 2025-10-11 03:45:28.730 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:45:28 np0005480824 nova_compute[260089]: 2025-10-11 03:45:28.731 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:45:28 np0005480824 nova_compute[260089]: 2025-10-11 03:45:28.732 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:45:28 np0005480824 nova_compute[260089]: 2025-10-11 03:45:28.732 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 10 23:45:29 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:45:29 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Oct 10 23:45:29 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Oct 10 23:45:29 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Oct 10 23:45:29 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:45:29 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1924678903' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:45:29 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:45:29 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1924678903' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:45:30 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v905: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.7 KiB/s wr, 15 op/s
Oct 10 23:45:31 np0005480824 nova_compute[260089]: 2025-10-11 03:45:31.299 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:45:32 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:45:32 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3715901299' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:45:32 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:45:32 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3715901299' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:45:32 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v906: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 4.8 KiB/s wr, 93 op/s
Oct 10 23:45:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Oct 10 23:45:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Oct 10 23:45:34 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Oct 10 23:45:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:45:34 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v908: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 4.3 KiB/s wr, 96 op/s
Oct 10 23:45:35 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:45:35 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/295556967' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:45:35 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:45:35 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/295556967' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:45:36 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v909: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s rd, 3.5 KiB/s wr, 78 op/s
Oct 10 23:45:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 23:45:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:45:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 23:45:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:45:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:45:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:45:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 10 23:45:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:45:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:45:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:45:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 10 23:45:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:45:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 23:45:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:45:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:45:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:45:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 10 23:45:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:45:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 23:45:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:45:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:45:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:45:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 10 23:45:38 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v910: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 84 KiB/s rd, 4.6 KiB/s wr, 111 op/s
Oct 10 23:45:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:45:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Oct 10 23:45:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Oct 10 23:45:39 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Oct 10 23:45:40 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v912: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 2.0 KiB/s wr, 50 op/s
Oct 10 23:45:42 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v913: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 2.6 KiB/s wr, 48 op/s
Oct 10 23:45:43 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Oct 10 23:45:43 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Oct 10 23:45:43 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Oct 10 23:45:44 np0005480824 podman[266614]: 2025-10-11 03:45:44.01399086 +0000 UTC m=+0.064902132 container health_status 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3)
Oct 10 23:45:44 np0005480824 podman[266615]: 2025-10-11 03:45:44.040658059 +0000 UTC m=+0.087580337 container health_status f2b19cad22d0fbdd185b264c3c8b6443d09a02319dc9fb0585dc69a0f24a758d (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 10 23:45:44 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Oct 10 23:45:44 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Oct 10 23:45:44 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Oct 10 23:45:44 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 23:45:44 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v916: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1.3 KiB/s wr, 3 op/s
Oct 10 23:45:46 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Oct 10 23:45:46 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Oct 10 23:45:46 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Oct 10 23:45:46 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v918: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1.3 KiB/s wr, 3 op/s
Oct 10 23:45:47 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Oct 10 23:45:47 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Oct 10 23:45:47 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Oct 10 23:45:48 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v920: 321 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 317 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s rd, 53 KiB/s wr, 81 op/s
Oct 10 23:45:49 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:45:49 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Oct 10 23:45:49 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Oct 10 23:45:49 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Oct 10 23:45:49 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:45:49 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/925191586' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:45:49 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:45:49 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/925191586' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:45:50 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v922: 321 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 317 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 47 KiB/s wr, 72 op/s
Oct 10 23:45:52 np0005480824 podman[266653]: 2025-10-11 03:45:52.079480955 +0000 UTC m=+0.135213571 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_managed=true)
Oct 10 23:45:52 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v923: 321 pgs: 321 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 89 KiB/s rd, 46 KiB/s wr, 122 op/s
Oct 10 23:45:53 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Oct 10 23:45:53 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Oct 10 23:45:53 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Oct 10 23:45:54 np0005480824 nova_compute[260089]: 2025-10-11 03:45:54.251 2 DEBUG oslo_concurrency.lockutils [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Acquiring lock "349e8a73-9a19-4cee-89a9-50edc475a575" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:45:54 np0005480824 nova_compute[260089]: 2025-10-11 03:45:54.251 2 DEBUG oslo_concurrency.lockutils [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Lock "349e8a73-9a19-4cee-89a9-50edc475a575" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:45:54 np0005480824 nova_compute[260089]: 2025-10-11 03:45:54.278 2 DEBUG nova.compute.manager [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 10 23:45:54 np0005480824 nova_compute[260089]: 2025-10-11 03:45:54.434 2 DEBUG oslo_concurrency.lockutils [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:45:54 np0005480824 nova_compute[260089]: 2025-10-11 03:45:54.435 2 DEBUG oslo_concurrency.lockutils [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:45:54 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:45:54 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Oct 10 23:45:54 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Oct 10 23:45:54 np0005480824 nova_compute[260089]: 2025-10-11 03:45:54.443 2 DEBUG nova.virt.hardware [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 10 23:45:54 np0005480824 nova_compute[260089]: 2025-10-11 03:45:54.444 2 INFO nova.compute.claims [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 10 23:45:54 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Oct 10 23:45:54 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v926: 321 pgs: 321 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 1.7 KiB/s wr, 56 op/s
Oct 10 23:45:54 np0005480824 nova_compute[260089]: 2025-10-11 03:45:54.576 2 DEBUG oslo_concurrency.processutils [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:45:54 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:45:54 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/295599276' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:45:54 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:45:54 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/295599276' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:45:55 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:45:55 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/909600222' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:45:55 np0005480824 nova_compute[260089]: 2025-10-11 03:45:55.040 2 DEBUG oslo_concurrency.processutils [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:45:55 np0005480824 nova_compute[260089]: 2025-10-11 03:45:55.049 2 DEBUG nova.compute.provider_tree [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 23:45:55 np0005480824 nova_compute[260089]: 2025-10-11 03:45:55.070 2 DEBUG nova.scheduler.client.report [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 23:45:55 np0005480824 nova_compute[260089]: 2025-10-11 03:45:55.101 2 DEBUG oslo_concurrency.lockutils [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:45:55 np0005480824 nova_compute[260089]: 2025-10-11 03:45:55.102 2 DEBUG nova.compute.manager [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 10 23:45:55 np0005480824 nova_compute[260089]: 2025-10-11 03:45:55.159 2 DEBUG nova.compute.manager [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 10 23:45:55 np0005480824 nova_compute[260089]: 2025-10-11 03:45:55.159 2 DEBUG nova.network.neutron [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 10 23:45:55 np0005480824 nova_compute[260089]: 2025-10-11 03:45:55.187 2 INFO nova.virt.libvirt.driver [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 10 23:45:55 np0005480824 nova_compute[260089]: 2025-10-11 03:45:55.213 2 DEBUG nova.compute.manager [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 10 23:45:55 np0005480824 nova_compute[260089]: 2025-10-11 03:45:55.296 2 DEBUG nova.compute.manager [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 10 23:45:55 np0005480824 nova_compute[260089]: 2025-10-11 03:45:55.297 2 DEBUG nova.virt.libvirt.driver [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 10 23:45:55 np0005480824 nova_compute[260089]: 2025-10-11 03:45:55.297 2 INFO nova.virt.libvirt.driver [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Creating image(s)#033[00m
Oct 10 23:45:55 np0005480824 nova_compute[260089]: 2025-10-11 03:45:55.324 2 DEBUG nova.storage.rbd_utils [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] rbd image 349e8a73-9a19-4cee-89a9-50edc475a575_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:45:55 np0005480824 nova_compute[260089]: 2025-10-11 03:45:55.349 2 DEBUG nova.storage.rbd_utils [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] rbd image 349e8a73-9a19-4cee-89a9-50edc475a575_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:45:55 np0005480824 nova_compute[260089]: 2025-10-11 03:45:55.375 2 DEBUG nova.storage.rbd_utils [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] rbd image 349e8a73-9a19-4cee-89a9-50edc475a575_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:45:55 np0005480824 nova_compute[260089]: 2025-10-11 03:45:55.379 2 DEBUG oslo_concurrency.lockutils [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Acquiring lock "cfffd1283a157d100c77a9cb8e3d536b83503a4e" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:45:55 np0005480824 nova_compute[260089]: 2025-10-11 03:45:55.380 2 DEBUG oslo_concurrency.lockutils [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Lock "cfffd1283a157d100c77a9cb8e3d536b83503a4e" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:45:55 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Oct 10 23:45:55 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Oct 10 23:45:55 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Oct 10 23:45:55 np0005480824 nova_compute[260089]: 2025-10-11 03:45:55.841 2 DEBUG nova.virt.libvirt.imagebackend [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Image locations are: [{'url': 'rbd://92cfe4d4-4917-5be1-9d00-73758793a62b/images/7caca022-7dcc-40a9-8bd8-eb7d91b29390/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://92cfe4d4-4917-5be1-9d00-73758793a62b/images/7caca022-7dcc-40a9-8bd8-eb7d91b29390/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Oct 10 23:45:56 np0005480824 nova_compute[260089]: 2025-10-11 03:45:56.021 2 WARNING oslo_policy.policy [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Oct 10 23:45:56 np0005480824 nova_compute[260089]: 2025-10-11 03:45:56.021 2 WARNING oslo_policy.policy [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Oct 10 23:45:56 np0005480824 nova_compute[260089]: 2025-10-11 03:45:56.024 2 DEBUG nova.policy [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f7e60061152c4dbb80545545c356cabc', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6d7871f9f8a74d2d85dc275b42df9042', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 10 23:45:56 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v928: 321 pgs: 321 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 1.7 KiB/s wr, 56 op/s
Oct 10 23:45:56 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:45:56 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1294692779' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:45:56 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:45:56 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1294692779' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:45:56 np0005480824 nova_compute[260089]: 2025-10-11 03:45:56.841 2 DEBUG nova.network.neutron [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Successfully created port: ae972b5d-4250-48ac-9b8a-e35678042b82 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 10 23:45:57 np0005480824 nova_compute[260089]: 2025-10-11 03:45:57.197 2 DEBUG oslo_concurrency.processutils [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cfffd1283a157d100c77a9cb8e3d536b83503a4e.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:45:57 np0005480824 nova_compute[260089]: 2025-10-11 03:45:57.262 2 DEBUG oslo_concurrency.processutils [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cfffd1283a157d100c77a9cb8e3d536b83503a4e.part --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:45:57 np0005480824 nova_compute[260089]: 2025-10-11 03:45:57.264 2 DEBUG nova.virt.images [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] 7caca022-7dcc-40a9-8bd8-eb7d91b29390 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Oct 10 23:45:57 np0005480824 nova_compute[260089]: 2025-10-11 03:45:57.266 2 DEBUG nova.privsep.utils [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Oct 10 23:45:57 np0005480824 nova_compute[260089]: 2025-10-11 03:45:57.267 2 DEBUG oslo_concurrency.processutils [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/cfffd1283a157d100c77a9cb8e3d536b83503a4e.part /var/lib/nova/instances/_base/cfffd1283a157d100c77a9cb8e3d536b83503a4e.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:45:57 np0005480824 nova_compute[260089]: 2025-10-11 03:45:57.516 2 DEBUG oslo_concurrency.processutils [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/cfffd1283a157d100c77a9cb8e3d536b83503a4e.part /var/lib/nova/instances/_base/cfffd1283a157d100c77a9cb8e3d536b83503a4e.converted" returned: 0 in 0.249s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:45:57 np0005480824 nova_compute[260089]: 2025-10-11 03:45:57.526 2 DEBUG oslo_concurrency.processutils [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cfffd1283a157d100c77a9cb8e3d536b83503a4e.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:45:57 np0005480824 nova_compute[260089]: 2025-10-11 03:45:57.614 2 DEBUG oslo_concurrency.processutils [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cfffd1283a157d100c77a9cb8e3d536b83503a4e.converted --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:45:57 np0005480824 nova_compute[260089]: 2025-10-11 03:45:57.616 2 DEBUG oslo_concurrency.lockutils [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Lock "cfffd1283a157d100c77a9cb8e3d536b83503a4e" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.236s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:45:57 np0005480824 nova_compute[260089]: 2025-10-11 03:45:57.652 2 DEBUG nova.storage.rbd_utils [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] rbd image 349e8a73-9a19-4cee-89a9-50edc475a575_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:45:57 np0005480824 nova_compute[260089]: 2025-10-11 03:45:57.660 2 DEBUG oslo_concurrency.processutils [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/cfffd1283a157d100c77a9cb8e3d536b83503a4e 349e8a73-9a19-4cee-89a9-50edc475a575_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:45:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:45:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:45:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:45:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:45:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:45:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:45:57 np0005480824 nova_compute[260089]: 2025-10-11 03:45:57.932 2 DEBUG nova.network.neutron [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Successfully updated port: ae972b5d-4250-48ac-9b8a-e35678042b82 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 10 23:45:57 np0005480824 nova_compute[260089]: 2025-10-11 03:45:57.955 2 DEBUG oslo_concurrency.lockutils [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Acquiring lock "refresh_cache-349e8a73-9a19-4cee-89a9-50edc475a575" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:45:57 np0005480824 nova_compute[260089]: 2025-10-11 03:45:57.956 2 DEBUG oslo_concurrency.lockutils [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Acquired lock "refresh_cache-349e8a73-9a19-4cee-89a9-50edc475a575" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:45:57 np0005480824 nova_compute[260089]: 2025-10-11 03:45:57.956 2 DEBUG nova.network.neutron [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 10 23:45:58 np0005480824 nova_compute[260089]: 2025-10-11 03:45:58.406 2 DEBUG nova.compute.manager [req-e3ce9cff-aae5-4d6b-833d-860083a1cc5c req-272b0c30-ed6c-48da-896a-c56918444e74 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Received event network-changed-ae972b5d-4250-48ac-9b8a-e35678042b82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:45:58 np0005480824 nova_compute[260089]: 2025-10-11 03:45:58.408 2 DEBUG nova.compute.manager [req-e3ce9cff-aae5-4d6b-833d-860083a1cc5c req-272b0c30-ed6c-48da-896a-c56918444e74 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Refreshing instance network info cache due to event network-changed-ae972b5d-4250-48ac-9b8a-e35678042b82. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 10 23:45:58 np0005480824 nova_compute[260089]: 2025-10-11 03:45:58.408 2 DEBUG oslo_concurrency.lockutils [req-e3ce9cff-aae5-4d6b-833d-860083a1cc5c req-272b0c30-ed6c-48da-896a-c56918444e74 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "refresh_cache-349e8a73-9a19-4cee-89a9-50edc475a575" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:45:58 np0005480824 nova_compute[260089]: 2025-10-11 03:45:58.418 2 DEBUG nova.network.neutron [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 10 23:45:58 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v929: 321 pgs: 321 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 4.2 KiB/s wr, 103 op/s
Oct 10 23:45:58 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:45:58 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:45:58 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 10 23:45:58 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:45:58 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 10 23:45:58 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:45:58 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev af99fe9d-b88e-4aff-919e-eea6a39df2a3 does not exist
Oct 10 23:45:58 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev ad6c2fcf-47a0-4c43-a5d1-19834d13d865 does not exist
Oct 10 23:45:58 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 49207483-de47-4281-bb55-b7fcf885b32a does not exist
Oct 10 23:45:58 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 10 23:45:58 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 23:45:58 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 10 23:45:58 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:45:58 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:45:58 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:45:58 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Oct 10 23:45:58 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:45:58 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:45:58 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:45:58 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Oct 10 23:45:58 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Oct 10 23:45:58 np0005480824 podman[266962]: 2025-10-11 03:45:58.69038025 +0000 UTC m=+0.069984452 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 10 23:45:58 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:45:58 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/34514600' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:45:58 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:45:58 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/34514600' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:45:59 np0005480824 nova_compute[260089]: 2025-10-11 03:45:59.173 2 DEBUG nova.network.neutron [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Updating instance_info_cache with network_info: [{"id": "ae972b5d-4250-48ac-9b8a-e35678042b82", "address": "fa:16:3e:68:49:57", "network": {"id": "fa909145-5687-40b4-825d-ce6ac3b98885", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-311959019-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d7871f9f8a74d2d85dc275b42df9042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae972b5d-42", "ovs_interfaceid": "ae972b5d-4250-48ac-9b8a-e35678042b82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:45:59 np0005480824 nova_compute[260089]: 2025-10-11 03:45:59.193 2 DEBUG oslo_concurrency.lockutils [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Releasing lock "refresh_cache-349e8a73-9a19-4cee-89a9-50edc475a575" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:45:59 np0005480824 nova_compute[260089]: 2025-10-11 03:45:59.193 2 DEBUG nova.compute.manager [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Instance network_info: |[{"id": "ae972b5d-4250-48ac-9b8a-e35678042b82", "address": "fa:16:3e:68:49:57", "network": {"id": "fa909145-5687-40b4-825d-ce6ac3b98885", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-311959019-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d7871f9f8a74d2d85dc275b42df9042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae972b5d-42", "ovs_interfaceid": "ae972b5d-4250-48ac-9b8a-e35678042b82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 10 23:45:59 np0005480824 nova_compute[260089]: 2025-10-11 03:45:59.194 2 DEBUG oslo_concurrency.lockutils [req-e3ce9cff-aae5-4d6b-833d-860083a1cc5c req-272b0c30-ed6c-48da-896a-c56918444e74 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquired lock "refresh_cache-349e8a73-9a19-4cee-89a9-50edc475a575" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:45:59 np0005480824 nova_compute[260089]: 2025-10-11 03:45:59.194 2 DEBUG nova.network.neutron [req-e3ce9cff-aae5-4d6b-833d-860083a1cc5c req-272b0c30-ed6c-48da-896a-c56918444e74 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Refreshing network info cache for port ae972b5d-4250-48ac-9b8a-e35678042b82 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 10 23:45:59 np0005480824 podman[267099]: 2025-10-11 03:45:59.302010006 +0000 UTC m=+0.069237233 container create ec7571295c549971051c05ffad0a74efe7ea9d186516385c33b74816d620a2fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_montalcini, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:45:59 np0005480824 systemd[1]: Started libpod-conmon-ec7571295c549971051c05ffad0a74efe7ea9d186516385c33b74816d620a2fd.scope.
Oct 10 23:45:59 np0005480824 podman[267099]: 2025-10-11 03:45:59.274666802 +0000 UTC m=+0.041894039 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:45:59 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:45:59 np0005480824 podman[267099]: 2025-10-11 03:45:59.434333488 +0000 UTC m=+0.201560765 container init ec7571295c549971051c05ffad0a74efe7ea9d186516385c33b74816d620a2fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_montalcini, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 10 23:45:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:45:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Oct 10 23:45:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Oct 10 23:45:59 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Oct 10 23:45:59 np0005480824 podman[267099]: 2025-10-11 03:45:59.453995202 +0000 UTC m=+0.221222419 container start ec7571295c549971051c05ffad0a74efe7ea9d186516385c33b74816d620a2fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 10 23:45:59 np0005480824 podman[267099]: 2025-10-11 03:45:59.465941554 +0000 UTC m=+0.233168831 container attach ec7571295c549971051c05ffad0a74efe7ea9d186516385c33b74816d620a2fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_montalcini, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:45:59 np0005480824 crazy_montalcini[267116]: 167 167
Oct 10 23:45:59 np0005480824 systemd[1]: libpod-ec7571295c549971051c05ffad0a74efe7ea9d186516385c33b74816d620a2fd.scope: Deactivated successfully.
Oct 10 23:45:59 np0005480824 podman[267099]: 2025-10-11 03:45:59.470373638 +0000 UTC m=+0.237600885 container died ec7571295c549971051c05ffad0a74efe7ea9d186516385c33b74816d620a2fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_montalcini, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 10 23:45:59 np0005480824 systemd[1]: var-lib-containers-storage-overlay-cfe0b01774b93251eb6f4ac731b1db945d6fab600cf39984e496387aab44c38d-merged.mount: Deactivated successfully.
Oct 10 23:45:59 np0005480824 podman[267099]: 2025-10-11 03:45:59.552691789 +0000 UTC m=+0.319919016 container remove ec7571295c549971051c05ffad0a74efe7ea9d186516385c33b74816d620a2fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_montalcini, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:45:59 np0005480824 systemd[1]: libpod-conmon-ec7571295c549971051c05ffad0a74efe7ea9d186516385c33b74816d620a2fd.scope: Deactivated successfully.
Oct 10 23:45:59 np0005480824 nova_compute[260089]: 2025-10-11 03:45:59.767 2 DEBUG oslo_concurrency.processutils [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/cfffd1283a157d100c77a9cb8e3d536b83503a4e 349e8a73-9a19-4cee-89a9-50edc475a575_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:45:59 np0005480824 podman[267144]: 2025-10-11 03:45:59.823232321 +0000 UTC m=+0.082573528 container create e96214d52183b8661cd31333f4a1b320f7b466a02ffece095cb7a0577da7c568 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_thompson, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True)
Oct 10 23:45:59 np0005480824 podman[267144]: 2025-10-11 03:45:59.790210932 +0000 UTC m=+0.049552179 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:45:59 np0005480824 systemd[1]: Started libpod-conmon-e96214d52183b8661cd31333f4a1b320f7b466a02ffece095cb7a0577da7c568.scope.
Oct 10 23:45:59 np0005480824 nova_compute[260089]: 2025-10-11 03:45:59.883 2 DEBUG nova.storage.rbd_utils [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] resizing rbd image 349e8a73-9a19-4cee-89a9-50edc475a575_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 10 23:45:59 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:45:59 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6494312ca113052b736b3babd3cab481f80badc942c4939818de1c5b61d9e411/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:45:59 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6494312ca113052b736b3babd3cab481f80badc942c4939818de1c5b61d9e411/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:45:59 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6494312ca113052b736b3babd3cab481f80badc942c4939818de1c5b61d9e411/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:45:59 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6494312ca113052b736b3babd3cab481f80badc942c4939818de1c5b61d9e411/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:45:59 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6494312ca113052b736b3babd3cab481f80badc942c4939818de1c5b61d9e411/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 23:45:59 np0005480824 podman[267144]: 2025-10-11 03:45:59.944195094 +0000 UTC m=+0.203536301 container init e96214d52183b8661cd31333f4a1b320f7b466a02ffece095cb7a0577da7c568 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_thompson, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:45:59 np0005480824 podman[267144]: 2025-10-11 03:45:59.960040898 +0000 UTC m=+0.219382105 container start e96214d52183b8661cd31333f4a1b320f7b466a02ffece095cb7a0577da7c568 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_thompson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 10 23:45:59 np0005480824 podman[267144]: 2025-10-11 03:45:59.964887322 +0000 UTC m=+0.224228529 container attach e96214d52183b8661cd31333f4a1b320f7b466a02ffece095cb7a0577da7c568 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:46:00 np0005480824 nova_compute[260089]: 2025-10-11 03:46:00.018 2 DEBUG nova.objects.instance [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Lazy-loading 'migration_context' on Instance uuid 349e8a73-9a19-4cee-89a9-50edc475a575 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:46:00 np0005480824 nova_compute[260089]: 2025-10-11 03:46:00.036 2 DEBUG nova.virt.libvirt.driver [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 10 23:46:00 np0005480824 nova_compute[260089]: 2025-10-11 03:46:00.037 2 DEBUG nova.virt.libvirt.driver [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Ensure instance console log exists: /var/lib/nova/instances/349e8a73-9a19-4cee-89a9-50edc475a575/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 10 23:46:00 np0005480824 nova_compute[260089]: 2025-10-11 03:46:00.037 2 DEBUG oslo_concurrency.lockutils [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:46:00 np0005480824 nova_compute[260089]: 2025-10-11 03:46:00.038 2 DEBUG oslo_concurrency.lockutils [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:46:00 np0005480824 nova_compute[260089]: 2025-10-11 03:46:00.038 2 DEBUG oslo_concurrency.lockutils [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:46:00 np0005480824 nova_compute[260089]: 2025-10-11 03:46:00.041 2 DEBUG nova.virt.libvirt.driver [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Start _get_guest_xml network_info=[{"id": "ae972b5d-4250-48ac-9b8a-e35678042b82", "address": "fa:16:3e:68:49:57", "network": {"id": "fa909145-5687-40b4-825d-ce6ac3b98885", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-311959019-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d7871f9f8a74d2d85dc275b42df9042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae972b5d-42", "ovs_interfaceid": "ae972b5d-4250-48ac-9b8a-e35678042b82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T03:44:59Z,direct_url=<?>,disk_format='qcow2',id=7caca022-7dcc-40a9-8bd8-eb7d91b29390,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a9b71164a3274fcfb966194e51cb4849',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T03:45:02Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'device_type': 'disk', 'image_id': '7caca022-7dcc-40a9-8bd8-eb7d91b29390'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 10 23:46:00 np0005480824 nova_compute[260089]: 2025-10-11 03:46:00.049 2 WARNING nova.virt.libvirt.driver [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 10 23:46:00 np0005480824 nova_compute[260089]: 2025-10-11 03:46:00.063 2 DEBUG nova.virt.libvirt.host [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 10 23:46:00 np0005480824 nova_compute[260089]: 2025-10-11 03:46:00.064 2 DEBUG nova.virt.libvirt.host [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 10 23:46:00 np0005480824 nova_compute[260089]: 2025-10-11 03:46:00.069 2 DEBUG nova.virt.libvirt.host [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 10 23:46:00 np0005480824 nova_compute[260089]: 2025-10-11 03:46:00.069 2 DEBUG nova.virt.libvirt.host [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 10 23:46:00 np0005480824 nova_compute[260089]: 2025-10-11 03:46:00.070 2 DEBUG nova.virt.libvirt.driver [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 10 23:46:00 np0005480824 nova_compute[260089]: 2025-10-11 03:46:00.070 2 DEBUG nova.virt.hardware [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T03:44:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6707ecae-2ae2-4c2d-86dc-409bac38f6a5',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T03:44:59Z,direct_url=<?>,disk_format='qcow2',id=7caca022-7dcc-40a9-8bd8-eb7d91b29390,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a9b71164a3274fcfb966194e51cb4849',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T03:45:02Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 10 23:46:00 np0005480824 nova_compute[260089]: 2025-10-11 03:46:00.071 2 DEBUG nova.virt.hardware [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 10 23:46:00 np0005480824 nova_compute[260089]: 2025-10-11 03:46:00.071 2 DEBUG nova.virt.hardware [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 10 23:46:00 np0005480824 nova_compute[260089]: 2025-10-11 03:46:00.071 2 DEBUG nova.virt.hardware [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 10 23:46:00 np0005480824 nova_compute[260089]: 2025-10-11 03:46:00.071 2 DEBUG nova.virt.hardware [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 10 23:46:00 np0005480824 nova_compute[260089]: 2025-10-11 03:46:00.072 2 DEBUG nova.virt.hardware [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 10 23:46:00 np0005480824 nova_compute[260089]: 2025-10-11 03:46:00.072 2 DEBUG nova.virt.hardware [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 10 23:46:00 np0005480824 nova_compute[260089]: 2025-10-11 03:46:00.072 2 DEBUG nova.virt.hardware [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 10 23:46:00 np0005480824 nova_compute[260089]: 2025-10-11 03:46:00.073 2 DEBUG nova.virt.hardware [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 10 23:46:00 np0005480824 nova_compute[260089]: 2025-10-11 03:46:00.073 2 DEBUG nova.virt.hardware [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 10 23:46:00 np0005480824 nova_compute[260089]: 2025-10-11 03:46:00.073 2 DEBUG nova.virt.hardware [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 10 23:46:00 np0005480824 nova_compute[260089]: 2025-10-11 03:46:00.077 2 DEBUG nova.privsep.utils [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Oct 10 23:46:00 np0005480824 nova_compute[260089]: 2025-10-11 03:46:00.078 2 DEBUG oslo_concurrency.processutils [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:46:00 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v932: 321 pgs: 321 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 4.0 KiB/s wr, 102 op/s
Oct 10 23:46:00 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:46:00 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2040818221' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:46:00 np0005480824 nova_compute[260089]: 2025-10-11 03:46:00.603 2 DEBUG oslo_concurrency.processutils [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:46:00 np0005480824 nova_compute[260089]: 2025-10-11 03:46:00.644 2 DEBUG nova.storage.rbd_utils [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] rbd image 349e8a73-9a19-4cee-89a9-50edc475a575_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:46:00 np0005480824 nova_compute[260089]: 2025-10-11 03:46:00.650 2 DEBUG oslo_concurrency.processutils [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:46:00 np0005480824 nova_compute[260089]: 2025-10-11 03:46:00.676 2 DEBUG nova.network.neutron [req-e3ce9cff-aae5-4d6b-833d-860083a1cc5c req-272b0c30-ed6c-48da-896a-c56918444e74 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Updated VIF entry in instance network info cache for port ae972b5d-4250-48ac-9b8a-e35678042b82. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 10 23:46:00 np0005480824 nova_compute[260089]: 2025-10-11 03:46:00.678 2 DEBUG nova.network.neutron [req-e3ce9cff-aae5-4d6b-833d-860083a1cc5c req-272b0c30-ed6c-48da-896a-c56918444e74 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Updating instance_info_cache with network_info: [{"id": "ae972b5d-4250-48ac-9b8a-e35678042b82", "address": "fa:16:3e:68:49:57", "network": {"id": "fa909145-5687-40b4-825d-ce6ac3b98885", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-311959019-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d7871f9f8a74d2d85dc275b42df9042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae972b5d-42", "ovs_interfaceid": "ae972b5d-4250-48ac-9b8a-e35678042b82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:46:00 np0005480824 nova_compute[260089]: 2025-10-11 03:46:00.782 2 DEBUG oslo_concurrency.lockutils [req-e3ce9cff-aae5-4d6b-833d-860083a1cc5c req-272b0c30-ed6c-48da-896a-c56918444e74 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Releasing lock "refresh_cache-349e8a73-9a19-4cee-89a9-50edc475a575" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:46:01 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:46:01 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3022141666' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:46:01 np0005480824 nova_compute[260089]: 2025-10-11 03:46:01.113 2 DEBUG oslo_concurrency.processutils [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:46:01 np0005480824 nova_compute[260089]: 2025-10-11 03:46:01.116 2 DEBUG nova.virt.libvirt.vif [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T03:45:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-EncryptedVolumesExtendAttachedTest-instance-1257432866',display_name='tempest-EncryptedVolumesExtendAttachedTest-instance-1257432866',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-encryptedvolumesextendattachedtest-instance-1257432866',id=1,image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHivZjf50uTfe5OpJKlfHIdAW/c2BXCSbX4ofRlmzc2XEUmfO1Yv5L4WVqCHAsUloIewIBQZTDtfV+tyWSUENKvFw3Qn/LrNIp96ukCD3zVN0Jq7cm4IoZlNQxUHhrCQcA==',key_name='tempest-keypair-606015105',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6d7871f9f8a74d2d85dc275b42df9042',ramdisk_id='',reservation_id='r-02rulfbj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-EncryptedVolumesExtendAttachedTest-1619059606',owner_user_name='tempest-EncryptedVolumesExtendAttachedTest-1619059606-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T03:45:55Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f7e60061152c4dbb80545545c356cabc',uuid=349e8a73-9a19-4cee-89a9-50edc475a575,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ae972b5d-4250-48ac-9b8a-e35678042b82", "address": "fa:16:3e:68:49:57", "network": {"id": "fa909145-5687-40b4-825d-ce6ac3b98885", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-311959019-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d7871f9f8a74d2d85dc275b42df9042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae972b5d-42", "ovs_interfaceid": "ae972b5d-4250-48ac-9b8a-e35678042b82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 10 23:46:01 np0005480824 nova_compute[260089]: 2025-10-11 03:46:01.117 2 DEBUG nova.network.os_vif_util [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Converting VIF {"id": "ae972b5d-4250-48ac-9b8a-e35678042b82", "address": "fa:16:3e:68:49:57", "network": {"id": "fa909145-5687-40b4-825d-ce6ac3b98885", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-311959019-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d7871f9f8a74d2d85dc275b42df9042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae972b5d-42", "ovs_interfaceid": "ae972b5d-4250-48ac-9b8a-e35678042b82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 23:46:01 np0005480824 nova_compute[260089]: 2025-10-11 03:46:01.119 2 DEBUG nova.network.os_vif_util [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:49:57,bridge_name='br-int',has_traffic_filtering=True,id=ae972b5d-4250-48ac-9b8a-e35678042b82,network=Network(fa909145-5687-40b4-825d-ce6ac3b98885),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae972b5d-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 23:46:01 np0005480824 optimistic_thompson[267197]: --> passed data devices: 0 physical, 3 LVM
Oct 10 23:46:01 np0005480824 nova_compute[260089]: 2025-10-11 03:46:01.124 2 DEBUG nova.objects.instance [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Lazy-loading 'pci_devices' on Instance uuid 349e8a73-9a19-4cee-89a9-50edc475a575 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:46:01 np0005480824 optimistic_thompson[267197]: --> relative data size: 1.0
Oct 10 23:46:01 np0005480824 optimistic_thompson[267197]: --> All data devices are unavailable
Oct 10 23:46:01 np0005480824 nova_compute[260089]: 2025-10-11 03:46:01.146 2 DEBUG nova.virt.libvirt.driver [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] End _get_guest_xml xml=<domain type="kvm">
Oct 10 23:46:01 np0005480824 nova_compute[260089]:  <uuid>349e8a73-9a19-4cee-89a9-50edc475a575</uuid>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:  <name>instance-00000001</name>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:  <memory>131072</memory>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:  <vcpu>1</vcpu>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:  <metadata>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 10 23:46:01 np0005480824 nova_compute[260089]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:      <nova:name>tempest-EncryptedVolumesExtendAttachedTest-instance-1257432866</nova:name>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:      <nova:creationTime>2025-10-11 03:46:00</nova:creationTime>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:      <nova:flavor name="m1.nano">
Oct 10 23:46:01 np0005480824 nova_compute[260089]:        <nova:memory>128</nova:memory>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:        <nova:disk>1</nova:disk>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:        <nova:swap>0</nova:swap>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:        <nova:ephemeral>0</nova:ephemeral>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:        <nova:vcpus>1</nova:vcpus>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:      </nova:flavor>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:      <nova:owner>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:        <nova:user uuid="f7e60061152c4dbb80545545c356cabc">tempest-EncryptedVolumesExtendAttachedTest-1619059606-project-member</nova:user>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:        <nova:project uuid="6d7871f9f8a74d2d85dc275b42df9042">tempest-EncryptedVolumesExtendAttachedTest-1619059606</nova:project>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:      </nova:owner>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:      <nova:root type="image" uuid="7caca022-7dcc-40a9-8bd8-eb7d91b29390"/>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:      <nova:ports>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:        <nova:port uuid="ae972b5d-4250-48ac-9b8a-e35678042b82">
Oct 10 23:46:01 np0005480824 nova_compute[260089]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:        </nova:port>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:      </nova:ports>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:    </nova:instance>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:  </metadata>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:  <sysinfo type="smbios">
Oct 10 23:46:01 np0005480824 nova_compute[260089]:    <system>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:      <entry name="manufacturer">RDO</entry>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:      <entry name="product">OpenStack Compute</entry>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:      <entry name="serial">349e8a73-9a19-4cee-89a9-50edc475a575</entry>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:      <entry name="uuid">349e8a73-9a19-4cee-89a9-50edc475a575</entry>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:      <entry name="family">Virtual Machine</entry>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:    </system>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:  </sysinfo>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:  <os>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:    <boot dev="hd"/>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:    <smbios mode="sysinfo"/>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:  </os>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:  <features>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:    <acpi/>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:    <apic/>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:    <vmcoreinfo/>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:  </features>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:  <clock offset="utc">
Oct 10 23:46:01 np0005480824 nova_compute[260089]:    <timer name="pit" tickpolicy="delay"/>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:    <timer name="hpet" present="no"/>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:  </clock>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:  <cpu mode="host-model" match="exact">
Oct 10 23:46:01 np0005480824 nova_compute[260089]:    <topology sockets="1" cores="1" threads="1"/>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:  </cpu>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:  <devices>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:    <disk type="network" device="disk">
Oct 10 23:46:01 np0005480824 nova_compute[260089]:      <driver type="raw" cache="none"/>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:      <source protocol="rbd" name="vms/349e8a73-9a19-4cee-89a9-50edc475a575_disk">
Oct 10 23:46:01 np0005480824 nova_compute[260089]:        <host name="192.168.122.100" port="6789"/>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:      </source>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:      <auth username="openstack">
Oct 10 23:46:01 np0005480824 nova_compute[260089]:        <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:      </auth>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:      <target dev="vda" bus="virtio"/>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:    </disk>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:    <disk type="network" device="cdrom">
Oct 10 23:46:01 np0005480824 nova_compute[260089]:      <driver type="raw" cache="none"/>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:      <source protocol="rbd" name="vms/349e8a73-9a19-4cee-89a9-50edc475a575_disk.config">
Oct 10 23:46:01 np0005480824 nova_compute[260089]:        <host name="192.168.122.100" port="6789"/>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:      </source>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:      <auth username="openstack">
Oct 10 23:46:01 np0005480824 nova_compute[260089]:        <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:      </auth>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:      <target dev="sda" bus="sata"/>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:    </disk>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:    <interface type="ethernet">
Oct 10 23:46:01 np0005480824 nova_compute[260089]:      <mac address="fa:16:3e:68:49:57"/>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:      <model type="virtio"/>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:      <driver name="vhost" rx_queue_size="512"/>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:      <mtu size="1442"/>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:      <target dev="tapae972b5d-42"/>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:    </interface>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:    <serial type="pty">
Oct 10 23:46:01 np0005480824 nova_compute[260089]:      <log file="/var/lib/nova/instances/349e8a73-9a19-4cee-89a9-50edc475a575/console.log" append="off"/>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:    </serial>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:    <video>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:      <model type="virtio"/>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:    </video>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:    <input type="tablet" bus="usb"/>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:    <rng model="virtio">
Oct 10 23:46:01 np0005480824 nova_compute[260089]:      <backend model="random">/dev/urandom</backend>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:    </rng>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root"/>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:    <controller type="usb" index="0"/>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:    <memballoon model="virtio">
Oct 10 23:46:01 np0005480824 nova_compute[260089]:      <stats period="10"/>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:    </memballoon>
Oct 10 23:46:01 np0005480824 nova_compute[260089]:  </devices>
Oct 10 23:46:01 np0005480824 nova_compute[260089]: </domain>
Oct 10 23:46:01 np0005480824 nova_compute[260089]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 10 23:46:01 np0005480824 nova_compute[260089]: 2025-10-11 03:46:01.147 2 DEBUG nova.compute.manager [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Preparing to wait for external event network-vif-plugged-ae972b5d-4250-48ac-9b8a-e35678042b82 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 10 23:46:01 np0005480824 nova_compute[260089]: 2025-10-11 03:46:01.147 2 DEBUG oslo_concurrency.lockutils [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Acquiring lock "349e8a73-9a19-4cee-89a9-50edc475a575-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:46:01 np0005480824 nova_compute[260089]: 2025-10-11 03:46:01.148 2 DEBUG oslo_concurrency.lockutils [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Lock "349e8a73-9a19-4cee-89a9-50edc475a575-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:46:01 np0005480824 nova_compute[260089]: 2025-10-11 03:46:01.148 2 DEBUG oslo_concurrency.lockutils [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Lock "349e8a73-9a19-4cee-89a9-50edc475a575-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:46:01 np0005480824 nova_compute[260089]: 2025-10-11 03:46:01.148 2 DEBUG nova.virt.libvirt.vif [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T03:45:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-EncryptedVolumesExtendAttachedTest-instance-1257432866',display_name='tempest-EncryptedVolumesExtendAttachedTest-instance-1257432866',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-encryptedvolumesextendattachedtest-instance-1257432866',id=1,image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHivZjf50uTfe5OpJKlfHIdAW/c2BXCSbX4ofRlmzc2XEUmfO1Yv5L4WVqCHAsUloIewIBQZTDtfV+tyWSUENKvFw3Qn/LrNIp96ukCD3zVN0Jq7cm4IoZlNQxUHhrCQcA==',key_name='tempest-keypair-606015105',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6d7871f9f8a74d2d85dc275b42df9042',ramdisk_id='',reservation_id='r-02rulfbj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-EncryptedVolumesExtendAttachedTest-1619059606',owner_user_name='tempest-EncryptedVolumesExtendAttachedTest-1619059606-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T03:45:55Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f7e60061152c4dbb80545545c356cabc',uuid=349e8a73-9a19-4cee-89a9-50edc475a575,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ae972b5d-4250-48ac-9b8a-e35678042b82", "address": "fa:16:3e:68:49:57", "network": {"id": "fa909145-5687-40b4-825d-ce6ac3b98885", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-311959019-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": 
{}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d7871f9f8a74d2d85dc275b42df9042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae972b5d-42", "ovs_interfaceid": "ae972b5d-4250-48ac-9b8a-e35678042b82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 10 23:46:01 np0005480824 nova_compute[260089]: 2025-10-11 03:46:01.149 2 DEBUG nova.network.os_vif_util [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Converting VIF {"id": "ae972b5d-4250-48ac-9b8a-e35678042b82", "address": "fa:16:3e:68:49:57", "network": {"id": "fa909145-5687-40b4-825d-ce6ac3b98885", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-311959019-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d7871f9f8a74d2d85dc275b42df9042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae972b5d-42", "ovs_interfaceid": "ae972b5d-4250-48ac-9b8a-e35678042b82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 23:46:01 np0005480824 nova_compute[260089]: 2025-10-11 03:46:01.150 2 DEBUG nova.network.os_vif_util [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:49:57,bridge_name='br-int',has_traffic_filtering=True,id=ae972b5d-4250-48ac-9b8a-e35678042b82,network=Network(fa909145-5687-40b4-825d-ce6ac3b98885),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae972b5d-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 23:46:01 np0005480824 nova_compute[260089]: 2025-10-11 03:46:01.150 2 DEBUG os_vif [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:49:57,bridge_name='br-int',has_traffic_filtering=True,id=ae972b5d-4250-48ac-9b8a-e35678042b82,network=Network(fa909145-5687-40b4-825d-ce6ac3b98885),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae972b5d-42') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 10 23:46:01 np0005480824 systemd[1]: libpod-e96214d52183b8661cd31333f4a1b320f7b466a02ffece095cb7a0577da7c568.scope: Deactivated successfully.
Oct 10 23:46:01 np0005480824 systemd[1]: libpod-e96214d52183b8661cd31333f4a1b320f7b466a02ffece095cb7a0577da7c568.scope: Consumed 1.158s CPU time.
Oct 10 23:46:01 np0005480824 podman[267144]: 2025-10-11 03:46:01.18622803 +0000 UTC m=+1.445569297 container died e96214d52183b8661cd31333f4a1b320f7b466a02ffece095cb7a0577da7c568 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_thompson, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:46:01 np0005480824 systemd[1]: var-lib-containers-storage-overlay-6494312ca113052b736b3babd3cab481f80badc942c4939818de1c5b61d9e411-merged.mount: Deactivated successfully.
Oct 10 23:46:01 np0005480824 nova_compute[260089]: 2025-10-11 03:46:01.236 2 DEBUG ovsdbapp.backend.ovs_idl [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct 10 23:46:01 np0005480824 nova_compute[260089]: 2025-10-11 03:46:01.238 2 DEBUG ovsdbapp.backend.ovs_idl [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct 10 23:46:01 np0005480824 nova_compute[260089]: 2025-10-11 03:46:01.238 2 DEBUG ovsdbapp.backend.ovs_idl [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct 10 23:46:01 np0005480824 nova_compute[260089]: 2025-10-11 03:46:01.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 10 23:46:01 np0005480824 nova_compute[260089]: 2025-10-11 03:46:01.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] [POLLOUT] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:46:01 np0005480824 nova_compute[260089]: 2025-10-11 03:46:01.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 10 23:46:01 np0005480824 nova_compute[260089]: 2025-10-11 03:46:01.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:46:01 np0005480824 nova_compute[260089]: 2025-10-11 03:46:01.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:46:01 np0005480824 nova_compute[260089]: 2025-10-11 03:46:01.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:46:01 np0005480824 nova_compute[260089]: 2025-10-11 03:46:01.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:46:01 np0005480824 nova_compute[260089]: 2025-10-11 03:46:01.256 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:46:01 np0005480824 nova_compute[260089]: 2025-10-11 03:46:01.257 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 10 23:46:01 np0005480824 nova_compute[260089]: 2025-10-11 03:46:01.258 2 INFO oslo.privsep.daemon [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpqv15d5ht/privsep.sock']#033[00m
Oct 10 23:46:01 np0005480824 podman[267144]: 2025-10-11 03:46:01.282486141 +0000 UTC m=+1.541827338 container remove e96214d52183b8661cd31333f4a1b320f7b466a02ffece095cb7a0577da7c568 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_thompson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:46:01 np0005480824 systemd[1]: libpod-conmon-e96214d52183b8661cd31333f4a1b320f7b466a02ffece095cb7a0577da7c568.scope: Deactivated successfully.
Oct 10 23:46:02 np0005480824 podman[267480]: 2025-10-11 03:46:02.118419589 +0000 UTC m=+0.059372561 container create b7402291b4f1be8f150c83d1a08ef1d56173168b68b50e72357e3c35786a0b6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_galileo, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:46:02 np0005480824 systemd[1]: Started libpod-conmon-b7402291b4f1be8f150c83d1a08ef1d56173168b68b50e72357e3c35786a0b6e.scope.
Oct 10 23:46:02 np0005480824 podman[267480]: 2025-10-11 03:46:02.089184659 +0000 UTC m=+0.030137711 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:46:02 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:46:02 np0005480824 nova_compute[260089]: 2025-10-11 03:46:02.193 2 INFO oslo.privsep.daemon [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Oct 10 23:46:02 np0005480824 nova_compute[260089]: 2025-10-11 03:46:02.050 610 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Oct 10 23:46:02 np0005480824 nova_compute[260089]: 2025-10-11 03:46:02.060 610 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Oct 10 23:46:02 np0005480824 nova_compute[260089]: 2025-10-11 03:46:02.064 610 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m
Oct 10 23:46:02 np0005480824 nova_compute[260089]: 2025-10-11 03:46:02.064 610 INFO oslo.privsep.daemon [-] privsep daemon running as pid 610#033[00m
Oct 10 23:46:02 np0005480824 podman[267480]: 2025-10-11 03:46:02.206122508 +0000 UTC m=+0.147075520 container init b7402291b4f1be8f150c83d1a08ef1d56173168b68b50e72357e3c35786a0b6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 10 23:46:02 np0005480824 podman[267480]: 2025-10-11 03:46:02.215438317 +0000 UTC m=+0.156391289 container start b7402291b4f1be8f150c83d1a08ef1d56173168b68b50e72357e3c35786a0b6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_galileo, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 10 23:46:02 np0005480824 podman[267480]: 2025-10-11 03:46:02.220534327 +0000 UTC m=+0.161487329 container attach b7402291b4f1be8f150c83d1a08ef1d56173168b68b50e72357e3c35786a0b6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_galileo, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:46:02 np0005480824 vigorous_galileo[267496]: 167 167
Oct 10 23:46:02 np0005480824 systemd[1]: libpod-b7402291b4f1be8f150c83d1a08ef1d56173168b68b50e72357e3c35786a0b6e.scope: Deactivated successfully.
Oct 10 23:46:02 np0005480824 conmon[267496]: conmon b7402291b4f1be8f150c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b7402291b4f1be8f150c83d1a08ef1d56173168b68b50e72357e3c35786a0b6e.scope/container/memory.events
Oct 10 23:46:02 np0005480824 podman[267480]: 2025-10-11 03:46:02.22784179 +0000 UTC m=+0.168794792 container died b7402291b4f1be8f150c83d1a08ef1d56173168b68b50e72357e3c35786a0b6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_galileo, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 10 23:46:02 np0005480824 systemd[1]: var-lib-containers-storage-overlay-7bc976ace28ad6a98caa9a59ab2262ae0fc2530140e2c650663f90507db2d3d9-merged.mount: Deactivated successfully.
Oct 10 23:46:02 np0005480824 podman[267480]: 2025-10-11 03:46:02.271123551 +0000 UTC m=+0.212076563 container remove b7402291b4f1be8f150c83d1a08ef1d56173168b68b50e72357e3c35786a0b6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_galileo, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:46:02 np0005480824 systemd[1]: libpod-conmon-b7402291b4f1be8f150c83d1a08ef1d56173168b68b50e72357e3c35786a0b6e.scope: Deactivated successfully.
Oct 10 23:46:02 np0005480824 podman[267524]: 2025-10-11 03:46:02.48516982 +0000 UTC m=+0.056262108 container create 7d524be93bf9114ad6dee917e71c78eaa0df7306d75eec9b6ca2733baf55f503 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_moore, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:46:02 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v933: 321 pgs: 321 active+clean; 68 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 1.3 MiB/s wr, 137 op/s
Oct 10 23:46:02 np0005480824 systemd[1]: Started libpod-conmon-7d524be93bf9114ad6dee917e71c78eaa0df7306d75eec9b6ca2733baf55f503.scope.
Oct 10 23:46:02 np0005480824 podman[267524]: 2025-10-11 03:46:02.45890405 +0000 UTC m=+0.029996338 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:46:02 np0005480824 nova_compute[260089]: 2025-10-11 03:46:02.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:46:02 np0005480824 nova_compute[260089]: 2025-10-11 03:46:02.563 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapae972b5d-42, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:46:02 np0005480824 nova_compute[260089]: 2025-10-11 03:46:02.564 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapae972b5d-42, col_values=(('external_ids', {'iface-id': 'ae972b5d-4250-48ac-9b8a-e35678042b82', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:68:49:57', 'vm-uuid': '349e8a73-9a19-4cee-89a9-50edc475a575'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:46:02 np0005480824 nova_compute[260089]: 2025-10-11 03:46:02.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:46:02 np0005480824 NetworkManager[44969]: <info>  [1760154362.5672] manager: (tapae972b5d-42): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/23)
Oct 10 23:46:02 np0005480824 nova_compute[260089]: 2025-10-11 03:46:02.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 10 23:46:02 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:46:02 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f6e71978f923fbc15f8a85fdee6f8ad115f43dc599a5ec25321d4cbaaae8549/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:46:02 np0005480824 nova_compute[260089]: 2025-10-11 03:46:02.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:46:02 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f6e71978f923fbc15f8a85fdee6f8ad115f43dc599a5ec25321d4cbaaae8549/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:46:02 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f6e71978f923fbc15f8a85fdee6f8ad115f43dc599a5ec25321d4cbaaae8549/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:46:02 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f6e71978f923fbc15f8a85fdee6f8ad115f43dc599a5ec25321d4cbaaae8549/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:46:02 np0005480824 nova_compute[260089]: 2025-10-11 03:46:02.579 2 INFO os_vif [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:49:57,bridge_name='br-int',has_traffic_filtering=True,id=ae972b5d-4250-48ac-9b8a-e35678042b82,network=Network(fa909145-5687-40b4-825d-ce6ac3b98885),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae972b5d-42')#033[00m
Oct 10 23:46:02 np0005480824 podman[267524]: 2025-10-11 03:46:02.589416988 +0000 UTC m=+0.160509256 container init 7d524be93bf9114ad6dee917e71c78eaa0df7306d75eec9b6ca2733baf55f503 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_moore, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 10 23:46:02 np0005480824 podman[267524]: 2025-10-11 03:46:02.5996508 +0000 UTC m=+0.170743068 container start 7d524be93bf9114ad6dee917e71c78eaa0df7306d75eec9b6ca2733baf55f503 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_moore, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:46:02 np0005480824 podman[267524]: 2025-10-11 03:46:02.604051854 +0000 UTC m=+0.175144142 container attach 7d524be93bf9114ad6dee917e71c78eaa0df7306d75eec9b6ca2733baf55f503 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 10 23:46:02 np0005480824 nova_compute[260089]: 2025-10-11 03:46:02.632 2 DEBUG nova.virt.libvirt.driver [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 10 23:46:02 np0005480824 nova_compute[260089]: 2025-10-11 03:46:02.633 2 DEBUG nova.virt.libvirt.driver [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 10 23:46:02 np0005480824 nova_compute[260089]: 2025-10-11 03:46:02.633 2 DEBUG nova.virt.libvirt.driver [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] No VIF found with MAC fa:16:3e:68:49:57, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 10 23:46:02 np0005480824 nova_compute[260089]: 2025-10-11 03:46:02.635 2 INFO nova.virt.libvirt.driver [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Using config drive
Oct 10 23:46:02 np0005480824 nova_compute[260089]: 2025-10-11 03:46:02.656 2 DEBUG nova.storage.rbd_utils [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] rbd image 349e8a73-9a19-4cee-89a9-50edc475a575_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 23:46:03 np0005480824 nova_compute[260089]: 2025-10-11 03:46:03.257 2 INFO nova.virt.libvirt.driver [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Creating config drive at /var/lib/nova/instances/349e8a73-9a19-4cee-89a9-50edc475a575/disk.config
Oct 10 23:46:03 np0005480824 nova_compute[260089]: 2025-10-11 03:46:03.269 2 DEBUG oslo_concurrency.processutils [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/349e8a73-9a19-4cee-89a9-50edc475a575/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8zefiig0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 23:46:03 np0005480824 great_moore[267541]: {
Oct 10 23:46:03 np0005480824 great_moore[267541]:    "0": [
Oct 10 23:46:03 np0005480824 great_moore[267541]:        {
Oct 10 23:46:03 np0005480824 great_moore[267541]:            "devices": [
Oct 10 23:46:03 np0005480824 great_moore[267541]:                "/dev/loop3"
Oct 10 23:46:03 np0005480824 great_moore[267541]:            ],
Oct 10 23:46:03 np0005480824 great_moore[267541]:            "lv_name": "ceph_lv0",
Oct 10 23:46:03 np0005480824 great_moore[267541]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:46:03 np0005480824 great_moore[267541]:            "lv_size": "21470642176",
Oct 10 23:46:03 np0005480824 great_moore[267541]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0d82ce-20ea-470d-959e-f67202028a60,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:46:03 np0005480824 great_moore[267541]:            "lv_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:46:03 np0005480824 great_moore[267541]:            "name": "ceph_lv0",
Oct 10 23:46:03 np0005480824 great_moore[267541]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:46:03 np0005480824 great_moore[267541]:            "tags": {
Oct 10 23:46:03 np0005480824 great_moore[267541]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:46:03 np0005480824 great_moore[267541]:                "ceph.block_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:46:03 np0005480824 great_moore[267541]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:46:03 np0005480824 great_moore[267541]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:46:03 np0005480824 great_moore[267541]:                "ceph.cluster_name": "ceph",
Oct 10 23:46:03 np0005480824 great_moore[267541]:                "ceph.crush_device_class": "",
Oct 10 23:46:03 np0005480824 great_moore[267541]:                "ceph.encrypted": "0",
Oct 10 23:46:03 np0005480824 great_moore[267541]:                "ceph.osd_fsid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:46:03 np0005480824 great_moore[267541]:                "ceph.osd_id": "0",
Oct 10 23:46:03 np0005480824 great_moore[267541]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:46:03 np0005480824 great_moore[267541]:                "ceph.type": "block",
Oct 10 23:46:03 np0005480824 great_moore[267541]:                "ceph.vdo": "0"
Oct 10 23:46:03 np0005480824 great_moore[267541]:            },
Oct 10 23:46:03 np0005480824 great_moore[267541]:            "type": "block",
Oct 10 23:46:03 np0005480824 great_moore[267541]:            "vg_name": "ceph_vg0"
Oct 10 23:46:03 np0005480824 great_moore[267541]:        }
Oct 10 23:46:03 np0005480824 great_moore[267541]:    ],
Oct 10 23:46:03 np0005480824 great_moore[267541]:    "1": [
Oct 10 23:46:03 np0005480824 great_moore[267541]:        {
Oct 10 23:46:03 np0005480824 great_moore[267541]:            "devices": [
Oct 10 23:46:03 np0005480824 great_moore[267541]:                "/dev/loop4"
Oct 10 23:46:03 np0005480824 great_moore[267541]:            ],
Oct 10 23:46:03 np0005480824 great_moore[267541]:            "lv_name": "ceph_lv1",
Oct 10 23:46:03 np0005480824 great_moore[267541]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:46:03 np0005480824 great_moore[267541]:            "lv_size": "21470642176",
Oct 10 23:46:03 np0005480824 great_moore[267541]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6875119e-c210-4ad1-aca9-6a8084a5ecc8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:46:03 np0005480824 great_moore[267541]:            "lv_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:46:03 np0005480824 great_moore[267541]:            "name": "ceph_lv1",
Oct 10 23:46:03 np0005480824 great_moore[267541]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:46:03 np0005480824 great_moore[267541]:            "tags": {
Oct 10 23:46:03 np0005480824 great_moore[267541]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:46:03 np0005480824 great_moore[267541]:                "ceph.block_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:46:03 np0005480824 great_moore[267541]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:46:03 np0005480824 great_moore[267541]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:46:03 np0005480824 great_moore[267541]:                "ceph.cluster_name": "ceph",
Oct 10 23:46:03 np0005480824 great_moore[267541]:                "ceph.crush_device_class": "",
Oct 10 23:46:03 np0005480824 great_moore[267541]:                "ceph.encrypted": "0",
Oct 10 23:46:03 np0005480824 great_moore[267541]:                "ceph.osd_fsid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:46:03 np0005480824 great_moore[267541]:                "ceph.osd_id": "1",
Oct 10 23:46:03 np0005480824 great_moore[267541]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:46:03 np0005480824 great_moore[267541]:                "ceph.type": "block",
Oct 10 23:46:03 np0005480824 great_moore[267541]:                "ceph.vdo": "0"
Oct 10 23:46:03 np0005480824 great_moore[267541]:            },
Oct 10 23:46:03 np0005480824 great_moore[267541]:            "type": "block",
Oct 10 23:46:03 np0005480824 great_moore[267541]:            "vg_name": "ceph_vg1"
Oct 10 23:46:03 np0005480824 great_moore[267541]:        }
Oct 10 23:46:03 np0005480824 great_moore[267541]:    ],
Oct 10 23:46:03 np0005480824 great_moore[267541]:    "2": [
Oct 10 23:46:03 np0005480824 great_moore[267541]:        {
Oct 10 23:46:03 np0005480824 great_moore[267541]:            "devices": [
Oct 10 23:46:03 np0005480824 great_moore[267541]:                "/dev/loop5"
Oct 10 23:46:03 np0005480824 great_moore[267541]:            ],
Oct 10 23:46:03 np0005480824 great_moore[267541]:            "lv_name": "ceph_lv2",
Oct 10 23:46:03 np0005480824 great_moore[267541]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:46:03 np0005480824 great_moore[267541]:            "lv_size": "21470642176",
Oct 10 23:46:03 np0005480824 great_moore[267541]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e86945e8-6909-4584-9098-cee0dfe9add4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:46:03 np0005480824 great_moore[267541]:            "lv_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:46:03 np0005480824 great_moore[267541]:            "name": "ceph_lv2",
Oct 10 23:46:03 np0005480824 great_moore[267541]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:46:03 np0005480824 great_moore[267541]:            "tags": {
Oct 10 23:46:03 np0005480824 great_moore[267541]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:46:03 np0005480824 great_moore[267541]:                "ceph.block_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:46:03 np0005480824 great_moore[267541]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:46:03 np0005480824 great_moore[267541]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:46:03 np0005480824 great_moore[267541]:                "ceph.cluster_name": "ceph",
Oct 10 23:46:03 np0005480824 great_moore[267541]:                "ceph.crush_device_class": "",
Oct 10 23:46:03 np0005480824 great_moore[267541]:                "ceph.encrypted": "0",
Oct 10 23:46:03 np0005480824 great_moore[267541]:                "ceph.osd_fsid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:46:03 np0005480824 great_moore[267541]:                "ceph.osd_id": "2",
Oct 10 23:46:03 np0005480824 great_moore[267541]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:46:03 np0005480824 great_moore[267541]:                "ceph.type": "block",
Oct 10 23:46:03 np0005480824 great_moore[267541]:                "ceph.vdo": "0"
Oct 10 23:46:03 np0005480824 great_moore[267541]:            },
Oct 10 23:46:03 np0005480824 great_moore[267541]:            "type": "block",
Oct 10 23:46:03 np0005480824 great_moore[267541]:            "vg_name": "ceph_vg2"
Oct 10 23:46:03 np0005480824 great_moore[267541]:        }
Oct 10 23:46:03 np0005480824 great_moore[267541]:    ]
Oct 10 23:46:03 np0005480824 great_moore[267541]: }
Oct 10 23:46:03 np0005480824 nova_compute[260089]: 2025-10-11 03:46:03.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 23:46:03 np0005480824 systemd[1]: libpod-7d524be93bf9114ad6dee917e71c78eaa0df7306d75eec9b6ca2733baf55f503.scope: Deactivated successfully.
Oct 10 23:46:03 np0005480824 podman[267524]: 2025-10-11 03:46:03.369200961 +0000 UTC m=+0.940293249 container died 7d524be93bf9114ad6dee917e71c78eaa0df7306d75eec9b6ca2733baf55f503 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_moore, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:46:03 np0005480824 systemd[1]: var-lib-containers-storage-overlay-1f6e71978f923fbc15f8a85fdee6f8ad115f43dc599a5ec25321d4cbaaae8549-merged.mount: Deactivated successfully.
Oct 10 23:46:03 np0005480824 nova_compute[260089]: 2025-10-11 03:46:03.430 2 DEBUG oslo_concurrency.processutils [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/349e8a73-9a19-4cee-89a9-50edc475a575/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8zefiig0" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 23:46:03 np0005480824 podman[267524]: 2025-10-11 03:46:03.470368998 +0000 UTC m=+1.041461286 container remove 7d524be93bf9114ad6dee917e71c78eaa0df7306d75eec9b6ca2733baf55f503 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_moore, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 10 23:46:03 np0005480824 nova_compute[260089]: 2025-10-11 03:46:03.482 2 DEBUG nova.storage.rbd_utils [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] rbd image 349e8a73-9a19-4cee-89a9-50edc475a575_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 23:46:03 np0005480824 systemd[1]: libpod-conmon-7d524be93bf9114ad6dee917e71c78eaa0df7306d75eec9b6ca2733baf55f503.scope: Deactivated successfully.
Oct 10 23:46:03 np0005480824 nova_compute[260089]: 2025-10-11 03:46:03.501 2 DEBUG oslo_concurrency.processutils [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/349e8a73-9a19-4cee-89a9-50edc475a575/disk.config 349e8a73-9a19-4cee-89a9-50edc475a575_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 23:46:03 np0005480824 nova_compute[260089]: 2025-10-11 03:46:03.720 2 DEBUG oslo_concurrency.processutils [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/349e8a73-9a19-4cee-89a9-50edc475a575/disk.config 349e8a73-9a19-4cee-89a9-50edc475a575_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.219s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 23:46:03 np0005480824 nova_compute[260089]: 2025-10-11 03:46:03.721 2 INFO nova.virt.libvirt.driver [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Deleting local config drive /var/lib/nova/instances/349e8a73-9a19-4cee-89a9-50edc475a575/disk.config because it was imported into RBD.
Oct 10 23:46:03 np0005480824 systemd[1]: Starting libvirt secret daemon...
Oct 10 23:46:03 np0005480824 systemd[1]: Started libvirt secret daemon.
Oct 10 23:46:03 np0005480824 kernel: tun: Universal TUN/TAP device driver, 1.6
Oct 10 23:46:03 np0005480824 kernel: tapae972b5d-42: entered promiscuous mode
Oct 10 23:46:03 np0005480824 ovn_controller[152667]: 2025-10-11T03:46:03Z|00027|binding|INFO|Claiming lport ae972b5d-4250-48ac-9b8a-e35678042b82 for this chassis.
Oct 10 23:46:03 np0005480824 ovn_controller[152667]: 2025-10-11T03:46:03Z|00028|binding|INFO|ae972b5d-4250-48ac-9b8a-e35678042b82: Claiming fa:16:3e:68:49:57 10.100.0.12
Oct 10 23:46:03 np0005480824 nova_compute[260089]: 2025-10-11 03:46:03.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 23:46:03 np0005480824 NetworkManager[44969]: <info>  [1760154363.8766] manager: (tapae972b5d-42): new Tun device (/org/freedesktop/NetworkManager/Devices/24)
Oct 10 23:46:03 np0005480824 nova_compute[260089]: 2025-10-11 03:46:03.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 23:46:03 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:46:03.892 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:49:57 10.100.0.12'], port_security=['fa:16:3e:68:49:57 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '349e8a73-9a19-4cee-89a9-50edc475a575', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fa909145-5687-40b4-825d-ce6ac3b98885', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d7871f9f8a74d2d85dc275b42df9042', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ba88d9d2-d3d6-4603-8fd8-e6dc5b5939b7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=78ee477e-a857-41e0-9753-6e712d707687, chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], logical_port=ae972b5d-4250-48ac-9b8a-e35678042b82) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 10 23:46:03 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:46:03.893 162245 INFO neutron.agent.ovn.metadata.agent [-] Port ae972b5d-4250-48ac-9b8a-e35678042b82 in datapath fa909145-5687-40b4-825d-ce6ac3b98885 bound to our chassis
Oct 10 23:46:03 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:46:03.896 162245 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fa909145-5687-40b4-825d-ce6ac3b98885
Oct 10 23:46:03 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:46:03.897 162245 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmph48b70lz/privsep.sock']
Oct 10 23:46:03 np0005480824 systemd-udevd[267738]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 23:46:03 np0005480824 NetworkManager[44969]: <info>  [1760154363.9473] device (tapae972b5d-42): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 10 23:46:03 np0005480824 NetworkManager[44969]: <info>  [1760154363.9483] device (tapae972b5d-42): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 10 23:46:03 np0005480824 systemd-machined[215071]: New machine qemu-1-instance-00000001.
Oct 10 23:46:03 np0005480824 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Oct 10 23:46:03 np0005480824 nova_compute[260089]: 2025-10-11 03:46:03.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 23:46:03 np0005480824 ovn_controller[152667]: 2025-10-11T03:46:03Z|00029|binding|INFO|Setting lport ae972b5d-4250-48ac-9b8a-e35678042b82 ovn-installed in OVS
Oct 10 23:46:03 np0005480824 ovn_controller[152667]: 2025-10-11T03:46:03Z|00030|binding|INFO|Setting lport ae972b5d-4250-48ac-9b8a-e35678042b82 up in Southbound
Oct 10 23:46:03 np0005480824 nova_compute[260089]: 2025-10-11 03:46:03.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 23:46:04 np0005480824 nova_compute[260089]: 2025-10-11 03:46:04.338 2 DEBUG nova.compute.manager [req-6464cb1f-32f4-4a05-80ec-9da39a93f065 req-0f8aa618-2b07-42d6-81e1-51b51d0c7f3d 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Received event network-vif-plugged-ae972b5d-4250-48ac-9b8a-e35678042b82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 23:46:04 np0005480824 nova_compute[260089]: 2025-10-11 03:46:04.339 2 DEBUG oslo_concurrency.lockutils [req-6464cb1f-32f4-4a05-80ec-9da39a93f065 req-0f8aa618-2b07-42d6-81e1-51b51d0c7f3d 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "349e8a73-9a19-4cee-89a9-50edc475a575-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 23:46:04 np0005480824 nova_compute[260089]: 2025-10-11 03:46:04.339 2 DEBUG oslo_concurrency.lockutils [req-6464cb1f-32f4-4a05-80ec-9da39a93f065 req-0f8aa618-2b07-42d6-81e1-51b51d0c7f3d 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "349e8a73-9a19-4cee-89a9-50edc475a575-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 23:46:04 np0005480824 nova_compute[260089]: 2025-10-11 03:46:04.339 2 DEBUG oslo_concurrency.lockutils [req-6464cb1f-32f4-4a05-80ec-9da39a93f065 req-0f8aa618-2b07-42d6-81e1-51b51d0c7f3d 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "349e8a73-9a19-4cee-89a9-50edc475a575-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 23:46:04 np0005480824 nova_compute[260089]: 2025-10-11 03:46:04.339 2 DEBUG nova.compute.manager [req-6464cb1f-32f4-4a05-80ec-9da39a93f065 req-0f8aa618-2b07-42d6-81e1-51b51d0c7f3d 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Processing event network-vif-plugged-ae972b5d-4250-48ac-9b8a-e35678042b82 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 10 23:46:04 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:46:04 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v934: 321 pgs: 321 active+clean; 88 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 137 op/s
Oct 10 23:46:04 np0005480824 podman[267814]: 2025-10-11 03:46:04.514019635 +0000 UTC m=+0.084294119 container create 9c0e0811a59543e455815cfde39270c3cb36539a1a71b04fd978c99d7bb6fda0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 10 23:46:04 np0005480824 systemd[1]: Started libpod-conmon-9c0e0811a59543e455815cfde39270c3cb36539a1a71b04fd978c99d7bb6fda0.scope.
Oct 10 23:46:04 np0005480824 podman[267814]: 2025-10-11 03:46:04.476749886 +0000 UTC m=+0.047024420 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:46:04 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:46:04 np0005480824 podman[267814]: 2025-10-11 03:46:04.629650013 +0000 UTC m=+0.199924497 container init 9c0e0811a59543e455815cfde39270c3cb36539a1a71b04fd978c99d7bb6fda0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_murdock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:46:04 np0005480824 podman[267814]: 2025-10-11 03:46:04.653335631 +0000 UTC m=+0.223610085 container start 9c0e0811a59543e455815cfde39270c3cb36539a1a71b04fd978c99d7bb6fda0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Oct 10 23:46:04 np0005480824 podman[267814]: 2025-10-11 03:46:04.656648889 +0000 UTC m=+0.226923343 container attach 9c0e0811a59543e455815cfde39270c3cb36539a1a71b04fd978c99d7bb6fda0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_murdock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 10 23:46:04 np0005480824 relaxed_murdock[267871]: 167 167
Oct 10 23:46:04 np0005480824 systemd[1]: libpod-9c0e0811a59543e455815cfde39270c3cb36539a1a71b04fd978c99d7bb6fda0.scope: Deactivated successfully.
Oct 10 23:46:04 np0005480824 podman[267814]: 2025-10-11 03:46:04.662690142 +0000 UTC m=+0.232964626 container died 9c0e0811a59543e455815cfde39270c3cb36539a1a71b04fd978c99d7bb6fda0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True)
Oct 10 23:46:04 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:46:04.689 162245 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct 10 23:46:04 np0005480824 systemd[1]: var-lib-containers-storage-overlay-69c68c1c3c75a61ffdf26fb011454f4b2134b9cf0ceeb732a4188df47cc6df08-merged.mount: Deactivated successfully.
Oct 10 23:46:04 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:46:04.691 162245 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmph48b70lz/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Oct 10 23:46:04 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:46:04.496 267859 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 10 23:46:04 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:46:04.500 267859 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 10 23:46:04 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:46:04.502 267859 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Oct 10 23:46:04 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:46:04.502 267859 INFO oslo.privsep.daemon [-] privsep daemon running as pid 267859
Oct 10 23:46:04 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:46:04.699 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[878d6ebb-abc7-458c-a92c-6b8b5aba8e43]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 23:46:04 np0005480824 podman[267814]: 2025-10-11 03:46:04.705672676 +0000 UTC m=+0.275947120 container remove 9c0e0811a59543e455815cfde39270c3cb36539a1a71b04fd978c99d7bb6fda0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_murdock, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:46:04 np0005480824 systemd[1]: libpod-conmon-9c0e0811a59543e455815cfde39270c3cb36539a1a71b04fd978c99d7bb6fda0.scope: Deactivated successfully.
Oct 10 23:46:04 np0005480824 podman[267901]: 2025-10-11 03:46:04.911181733 +0000 UTC m=+0.057539038 container create 3b215892e68f3db781936cca2af83c162288b3faf8f5ca76996551774487ba8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_feistel, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Oct 10 23:46:04 np0005480824 systemd[1]: Started libpod-conmon-3b215892e68f3db781936cca2af83c162288b3faf8f5ca76996551774487ba8e.scope.
Oct 10 23:46:04 np0005480824 podman[267901]: 2025-10-11 03:46:04.888447596 +0000 UTC m=+0.034804911 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:46:05 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:46:05 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52c524895665ea2f6190306b3b64861f1931714c9b479ebd2b5753933e0521d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:46:05 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52c524895665ea2f6190306b3b64861f1931714c9b479ebd2b5753933e0521d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:46:05 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52c524895665ea2f6190306b3b64861f1931714c9b479ebd2b5753933e0521d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:46:05 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52c524895665ea2f6190306b3b64861f1931714c9b479ebd2b5753933e0521d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:46:05 np0005480824 podman[267901]: 2025-10-11 03:46:05.032568316 +0000 UTC m=+0.178925671 container init 3b215892e68f3db781936cca2af83c162288b3faf8f5ca76996551774487ba8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_feistel, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:46:05 np0005480824 podman[267901]: 2025-10-11 03:46:05.046506245 +0000 UTC m=+0.192863540 container start 3b215892e68f3db781936cca2af83c162288b3faf8f5ca76996551774487ba8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_feistel, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 10 23:46:05 np0005480824 podman[267901]: 2025-10-11 03:46:05.049622628 +0000 UTC m=+0.195979943 container attach 3b215892e68f3db781936cca2af83c162288b3faf8f5ca76996551774487ba8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_feistel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:46:05 np0005480824 nova_compute[260089]: 2025-10-11 03:46:05.149 2 DEBUG nova.compute.manager [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 10 23:46:05 np0005480824 nova_compute[260089]: 2025-10-11 03:46:05.151 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760154365.1488128, 349e8a73-9a19-4cee-89a9-50edc475a575 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:46:05 np0005480824 nova_compute[260089]: 2025-10-11 03:46:05.152 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] VM Started (Lifecycle Event)#033[00m
Oct 10 23:46:05 np0005480824 nova_compute[260089]: 2025-10-11 03:46:05.166 2 DEBUG nova.virt.libvirt.driver [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 10 23:46:05 np0005480824 nova_compute[260089]: 2025-10-11 03:46:05.171 2 INFO nova.virt.libvirt.driver [-] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Instance spawned successfully.#033[00m
Oct 10 23:46:05 np0005480824 nova_compute[260089]: 2025-10-11 03:46:05.171 2 DEBUG nova.virt.libvirt.driver [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 10 23:46:05 np0005480824 nova_compute[260089]: 2025-10-11 03:46:05.187 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:46:05 np0005480824 nova_compute[260089]: 2025-10-11 03:46:05.196 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 10 23:46:05 np0005480824 nova_compute[260089]: 2025-10-11 03:46:05.203 2 DEBUG nova.virt.libvirt.driver [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:46:05 np0005480824 nova_compute[260089]: 2025-10-11 03:46:05.203 2 DEBUG nova.virt.libvirt.driver [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:46:05 np0005480824 nova_compute[260089]: 2025-10-11 03:46:05.204 2 DEBUG nova.virt.libvirt.driver [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:46:05 np0005480824 nova_compute[260089]: 2025-10-11 03:46:05.205 2 DEBUG nova.virt.libvirt.driver [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:46:05 np0005480824 nova_compute[260089]: 2025-10-11 03:46:05.206 2 DEBUG nova.virt.libvirt.driver [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:46:05 np0005480824 nova_compute[260089]: 2025-10-11 03:46:05.206 2 DEBUG nova.virt.libvirt.driver [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:46:05 np0005480824 nova_compute[260089]: 2025-10-11 03:46:05.220 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 10 23:46:05 np0005480824 nova_compute[260089]: 2025-10-11 03:46:05.220 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760154365.1505005, 349e8a73-9a19-4cee-89a9-50edc475a575 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:46:05 np0005480824 nova_compute[260089]: 2025-10-11 03:46:05.221 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] VM Paused (Lifecycle Event)#033[00m
Oct 10 23:46:05 np0005480824 nova_compute[260089]: 2025-10-11 03:46:05.254 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:46:05 np0005480824 nova_compute[260089]: 2025-10-11 03:46:05.258 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760154365.1574962, 349e8a73-9a19-4cee-89a9-50edc475a575 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:46:05 np0005480824 nova_compute[260089]: 2025-10-11 03:46:05.259 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] VM Resumed (Lifecycle Event)#033[00m
Oct 10 23:46:05 np0005480824 nova_compute[260089]: 2025-10-11 03:46:05.291 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:46:05 np0005480824 nova_compute[260089]: 2025-10-11 03:46:05.294 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 10 23:46:05 np0005480824 nova_compute[260089]: 2025-10-11 03:46:05.315 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 10 23:46:05 np0005480824 nova_compute[260089]: 2025-10-11 03:46:05.328 2 INFO nova.compute.manager [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Took 10.03 seconds to spawn the instance on the hypervisor.#033[00m
Oct 10 23:46:05 np0005480824 nova_compute[260089]: 2025-10-11 03:46:05.329 2 DEBUG nova.compute.manager [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:46:05 np0005480824 nova_compute[260089]: 2025-10-11 03:46:05.435 2 INFO nova.compute.manager [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Took 11.04 seconds to build instance.#033[00m
Oct 10 23:46:05 np0005480824 nova_compute[260089]: 2025-10-11 03:46:05.453 2 DEBUG oslo_concurrency.lockutils [None req-58225974-b970-4c3e-94f3-68878f5c6165 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Lock "349e8a73-9a19-4cee-89a9-50edc475a575" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.202s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:46:05 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:46:05.564 267859 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:46:05 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:46:05.564 267859 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:46:05 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:46:05.564 267859 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:46:06 np0005480824 hungry_feistel[267919]: {
Oct 10 23:46:06 np0005480824 hungry_feistel[267919]:    "1d0d82ce-20ea-470d-959e-f67202028a60": {
Oct 10 23:46:06 np0005480824 hungry_feistel[267919]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:46:06 np0005480824 hungry_feistel[267919]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 10 23:46:06 np0005480824 hungry_feistel[267919]:        "osd_id": 0,
Oct 10 23:46:06 np0005480824 hungry_feistel[267919]:        "osd_uuid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:46:06 np0005480824 hungry_feistel[267919]:        "type": "bluestore"
Oct 10 23:46:06 np0005480824 hungry_feistel[267919]:    },
Oct 10 23:46:06 np0005480824 hungry_feistel[267919]:    "6875119e-c210-4ad1-aca9-6a8084a5ecc8": {
Oct 10 23:46:06 np0005480824 hungry_feistel[267919]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:46:06 np0005480824 hungry_feistel[267919]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 10 23:46:06 np0005480824 hungry_feistel[267919]:        "osd_id": 1,
Oct 10 23:46:06 np0005480824 hungry_feistel[267919]:        "osd_uuid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:46:06 np0005480824 hungry_feistel[267919]:        "type": "bluestore"
Oct 10 23:46:06 np0005480824 hungry_feistel[267919]:    },
Oct 10 23:46:06 np0005480824 hungry_feistel[267919]:    "e86945e8-6909-4584-9098-cee0dfe9add4": {
Oct 10 23:46:06 np0005480824 hungry_feistel[267919]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:46:06 np0005480824 hungry_feistel[267919]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 10 23:46:06 np0005480824 hungry_feistel[267919]:        "osd_id": 2,
Oct 10 23:46:06 np0005480824 hungry_feistel[267919]:        "osd_uuid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:46:06 np0005480824 hungry_feistel[267919]:        "type": "bluestore"
Oct 10 23:46:06 np0005480824 hungry_feistel[267919]:    }
Oct 10 23:46:06 np0005480824 hungry_feistel[267919]: }
Oct 10 23:46:06 np0005480824 systemd[1]: libpod-3b215892e68f3db781936cca2af83c162288b3faf8f5ca76996551774487ba8e.scope: Deactivated successfully.
Oct 10 23:46:06 np0005480824 systemd[1]: libpod-3b215892e68f3db781936cca2af83c162288b3faf8f5ca76996551774487ba8e.scope: Consumed 1.161s CPU time.
Oct 10 23:46:06 np0005480824 podman[267901]: 2025-10-11 03:46:06.239950886 +0000 UTC m=+1.386308201 container died 3b215892e68f3db781936cca2af83c162288b3faf8f5ca76996551774487ba8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_feistel, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:46:06 np0005480824 systemd[1]: var-lib-containers-storage-overlay-52c524895665ea2f6190306b3b64861f1931714c9b479ebd2b5753933e0521d7-merged.mount: Deactivated successfully.
Oct 10 23:46:06 np0005480824 podman[267901]: 2025-10-11 03:46:06.310582901 +0000 UTC m=+1.456940196 container remove 3b215892e68f3db781936cca2af83c162288b3faf8f5ca76996551774487ba8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_feistel, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:46:06 np0005480824 systemd[1]: libpod-conmon-3b215892e68f3db781936cca2af83c162288b3faf8f5ca76996551774487ba8e.scope: Deactivated successfully.
Oct 10 23:46:06 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:46:06.340 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[c1849fcd-eb91-4429-8b8e-1c4210a92c72]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:46:06 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:46:06.344 162245 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfa909145-51 in ovnmeta-fa909145-5687-40b4-825d-ce6ac3b98885 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 10 23:46:06 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:46:06.348 267859 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfa909145-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 10 23:46:06 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:46:06.348 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[6e09bc54-ad9a-4642-ade1-2363753f1a72]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:46:06 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:46:06.353 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[215abfc5-7b9f-4f5c-a28c-8b356f023c78]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:46:06 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 10 23:46:06 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:46:06 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 10 23:46:06 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:46:06 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 5ab84358-88de-4df0-ab07-ab8dce0dd992 does not exist
Oct 10 23:46:06 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev f01900d1-b4b9-4ca5-8a9c-82ab752d8b99 does not exist
Oct 10 23:46:06 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:46:06.401 162666 DEBUG oslo.privsep.daemon [-] privsep: reply[26fa25d8-2886-4b86-a6c1-ae22757a2e49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:46:06 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:46:06.445 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[216e308c-73dd-4878-a3c6-947f2b6d9038]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:46:06 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:46:06.449 162245 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmp8g8ifqko/privsep.sock']#033[00m
Oct 10 23:46:06 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v935: 321 pgs: 321 active+clean; 88 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 2.7 MiB/s wr, 60 op/s
Oct 10 23:46:06 np0005480824 nova_compute[260089]: 2025-10-11 03:46:06.621 2 DEBUG nova.compute.manager [req-70e7f5f5-c314-4b2e-803b-4d3f6602618f req-e396fa6a-63d7-4be7-8476-7027bc297396 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Received event network-vif-plugged-ae972b5d-4250-48ac-9b8a-e35678042b82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:46:06 np0005480824 nova_compute[260089]: 2025-10-11 03:46:06.622 2 DEBUG oslo_concurrency.lockutils [req-70e7f5f5-c314-4b2e-803b-4d3f6602618f req-e396fa6a-63d7-4be7-8476-7027bc297396 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "349e8a73-9a19-4cee-89a9-50edc475a575-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:46:06 np0005480824 nova_compute[260089]: 2025-10-11 03:46:06.622 2 DEBUG oslo_concurrency.lockutils [req-70e7f5f5-c314-4b2e-803b-4d3f6602618f req-e396fa6a-63d7-4be7-8476-7027bc297396 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "349e8a73-9a19-4cee-89a9-50edc475a575-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:46:06 np0005480824 nova_compute[260089]: 2025-10-11 03:46:06.623 2 DEBUG oslo_concurrency.lockutils [req-70e7f5f5-c314-4b2e-803b-4d3f6602618f req-e396fa6a-63d7-4be7-8476-7027bc297396 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "349e8a73-9a19-4cee-89a9-50edc475a575-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:46:06 np0005480824 nova_compute[260089]: 2025-10-11 03:46:06.623 2 DEBUG nova.compute.manager [req-70e7f5f5-c314-4b2e-803b-4d3f6602618f req-e396fa6a-63d7-4be7-8476-7027bc297396 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] No waiting events found dispatching network-vif-plugged-ae972b5d-4250-48ac-9b8a-e35678042b82 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 10 23:46:06 np0005480824 nova_compute[260089]: 2025-10-11 03:46:06.623 2 WARNING nova.compute.manager [req-70e7f5f5-c314-4b2e-803b-4d3f6602618f req-e396fa6a-63d7-4be7-8476-7027bc297396 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Received unexpected event network-vif-plugged-ae972b5d-4250-48ac-9b8a-e35678042b82 for instance with vm_state active and task_state None.#033[00m
Oct 10 23:46:07 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:46:07.271 162245 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Oct 10 23:46:07 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:46:07.272 162245 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp8g8ifqko/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Oct 10 23:46:07 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:46:07.069 268023 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Oct 10 23:46:07 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:46:07.079 268023 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Oct 10 23:46:07 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:46:07.084 268023 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Oct 10 23:46:07 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:46:07.085 268023 INFO oslo.privsep.daemon [-] privsep daemon running as pid 268023#033[00m
Oct 10 23:46:07 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:46:07.277 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[164abbe6-be6c-49ec-872e-7ecd4a1ff6bb]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:46:07 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:46:07 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:46:07 np0005480824 nova_compute[260089]: 2025-10-11 03:46:07.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:46:07 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:46:07.812 268023 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:46:07 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:46:07.812 268023 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:46:07 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:46:07.812 268023 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:46:08 np0005480824 nova_compute[260089]: 2025-10-11 03:46:08.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:46:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:46:08.428 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[7307fa8c-7f4a-46b5-b3a5-6d44eb047c72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:46:08 np0005480824 NetworkManager[44969]: <info>  [1760154368.4425] manager: (tapfa909145-50): new Veth device (/org/freedesktop/NetworkManager/Devices/25)
Oct 10 23:46:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:46:08.435 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[e1f7abf4-d93c-47b5-833e-ecdbcdf1de3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:46:08 np0005480824 systemd-udevd[268033]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 23:46:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:46:08.490 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[e88f4b9a-5913-44c0-870b-a91663a55f84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:46:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:46:08.494 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[e2f23c07-acf0-4c23-9cf0-31fa8c1b112a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:46:08 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v936: 321 pgs: 321 active+clean; 88 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 137 op/s
Oct 10 23:46:08 np0005480824 NetworkManager[44969]: <info>  [1760154368.5335] device (tapfa909145-50): carrier: link connected
Oct 10 23:46:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:46:08.541 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[de90d9b6-711d-420f-9ff2-7f88a4e2916c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:46:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:46:08.571 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[42adde60-1493-48cf-90f4-8891c26172f3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfa909145-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:91:a8:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 383619, 'reachable_time': 26036, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268052, 'error': None, 'target': 'ovnmeta-fa909145-5687-40b4-825d-ce6ac3b98885', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:46:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:46:08.591 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[2d97d009-ee70-4ccc-9fc8-c844c77fdc18]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe91:a8b2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 383619, 'tstamp': 383619}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 268053, 'error': None, 'target': 'ovnmeta-fa909145-5687-40b4-825d-ce6ac3b98885', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:46:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:46:08.613 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[278297b9-823e-4f45-b4de-577f5da78754]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfa909145-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:91:a8:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 383619, 'reachable_time': 26036, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 268054, 'error': None, 'target': 'ovnmeta-fa909145-5687-40b4-825d-ce6ac3b98885', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:46:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:46:08.651 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[e428edeb-34fe-4de2-b983-201d66b24a09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:46:08 np0005480824 NetworkManager[44969]: <info>  [1760154368.7135] manager: (patch-br-int-to-provnet-e62e0ad0-b027-41f2-91f0-70373ec97251): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/26)
Oct 10 23:46:08 np0005480824 NetworkManager[44969]: <info>  [1760154368.7138] device (patch-br-int-to-provnet-e62e0ad0-b027-41f2-91f0-70373ec97251)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 10 23:46:08 np0005480824 NetworkManager[44969]: <info>  [1760154368.7148] manager: (patch-provnet-e62e0ad0-b027-41f2-91f0-70373ec97251-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/27)
Oct 10 23:46:08 np0005480824 NetworkManager[44969]: <info>  [1760154368.7151] device (patch-provnet-e62e0ad0-b027-41f2-91f0-70373ec97251-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 10 23:46:08 np0005480824 NetworkManager[44969]: <info>  [1760154368.7158] manager: (patch-br-int-to-provnet-e62e0ad0-b027-41f2-91f0-70373ec97251): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/28)
Oct 10 23:46:08 np0005480824 NetworkManager[44969]: <info>  [1760154368.7162] manager: (patch-provnet-e62e0ad0-b027-41f2-91f0-70373ec97251-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Oct 10 23:46:08 np0005480824 NetworkManager[44969]: <info>  [1760154368.7166] device (patch-br-int-to-provnet-e62e0ad0-b027-41f2-91f0-70373ec97251)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 10 23:46:08 np0005480824 NetworkManager[44969]: <info>  [1760154368.7168] device (patch-provnet-e62e0ad0-b027-41f2-91f0-70373ec97251-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 10 23:46:08 np0005480824 nova_compute[260089]: 2025-10-11 03:46:08.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:46:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:46:08.762 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[4b9f4737-4717-4859-b3ce-02a38da6934b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:46:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:46:08.764 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfa909145-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:46:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:46:08.764 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 10 23:46:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:46:08.765 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfa909145-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:46:08 np0005480824 NetworkManager[44969]: <info>  [1760154368.7689] manager: (tapfa909145-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/30)
Oct 10 23:46:08 np0005480824 nova_compute[260089]: 2025-10-11 03:46:08.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:46:08 np0005480824 kernel: tapfa909145-50: entered promiscuous mode
Oct 10 23:46:08 np0005480824 nova_compute[260089]: 2025-10-11 03:46:08.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:46:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:46:08.829 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfa909145-50, col_values=(('external_ids', {'iface-id': 'f39d3c11-4908-476b-95d2-6fc85e558e6b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:46:08 np0005480824 nova_compute[260089]: 2025-10-11 03:46:08.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:46:08 np0005480824 nova_compute[260089]: 2025-10-11 03:46:08.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:46:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:46:08.842 162245 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fa909145-5687-40b4-825d-ce6ac3b98885.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fa909145-5687-40b4-825d-ce6ac3b98885.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 10 23:46:08 np0005480824 ovn_controller[152667]: 2025-10-11T03:46:08Z|00031|binding|INFO|Releasing lport f39d3c11-4908-476b-95d2-6fc85e558e6b from this chassis (sb_readonly=0)
Oct 10 23:46:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:46:08.843 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[42ce3a8b-1d86-4b3d-a018-ffdf1e07d3a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:46:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:46:08.845 162245 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 10 23:46:08 np0005480824 ovn_metadata_agent[162240]: global
Oct 10 23:46:08 np0005480824 ovn_metadata_agent[162240]:    log         /dev/log local0 debug
Oct 10 23:46:08 np0005480824 ovn_metadata_agent[162240]:    log-tag     haproxy-metadata-proxy-fa909145-5687-40b4-825d-ce6ac3b98885
Oct 10 23:46:08 np0005480824 ovn_metadata_agent[162240]:    user        root
Oct 10 23:46:08 np0005480824 ovn_metadata_agent[162240]:    group       root
Oct 10 23:46:08 np0005480824 ovn_metadata_agent[162240]:    maxconn     1024
Oct 10 23:46:08 np0005480824 ovn_metadata_agent[162240]:    pidfile     /var/lib/neutron/external/pids/fa909145-5687-40b4-825d-ce6ac3b98885.pid.haproxy
Oct 10 23:46:08 np0005480824 ovn_metadata_agent[162240]:    daemon
Oct 10 23:46:08 np0005480824 ovn_metadata_agent[162240]: 
Oct 10 23:46:08 np0005480824 ovn_metadata_agent[162240]: defaults
Oct 10 23:46:08 np0005480824 ovn_metadata_agent[162240]:    log global
Oct 10 23:46:08 np0005480824 ovn_metadata_agent[162240]:    mode http
Oct 10 23:46:08 np0005480824 ovn_metadata_agent[162240]:    option httplog
Oct 10 23:46:08 np0005480824 ovn_metadata_agent[162240]:    option dontlognull
Oct 10 23:46:08 np0005480824 ovn_metadata_agent[162240]:    option http-server-close
Oct 10 23:46:08 np0005480824 ovn_metadata_agent[162240]:    option forwardfor
Oct 10 23:46:08 np0005480824 ovn_metadata_agent[162240]:    retries                 3
Oct 10 23:46:08 np0005480824 ovn_metadata_agent[162240]:    timeout http-request    30s
Oct 10 23:46:08 np0005480824 ovn_metadata_agent[162240]:    timeout connect         30s
Oct 10 23:46:08 np0005480824 ovn_metadata_agent[162240]:    timeout client          32s
Oct 10 23:46:08 np0005480824 ovn_metadata_agent[162240]:    timeout server          32s
Oct 10 23:46:08 np0005480824 ovn_metadata_agent[162240]:    timeout http-keep-alive 30s
Oct 10 23:46:08 np0005480824 ovn_metadata_agent[162240]: 
Oct 10 23:46:08 np0005480824 ovn_metadata_agent[162240]: 
Oct 10 23:46:08 np0005480824 ovn_metadata_agent[162240]: listen listener
Oct 10 23:46:08 np0005480824 ovn_metadata_agent[162240]:    bind 169.254.169.254:80
Oct 10 23:46:08 np0005480824 ovn_metadata_agent[162240]:    server metadata /var/lib/neutron/metadata_proxy
Oct 10 23:46:08 np0005480824 ovn_metadata_agent[162240]:    http-request add-header X-OVN-Network-ID fa909145-5687-40b4-825d-ce6ac3b98885
Oct 10 23:46:08 np0005480824 ovn_metadata_agent[162240]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 10 23:46:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:46:08.845 162245 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fa909145-5687-40b4-825d-ce6ac3b98885', 'env', 'PROCESS_TAG=haproxy-fa909145-5687-40b4-825d-ce6ac3b98885', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fa909145-5687-40b4-825d-ce6ac3b98885.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 10 23:46:08 np0005480824 nova_compute[260089]: 2025-10-11 03:46:08.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:46:08 np0005480824 nova_compute[260089]: 2025-10-11 03:46:08.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:46:08 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:46:08 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2199486887' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:46:08 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:46:08 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2199486887' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:46:09 np0005480824 podman[268084]: 2025-10-11 03:46:09.363536384 +0000 UTC m=+0.091311406 container create 605b9e124ec78e77c1d466f102fa58549f8d1da449e57f832ba62d4eb0c49257 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fa909145-5687-40b4-825d-ce6ac3b98885, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 10 23:46:09 np0005480824 podman[268084]: 2025-10-11 03:46:09.314159738 +0000 UTC m=+0.041934830 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 10 23:46:09 np0005480824 systemd[1]: Started libpod-conmon-605b9e124ec78e77c1d466f102fa58549f8d1da449e57f832ba62d4eb0c49257.scope.
Oct 10 23:46:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:46:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Oct 10 23:46:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Oct 10 23:46:09 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Oct 10 23:46:09 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:46:09 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e41fed3190f4d4e6884ed908620b281eff4bff5d70fbca71ca1b4cc825940080/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 10 23:46:09 np0005480824 podman[268084]: 2025-10-11 03:46:09.508297698 +0000 UTC m=+0.236072770 container init 605b9e124ec78e77c1d466f102fa58549f8d1da449e57f832ba62d4eb0c49257 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fa909145-5687-40b4-825d-ce6ac3b98885, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009)
Oct 10 23:46:09 np0005480824 nova_compute[260089]: 2025-10-11 03:46:09.516 2 DEBUG nova.compute.manager [req-30b239de-4d73-4185-8f5e-9e011addf1f6 req-43eb4f27-ddc6-4178-90a9-4fc435518939 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Received event network-changed-ae972b5d-4250-48ac-9b8a-e35678042b82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:46:09 np0005480824 nova_compute[260089]: 2025-10-11 03:46:09.517 2 DEBUG nova.compute.manager [req-30b239de-4d73-4185-8f5e-9e011addf1f6 req-43eb4f27-ddc6-4178-90a9-4fc435518939 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Refreshing instance network info cache due to event network-changed-ae972b5d-4250-48ac-9b8a-e35678042b82. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 10 23:46:09 np0005480824 nova_compute[260089]: 2025-10-11 03:46:09.517 2 DEBUG oslo_concurrency.lockutils [req-30b239de-4d73-4185-8f5e-9e011addf1f6 req-43eb4f27-ddc6-4178-90a9-4fc435518939 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "refresh_cache-349e8a73-9a19-4cee-89a9-50edc475a575" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:46:09 np0005480824 nova_compute[260089]: 2025-10-11 03:46:09.517 2 DEBUG oslo_concurrency.lockutils [req-30b239de-4d73-4185-8f5e-9e011addf1f6 req-43eb4f27-ddc6-4178-90a9-4fc435518939 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquired lock "refresh_cache-349e8a73-9a19-4cee-89a9-50edc475a575" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:46:09 np0005480824 nova_compute[260089]: 2025-10-11 03:46:09.517 2 DEBUG nova.network.neutron [req-30b239de-4d73-4185-8f5e-9e011addf1f6 req-43eb4f27-ddc6-4178-90a9-4fc435518939 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Refreshing network info cache for port ae972b5d-4250-48ac-9b8a-e35678042b82 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 10 23:46:09 np0005480824 podman[268084]: 2025-10-11 03:46:09.52026889 +0000 UTC m=+0.248043912 container start 605b9e124ec78e77c1d466f102fa58549f8d1da449e57f832ba62d4eb0c49257 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fa909145-5687-40b4-825d-ce6ac3b98885, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:46:09 np0005480824 neutron-haproxy-ovnmeta-fa909145-5687-40b4-825d-ce6ac3b98885[268097]: [NOTICE]   (268101) : New worker (268103) forked
Oct 10 23:46:09 np0005480824 neutron-haproxy-ovnmeta-fa909145-5687-40b4-825d-ce6ac3b98885[268097]: [NOTICE]   (268101) : Loading success.
Oct 10 23:46:10 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:46:10 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3325980140' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:46:10 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:46:10 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3325980140' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:46:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:46:10.485 162245 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:46:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:46:10.486 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:46:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:46:10.487 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:46:10 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v938: 321 pgs: 321 active+clean; 88 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 135 op/s
Oct 10 23:46:10 np0005480824 nova_compute[260089]: 2025-10-11 03:46:10.779 2 DEBUG nova.network.neutron [req-30b239de-4d73-4185-8f5e-9e011addf1f6 req-43eb4f27-ddc6-4178-90a9-4fc435518939 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Updated VIF entry in instance network info cache for port ae972b5d-4250-48ac-9b8a-e35678042b82. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 10 23:46:10 np0005480824 nova_compute[260089]: 2025-10-11 03:46:10.780 2 DEBUG nova.network.neutron [req-30b239de-4d73-4185-8f5e-9e011addf1f6 req-43eb4f27-ddc6-4178-90a9-4fc435518939 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Updating instance_info_cache with network_info: [{"id": "ae972b5d-4250-48ac-9b8a-e35678042b82", "address": "fa:16:3e:68:49:57", "network": {"id": "fa909145-5687-40b4-825d-ce6ac3b98885", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-311959019-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d7871f9f8a74d2d85dc275b42df9042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae972b5d-42", "ovs_interfaceid": "ae972b5d-4250-48ac-9b8a-e35678042b82", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:46:10 np0005480824 nova_compute[260089]: 2025-10-11 03:46:10.807 2 DEBUG oslo_concurrency.lockutils [req-30b239de-4d73-4185-8f5e-9e011addf1f6 req-43eb4f27-ddc6-4178-90a9-4fc435518939 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Releasing lock "refresh_cache-349e8a73-9a19-4cee-89a9-50edc475a575" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:46:12 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v939: 321 pgs: 321 active+clean; 88 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.2 MiB/s wr, 120 op/s
Oct 10 23:46:12 np0005480824 nova_compute[260089]: 2025-10-11 03:46:12.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:46:13 np0005480824 nova_compute[260089]: 2025-10-11 03:46:13.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:46:14 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:46:14 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/170086986' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:46:14 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:46:14 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/170086986' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:46:14 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:46:14 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v940: 321 pgs: 321 active+clean; 88 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 18 KiB/s wr, 106 op/s
Oct 10 23:46:15 np0005480824 podman[268114]: 2025-10-11 03:46:15.070975098 +0000 UTC m=+0.109483094 container health_status f2b19cad22d0fbdd185b264c3c8b6443d09a02319dc9fb0585dc69a0f24a758d (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 10 23:46:15 np0005480824 podman[268113]: 2025-10-11 03:46:15.07236879 +0000 UTC m=+0.113372015 container health_status 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 10 23:46:15 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:46:15 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/21755236' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:46:15 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:46:15 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/21755236' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:46:16 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v941: 321 pgs: 321 active+clean; 88 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 18 KiB/s wr, 106 op/s
Oct 10 23:46:17 np0005480824 nova_compute[260089]: 2025-10-11 03:46:17.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:46:17 np0005480824 ceph-osd[88325]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct 10 23:46:18 np0005480824 nova_compute[260089]: 2025-10-11 03:46:18.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:46:18 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v942: 321 pgs: 321 active+clean; 88 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 66 KiB/s rd, 1.8 KiB/s wr, 37 op/s
Oct 10 23:46:19 np0005480824 ovn_controller[152667]: 2025-10-11T03:46:19Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:68:49:57 10.100.0.12
Oct 10 23:46:19 np0005480824 ovn_controller[152667]: 2025-10-11T03:46:19Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:68:49:57 10.100.0.12
Oct 10 23:46:19 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:46:19 np0005480824 ceph-osd[90443]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct 10 23:46:20 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:46:20.203 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:30:f4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'fe:89:7c:57:3f:71'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 23:46:20 np0005480824 nova_compute[260089]: 2025-10-11 03:46:20.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:46:20 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:46:20.206 162245 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 10 23:46:20 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v943: 321 pgs: 321 active+clean; 88 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 1.6 KiB/s wr, 34 op/s
Oct 10 23:46:21 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:46:21 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3813213954' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:46:21 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:46:21 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3813213954' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:46:22 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:46:22.208 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=14b06507-d00b-4e27-a47d-46a5c2644635, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:46:22 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v944: 321 pgs: 321 active+clean; 109 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 311 KiB/s rd, 1.6 MiB/s wr, 111 op/s
Oct 10 23:46:22 np0005480824 nova_compute[260089]: 2025-10-11 03:46:22.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:46:23 np0005480824 podman[268156]: 2025-10-11 03:46:23.104246213 +0000 UTC m=+0.152414707 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller)
Oct 10 23:46:23 np0005480824 nova_compute[260089]: 2025-10-11 03:46:23.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:46:24 np0005480824 nova_compute[260089]: 2025-10-11 03:46:24.296 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:46:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:46:24 np0005480824 ceph-mon[74326]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Oct 10 23:46:24 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:46:24.457233) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 23:46:24 np0005480824 ceph-mon[74326]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Oct 10 23:46:24 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154384457348, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 1269, "num_deletes": 263, "total_data_size": 1640812, "memory_usage": 1664000, "flush_reason": "Manual Compaction"}
Oct 10 23:46:24 np0005480824 ceph-mon[74326]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Oct 10 23:46:24 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154384472314, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 1620309, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18649, "largest_seqno": 19917, "table_properties": {"data_size": 1614184, "index_size": 3392, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13207, "raw_average_key_size": 19, "raw_value_size": 1601573, "raw_average_value_size": 2408, "num_data_blocks": 151, "num_entries": 665, "num_filter_entries": 665, "num_deletions": 263, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760154299, "oldest_key_time": 1760154299, "file_creation_time": 1760154384, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc2c00b6-74ab-4bd1-957a-6c6a75fb61ca", "db_session_id": "RJ9TM4FJNNQ2AWDFT4YB", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Oct 10 23:46:24 np0005480824 ceph-mon[74326]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 15118 microseconds, and 8191 cpu microseconds.
Oct 10 23:46:24 np0005480824 ceph-mon[74326]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 23:46:24 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:46:24.472365) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 1620309 bytes OK
Oct 10 23:46:24 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:46:24.472390) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Oct 10 23:46:24 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:46:24.474146) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Oct 10 23:46:24 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:46:24.474161) EVENT_LOG_v1 {"time_micros": 1760154384474155, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 23:46:24 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:46:24.474182) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 23:46:24 np0005480824 ceph-mon[74326]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 1634870, prev total WAL file size 1634870, number of live WAL files 2.
Oct 10 23:46:24 np0005480824 ceph-mon[74326]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 23:46:24 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:46:24.474958) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323530' seq:72057594037927935, type:22 .. '6C6F676D00353033' seq:0, type:0; will stop at (end)
Oct 10 23:46:24 np0005480824 ceph-mon[74326]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 23:46:24 np0005480824 ceph-mon[74326]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(1582KB)], [44(5945KB)]
Oct 10 23:46:24 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154384474989, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 7708280, "oldest_snapshot_seqno": -1}
Oct 10 23:46:24 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v945: 321 pgs: 321 active+clean; 121 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 422 KiB/s rd, 2.1 MiB/s wr, 108 op/s
Oct 10 23:46:24 np0005480824 ceph-mon[74326]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 4301 keys, 7575087 bytes, temperature: kUnknown
Oct 10 23:46:24 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154384513197, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 7575087, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7544829, "index_size": 18372, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10821, "raw_key_size": 106393, "raw_average_key_size": 24, "raw_value_size": 7465447, "raw_average_value_size": 1735, "num_data_blocks": 771, "num_entries": 4301, "num_filter_entries": 4301, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760152715, "oldest_key_time": 0, "file_creation_time": 1760154384, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc2c00b6-74ab-4bd1-957a-6c6a75fb61ca", "db_session_id": "RJ9TM4FJNNQ2AWDFT4YB", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Oct 10 23:46:24 np0005480824 ceph-mon[74326]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 23:46:24 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:46:24.513651) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 7575087 bytes
Oct 10 23:46:24 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:46:24.515682) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 201.1 rd, 197.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 5.8 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(9.4) write-amplify(4.7) OK, records in: 4837, records dropped: 536 output_compression: NoCompression
Oct 10 23:46:24 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:46:24.515731) EVENT_LOG_v1 {"time_micros": 1760154384515710, "job": 22, "event": "compaction_finished", "compaction_time_micros": 38328, "compaction_time_cpu_micros": 22071, "output_level": 6, "num_output_files": 1, "total_output_size": 7575087, "num_input_records": 4837, "num_output_records": 4301, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 23:46:24 np0005480824 ceph-mon[74326]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 23:46:24 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154384516737, "job": 22, "event": "table_file_deletion", "file_number": 46}
Oct 10 23:46:24 np0005480824 ceph-mon[74326]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 23:46:24 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154384519138, "job": 22, "event": "table_file_deletion", "file_number": 44}
Oct 10 23:46:24 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:46:24.474851) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:46:24 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:46:24.519229) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:46:24 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:46:24.519235) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:46:24 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:46:24.519238) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:46:24 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:46:24.519242) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:46:24 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:46:24.519245) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:46:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:46:24 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3508453512' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:46:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:46:24 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3508453512' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:46:24 np0005480824 nova_compute[260089]: 2025-10-11 03:46:24.649 2 DEBUG oslo_concurrency.lockutils [None req-698dae5b-cb4c-441f-b6a7-57cfc35c083e f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Acquiring lock "349e8a73-9a19-4cee-89a9-50edc475a575" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:46:24 np0005480824 nova_compute[260089]: 2025-10-11 03:46:24.650 2 DEBUG oslo_concurrency.lockutils [None req-698dae5b-cb4c-441f-b6a7-57cfc35c083e f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Lock "349e8a73-9a19-4cee-89a9-50edc475a575" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:46:24 np0005480824 nova_compute[260089]: 2025-10-11 03:46:24.672 2 DEBUG nova.objects.instance [None req-698dae5b-cb4c-441f-b6a7-57cfc35c083e f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Lazy-loading 'flavor' on Instance uuid 349e8a73-9a19-4cee-89a9-50edc475a575 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:46:24 np0005480824 nova_compute[260089]: 2025-10-11 03:46:24.711 2 INFO nova.virt.libvirt.driver [None req-698dae5b-cb4c-441f-b6a7-57cfc35c083e f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Ignoring supplied device name: /dev/vdb#033[00m
Oct 10 23:46:24 np0005480824 nova_compute[260089]: 2025-10-11 03:46:24.728 2 DEBUG oslo_concurrency.lockutils [None req-698dae5b-cb4c-441f-b6a7-57cfc35c083e f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Lock "349e8a73-9a19-4cee-89a9-50edc475a575" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.078s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:46:24 np0005480824 nova_compute[260089]: 2025-10-11 03:46:24.942 2 DEBUG oslo_concurrency.lockutils [None req-698dae5b-cb4c-441f-b6a7-57cfc35c083e f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Acquiring lock "349e8a73-9a19-4cee-89a9-50edc475a575" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:46:24 np0005480824 nova_compute[260089]: 2025-10-11 03:46:24.943 2 DEBUG oslo_concurrency.lockutils [None req-698dae5b-cb4c-441f-b6a7-57cfc35c083e f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Lock "349e8a73-9a19-4cee-89a9-50edc475a575" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:46:24 np0005480824 nova_compute[260089]: 2025-10-11 03:46:24.944 2 INFO nova.compute.manager [None req-698dae5b-cb4c-441f-b6a7-57cfc35c083e f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Attaching volume 3e0695d3-0aeb-4c90-804b-6098401a9775 to /dev/vdb#033[00m
Oct 10 23:46:25 np0005480824 nova_compute[260089]: 2025-10-11 03:46:25.220 2 DEBUG os_brick.utils [None req-698dae5b-cb4c-441f-b6a7-57cfc35c083e f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct 10 23:46:25 np0005480824 nova_compute[260089]: 2025-10-11 03:46:25.222 2 INFO oslo.privsep.daemon [None req-698dae5b-cb4c-441f-b6a7-57cfc35c083e f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'os_brick.privileged.default', '--privsep_sock_path', '/tmp/tmplcy3_gw7/privsep.sock']#033[00m
Oct 10 23:46:26 np0005480824 nova_compute[260089]: 2025-10-11 03:46:26.020 2 INFO oslo.privsep.daemon [None req-698dae5b-cb4c-441f-b6a7-57cfc35c083e f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Oct 10 23:46:26 np0005480824 nova_compute[260089]: 2025-10-11 03:46:25.878 676 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Oct 10 23:46:26 np0005480824 nova_compute[260089]: 2025-10-11 03:46:25.885 676 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Oct 10 23:46:26 np0005480824 nova_compute[260089]: 2025-10-11 03:46:25.889 676 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Oct 10 23:46:26 np0005480824 nova_compute[260089]: 2025-10-11 03:46:25.889 676 INFO oslo.privsep.daemon [-] privsep daemon running as pid 676#033[00m
Oct 10 23:46:26 np0005480824 nova_compute[260089]: 2025-10-11 03:46:26.026 676 DEBUG oslo.privsep.daemon [-] privsep: reply[ca36c995-3baa-4401-af3a-ed2a9b835d4f]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:46:26 np0005480824 nova_compute[260089]: 2025-10-11 03:46:26.127 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:46:26 np0005480824 nova_compute[260089]: 2025-10-11 03:46:26.155 676 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.029s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:46:26 np0005480824 nova_compute[260089]: 2025-10-11 03:46:26.156 676 DEBUG oslo.privsep.daemon [-] privsep: reply[5197fc2d-498c-4e87-93e9-78a5ac5cea3d]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:46:26 np0005480824 nova_compute[260089]: 2025-10-11 03:46:26.159 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:46:26 np0005480824 nova_compute[260089]: 2025-10-11 03:46:26.169 676 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:46:26 np0005480824 nova_compute[260089]: 2025-10-11 03:46:26.169 676 DEBUG oslo.privsep.daemon [-] privsep: reply[eb6a5e71-be7b-427b-9ef9-75d591a98049]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d5d671ddab5a', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:46:26 np0005480824 nova_compute[260089]: 2025-10-11 03:46:26.174 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:46:26 np0005480824 nova_compute[260089]: 2025-10-11 03:46:26.191 676 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:46:26 np0005480824 nova_compute[260089]: 2025-10-11 03:46:26.192 676 DEBUG oslo.privsep.daemon [-] privsep: reply[ed5fc620-999d-4b5a-8137-bda0e7a8c52a]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:46:26 np0005480824 nova_compute[260089]: 2025-10-11 03:46:26.195 676 DEBUG oslo.privsep.daemon [-] privsep: reply[4757b6f2-017b-4834-b734-63cb8acefba3]: (4, 'fb3a2fb1-9efa-43f0-a057-bf422ac6b8d7') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:46:26 np0005480824 nova_compute[260089]: 2025-10-11 03:46:26.196 2 DEBUG oslo_concurrency.processutils [None req-698dae5b-cb4c-441f-b6a7-57cfc35c083e f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:46:26 np0005480824 nova_compute[260089]: 2025-10-11 03:46:26.236 2 DEBUG oslo_concurrency.processutils [None req-698dae5b-cb4c-441f-b6a7-57cfc35c083e f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] CMD "nvme version" returned: 0 in 0.040s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:46:26 np0005480824 nova_compute[260089]: 2025-10-11 03:46:26.243 2 DEBUG os_brick.initiator.connectors.lightos [None req-698dae5b-cb4c-441f-b6a7-57cfc35c083e f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct 10 23:46:26 np0005480824 nova_compute[260089]: 2025-10-11 03:46:26.245 2 DEBUG os_brick.initiator.connectors.lightos [None req-698dae5b-cb4c-441f-b6a7-57cfc35c083e f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct 10 23:46:26 np0005480824 nova_compute[260089]: 2025-10-11 03:46:26.246 2 DEBUG os_brick.initiator.connectors.lightos [None req-698dae5b-cb4c-441f-b6a7-57cfc35c083e f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct 10 23:46:26 np0005480824 nova_compute[260089]: 2025-10-11 03:46:26.246 2 DEBUG os_brick.utils [None req-698dae5b-cb4c-441f-b6a7-57cfc35c083e f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] <== get_connector_properties: return (1025ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d5d671ddab5a', 'do_local_attach': False, 'nvme_hostid': '83042a20-0f72-4c47-8453-e72ead378624', 'system uuid': 'fb3a2fb1-9efa-43f0-a057-bf422ac6b8d7', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct 10 23:46:26 np0005480824 nova_compute[260089]: 2025-10-11 03:46:26.247 2 DEBUG nova.virt.block_device [None req-698dae5b-cb4c-441f-b6a7-57cfc35c083e f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Updating existing volume attachment record: 838fe166-f78a-4e86-8658-371b19254df6 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct 10 23:46:26 np0005480824 nova_compute[260089]: 2025-10-11 03:46:26.293 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:46:26 np0005480824 nova_compute[260089]: 2025-10-11 03:46:26.296 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:46:26 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v946: 321 pgs: 321 active+clean; 121 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 422 KiB/s rd, 2.1 MiB/s wr, 108 op/s
Oct 10 23:46:27 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:46:27 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3701520743' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:46:27 np0005480824 nova_compute[260089]: 2025-10-11 03:46:27.296 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:46:27 np0005480824 nova_compute[260089]: 2025-10-11 03:46:27.297 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 10 23:46:27 np0005480824 nova_compute[260089]: 2025-10-11 03:46:27.481 2 DEBUG os_brick.encryptors [None req-698dae5b-cb4c-441f-b6a7-57cfc35c083e f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Using volume encryption metadata '{'encryption_key_id': '25309e87-897e-4c80-8589-6ced7815e1f9', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-3e0695d3-0aeb-4c90-804b-6098401a9775', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '3e0695d3-0aeb-4c90-804b-6098401a9775', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '349e8a73-9a19-4cee-89a9-50edc475a575', 'attached_at': '', 'detached_at': '', 'volume_id': '3e0695d3-0aeb-4c90-804b-6098401a9775', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Oct 10 23:46:27 np0005480824 nova_compute[260089]: 2025-10-11 03:46:27.486 2 DEBUG oslo_concurrency.lockutils [None req-698dae5b-cb4c-441f-b6a7-57cfc35c083e f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Acquiring lock "cache_volume_driver" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:46:27 np0005480824 nova_compute[260089]: 2025-10-11 03:46:27.486 2 DEBUG oslo_concurrency.lockutils [None req-698dae5b-cb4c-441f-b6a7-57cfc35c083e f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Lock "cache_volume_driver" acquired by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:46:27 np0005480824 nova_compute[260089]: 2025-10-11 03:46:27.489 2 DEBUG oslo_concurrency.lockutils [None req-698dae5b-cb4c-441f-b6a7-57cfc35c083e f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Lock "cache_volume_driver" "released" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:46:27 np0005480824 nova_compute[260089]: 2025-10-11 03:46:27.503 2 DEBUG barbicanclient.client [None req-698dae5b-cb4c-441f-b6a7-57cfc35c083e f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163#033[00m
Oct 10 23:46:27 np0005480824 nova_compute[260089]: 2025-10-11 03:46:27.532 2 DEBUG barbicanclient.v1.secrets [None req-698dae5b-cb4c-441f-b6a7-57cfc35c083e f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/25309e87-897e-4c80-8589-6ced7815e1f9 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514#033[00m
Oct 10 23:46:27 np0005480824 nova_compute[260089]: 2025-10-11 03:46:27.533 2 INFO barbicanclient.base [None req-698dae5b-cb4c-441f-b6a7-57cfc35c083e f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Calculated Secrets uuid ref: secrets/25309e87-897e-4c80-8589-6ced7815e1f9#033[00m
Oct 10 23:46:27 np0005480824 nova_compute[260089]: 2025-10-11 03:46:27.557 2 DEBUG barbicanclient.client [None req-698dae5b-cb4c-441f-b6a7-57cfc35c083e f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 10 23:46:27 np0005480824 nova_compute[260089]: 2025-10-11 03:46:27.558 2 INFO barbicanclient.base [None req-698dae5b-cb4c-441f-b6a7-57cfc35c083e f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Calculated Secrets uuid ref: secrets/25309e87-897e-4c80-8589-6ced7815e1f9#033[00m
Oct 10 23:46:27 np0005480824 nova_compute[260089]: 2025-10-11 03:46:27.578 2 DEBUG barbicanclient.client [None req-698dae5b-cb4c-441f-b6a7-57cfc35c083e f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 10 23:46:27 np0005480824 nova_compute[260089]: 2025-10-11 03:46:27.579 2 INFO barbicanclient.base [None req-698dae5b-cb4c-441f-b6a7-57cfc35c083e f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Calculated Secrets uuid ref: secrets/25309e87-897e-4c80-8589-6ced7815e1f9#033[00m
Oct 10 23:46:27 np0005480824 nova_compute[260089]: 2025-10-11 03:46:27.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:46:27 np0005480824 nova_compute[260089]: 2025-10-11 03:46:27.614 2 DEBUG barbicanclient.client [None req-698dae5b-cb4c-441f-b6a7-57cfc35c083e f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 10 23:46:27 np0005480824 nova_compute[260089]: 2025-10-11 03:46:27.615 2 INFO barbicanclient.base [None req-698dae5b-cb4c-441f-b6a7-57cfc35c083e f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Calculated Secrets uuid ref: secrets/25309e87-897e-4c80-8589-6ced7815e1f9#033[00m
Oct 10 23:46:27 np0005480824 nova_compute[260089]: 2025-10-11 03:46:27.637 2 DEBUG barbicanclient.client [None req-698dae5b-cb4c-441f-b6a7-57cfc35c083e f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 10 23:46:27 np0005480824 nova_compute[260089]: 2025-10-11 03:46:27.638 2 INFO barbicanclient.base [None req-698dae5b-cb4c-441f-b6a7-57cfc35c083e f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Calculated Secrets uuid ref: secrets/25309e87-897e-4c80-8589-6ced7815e1f9#033[00m
Oct 10 23:46:27 np0005480824 nova_compute[260089]: 2025-10-11 03:46:27.664 2 DEBUG barbicanclient.client [None req-698dae5b-cb4c-441f-b6a7-57cfc35c083e f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 10 23:46:27 np0005480824 nova_compute[260089]: 2025-10-11 03:46:27.665 2 INFO barbicanclient.base [None req-698dae5b-cb4c-441f-b6a7-57cfc35c083e f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Calculated Secrets uuid ref: secrets/25309e87-897e-4c80-8589-6ced7815e1f9#033[00m
Oct 10 23:46:27 np0005480824 nova_compute[260089]: 2025-10-11 03:46:27.685 2 DEBUG barbicanclient.client [None req-698dae5b-cb4c-441f-b6a7-57cfc35c083e f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 10 23:46:27 np0005480824 nova_compute[260089]: 2025-10-11 03:46:27.686 2 INFO barbicanclient.base [None req-698dae5b-cb4c-441f-b6a7-57cfc35c083e f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Calculated Secrets uuid ref: secrets/25309e87-897e-4c80-8589-6ced7815e1f9#033[00m
Oct 10 23:46:27 np0005480824 nova_compute[260089]: 2025-10-11 03:46:27.739 2 DEBUG barbicanclient.client [None req-698dae5b-cb4c-441f-b6a7-57cfc35c083e f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 10 23:46:27 np0005480824 nova_compute[260089]: 2025-10-11 03:46:27.740 2 INFO barbicanclient.base [None req-698dae5b-cb4c-441f-b6a7-57cfc35c083e f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Calculated Secrets uuid ref: secrets/25309e87-897e-4c80-8589-6ced7815e1f9#033[00m
Oct 10 23:46:27 np0005480824 nova_compute[260089]: 2025-10-11 03:46:27.776 2 DEBUG barbicanclient.client [None req-698dae5b-cb4c-441f-b6a7-57cfc35c083e f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 10 23:46:27 np0005480824 nova_compute[260089]: 2025-10-11 03:46:27.777 2 INFO barbicanclient.base [None req-698dae5b-cb4c-441f-b6a7-57cfc35c083e f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Calculated Secrets uuid ref: secrets/25309e87-897e-4c80-8589-6ced7815e1f9#033[00m
Oct 10 23:46:27 np0005480824 nova_compute[260089]: 2025-10-11 03:46:27.803 2 DEBUG barbicanclient.client [None req-698dae5b-cb4c-441f-b6a7-57cfc35c083e f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 10 23:46:27 np0005480824 nova_compute[260089]: 2025-10-11 03:46:27.804 2 INFO barbicanclient.base [None req-698dae5b-cb4c-441f-b6a7-57cfc35c083e f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Calculated Secrets uuid ref: secrets/25309e87-897e-4c80-8589-6ced7815e1f9#033[00m
Oct 10 23:46:27 np0005480824 nova_compute[260089]: 2025-10-11 03:46:27.848 2 DEBUG barbicanclient.client [None req-698dae5b-cb4c-441f-b6a7-57cfc35c083e f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 10 23:46:27 np0005480824 nova_compute[260089]: 2025-10-11 03:46:27.849 2 INFO barbicanclient.base [None req-698dae5b-cb4c-441f-b6a7-57cfc35c083e f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Calculated Secrets uuid ref: secrets/25309e87-897e-4c80-8589-6ced7815e1f9#033[00m
Oct 10 23:46:27 np0005480824 nova_compute[260089]: 2025-10-11 03:46:27.874 2 DEBUG barbicanclient.client [None req-698dae5b-cb4c-441f-b6a7-57cfc35c083e f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 10 23:46:27 np0005480824 nova_compute[260089]: 2025-10-11 03:46:27.875 2 INFO barbicanclient.base [None req-698dae5b-cb4c-441f-b6a7-57cfc35c083e f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Calculated Secrets uuid ref: secrets/25309e87-897e-4c80-8589-6ced7815e1f9#033[00m
Oct 10 23:46:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:46:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:46:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:46:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:46:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:46:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:46:27 np0005480824 nova_compute[260089]: 2025-10-11 03:46:27.899 2 DEBUG barbicanclient.client [None req-698dae5b-cb4c-441f-b6a7-57cfc35c083e f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 10 23:46:27 np0005480824 nova_compute[260089]: 2025-10-11 03:46:27.900 2 INFO barbicanclient.base [None req-698dae5b-cb4c-441f-b6a7-57cfc35c083e f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Calculated Secrets uuid ref: secrets/25309e87-897e-4c80-8589-6ced7815e1f9#033[00m
Oct 10 23:46:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Optimize plan auto_2025-10-11_03:46:27
Oct 10 23:46:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 23:46:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] do_upmap
Oct 10 23:46:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', 'default.rgw.log', 'images', 'cephfs.cephfs.meta', '.rgw.root', '.mgr', 'vms', 'volumes', 'default.rgw.control', 'default.rgw.meta']
Oct 10 23:46:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] prepared 0/10 changes
Oct 10 23:46:27 np0005480824 nova_compute[260089]: 2025-10-11 03:46:27.925 2 DEBUG barbicanclient.client [None req-698dae5b-cb4c-441f-b6a7-57cfc35c083e f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 10 23:46:27 np0005480824 nova_compute[260089]: 2025-10-11 03:46:27.926 2 INFO barbicanclient.base [None req-698dae5b-cb4c-441f-b6a7-57cfc35c083e f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Calculated Secrets uuid ref: secrets/25309e87-897e-4c80-8589-6ced7815e1f9#033[00m
Oct 10 23:46:27 np0005480824 nova_compute[260089]: 2025-10-11 03:46:27.945 2 DEBUG barbicanclient.client [None req-698dae5b-cb4c-441f-b6a7-57cfc35c083e f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 10 23:46:27 np0005480824 nova_compute[260089]: 2025-10-11 03:46:27.946 2 INFO barbicanclient.base [None req-698dae5b-cb4c-441f-b6a7-57cfc35c083e f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Calculated Secrets uuid ref: secrets/25309e87-897e-4c80-8589-6ced7815e1f9#033[00m
Oct 10 23:46:27 np0005480824 nova_compute[260089]: 2025-10-11 03:46:27.964 2 DEBUG barbicanclient.client [None req-698dae5b-cb4c-441f-b6a7-57cfc35c083e f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 10 23:46:27 np0005480824 nova_compute[260089]: 2025-10-11 03:46:27.965 2 DEBUG nova.virt.libvirt.host [None req-698dae5b-cb4c-441f-b6a7-57cfc35c083e f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Secret XML: <secret ephemeral="no" private="no">
Oct 10 23:46:27 np0005480824 nova_compute[260089]:  <usage type="volume">
Oct 10 23:46:27 np0005480824 nova_compute[260089]:    <volume>3e0695d3-0aeb-4c90-804b-6098401a9775</volume>
Oct 10 23:46:27 np0005480824 nova_compute[260089]:  </usage>
Oct 10 23:46:27 np0005480824 nova_compute[260089]: </secret>
Oct 10 23:46:27 np0005480824 nova_compute[260089]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131#033[00m
Oct 10 23:46:27 np0005480824 nova_compute[260089]: 2025-10-11 03:46:27.980 2 DEBUG nova.objects.instance [None req-698dae5b-cb4c-441f-b6a7-57cfc35c083e f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Lazy-loading 'flavor' on Instance uuid 349e8a73-9a19-4cee-89a9-50edc475a575 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:46:28 np0005480824 nova_compute[260089]: 2025-10-11 03:46:28.003 2 DEBUG nova.virt.libvirt.driver [None req-698dae5b-cb4c-441f-b6a7-57cfc35c083e f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Attempting to attach volume 3e0695d3-0aeb-4c90-804b-6098401a9775 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Oct 10 23:46:28 np0005480824 nova_compute[260089]: 2025-10-11 03:46:28.006 2 DEBUG nova.virt.libvirt.guest [None req-698dae5b-cb4c-441f-b6a7-57cfc35c083e f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] attach device xml: <disk type="network" device="disk">
Oct 10 23:46:28 np0005480824 nova_compute[260089]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 10 23:46:28 np0005480824 nova_compute[260089]:  <source protocol="rbd" name="volumes/volume-3e0695d3-0aeb-4c90-804b-6098401a9775">
Oct 10 23:46:28 np0005480824 nova_compute[260089]:    <host name="192.168.122.100" port="6789"/>
Oct 10 23:46:28 np0005480824 nova_compute[260089]:  </source>
Oct 10 23:46:28 np0005480824 nova_compute[260089]:  <auth username="openstack">
Oct 10 23:46:28 np0005480824 nova_compute[260089]:    <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 10 23:46:28 np0005480824 nova_compute[260089]:  </auth>
Oct 10 23:46:28 np0005480824 nova_compute[260089]:  <target dev="vdb" bus="virtio"/>
Oct 10 23:46:28 np0005480824 nova_compute[260089]:  <serial>3e0695d3-0aeb-4c90-804b-6098401a9775</serial>
Oct 10 23:46:28 np0005480824 nova_compute[260089]:  <encryption format="luks">
Oct 10 23:46:28 np0005480824 nova_compute[260089]:    <secret type="passphrase" uuid="d9078c25-6384-48e5-ab91-a7e62421ec91"/>
Oct 10 23:46:28 np0005480824 nova_compute[260089]:  </encryption>
Oct 10 23:46:28 np0005480824 nova_compute[260089]: </disk>
Oct 10 23:46:28 np0005480824 nova_compute[260089]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct 10 23:46:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 23:46:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:46:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 23:46:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:46:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:46:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:46:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:46:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:46:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:46:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:46:28 np0005480824 nova_compute[260089]: 2025-10-11 03:46:28.297 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:46:28 np0005480824 nova_compute[260089]: 2025-10-11 03:46:28.298 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 10 23:46:28 np0005480824 nova_compute[260089]: 2025-10-11 03:46:28.299 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 10 23:46:28 np0005480824 nova_compute[260089]: 2025-10-11 03:46:28.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:46:28 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v947: 321 pgs: 321 active+clean; 121 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 424 KiB/s rd, 2.1 MiB/s wr, 111 op/s
Oct 10 23:46:28 np0005480824 nova_compute[260089]: 2025-10-11 03:46:28.616 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "refresh_cache-349e8a73-9a19-4cee-89a9-50edc475a575" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:46:28 np0005480824 nova_compute[260089]: 2025-10-11 03:46:28.617 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquired lock "refresh_cache-349e8a73-9a19-4cee-89a9-50edc475a575" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:46:28 np0005480824 nova_compute[260089]: 2025-10-11 03:46:28.617 2 DEBUG nova.network.neutron [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 10 23:46:28 np0005480824 nova_compute[260089]: 2025-10-11 03:46:28.617 2 DEBUG nova.objects.instance [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 349e8a73-9a19-4cee-89a9-50edc475a575 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:46:29 np0005480824 podman[268215]: 2025-10-11 03:46:29.023551613 +0000 UTC m=+0.084271679 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent)
Oct 10 23:46:29 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:46:29 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3443989649' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:46:29 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:46:29 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3443989649' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:46:29 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:46:29 np0005480824 nova_compute[260089]: 2025-10-11 03:46:29.556 2 DEBUG nova.network.neutron [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Updating instance_info_cache with network_info: [{"id": "ae972b5d-4250-48ac-9b8a-e35678042b82", "address": "fa:16:3e:68:49:57", "network": {"id": "fa909145-5687-40b4-825d-ce6ac3b98885", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-311959019-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d7871f9f8a74d2d85dc275b42df9042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae972b5d-42", "ovs_interfaceid": "ae972b5d-4250-48ac-9b8a-e35678042b82", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:46:29 np0005480824 nova_compute[260089]: 2025-10-11 03:46:29.573 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Releasing lock "refresh_cache-349e8a73-9a19-4cee-89a9-50edc475a575" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:46:29 np0005480824 nova_compute[260089]: 2025-10-11 03:46:29.573 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 10 23:46:29 np0005480824 nova_compute[260089]: 2025-10-11 03:46:29.574 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:46:29 np0005480824 nova_compute[260089]: 2025-10-11 03:46:29.575 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:46:29 np0005480824 nova_compute[260089]: 2025-10-11 03:46:29.575 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:46:29 np0005480824 nova_compute[260089]: 2025-10-11 03:46:29.598 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:46:29 np0005480824 nova_compute[260089]: 2025-10-11 03:46:29.599 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:46:29 np0005480824 nova_compute[260089]: 2025-10-11 03:46:29.600 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:46:29 np0005480824 nova_compute[260089]: 2025-10-11 03:46:29.600 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 10 23:46:29 np0005480824 nova_compute[260089]: 2025-10-11 03:46:29.601 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:46:30 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:46:30 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/57230100' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:46:30 np0005480824 nova_compute[260089]: 2025-10-11 03:46:30.085 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:46:30 np0005480824 nova_compute[260089]: 2025-10-11 03:46:30.176 2 DEBUG nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 10 23:46:30 np0005480824 nova_compute[260089]: 2025-10-11 03:46:30.176 2 DEBUG nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 10 23:46:30 np0005480824 nova_compute[260089]: 2025-10-11 03:46:30.410 2 WARNING nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 10 23:46:30 np0005480824 nova_compute[260089]: 2025-10-11 03:46:30.412 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4622MB free_disk=59.94271469116211GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 10 23:46:30 np0005480824 nova_compute[260089]: 2025-10-11 03:46:30.413 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:46:30 np0005480824 nova_compute[260089]: 2025-10-11 03:46:30.413 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:46:30 np0005480824 nova_compute[260089]: 2025-10-11 03:46:30.495 2 DEBUG nova.virt.libvirt.driver [None req-698dae5b-cb4c-441f-b6a7-57cfc35c083e f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 23:46:30 np0005480824 nova_compute[260089]: 2025-10-11 03:46:30.496 2 DEBUG nova.virt.libvirt.driver [None req-698dae5b-cb4c-441f-b6a7-57cfc35c083e f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 23:46:30 np0005480824 nova_compute[260089]: 2025-10-11 03:46:30.496 2 DEBUG nova.virt.libvirt.driver [None req-698dae5b-cb4c-441f-b6a7-57cfc35c083e f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 23:46:30 np0005480824 nova_compute[260089]: 2025-10-11 03:46:30.497 2 DEBUG nova.virt.libvirt.driver [None req-698dae5b-cb4c-441f-b6a7-57cfc35c083e f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] No VIF found with MAC fa:16:3e:68:49:57, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 10 23:46:30 np0005480824 nova_compute[260089]: 2025-10-11 03:46:30.501 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Instance 349e8a73-9a19-4cee-89a9-50edc475a575 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 10 23:46:30 np0005480824 nova_compute[260089]: 2025-10-11 03:46:30.501 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 10 23:46:30 np0005480824 nova_compute[260089]: 2025-10-11 03:46:30.501 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 10 23:46:30 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v948: 321 pgs: 321 active+clean; 121 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 410 KiB/s rd, 2.1 MiB/s wr, 95 op/s
Oct 10 23:46:30 np0005480824 nova_compute[260089]: 2025-10-11 03:46:30.547 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:46:30 np0005480824 nova_compute[260089]: 2025-10-11 03:46:30.862 2 DEBUG oslo_concurrency.lockutils [None req-698dae5b-cb4c-441f-b6a7-57cfc35c083e f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Lock "349e8a73-9a19-4cee-89a9-50edc475a575" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 5.918s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:46:31 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:46:31 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1837633138' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:46:31 np0005480824 nova_compute[260089]: 2025-10-11 03:46:31.050 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:46:31 np0005480824 nova_compute[260089]: 2025-10-11 03:46:31.061 2 DEBUG nova.compute.provider_tree [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Updating inventory in ProviderTree for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 10 23:46:31 np0005480824 nova_compute[260089]: 2025-10-11 03:46:31.111 2 ERROR nova.scheduler.client.report [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] [req-ffd14d0e-a918-4d6f-abe7-33019d9bff0b] Failed to update inventory to [{'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-ffd14d0e-a918-4d6f-abe7-33019d9bff0b"}]}#033[00m
Oct 10 23:46:31 np0005480824 nova_compute[260089]: 2025-10-11 03:46:31.134 2 DEBUG nova.scheduler.client.report [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Refreshing inventories for resource provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct 10 23:46:31 np0005480824 nova_compute[260089]: 2025-10-11 03:46:31.153 2 DEBUG nova.scheduler.client.report [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Updating ProviderTree inventory for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct 10 23:46:31 np0005480824 nova_compute[260089]: 2025-10-11 03:46:31.155 2 DEBUG nova.compute.provider_tree [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Updating inventory in ProviderTree for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 10 23:46:31 np0005480824 nova_compute[260089]: 2025-10-11 03:46:31.174 2 DEBUG nova.scheduler.client.report [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Refreshing aggregate associations for resource provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct 10 23:46:31 np0005480824 nova_compute[260089]: 2025-10-11 03:46:31.210 2 DEBUG nova.scheduler.client.report [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Refreshing trait associations for resource provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72, traits: COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE,HW_CPU_X86_SSE41,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_MMX,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_DEVICE_TAGGING,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_AVX,HW_CPU_X86_SHA,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSSE3,HW_CPU_X86_BMI2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_FDC,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_CLMUL,HW_CPU_X86_AVX2,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_PCNET _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct 10 23:46:31 np0005480824 nova_compute[260089]: 2025-10-11 03:46:31.269 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:46:31 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:46:31 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4283808684' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:46:31 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:46:31 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4283808684' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:46:31 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:46:31 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3972261141' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:46:31 np0005480824 nova_compute[260089]: 2025-10-11 03:46:31.762 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:46:31 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:46:31 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1396820497' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:46:31 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:46:31 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1396820497' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:46:31 np0005480824 nova_compute[260089]: 2025-10-11 03:46:31.771 2 DEBUG nova.compute.provider_tree [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Updating inventory in ProviderTree for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 10 23:46:31 np0005480824 nova_compute[260089]: 2025-10-11 03:46:31.826 2 DEBUG nova.scheduler.client.report [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Updated inventory for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Oct 10 23:46:31 np0005480824 nova_compute[260089]: 2025-10-11 03:46:31.826 2 DEBUG nova.compute.provider_tree [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Updating resource provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Oct 10 23:46:31 np0005480824 nova_compute[260089]: 2025-10-11 03:46:31.827 2 DEBUG nova.compute.provider_tree [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Updating inventory in ProviderTree for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 10 23:46:31 np0005480824 nova_compute[260089]: 2025-10-11 03:46:31.855 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 10 23:46:31 np0005480824 nova_compute[260089]: 2025-10-11 03:46:31.856 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.443s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:46:32 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v949: 321 pgs: 321 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 447 KiB/s rd, 2.1 MiB/s wr, 118 op/s
Oct 10 23:46:32 np0005480824 nova_compute[260089]: 2025-10-11 03:46:32.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:46:33 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:46:33 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2882363715' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:46:33 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:46:33 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2882363715' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:46:33 np0005480824 nova_compute[260089]: 2025-10-11 03:46:33.217 2 DEBUG nova.compute.manager [req-36a44404-8493-459d-9f1c-b6ceb73f4a73 req-8b43deea-0c1d-4751-9db1-cddb590f28fd e164ff95c6c84a77b0287b454f7aa48c 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Received event volume-extended-3e0695d3-0aeb-4c90-804b-6098401a9775 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:46:33 np0005480824 nova_compute[260089]: 2025-10-11 03:46:33.238 2 DEBUG nova.compute.manager [req-36a44404-8493-459d-9f1c-b6ceb73f4a73 req-8b43deea-0c1d-4751-9db1-cddb590f28fd e164ff95c6c84a77b0287b454f7aa48c 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Handling volume-extended event for volume 3e0695d3-0aeb-4c90-804b-6098401a9775 extend_volume /usr/lib/python3.9/site-packages/nova/compute/manager.py:10896#033[00m
Oct 10 23:46:33 np0005480824 nova_compute[260089]: 2025-10-11 03:46:33.260 2 INFO nova.compute.manager [req-36a44404-8493-459d-9f1c-b6ceb73f4a73 req-8b43deea-0c1d-4751-9db1-cddb590f28fd e164ff95c6c84a77b0287b454f7aa48c 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Cinder extended volume 3e0695d3-0aeb-4c90-804b-6098401a9775; extending it to detect new size#033[00m
Oct 10 23:46:33 np0005480824 nova_compute[260089]: 2025-10-11 03:46:33.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:46:33 np0005480824 nova_compute[260089]: 2025-10-11 03:46:33.533 2 DEBUG os_brick.encryptors [req-36a44404-8493-459d-9f1c-b6ceb73f4a73 req-8b43deea-0c1d-4751-9db1-cddb590f28fd e164ff95c6c84a77b0287b454f7aa48c 138b87de781b4b829b248ab2d1714fea - - default default] Using volume encryption metadata '{'encryption_key_id': '25309e87-897e-4c80-8589-6ced7815e1f9', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-3e0695d3-0aeb-4c90-804b-6098401a9775', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '3e0695d3-0aeb-4c90-804b-6098401a9775', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '349e8a73-9a19-4cee-89a9-50edc475a575', 'attached_at': '', 'detached_at': '', 'volume_id': '3e0695d3-0aeb-4c90-804b-6098401a9775', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Oct 10 23:46:33 np0005480824 nova_compute[260089]: 2025-10-11 03:46:33.535 2 INFO oslo.privsep.daemon [req-36a44404-8493-459d-9f1c-b6ceb73f4a73 req-8b43deea-0c1d-4751-9db1-cddb590f28fd e164ff95c6c84a77b0287b454f7aa48c 138b87de781b4b829b248ab2d1714fea - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpb19gkq7f/privsep.sock']#033[00m
Oct 10 23:46:34 np0005480824 nova_compute[260089]: 2025-10-11 03:46:34.304 2 INFO oslo.privsep.daemon [req-36a44404-8493-459d-9f1c-b6ceb73f4a73 req-8b43deea-0c1d-4751-9db1-cddb590f28fd e164ff95c6c84a77b0287b454f7aa48c 138b87de781b4b829b248ab2d1714fea - - default default] Spawned new privsep daemon via rootwrap#033[00m
Oct 10 23:46:34 np0005480824 nova_compute[260089]: 2025-10-11 03:46:34.130 755 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Oct 10 23:46:34 np0005480824 nova_compute[260089]: 2025-10-11 03:46:34.134 755 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Oct 10 23:46:34 np0005480824 nova_compute[260089]: 2025-10-11 03:46:34.136 755 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Oct 10 23:46:34 np0005480824 nova_compute[260089]: 2025-10-11 03:46:34.137 755 INFO oslo.privsep.daemon [-] privsep daemon running as pid 755#033[00m
Oct 10 23:46:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:46:34 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v950: 321 pgs: 321 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 200 KiB/s rd, 594 KiB/s wr, 51 op/s
Oct 10 23:46:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:46:34 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1593666409' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:46:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:46:34 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1593666409' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:46:34 np0005480824 nova_compute[260089]: 2025-10-11 03:46:34.579 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:46:34 np0005480824 systemd[1]: Created slice Slice /system/systemd-coredump.
Oct 10 23:46:34 np0005480824 systemd[1]: Started Process Core Dump (PID 268326/UID 0).
Oct 10 23:46:35 np0005480824 systemd-coredump[268327]: Process 268307 (qemu-img) of user 0 dumped core.#012#012Stack trace of thread 767:#012#0  0x00007f065dd3003c __pthread_kill_implementation (libc.so.6 + 0x8d03c)#012#1  0x00007f065dce2b86 raise (libc.so.6 + 0x3fb86)#012#2  0x00007f065dccc873 abort (libc.so.6 + 0x29873)#012#3  0x00005636f4af556f ___interceptor_pthread_create (qemu-img + 0x4e56f)#012#4  0x00007f065af06ff4 _ZN6Thread10try_createEm (libceph-common.so.2 + 0x258ff4)#012#5  0x00007f065af096ae _ZN6Thread6createEPKcm (libceph-common.so.2 + 0x25b6ae)#012#6  0x00007f065be1026b _ZNSt8_Rb_treeISt4pairINSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEESt10type_indexES0_IKS8_N4ceph12immobile_anyILm576EEEESt10_Select1stISD_ENSA_6common11CephContext19associated_objs_cmpESaISD_EE22_M_emplace_hint_uniqueIJRKSt21piecewise_construct_tSt5tupleIJRSt17basic_string_viewIcS4_ERS7_EESP_IJRKSt15in_place_type_tIN6librbd21TaskFinisherSingletonEERPSH_EEEEESt17_Rb_tree_iteratorISD_ESt23_Rb_tree_const_iteratorISD_EDpOT_.constprop.0 (librbd.so.1 + 0x51126b)#012#7  0x00007f065ba3d7a6 _ZN6librbd8ImageCtx4initEv (librbd.so.1 + 0x13e7a6)#012#8  0x00007f065bb172d3 _ZN6librbd5image11OpenRequestINS_8ImageCtxEE12send_refreshEv (librbd.so.1 + 0x2182d3)#012#9  0x00007f065bb17f46 _ZN6librbd5image11OpenRequestINS_8ImageCtxEE23handle_v2_get_data_poolEPi (librbd.so.1 + 0x218f46)#012#10 0x00007f065bb182a7 _ZN6librbd4util6detail20rados_state_callbackINS_5image11OpenRequestINS_8ImageCtxEEEXadL_ZNS6_23handle_v2_get_data_poolEPiEELb1EEEvPvS8_ (librbd.so.1 + 0x2192a7)#012#11 0x00007f065b8160ac _ZN5boost4asio6detail18completion_handlerINS1_7binder0IN8librados14CB_AioCompleteEEENS0_10io_context19basic_executor_typeISaIvELm0EEEE11do_completeEPvPNS1_19scheduler_operationERKNS_6system10error_codeEm (librados.so.2 + 0xad0ac)#012#12 0x00007f065b815585 _ZN5boost4asio6detail14strand_service11do_completeEPvPNS1_19scheduler_operationERKNS_6system10error_codeEm (librados.so.2 + 0xac585)#012#13 0x00007f065b890498 _ZN5boost4asio6detail9scheduler3runERNS_6system10error_codeE.constprop.0.isra.0 (librados.so.2 + 0x127498)#012#14 0x00007f065b82f4e4 _ZNSt6thread11_State_implINS_8_InvokerISt5tupleIJZ17make_named_threadIZN4ceph5async15io_context_pool5startEsEUlvE_JEES_St17basic_string_viewIcSt11char_traitsIcEEOT_DpOT0_EUlSD_SG_E_S7_EEEEE6_M_runEv (librados.so.2 + 0xc64e4)#012#15 0x00007f065a59dae4 execute_native_thread_routine (libstdc++.so.6 + 0xdbae4)#012#16 0x00007f065dd2e2fa start_thread (libc.so.6 + 0x8b2fa)#012#17 0x00007f065ddb3540 __clone3 (libc.so.6 + 0x110540)#012#012Stack trace of thread 757:#012#0  0x00007f065dd2b38a __futex_abstimed_wait_common (libc.so.6 + 0x8838a)#012#1  0x00007f065dd2d8e2 pthread_cond_wait@@GLIBC_2.3.2 (libc.so.6 + 0x8a8e2)#012#2  0x00007f065a5976c0 _ZNSt18condition_variable4waitERSt11unique_lockISt5mutexE (libstdc++.so.6 + 0xd56c0)#012#3  0x00007f065ba44eb3 _ZN6librbd10ImageStateINS_8ImageCtxEE4openEm (librbd.so.1 + 0x145eb3)#012#4  0x00007f065ba14fcb rbd_open (librbd.so.1 + 0x115fcb)#012#5  0x00007f065bfbf89d qemu_rbd_open (block-rbd.so + 0x489d)#012#6  0x00005636f4b05e4c bdrv_open_driver.llvm.6332234179151191066 (qemu-img + 0x5ee4c)#012#7  0x00005636f4b0ab6b bdrv_open_inherit.llvm.6332234179151191066 (qemu-img + 0x63b6b)#012#8  0x00005636f4b175ce bdrv_open_child_bs.llvm.6332234179151191066 (qemu-img + 0x705ce)#012#9  0x00005636f4b0a396 bdrv_open_inherit.llvm.6332234179151191066 (qemu-img + 0x63396)#012#10 0x00005636f4b381f5 blk_new_open (qemu-img + 0x911f5)#012#11 0x00005636f4bf3e16 img_open_file (qemu-img + 0x14ce16)#012#12 0x00005636f4bf39e0 img_open (qemu-img + 0x14c9e0)#012#13 0x00005636f4befc1d img_info (qemu-img + 0x148c1d)#012#14 0x00005636f4be9638 main (qemu-img + 0x142638)#012#15 0x00007f065dccd610 __libc_start_call_main (libc.so.6 + 0x2a610)#012#16 0x00007f065dccd6c0 __libc_start_main@@GLIBC_2.34 (libc.so.6 + 0x2a6c0)#012#17 0x00005636f4af5215 _start (qemu-img + 0x4e215)#012#012Stack trace of thread 758:#012#0  0x00007f065ddab96d syscall (libc.so.6 + 0x10896d)#012#1  0x00005636f4c74f73 qemu_event_wait (qemu-img + 0x1cdf73)#012#2  0x00005636f4c81f87 call_rcu_thread (qemu-img + 0x1daf87)#012#3  0x00005636f4c752ba qemu_thread_start.llvm.7701297430486814853 (qemu-img + 0x1ce2ba)#012#4  0x00007f065dd2e2fa start_thread (libc.so.6 + 0x8b2fa)#012#5  0x00007f065ddb3540 __clone3 (libc.so.6 + 0x110540)#012#012Stack trace of thread 761:#012#0  0x00007f065ddb2b7e epoll_wait (libc.so.6 + 0x10fb7e)#012#1  0x00007f065b0ee618 _ZN11EpollDriver10event_waitERSt6vectorI14FiredFileEventSaIS1_EEP7timeval (libceph-common.so.2 + 0x440618)#012#2  0x00007f065b0ec702 _ZN11EventCenter14process_eventsEjPNSt6chrono8durationImSt5ratioILl1ELl1000000000EEEE (libceph-common.so.2 + 0x43e702)#012#3  0x00007f065b0ed2c6 _ZNSt17_Function_handlerIFvvEZN12NetworkStack10add_threadEP6WorkerEUlvE_E9_M_invokeERKSt9_Any_data (libceph-common.so.2 + 0x43f2c6)#012#4  0x00007f065a59dae4 execute_native_thread_routine (libstdc++.so.6 + 0xdbae4)#012#5  0x00007f065dd2e2fa start_thread (libc.so.6 + 0x8b2fa)#012#6  0x00007f065ddb3540 __clone3 (libc.so.6 + 0x110540)#012#012Stack trace of thread 759:#012#0  0x00007f065dd2b38a __futex_abstimed_wait_common (libc.so.6 + 0x8838a)#012#1  0x00007f065dd2d8e2 pthread_cond_wait@@GLIBC_2.3.2 (libc.so.6 + 0x8a8e2)#012#2  0x00007f065a5976c0 _ZNSt18condition_variable4waitERSt11unique_lockISt5mutexE (libstdc++.so.6 + 0xd56c0)#012#3  0x00007f065b1190a2 _ZN4ceph7logging3Log5entryEv (libceph-common.so.2 + 0x46b0a2)#012#4  0x00007f065dd2e2fa start_thread (libc.so.6 + 0x8b2fa)#012#5  0x00007f065ddb3540 __clone3 (libc.so.6 + 0x110540)#012#012Stack trace of thread 769:#012#0  0x00007f065dd2b38a __futex_abstimed_wait_common (libc.so.6 + 0x8838a)#012#1  0x00007f065dd2dcc0 pthread_cond_clockwait@GLIBC_2.30 (libc.so.6 + 0x8acc0)#012#2  0x00007f065b868364 _ZN4ceph5timerINS_17coarse_mono_clockEE12timer_threadEv (librados.so.2 + 0xff364)#012#3  0x00007f065a59dae4 execute_native_thread_routine 
(libstdc++.so.6 + 0xdbae4)#012#4  0x00007f065dd2e2fa start_thread (libc.so.6 + 0x8b2fa)#012#5  0x00007f065ddb3540 __clone3 (libc.so.6 + 0x110540)#012#012Stack trace of thread 771:#012#0  0x00007f065dd2b38a __futex_abstimed_wait_common (libc.so.6 + 0x8838a)#012#1  0x00007f065dd2d8e2 pthread_cond_wait@@GLIBC_2.3.2 (libc.so.6 + 0x8a8e2)#012#2  0x00007f065a5976c0 _ZNSt18condition_variable4waitERSt11unique_lockISt5mutexE (libstdc++.so.6 + 0xd56c0)#012#3  0x00007f065b0150b9 _ZN13DispatchQueue18run_local_deliveryEv (libceph-common.so.2 + 0x3670b9)#012#4  0x00007f065b0a6431 _ZN13DispatchQueue19LocalDeliveryThread5entryEv (libceph-common.so.2 + 0x3f8431)#012#5  0x00007f065dd2e2fa start_thread (libc.so.6 + 0x8b2fa)#012#6  0x00007f065ddb3540 __clone3 (libc.so.6 + 0x110540)#012#012Stack trace of thread 766:#012#0  0x00007f065dd2b38a __futex_abstimed_wait_common (libc.so.6 + 0x8838a)#012#1  0x00007f065dd2dcc0 pthread_cond_clockwait@GLIBC_2.30 (libc.so.6 + 0x8acc0)#012#2  0x00007f065af27150 _ZN4ceph6common24CephContextServiceThread5entryEv (libceph-common.so.2 + 0x279150)#012#3  0x00007f065dd2e2fa start_thread (libc.so.6 + 0x8b2fa)#012#4  0x00007f065ddb3540 __clone3 (libc.so.6 + 0x110540)#012#012Stack trace of thread 773:#012#0  0x00007f065dd2b38a __futex_abstimed_wait_common (libc.so.6 + 0x8838a)#012#1  0x00007f065dd2d8e2 pthread_cond_wait@@GLIBC_2.3.2 (libc.so.6 + 0x8a8e2)#012#2  0x00007f065a5976c0 _ZNSt18condition_variable4waitERSt11unique_lockISt5mutexE (libstdc++.so.6 + 0xd56c0)#012#3  0x00007f065af0c7f8 _ZN15CommonSafeTimerISt5mutexE12timer_threadEv (libceph-common.so.2 + 0x25e7f8)#012#4  0x00007f065af0cf81 _ZN21CommonSafeTimerThreadISt5mutexE5entryEv (libceph-common.so.2 + 0x25ef81)#012#5  0x00007f065dd2e2fa start_thread (libc.so.6 + 0x8b2fa)#012#6  0x00007f065ddb3540 __clone3 (libc.so.6 + 0x110540)#012#012Stack trace of thread 775:#012#0  0x00007f065dd2b38a __futex_abstimed_wait_common (libc.so.6 + 0x8838a)#012#1  0x00007f065dd2d8e2 pthread_cond_wait@@GLIBC_2.3.2 
(libc.so.6 + 0x8a8e2)#012#2  0x00007f065a5976c0 _ZNSt18condition_variable4waitERSt11unique_lockISt5mutexE (libstdc++.so.6 + 0xd56c0)#012#3  0x00007f065af0c7f8 _ZN15CommonSafeTimerISt5mutexE12ti
Oct 10 23:46:35 np0005480824 systemd[1]: systemd-coredump@0-268326-0.service: Deactivated successfully.
Oct 10 23:46:35 np0005480824 systemd[1]: systemd-coredump@0-268326-0.service: Consumed 1.050s CPU time.
Oct 10 23:46:35 np0005480824 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 10 23:46:35 np0005480824 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 10 23:46:35 np0005480824 nova_compute[260089]: 2025-10-11 03:46:35.824 2 ERROR nova.virt.libvirt.driver [req-36a44404-8493-459d-9f1c-b6ceb73f4a73 req-8b43deea-0c1d-4751-9db1-cddb590f28fd e164ff95c6c84a77b0287b454f7aa48c 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Unknown error when attempting to find the payload_offset for LUKSv1 encrypted disk rbd:volumes/volume-3e0695d3-0aeb-4c90-804b-6098401a9775:id=openstack.: nova.exception.InvalidDiskInfo: Disk info file is invalid: qemu-img failed to execute on rbd:volumes/volume-3e0695d3-0aeb-4c90-804b-6098401a9775:id=openstack : Unexpected error while running command.
Oct 10 23:46:35 np0005480824 nova_compute[260089]: Command: /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info rbd:volumes/volume-3e0695d3-0aeb-4c90-804b-6098401a9775:id=openstack --force-share --output=json
Oct 10 23:46:35 np0005480824 nova_compute[260089]: Exit code: -6
Oct 10 23:46:35 np0005480824 nova_compute[260089]: Stdout: ''
Oct 10 23:46:35 np0005480824 nova_compute[260089]: Stderr: 'safestack CHECK failed: /builddir/build/BUILD/llvm-project-20.1.8.src/compiler-rt/lib/safestack/safestack.cpp:120 MAP_FAILED != addr\n'
Oct 10 23:46:35 np0005480824 nova_compute[260089]: 2025-10-11 03:46:35.824 2 ERROR nova.virt.libvirt.driver [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Traceback (most recent call last):
Oct 10 23:46:35 np0005480824 nova_compute[260089]: 2025-10-11 03:46:35.824 2 ERROR nova.virt.libvirt.driver [instance: 349e8a73-9a19-4cee-89a9-50edc475a575]   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 2788, in _resize_attached_encrypted_volume
Oct 10 23:46:35 np0005480824 nova_compute[260089]: 2025-10-11 03:46:35.824 2 ERROR nova.virt.libvirt.driver [instance: 349e8a73-9a19-4cee-89a9-50edc475a575]     info = images.privileged_qemu_img_info(path)
Oct 10 23:46:35 np0005480824 nova_compute[260089]: 2025-10-11 03:46:35.824 2 ERROR nova.virt.libvirt.driver [instance: 349e8a73-9a19-4cee-89a9-50edc475a575]   File "/usr/lib/python3.9/site-packages/nova/virt/images.py", line 57, in privileged_qemu_img_info
Oct 10 23:46:35 np0005480824 nova_compute[260089]: 2025-10-11 03:46:35.824 2 ERROR nova.virt.libvirt.driver [instance: 349e8a73-9a19-4cee-89a9-50edc475a575]     info = nova.privsep.qemu.privileged_qemu_img_info(path, format=format)
Oct 10 23:46:35 np0005480824 nova_compute[260089]: 2025-10-11 03:46:35.824 2 ERROR nova.virt.libvirt.driver [instance: 349e8a73-9a19-4cee-89a9-50edc475a575]   File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap
Oct 10 23:46:35 np0005480824 nova_compute[260089]: 2025-10-11 03:46:35.824 2 ERROR nova.virt.libvirt.driver [instance: 349e8a73-9a19-4cee-89a9-50edc475a575]     return self.channel.remote_call(name, args, kwargs,
Oct 10 23:46:35 np0005480824 nova_compute[260089]: 2025-10-11 03:46:35.824 2 ERROR nova.virt.libvirt.driver [instance: 349e8a73-9a19-4cee-89a9-50edc475a575]   File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call
Oct 10 23:46:35 np0005480824 nova_compute[260089]: 2025-10-11 03:46:35.824 2 ERROR nova.virt.libvirt.driver [instance: 349e8a73-9a19-4cee-89a9-50edc475a575]     raise exc_type(*result[2])
Oct 10 23:46:35 np0005480824 nova_compute[260089]: 2025-10-11 03:46:35.824 2 ERROR nova.virt.libvirt.driver [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] nova.exception.InvalidDiskInfo: Disk info file is invalid: qemu-img failed to execute on rbd:volumes/volume-3e0695d3-0aeb-4c90-804b-6098401a9775:id=openstack : Unexpected error while running command.
Oct 10 23:46:35 np0005480824 nova_compute[260089]: 2025-10-11 03:46:35.824 2 ERROR nova.virt.libvirt.driver [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Command: /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info rbd:volumes/volume-3e0695d3-0aeb-4c90-804b-6098401a9775:id=openstack --force-share --output=json
Oct 10 23:46:35 np0005480824 nova_compute[260089]: 2025-10-11 03:46:35.824 2 ERROR nova.virt.libvirt.driver [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Exit code: -6
Oct 10 23:46:35 np0005480824 nova_compute[260089]: 2025-10-11 03:46:35.824 2 ERROR nova.virt.libvirt.driver [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Stdout: ''
Oct 10 23:46:35 np0005480824 nova_compute[260089]: 2025-10-11 03:46:35.824 2 ERROR nova.virt.libvirt.driver [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Stderr: 'safestack CHECK failed: /builddir/build/BUILD/llvm-project-20.1.8.src/compiler-rt/lib/safestack/safestack.cpp:120 MAP_FAILED != addr\n'
Oct 10 23:46:35 np0005480824 nova_compute[260089]: 2025-10-11 03:46:35.824 2 ERROR nova.virt.libvirt.driver [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] #033[00m
Oct 10 23:46:35 np0005480824 nova_compute[260089]: 2025-10-11 03:46:35.833 2 WARNING nova.compute.manager [req-36a44404-8493-459d-9f1c-b6ceb73f4a73 req-8b43deea-0c1d-4751-9db1-cddb590f28fd e164ff95c6c84a77b0287b454f7aa48c 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Extend volume failed, volume_id=3e0695d3-0aeb-4c90-804b-6098401a9775, reason: Disk info file is invalid: qemu-img failed to execute on rbd:volumes/volume-3e0695d3-0aeb-4c90-804b-6098401a9775:id=openstack : Unexpected error while running command.
Oct 10 23:46:35 np0005480824 nova_compute[260089]: Command: /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info rbd:volumes/volume-3e0695d3-0aeb-4c90-804b-6098401a9775:id=openstack --force-share --output=json
Oct 10 23:46:35 np0005480824 nova_compute[260089]: Exit code: -6
Oct 10 23:46:35 np0005480824 nova_compute[260089]: Stdout: ''
Oct 10 23:46:35 np0005480824 nova_compute[260089]: Stderr: 'safestack CHECK failed: /builddir/build/BUILD/llvm-project-20.1.8.src/compiler-rt/lib/safestack/safestack.cpp:120 MAP_FAILED != addr\n': nova.exception.InvalidDiskInfo: Disk info file is invalid: qemu-img failed to execute on rbd:volumes/volume-3e0695d3-0aeb-4c90-804b-6098401a9775:id=openstack : Unexpected error while running command.#033[00m
Oct 10 23:46:35 np0005480824 nova_compute[260089]: 2025-10-11 03:46:35.874 2 ERROR oslo_messaging.rpc.server [req-36a44404-8493-459d-9f1c-b6ceb73f4a73 req-8b43deea-0c1d-4751-9db1-cddb590f28fd e164ff95c6c84a77b0287b454f7aa48c 138b87de781b4b829b248ab2d1714fea - - default default] Exception during message handling: nova.exception.InvalidDiskInfo: Disk info file is invalid: qemu-img failed to execute on rbd:volumes/volume-3e0695d3-0aeb-4c90-804b-6098401a9775:id=openstack : Unexpected error while running command.
Oct 10 23:46:35 np0005480824 nova_compute[260089]: Command: /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info rbd:volumes/volume-3e0695d3-0aeb-4c90-804b-6098401a9775:id=openstack --force-share --output=json
Oct 10 23:46:35 np0005480824 nova_compute[260089]: Exit code: -6
Oct 10 23:46:35 np0005480824 nova_compute[260089]: Stdout: ''
Oct 10 23:46:35 np0005480824 nova_compute[260089]: Stderr: 'safestack CHECK failed: /builddir/build/BUILD/llvm-project-20.1.8.src/compiler-rt/lib/safestack/safestack.cpp:120 MAP_FAILED != addr\n'
Oct 10 23:46:35 np0005480824 nova_compute[260089]: 2025-10-11 03:46:35.874 2 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
Oct 10 23:46:35 np0005480824 nova_compute[260089]: 2025-10-11 03:46:35.874 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming
Oct 10 23:46:35 np0005480824 nova_compute[260089]: 2025-10-11 03:46:35.874 2 ERROR oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)
Oct 10 23:46:35 np0005480824 nova_compute[260089]: 2025-10-11 03:46:35.874 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch
Oct 10 23:46:35 np0005480824 nova_compute[260089]: 2025-10-11 03:46:35.874 2 ERROR oslo_messaging.rpc.server     return self._do_dispatch(endpoint, method, ctxt, args)
Oct 10 23:46:35 np0005480824 nova_compute[260089]: 2025-10-11 03:46:35.874 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch
Oct 10 23:46:35 np0005480824 nova_compute[260089]: 2025-10-11 03:46:35.874 2 ERROR oslo_messaging.rpc.server     result = func(ctxt, **new_args)
Oct 10 23:46:35 np0005480824 nova_compute[260089]: 2025-10-11 03:46:35.874 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 71, in wrapped
Oct 10 23:46:35 np0005480824 nova_compute[260089]: 2025-10-11 03:46:35.874 2 ERROR oslo_messaging.rpc.server     _emit_versioned_exception_notification(
Oct 10 23:46:35 np0005480824 nova_compute[260089]: 2025-10-11 03:46:35.874 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Oct 10 23:46:35 np0005480824 nova_compute[260089]: 2025-10-11 03:46:35.874 2 ERROR oslo_messaging.rpc.server     self.force_reraise()
Oct 10 23:46:35 np0005480824 nova_compute[260089]: 2025-10-11 03:46:35.874 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Oct 10 23:46:35 np0005480824 nova_compute[260089]: 2025-10-11 03:46:35.874 2 ERROR oslo_messaging.rpc.server     raise self.value
Oct 10 23:46:35 np0005480824 nova_compute[260089]: 2025-10-11 03:46:35.874 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 63, in wrapped
Oct 10 23:46:35 np0005480824 nova_compute[260089]: 2025-10-11 03:46:35.874 2 ERROR oslo_messaging.rpc.server     return f(self, context, *args, **kw)
Oct 10 23:46:35 np0005480824 nova_compute[260089]: 2025-10-11 03:46:35.874 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 11073, in external_instance_event
Oct 10 23:46:35 np0005480824 nova_compute[260089]: 2025-10-11 03:46:35.874 2 ERROR oslo_messaging.rpc.server     self.extend_volume(context, instance, event.tag)
Oct 10 23:46:35 np0005480824 nova_compute[260089]: 2025-10-11 03:46:35.874 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/utils.py", line 1439, in decorated_function
Oct 10 23:46:35 np0005480824 nova_compute[260089]: 2025-10-11 03:46:35.874 2 ERROR oslo_messaging.rpc.server     return function(self, context, *args, **kwargs)
Oct 10 23:46:35 np0005480824 nova_compute[260089]: 2025-10-11 03:46:35.874 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 214, in decorated_function
Oct 10 23:46:35 np0005480824 nova_compute[260089]: 2025-10-11 03:46:35.874 2 ERROR oslo_messaging.rpc.server     compute_utils.add_instance_fault_from_exc(context,
Oct 10 23:46:35 np0005480824 nova_compute[260089]: 2025-10-11 03:46:35.874 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Oct 10 23:46:35 np0005480824 nova_compute[260089]: 2025-10-11 03:46:35.874 2 ERROR oslo_messaging.rpc.server     self.force_reraise()
Oct 10 23:46:35 np0005480824 nova_compute[260089]: 2025-10-11 03:46:35.874 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Oct 10 23:46:35 np0005480824 nova_compute[260089]: 2025-10-11 03:46:35.874 2 ERROR oslo_messaging.rpc.server     raise self.value
Oct 10 23:46:35 np0005480824 nova_compute[260089]: 2025-10-11 03:46:35.874 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 203, in decorated_function
Oct 10 23:46:35 np0005480824 nova_compute[260089]: 2025-10-11 03:46:35.874 2 ERROR oslo_messaging.rpc.server     return function(self, context, *args, **kwargs)
Oct 10 23:46:35 np0005480824 nova_compute[260089]: 2025-10-11 03:46:35.874 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 10930, in extend_volume
Oct 10 23:46:35 np0005480824 nova_compute[260089]: 2025-10-11 03:46:35.874 2 ERROR oslo_messaging.rpc.server     self.driver.extend_volume(context, connection_info, instance,
Oct 10 23:46:35 np0005480824 nova_compute[260089]: 2025-10-11 03:46:35.874 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 2865, in extend_volume
Oct 10 23:46:35 np0005480824 nova_compute[260089]: 2025-10-11 03:46:35.874 2 ERROR oslo_messaging.rpc.server     self._resize_attached_encrypted_volume(
Oct 10 23:46:35 np0005480824 nova_compute[260089]: 2025-10-11 03:46:35.874 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 2804, in _resize_attached_encrypted_volume
Oct 10 23:46:35 np0005480824 nova_compute[260089]: 2025-10-11 03:46:35.874 2 ERROR oslo_messaging.rpc.server     LOG.exception('Unknown error when attempting to find the '
Oct 10 23:46:35 np0005480824 nova_compute[260089]: 2025-10-11 03:46:35.874 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Oct 10 23:46:35 np0005480824 nova_compute[260089]: 2025-10-11 03:46:35.874 2 ERROR oslo_messaging.rpc.server     self.force_reraise()
Oct 10 23:46:35 np0005480824 nova_compute[260089]: 2025-10-11 03:46:35.874 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Oct 10 23:46:35 np0005480824 nova_compute[260089]: 2025-10-11 03:46:35.874 2 ERROR oslo_messaging.rpc.server     raise self.value
Oct 10 23:46:35 np0005480824 nova_compute[260089]: 2025-10-11 03:46:35.874 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 2788, in _resize_attached_encrypted_volume
Oct 10 23:46:35 np0005480824 nova_compute[260089]: 2025-10-11 03:46:35.874 2 ERROR oslo_messaging.rpc.server     info = images.privileged_qemu_img_info(path)
Oct 10 23:46:35 np0005480824 nova_compute[260089]: 2025-10-11 03:46:35.874 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/images.py", line 57, in privileged_qemu_img_info
Oct 10 23:46:35 np0005480824 nova_compute[260089]: 2025-10-11 03:46:35.874 2 ERROR oslo_messaging.rpc.server     info = nova.privsep.qemu.privileged_qemu_img_info(path, format=format)
Oct 10 23:46:35 np0005480824 nova_compute[260089]: 2025-10-11 03:46:35.874 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap
Oct 10 23:46:35 np0005480824 nova_compute[260089]: 2025-10-11 03:46:35.874 2 ERROR oslo_messaging.rpc.server     return self.channel.remote_call(name, args, kwargs,
Oct 10 23:46:35 np0005480824 nova_compute[260089]: 2025-10-11 03:46:35.874 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call
Oct 10 23:46:35 np0005480824 nova_compute[260089]: 2025-10-11 03:46:35.874 2 ERROR oslo_messaging.rpc.server     raise exc_type(*result[2])
Oct 10 23:46:35 np0005480824 nova_compute[260089]: 2025-10-11 03:46:35.874 2 ERROR oslo_messaging.rpc.server nova.exception.InvalidDiskInfo: Disk info file is invalid: qemu-img failed to execute on rbd:volumes/volume-3e0695d3-0aeb-4c90-804b-6098401a9775:id=openstack : Unexpected error while running command.
Oct 10 23:46:35 np0005480824 nova_compute[260089]: 2025-10-11 03:46:35.874 2 ERROR oslo_messaging.rpc.server Command: /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info rbd:volumes/volume-3e0695d3-0aeb-4c90-804b-6098401a9775:id=openstack --force-share --output=json
Oct 10 23:46:35 np0005480824 nova_compute[260089]: 2025-10-11 03:46:35.874 2 ERROR oslo_messaging.rpc.server Exit code: -6
Oct 10 23:46:35 np0005480824 nova_compute[260089]: 2025-10-11 03:46:35.874 2 ERROR oslo_messaging.rpc.server Stdout: ''
Oct 10 23:46:35 np0005480824 nova_compute[260089]: 2025-10-11 03:46:35.874 2 ERROR oslo_messaging.rpc.server Stderr: 'safestack CHECK failed: /builddir/build/BUILD/llvm-project-20.1.8.src/compiler-rt/lib/safestack/safestack.cpp:120 MAP_FAILED != addr\n'
Oct 10 23:46:35 np0005480824 nova_compute[260089]: 2025-10-11 03:46:35.874 2 ERROR oslo_messaging.rpc.server #033[00m
Oct 10 23:46:36 np0005480824 nova_compute[260089]: 2025-10-11 03:46:36.442 2 DEBUG oslo_concurrency.lockutils [None req-dd2a6324-80f5-413b-9943-047ba0adf6e9 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Acquiring lock "349e8a73-9a19-4cee-89a9-50edc475a575" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:46:36 np0005480824 nova_compute[260089]: 2025-10-11 03:46:36.443 2 DEBUG oslo_concurrency.lockutils [None req-dd2a6324-80f5-413b-9943-047ba0adf6e9 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Lock "349e8a73-9a19-4cee-89a9-50edc475a575" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:46:36 np0005480824 nova_compute[260089]: 2025-10-11 03:46:36.461 2 INFO nova.compute.manager [None req-dd2a6324-80f5-413b-9943-047ba0adf6e9 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Detaching volume 3e0695d3-0aeb-4c90-804b-6098401a9775#033[00m
Oct 10 23:46:36 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v951: 321 pgs: 321 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 15 KiB/s wr, 37 op/s
Oct 10 23:46:36 np0005480824 nova_compute[260089]: 2025-10-11 03:46:36.691 2 INFO nova.virt.block_device [None req-dd2a6324-80f5-413b-9943-047ba0adf6e9 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Attempting to driver detach volume 3e0695d3-0aeb-4c90-804b-6098401a9775 from mountpoint /dev/vdb#033[00m
Oct 10 23:46:36 np0005480824 nova_compute[260089]: 2025-10-11 03:46:36.829 2 DEBUG os_brick.encryptors [None req-dd2a6324-80f5-413b-9943-047ba0adf6e9 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Using volume encryption metadata '{'encryption_key_id': '25309e87-897e-4c80-8589-6ced7815e1f9', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-3e0695d3-0aeb-4c90-804b-6098401a9775', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '3e0695d3-0aeb-4c90-804b-6098401a9775', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '349e8a73-9a19-4cee-89a9-50edc475a575', 'attached_at': '', 'detached_at': '', 'volume_id': '3e0695d3-0aeb-4c90-804b-6098401a9775', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Oct 10 23:46:36 np0005480824 nova_compute[260089]: 2025-10-11 03:46:36.841 2 DEBUG nova.virt.libvirt.driver [None req-dd2a6324-80f5-413b-9943-047ba0adf6e9 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Attempting to detach device vdb from instance 349e8a73-9a19-4cee-89a9-50edc475a575 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct 10 23:46:36 np0005480824 nova_compute[260089]: 2025-10-11 03:46:36.842 2 DEBUG nova.virt.libvirt.guest [None req-dd2a6324-80f5-413b-9943-047ba0adf6e9 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] detach device xml: <disk type="network" device="disk">
Oct 10 23:46:36 np0005480824 nova_compute[260089]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 10 23:46:36 np0005480824 nova_compute[260089]:  <source protocol="rbd" name="volumes/volume-3e0695d3-0aeb-4c90-804b-6098401a9775">
Oct 10 23:46:36 np0005480824 nova_compute[260089]:    <host name="192.168.122.100" port="6789"/>
Oct 10 23:46:36 np0005480824 nova_compute[260089]:  </source>
Oct 10 23:46:36 np0005480824 nova_compute[260089]:  <target dev="vdb" bus="virtio"/>
Oct 10 23:46:36 np0005480824 nova_compute[260089]:  <serial>3e0695d3-0aeb-4c90-804b-6098401a9775</serial>
Oct 10 23:46:36 np0005480824 nova_compute[260089]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 10 23:46:36 np0005480824 nova_compute[260089]:  <encryption format="luks">
Oct 10 23:46:36 np0005480824 nova_compute[260089]:    <secret type="passphrase" uuid="d9078c25-6384-48e5-ab91-a7e62421ec91"/>
Oct 10 23:46:36 np0005480824 nova_compute[260089]:  </encryption>
Oct 10 23:46:36 np0005480824 nova_compute[260089]: </disk>
Oct 10 23:46:36 np0005480824 nova_compute[260089]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct 10 23:46:36 np0005480824 nova_compute[260089]: 2025-10-11 03:46:36.850 2 INFO nova.virt.libvirt.driver [None req-dd2a6324-80f5-413b-9943-047ba0adf6e9 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Successfully detached device vdb from instance 349e8a73-9a19-4cee-89a9-50edc475a575 from the persistent domain config.#033[00m
Oct 10 23:46:36 np0005480824 nova_compute[260089]: 2025-10-11 03:46:36.851 2 DEBUG nova.virt.libvirt.driver [None req-dd2a6324-80f5-413b-9943-047ba0adf6e9 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 349e8a73-9a19-4cee-89a9-50edc475a575 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Oct 10 23:46:36 np0005480824 nova_compute[260089]: 2025-10-11 03:46:36.852 2 DEBUG nova.virt.libvirt.guest [None req-dd2a6324-80f5-413b-9943-047ba0adf6e9 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] detach device xml: <disk type="network" device="disk">
Oct 10 23:46:36 np0005480824 nova_compute[260089]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 10 23:46:36 np0005480824 nova_compute[260089]:  <source protocol="rbd" name="volumes/volume-3e0695d3-0aeb-4c90-804b-6098401a9775">
Oct 10 23:46:36 np0005480824 nova_compute[260089]:    <host name="192.168.122.100" port="6789"/>
Oct 10 23:46:36 np0005480824 nova_compute[260089]:  </source>
Oct 10 23:46:36 np0005480824 nova_compute[260089]:  <target dev="vdb" bus="virtio"/>
Oct 10 23:46:36 np0005480824 nova_compute[260089]:  <serial>3e0695d3-0aeb-4c90-804b-6098401a9775</serial>
Oct 10 23:46:36 np0005480824 nova_compute[260089]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 10 23:46:36 np0005480824 nova_compute[260089]:  <encryption format="luks">
Oct 10 23:46:36 np0005480824 nova_compute[260089]:    <secret type="passphrase" uuid="d9078c25-6384-48e5-ab91-a7e62421ec91"/>
Oct 10 23:46:36 np0005480824 nova_compute[260089]:  </encryption>
Oct 10 23:46:36 np0005480824 nova_compute[260089]: </disk>
Oct 10 23:46:36 np0005480824 nova_compute[260089]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct 10 23:46:36 np0005480824 nova_compute[260089]: 2025-10-11 03:46:36.982 2 DEBUG nova.virt.libvirt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Received event <DeviceRemovedEvent: 1760154396.981193, 349e8a73-9a19-4cee-89a9-50edc475a575 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Oct 10 23:46:36 np0005480824 nova_compute[260089]: 2025-10-11 03:46:36.984 2 DEBUG nova.virt.libvirt.driver [None req-dd2a6324-80f5-413b-9943-047ba0adf6e9 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 349e8a73-9a19-4cee-89a9-50edc475a575 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Oct 10 23:46:36 np0005480824 nova_compute[260089]: 2025-10-11 03:46:36.987 2 INFO nova.virt.libvirt.driver [None req-dd2a6324-80f5-413b-9943-047ba0adf6e9 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Successfully detached device vdb from instance 349e8a73-9a19-4cee-89a9-50edc475a575 from the live domain config.#033[00m
Oct 10 23:46:37 np0005480824 nova_compute[260089]: 2025-10-11 03:46:37.152 2 DEBUG nova.objects.instance [None req-dd2a6324-80f5-413b-9943-047ba0adf6e9 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Lazy-loading 'flavor' on Instance uuid 349e8a73-9a19-4cee-89a9-50edc475a575 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:46:37 np0005480824 nova_compute[260089]: 2025-10-11 03:46:37.198 2 DEBUG oslo_concurrency.lockutils [None req-dd2a6324-80f5-413b-9943-047ba0adf6e9 f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Lock "349e8a73-9a19-4cee-89a9-50edc475a575" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:46:37 np0005480824 nova_compute[260089]: 2025-10-11 03:46:37.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:46:37 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Oct 10 23:46:37 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Oct 10 23:46:37 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Oct 10 23:46:37 np0005480824 nova_compute[260089]: 2025-10-11 03:46:37.879 2 DEBUG oslo_concurrency.lockutils [None req-e90bb36b-6985-4f26-8331-eea4e31dd48f f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Acquiring lock "349e8a73-9a19-4cee-89a9-50edc475a575" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:46:37 np0005480824 nova_compute[260089]: 2025-10-11 03:46:37.880 2 DEBUG oslo_concurrency.lockutils [None req-e90bb36b-6985-4f26-8331-eea4e31dd48f f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Lock "349e8a73-9a19-4cee-89a9-50edc475a575" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:46:37 np0005480824 nova_compute[260089]: 2025-10-11 03:46:37.880 2 DEBUG oslo_concurrency.lockutils [None req-e90bb36b-6985-4f26-8331-eea4e31dd48f f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Acquiring lock "349e8a73-9a19-4cee-89a9-50edc475a575-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:46:37 np0005480824 nova_compute[260089]: 2025-10-11 03:46:37.881 2 DEBUG oslo_concurrency.lockutils [None req-e90bb36b-6985-4f26-8331-eea4e31dd48f f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Lock "349e8a73-9a19-4cee-89a9-50edc475a575-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:46:37 np0005480824 nova_compute[260089]: 2025-10-11 03:46:37.881 2 DEBUG oslo_concurrency.lockutils [None req-e90bb36b-6985-4f26-8331-eea4e31dd48f f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Lock "349e8a73-9a19-4cee-89a9-50edc475a575-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:46:37 np0005480824 nova_compute[260089]: 2025-10-11 03:46:37.884 2 INFO nova.compute.manager [None req-e90bb36b-6985-4f26-8331-eea4e31dd48f f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Terminating instance#033[00m
Oct 10 23:46:37 np0005480824 nova_compute[260089]: 2025-10-11 03:46:37.886 2 DEBUG nova.compute.manager [None req-e90bb36b-6985-4f26-8331-eea4e31dd48f f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 10 23:46:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 23:46:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:46:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 23:46:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:46:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007595910049163248 of space, bias 1.0, pg target 0.22787730147489746 quantized to 32 (current 32)
Oct 10 23:46:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:46:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 4.387758839617113e-06 of space, bias 1.0, pg target 0.0013163276518851337 quantized to 32 (current 32)
Oct 10 23:46:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:46:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:46:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:46:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 10 23:46:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:46:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 23:46:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:46:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:46:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:46:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 10 23:46:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:46:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 23:46:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:46:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:46:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:46:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 10 23:46:37 np0005480824 kernel: tapae972b5d-42 (unregistering): left promiscuous mode
Oct 10 23:46:37 np0005480824 NetworkManager[44969]: <info>  [1760154397.9711] device (tapae972b5d-42): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 10 23:46:38 np0005480824 ovn_controller[152667]: 2025-10-11T03:46:38Z|00032|binding|INFO|Releasing lport ae972b5d-4250-48ac-9b8a-e35678042b82 from this chassis (sb_readonly=0)
Oct 10 23:46:38 np0005480824 ovn_controller[152667]: 2025-10-11T03:46:38Z|00033|binding|INFO|Setting lport ae972b5d-4250-48ac-9b8a-e35678042b82 down in Southbound
Oct 10 23:46:38 np0005480824 nova_compute[260089]: 2025-10-11 03:46:38.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:46:38 np0005480824 ovn_controller[152667]: 2025-10-11T03:46:38Z|00034|binding|INFO|Removing iface tapae972b5d-42 ovn-installed in OVS
Oct 10 23:46:38 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:46:38.048 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:49:57 10.100.0.12'], port_security=['fa:16:3e:68:49:57 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '349e8a73-9a19-4cee-89a9-50edc475a575', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fa909145-5687-40b4-825d-ce6ac3b98885', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d7871f9f8a74d2d85dc275b42df9042', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ba88d9d2-d3d6-4603-8fd8-e6dc5b5939b7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.173'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=78ee477e-a857-41e0-9753-6e712d707687, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], logical_port=ae972b5d-4250-48ac-9b8a-e35678042b82) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 23:46:38 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:46:38.050 162245 INFO neutron.agent.ovn.metadata.agent [-] Port ae972b5d-4250-48ac-9b8a-e35678042b82 in datapath fa909145-5687-40b4-825d-ce6ac3b98885 unbound from our chassis#033[00m
Oct 10 23:46:38 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:46:38.051 162245 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fa909145-5687-40b4-825d-ce6ac3b98885, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 10 23:46:38 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:46:38.052 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[add75a9a-600a-49ba-a14c-10c6a9c3d687]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:46:38 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:46:38.052 162245 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fa909145-5687-40b4-825d-ce6ac3b98885 namespace which is not needed anymore#033[00m
Oct 10 23:46:38 np0005480824 nova_compute[260089]: 2025-10-11 03:46:38.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:46:38 np0005480824 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Oct 10 23:46:38 np0005480824 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 18.745s CPU time.
Oct 10 23:46:38 np0005480824 systemd-machined[215071]: Machine qemu-1-instance-00000001 terminated.
Oct 10 23:46:38 np0005480824 neutron-haproxy-ovnmeta-fa909145-5687-40b4-825d-ce6ac3b98885[268097]: [NOTICE]   (268101) : haproxy version is 2.8.14-c23fe91
Oct 10 23:46:38 np0005480824 neutron-haproxy-ovnmeta-fa909145-5687-40b4-825d-ce6ac3b98885[268097]: [NOTICE]   (268101) : path to executable is /usr/sbin/haproxy
Oct 10 23:46:38 np0005480824 neutron-haproxy-ovnmeta-fa909145-5687-40b4-825d-ce6ac3b98885[268097]: [WARNING]  (268101) : Exiting Master process...
Oct 10 23:46:38 np0005480824 neutron-haproxy-ovnmeta-fa909145-5687-40b4-825d-ce6ac3b98885[268097]: [ALERT]    (268101) : Current worker (268103) exited with code 143 (Terminated)
Oct 10 23:46:38 np0005480824 neutron-haproxy-ovnmeta-fa909145-5687-40b4-825d-ce6ac3b98885[268097]: [WARNING]  (268101) : All workers exited. Exiting... (0)
Oct 10 23:46:38 np0005480824 systemd[1]: libpod-605b9e124ec78e77c1d466f102fa58549f8d1da449e57f832ba62d4eb0c49257.scope: Deactivated successfully.
Oct 10 23:46:38 np0005480824 podman[268360]: 2025-10-11 03:46:38.239203696 +0000 UTC m=+0.064894751 container died 605b9e124ec78e77c1d466f102fa58549f8d1da449e57f832ba62d4eb0c49257 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fa909145-5687-40b4-825d-ce6ac3b98885, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 10 23:46:38 np0005480824 nova_compute[260089]: 2025-10-11 03:46:38.256 2 DEBUG nova.compute.manager [req-793fc05e-c9ae-4667-be2f-0ba02dece5f8 req-840f5b5f-223f-4da1-b251-98b111041b95 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Received event network-vif-unplugged-ae972b5d-4250-48ac-9b8a-e35678042b82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:46:38 np0005480824 nova_compute[260089]: 2025-10-11 03:46:38.257 2 DEBUG oslo_concurrency.lockutils [req-793fc05e-c9ae-4667-be2f-0ba02dece5f8 req-840f5b5f-223f-4da1-b251-98b111041b95 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "349e8a73-9a19-4cee-89a9-50edc475a575-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:46:38 np0005480824 nova_compute[260089]: 2025-10-11 03:46:38.257 2 DEBUG oslo_concurrency.lockutils [req-793fc05e-c9ae-4667-be2f-0ba02dece5f8 req-840f5b5f-223f-4da1-b251-98b111041b95 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "349e8a73-9a19-4cee-89a9-50edc475a575-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:46:38 np0005480824 nova_compute[260089]: 2025-10-11 03:46:38.258 2 DEBUG oslo_concurrency.lockutils [req-793fc05e-c9ae-4667-be2f-0ba02dece5f8 req-840f5b5f-223f-4da1-b251-98b111041b95 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "349e8a73-9a19-4cee-89a9-50edc475a575-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:46:38 np0005480824 nova_compute[260089]: 2025-10-11 03:46:38.258 2 DEBUG nova.compute.manager [req-793fc05e-c9ae-4667-be2f-0ba02dece5f8 req-840f5b5f-223f-4da1-b251-98b111041b95 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] No waiting events found dispatching network-vif-unplugged-ae972b5d-4250-48ac-9b8a-e35678042b82 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 10 23:46:38 np0005480824 nova_compute[260089]: 2025-10-11 03:46:38.259 2 DEBUG nova.compute.manager [req-793fc05e-c9ae-4667-be2f-0ba02dece5f8 req-840f5b5f-223f-4da1-b251-98b111041b95 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Received event network-vif-unplugged-ae972b5d-4250-48ac-9b8a-e35678042b82 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 10 23:46:38 np0005480824 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-605b9e124ec78e77c1d466f102fa58549f8d1da449e57f832ba62d4eb0c49257-userdata-shm.mount: Deactivated successfully.
Oct 10 23:46:38 np0005480824 systemd[1]: var-lib-containers-storage-overlay-e41fed3190f4d4e6884ed908620b281eff4bff5d70fbca71ca1b4cc825940080-merged.mount: Deactivated successfully.
Oct 10 23:46:38 np0005480824 podman[268360]: 2025-10-11 03:46:38.292739329 +0000 UTC m=+0.118430394 container cleanup 605b9e124ec78e77c1d466f102fa58549f8d1da449e57f832ba62d4eb0c49257 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fa909145-5687-40b4-825d-ce6ac3b98885, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 10 23:46:38 np0005480824 systemd[1]: libpod-conmon-605b9e124ec78e77c1d466f102fa58549f8d1da449e57f832ba62d4eb0c49257.scope: Deactivated successfully.
Oct 10 23:46:38 np0005480824 nova_compute[260089]: 2025-10-11 03:46:38.309 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:46:38 np0005480824 nova_compute[260089]: 2025-10-11 03:46:38.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:46:38 np0005480824 nova_compute[260089]: 2025-10-11 03:46:38.326 2 INFO nova.virt.libvirt.driver [-] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Instance destroyed successfully.#033[00m
Oct 10 23:46:38 np0005480824 nova_compute[260089]: 2025-10-11 03:46:38.326 2 DEBUG nova.objects.instance [None req-e90bb36b-6985-4f26-8331-eea4e31dd48f f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Lazy-loading 'resources' on Instance uuid 349e8a73-9a19-4cee-89a9-50edc475a575 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:46:38 np0005480824 nova_compute[260089]: 2025-10-11 03:46:38.342 2 DEBUG nova.virt.libvirt.vif [None req-e90bb36b-6985-4f26-8331-eea4e31dd48f f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T03:45:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-EncryptedVolumesExtendAttachedTest-instance-1257432866',display_name='tempest-EncryptedVolumesExtendAttachedTest-instance-1257432866',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-encryptedvolumesextendattachedtest-instance-1257432866',id=1,image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHivZjf50uTfe5OpJKlfHIdAW/c2BXCSbX4ofRlmzc2XEUmfO1Yv5L4WVqCHAsUloIewIBQZTDtfV+tyWSUENKvFw3Qn/LrNIp96ukCD3zVN0Jq7cm4IoZlNQxUHhrCQcA==',key_name='tempest-keypair-606015105',keypairs=<?>,launch_index=0,launched_at=2025-10-11T03:46:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6d7871f9f8a74d2d85dc275b42df9042',ramdisk_id='',reservation_id='r-02rulfbj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-EncryptedVolumesExtendAttachedTest-1619059606',owner_user_name='tempest-EncryptedVolumesExtendAttachedTest-1619059606-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T03:46:05Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f7e60061152c4dbb80545545c356cabc',uuid=349e8a73-9a19-4cee-89a9-50edc475a575,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ae972b5d-4250-48ac-9b8a-e35678042b82", "address": "fa:16:3e:68:49:57", "network": {"id": "fa909145-5687-40b4-825d-ce6ac3b98885", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-311959019-network", "subnets": 
[{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d7871f9f8a74d2d85dc275b42df9042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae972b5d-42", "ovs_interfaceid": "ae972b5d-4250-48ac-9b8a-e35678042b82", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 10 23:46:38 np0005480824 nova_compute[260089]: 2025-10-11 03:46:38.343 2 DEBUG nova.network.os_vif_util [None req-e90bb36b-6985-4f26-8331-eea4e31dd48f f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Converting VIF {"id": "ae972b5d-4250-48ac-9b8a-e35678042b82", "address": "fa:16:3e:68:49:57", "network": {"id": "fa909145-5687-40b4-825d-ce6ac3b98885", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-311959019-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6d7871f9f8a74d2d85dc275b42df9042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae972b5d-42", "ovs_interfaceid": "ae972b5d-4250-48ac-9b8a-e35678042b82", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 23:46:38 np0005480824 nova_compute[260089]: 2025-10-11 03:46:38.344 2 DEBUG nova.network.os_vif_util [None req-e90bb36b-6985-4f26-8331-eea4e31dd48f f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:68:49:57,bridge_name='br-int',has_traffic_filtering=True,id=ae972b5d-4250-48ac-9b8a-e35678042b82,network=Network(fa909145-5687-40b4-825d-ce6ac3b98885),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae972b5d-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 23:46:38 np0005480824 nova_compute[260089]: 2025-10-11 03:46:38.344 2 DEBUG os_vif [None req-e90bb36b-6985-4f26-8331-eea4e31dd48f f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:68:49:57,bridge_name='br-int',has_traffic_filtering=True,id=ae972b5d-4250-48ac-9b8a-e35678042b82,network=Network(fa909145-5687-40b4-825d-ce6ac3b98885),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae972b5d-42') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 10 23:46:38 np0005480824 nova_compute[260089]: 2025-10-11 03:46:38.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:46:38 np0005480824 nova_compute[260089]: 2025-10-11 03:46:38.346 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapae972b5d-42, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:46:38 np0005480824 nova_compute[260089]: 2025-10-11 03:46:38.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:46:38 np0005480824 nova_compute[260089]: 2025-10-11 03:46:38.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:46:38 np0005480824 nova_compute[260089]: 2025-10-11 03:46:38.356 2 INFO os_vif [None req-e90bb36b-6985-4f26-8331-eea4e31dd48f f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:68:49:57,bridge_name='br-int',has_traffic_filtering=True,id=ae972b5d-4250-48ac-9b8a-e35678042b82,network=Network(fa909145-5687-40b4-825d-ce6ac3b98885),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae972b5d-42')#033[00m
Oct 10 23:46:38 np0005480824 podman[268391]: 2025-10-11 03:46:38.375397498 +0000 UTC m=+0.052709873 container remove 605b9e124ec78e77c1d466f102fa58549f8d1da449e57f832ba62d4eb0c49257 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fa909145-5687-40b4-825d-ce6ac3b98885, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team)
Oct 10 23:46:38 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:46:38.385 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[6f4d504b-704f-49f9-a69b-1365b2a935fa]: (4, ('Sat Oct 11 03:46:38 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fa909145-5687-40b4-825d-ce6ac3b98885 (605b9e124ec78e77c1d466f102fa58549f8d1da449e57f832ba62d4eb0c49257)\n605b9e124ec78e77c1d466f102fa58549f8d1da449e57f832ba62d4eb0c49257\nSat Oct 11 03:46:38 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fa909145-5687-40b4-825d-ce6ac3b98885 (605b9e124ec78e77c1d466f102fa58549f8d1da449e57f832ba62d4eb0c49257)\n605b9e124ec78e77c1d466f102fa58549f8d1da449e57f832ba62d4eb0c49257\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:46:38 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:46:38.388 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[cf262e7e-1e1f-473b-bc9e-7030e74b2a8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:46:38 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:46:38.388 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfa909145-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:46:38 np0005480824 nova_compute[260089]: 2025-10-11 03:46:38.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:46:38 np0005480824 kernel: tapfa909145-50: left promiscuous mode
Oct 10 23:46:38 np0005480824 nova_compute[260089]: 2025-10-11 03:46:38.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:46:38 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:46:38.414 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[2f20428f-a362-4e9c-9f27-b993def869f0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:46:38 np0005480824 nova_compute[260089]: 2025-10-11 03:46:38.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:46:38 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:46:38.435 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[cbcb0cd3-04fc-41a9-a703-03ef1c34dad1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:46:38 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:46:38.437 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[fe065c6e-a96b-43ad-b454-073cd62d8755]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:46:38 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:46:38.460 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[1ce5e26c-31b9-45bd-aa2c-f7e312c1e889]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 383608, 'reachable_time': 19587, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268430, 'error': None, 'target': 'ovnmeta-fa909145-5687-40b4-825d-ce6ac3b98885', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:46:38 np0005480824 systemd[1]: run-netns-ovnmeta\x2dfa909145\x2d5687\x2d40b4\x2d825d\x2dce6ac3b98885.mount: Deactivated successfully.
Oct 10 23:46:38 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:46:38.478 162666 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fa909145-5687-40b4-825d-ce6ac3b98885 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 10 23:46:38 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:46:38.479 162666 DEBUG oslo.privsep.daemon [-] privsep: reply[a10f84f8-f8bf-48b7-966a-57a285d73d51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:46:38 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v953: 321 pgs: 321 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 2.6 KiB/s wr, 80 op/s
Oct 10 23:46:38 np0005480824 nova_compute[260089]: 2025-10-11 03:46:38.817 2 INFO nova.virt.libvirt.driver [None req-e90bb36b-6985-4f26-8331-eea4e31dd48f f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Deleting instance files /var/lib/nova/instances/349e8a73-9a19-4cee-89a9-50edc475a575_del#033[00m
Oct 10 23:46:38 np0005480824 nova_compute[260089]: 2025-10-11 03:46:38.819 2 INFO nova.virt.libvirt.driver [None req-e90bb36b-6985-4f26-8331-eea4e31dd48f f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Deletion of /var/lib/nova/instances/349e8a73-9a19-4cee-89a9-50edc475a575_del complete#033[00m
Oct 10 23:46:38 np0005480824 nova_compute[260089]: 2025-10-11 03:46:38.881 2 DEBUG nova.virt.libvirt.host [None req-e90bb36b-6985-4f26-8331-eea4e31dd48f f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754#033[00m
Oct 10 23:46:38 np0005480824 nova_compute[260089]: 2025-10-11 03:46:38.881 2 INFO nova.virt.libvirt.host [None req-e90bb36b-6985-4f26-8331-eea4e31dd48f f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] UEFI support detected#033[00m
Oct 10 23:46:38 np0005480824 nova_compute[260089]: 2025-10-11 03:46:38.885 2 INFO nova.compute.manager [None req-e90bb36b-6985-4f26-8331-eea4e31dd48f f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Took 1.00 seconds to destroy the instance on the hypervisor.#033[00m
Oct 10 23:46:38 np0005480824 nova_compute[260089]: 2025-10-11 03:46:38.886 2 DEBUG oslo.service.loopingcall [None req-e90bb36b-6985-4f26-8331-eea4e31dd48f f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 10 23:46:38 np0005480824 nova_compute[260089]: 2025-10-11 03:46:38.886 2 DEBUG nova.compute.manager [-] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 10 23:46:38 np0005480824 nova_compute[260089]: 2025-10-11 03:46:38.886 2 DEBUG nova.network.neutron [-] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 10 23:46:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:46:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Oct 10 23:46:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Oct 10 23:46:39 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Oct 10 23:46:40 np0005480824 nova_compute[260089]: 2025-10-11 03:46:40.374 2 DEBUG nova.compute.manager [req-7ddba2b0-cdc0-43f7-8dd1-8f6ded18f3fd req-54fe7d46-d117-47b8-baef-7a7e139e96b2 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Received event network-vif-plugged-ae972b5d-4250-48ac-9b8a-e35678042b82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:46:40 np0005480824 nova_compute[260089]: 2025-10-11 03:46:40.374 2 DEBUG oslo_concurrency.lockutils [req-7ddba2b0-cdc0-43f7-8dd1-8f6ded18f3fd req-54fe7d46-d117-47b8-baef-7a7e139e96b2 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "349e8a73-9a19-4cee-89a9-50edc475a575-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:46:40 np0005480824 nova_compute[260089]: 2025-10-11 03:46:40.375 2 DEBUG oslo_concurrency.lockutils [req-7ddba2b0-cdc0-43f7-8dd1-8f6ded18f3fd req-54fe7d46-d117-47b8-baef-7a7e139e96b2 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "349e8a73-9a19-4cee-89a9-50edc475a575-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:46:40 np0005480824 nova_compute[260089]: 2025-10-11 03:46:40.375 2 DEBUG oslo_concurrency.lockutils [req-7ddba2b0-cdc0-43f7-8dd1-8f6ded18f3fd req-54fe7d46-d117-47b8-baef-7a7e139e96b2 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "349e8a73-9a19-4cee-89a9-50edc475a575-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:46:40 np0005480824 nova_compute[260089]: 2025-10-11 03:46:40.375 2 DEBUG nova.compute.manager [req-7ddba2b0-cdc0-43f7-8dd1-8f6ded18f3fd req-54fe7d46-d117-47b8-baef-7a7e139e96b2 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] No waiting events found dispatching network-vif-plugged-ae972b5d-4250-48ac-9b8a-e35678042b82 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 10 23:46:40 np0005480824 nova_compute[260089]: 2025-10-11 03:46:40.375 2 WARNING nova.compute.manager [req-7ddba2b0-cdc0-43f7-8dd1-8f6ded18f3fd req-54fe7d46-d117-47b8-baef-7a7e139e96b2 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Received unexpected event network-vif-plugged-ae972b5d-4250-48ac-9b8a-e35678042b82 for instance with vm_state active and task_state deleting.#033[00m
Oct 10 23:46:40 np0005480824 nova_compute[260089]: 2025-10-11 03:46:40.503 2 DEBUG nova.network.neutron [-] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:46:40 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v955: 321 pgs: 321 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 1.7 KiB/s wr, 66 op/s
Oct 10 23:46:40 np0005480824 nova_compute[260089]: 2025-10-11 03:46:40.527 2 INFO nova.compute.manager [-] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Took 1.64 seconds to deallocate network for instance.#033[00m
Oct 10 23:46:40 np0005480824 nova_compute[260089]: 2025-10-11 03:46:40.579 2 DEBUG oslo_concurrency.lockutils [None req-e90bb36b-6985-4f26-8331-eea4e31dd48f f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:46:40 np0005480824 nova_compute[260089]: 2025-10-11 03:46:40.580 2 DEBUG oslo_concurrency.lockutils [None req-e90bb36b-6985-4f26-8331-eea4e31dd48f f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:46:40 np0005480824 nova_compute[260089]: 2025-10-11 03:46:40.625 2 DEBUG nova.compute.manager [req-b80e82c8-3516-4837-8e16-63dd9cdea85c req-9d0ae17e-1900-4219-8cf6-2f5371f609ab 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Received event network-vif-deleted-ae972b5d-4250-48ac-9b8a-e35678042b82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:46:40 np0005480824 nova_compute[260089]: 2025-10-11 03:46:40.642 2 DEBUG oslo_concurrency.processutils [None req-e90bb36b-6985-4f26-8331-eea4e31dd48f f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:46:41 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:46:41 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2383087254' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:46:41 np0005480824 nova_compute[260089]: 2025-10-11 03:46:41.177 2 DEBUG oslo_concurrency.processutils [None req-e90bb36b-6985-4f26-8331-eea4e31dd48f f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:46:41 np0005480824 nova_compute[260089]: 2025-10-11 03:46:41.187 2 DEBUG nova.compute.provider_tree [None req-e90bb36b-6985-4f26-8331-eea4e31dd48f f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 23:46:41 np0005480824 nova_compute[260089]: 2025-10-11 03:46:41.202 2 DEBUG nova.scheduler.client.report [None req-e90bb36b-6985-4f26-8331-eea4e31dd48f f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 23:46:41 np0005480824 nova_compute[260089]: 2025-10-11 03:46:41.231 2 DEBUG oslo_concurrency.lockutils [None req-e90bb36b-6985-4f26-8331-eea4e31dd48f f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:46:41 np0005480824 nova_compute[260089]: 2025-10-11 03:46:41.271 2 INFO nova.scheduler.client.report [None req-e90bb36b-6985-4f26-8331-eea4e31dd48f f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Deleted allocations for instance 349e8a73-9a19-4cee-89a9-50edc475a575#033[00m
Oct 10 23:46:41 np0005480824 nova_compute[260089]: 2025-10-11 03:46:41.361 2 DEBUG oslo_concurrency.lockutils [None req-e90bb36b-6985-4f26-8331-eea4e31dd48f f7e60061152c4dbb80545545c356cabc 6d7871f9f8a74d2d85dc275b42df9042 - - default default] Lock "349e8a73-9a19-4cee-89a9-50edc475a575" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.481s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:46:41 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Oct 10 23:46:41 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Oct 10 23:46:41 np0005480824 ceph-osd[89401]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct 10 23:46:41 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Oct 10 23:46:42 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v957: 321 pgs: 321 active+clean; 73 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 84 KiB/s rd, 4.2 KiB/s wr, 119 op/s
Oct 10 23:46:42 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Oct 10 23:46:42 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Oct 10 23:46:42 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Oct 10 23:46:43 np0005480824 nova_compute[260089]: 2025-10-11 03:46:43.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:46:43 np0005480824 nova_compute[260089]: 2025-10-11 03:46:43.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:46:44 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:46:44 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v959: 321 pgs: 321 active+clean; 42 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 75 KiB/s rd, 4.7 KiB/s wr, 106 op/s
Oct 10 23:46:44 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Oct 10 23:46:44 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Oct 10 23:46:44 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Oct 10 23:46:45 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:46:45 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1822157601' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:46:45 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:46:45 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1822157601' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:46:46 np0005480824 podman[268457]: 2025-10-11 03:46:46.065953219 +0000 UTC m=+0.108306475 container health_status f2b19cad22d0fbdd185b264c3c8b6443d09a02319dc9fb0585dc69a0f24a758d (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 10 23:46:46 np0005480824 podman[268456]: 2025-10-11 03:46:46.069202746 +0000 UTC m=+0.109273689 container health_status 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 10 23:46:46 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v961: 321 pgs: 321 active+clean; 42 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 75 KiB/s rd, 4.7 KiB/s wr, 106 op/s
Oct 10 23:46:46 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Oct 10 23:46:46 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Oct 10 23:46:46 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Oct 10 23:46:48 np0005480824 nova_compute[260089]: 2025-10-11 03:46:48.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:46:48 np0005480824 nova_compute[260089]: 2025-10-11 03:46:48.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:46:48 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v963: 321 pgs: 321 active+clean; 41 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 93 KiB/s rd, 5.7 KiB/s wr, 125 op/s
Oct 10 23:46:48 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Oct 10 23:46:48 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Oct 10 23:46:48 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Oct 10 23:46:49 np0005480824 nova_compute[260089]: 2025-10-11 03:46:49.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:46:49 np0005480824 nova_compute[260089]: 2025-10-11 03:46:49.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:46:49 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:46:49 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e153 do_prune osdmap full prune enabled
Oct 10 23:46:49 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e154 e154: 3 total, 3 up, 3 in
Oct 10 23:46:49 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e154: 3 total, 3 up, 3 in
Oct 10 23:46:49 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:46:49 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/6351375' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:46:49 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:46:49 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/6351375' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:46:50 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e154 do_prune osdmap full prune enabled
Oct 10 23:46:50 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e155 e155: 3 total, 3 up, 3 in
Oct 10 23:46:50 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e155: 3 total, 3 up, 3 in
Oct 10 23:46:50 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v967: 321 pgs: 321 active+clean; 41 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 84 KiB/s rd, 5.2 KiB/s wr, 112 op/s
Oct 10 23:46:52 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e155 do_prune osdmap full prune enabled
Oct 10 23:46:52 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e156 e156: 3 total, 3 up, 3 in
Oct 10 23:46:52 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e156: 3 total, 3 up, 3 in
Oct 10 23:46:52 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v969: 321 pgs: 321 active+clean; 41 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 6.7 KiB/s rd, 1023 B/s wr, 9 op/s
Oct 10 23:46:53 np0005480824 nova_compute[260089]: 2025-10-11 03:46:53.327 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760154398.3257017, 349e8a73-9a19-4cee-89a9-50edc475a575 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:46:53 np0005480824 nova_compute[260089]: 2025-10-11 03:46:53.327 2 INFO nova.compute.manager [-] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] VM Stopped (Lifecycle Event)#033[00m
Oct 10 23:46:53 np0005480824 nova_compute[260089]: 2025-10-11 03:46:53.349 2 DEBUG nova.compute.manager [None req-727650bb-e1aa-4cff-a38a-ff5a137620a8 - - - - - -] [instance: 349e8a73-9a19-4cee-89a9-50edc475a575] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:46:53 np0005480824 nova_compute[260089]: 2025-10-11 03:46:53.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:46:53 np0005480824 nova_compute[260089]: 2025-10-11 03:46:53.437 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:46:54 np0005480824 podman[268494]: 2025-10-11 03:46:54.094029121 +0000 UTC m=+0.144680984 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 10 23:46:54 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:46:54 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v970: 321 pgs: 321 active+clean; 41 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s rd, 4.0 KiB/s wr, 88 op/s
Oct 10 23:46:55 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e156 do_prune osdmap full prune enabled
Oct 10 23:46:55 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e157 e157: 3 total, 3 up, 3 in
Oct 10 23:46:55 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e157: 3 total, 3 up, 3 in
Oct 10 23:46:56 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:46:56 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2147677138' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:46:56 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:46:56 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2147677138' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:46:56 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v972: 321 pgs: 321 active+clean; 41 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s rd, 3.8 KiB/s wr, 83 op/s
Oct 10 23:46:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:46:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:46:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:46:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:46:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:46:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:46:58 np0005480824 nova_compute[260089]: 2025-10-11 03:46:58.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:46:58 np0005480824 nova_compute[260089]: 2025-10-11 03:46:58.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:46:58 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v973: 321 pgs: 321 active+clean; 41 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 73 KiB/s rd, 4.5 KiB/s wr, 99 op/s
Oct 10 23:46:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:46:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e157 do_prune osdmap full prune enabled
Oct 10 23:46:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e158 e158: 3 total, 3 up, 3 in
Oct 10 23:46:59 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e158: 3 total, 3 up, 3 in
Oct 10 23:46:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:46:59 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1339965829' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:46:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:46:59 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1339965829' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:46:59 np0005480824 podman[268520]: 2025-10-11 03:46:59.993047264 +0000 UTC m=+0.055045749 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009)
Oct 10 23:47:00 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v975: 321 pgs: 321 active+clean; 41 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 4.0 KiB/s wr, 94 op/s
Oct 10 23:47:02 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v976: 321 pgs: 321 active+clean; 41 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 2.2 KiB/s wr, 42 op/s
Oct 10 23:47:03 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:47:03 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1256100187' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:47:03 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:47:03 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1256100187' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:47:03 np0005480824 nova_compute[260089]: 2025-10-11 03:47:03.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:47:03 np0005480824 nova_compute[260089]: 2025-10-11 03:47:03.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:47:04 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:47:04 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v977: 321 pgs: 321 active+clean; 41 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 2.2 KiB/s wr, 51 op/s
Oct 10 23:47:05 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e158 do_prune osdmap full prune enabled
Oct 10 23:47:05 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e159 e159: 3 total, 3 up, 3 in
Oct 10 23:47:05 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e159: 3 total, 3 up, 3 in
Oct 10 23:47:06 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v979: 321 pgs: 321 active+clean; 41 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 895 B/s wr, 21 op/s
Oct 10 23:47:07 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:47:07 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:47:07 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 10 23:47:07 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:47:07 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 10 23:47:07 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:47:07 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 47d46028-adff-4622-90d8-9a5f6d28b035 does not exist
Oct 10 23:47:07 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev b1507b0b-a69b-40a9-8f06-b15edccc2829 does not exist
Oct 10 23:47:07 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev a7b068de-c7df-49bc-a700-c8cd1aa6de1f does not exist
Oct 10 23:47:07 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 10 23:47:07 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 23:47:07 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 10 23:47:07 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:47:07 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:47:07 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:47:07 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:47:07 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:47:07 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:47:07 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:47:07 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2947662241' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:47:07 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:47:07 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2947662241' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:47:07 np0005480824 podman[268811]: 2025-10-11 03:47:07.962232377 +0000 UTC m=+0.039432242 container create 82144c443bf2f21189b0c2865af0e21e411cc89632ef13f174393cceb3854613 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:47:07 np0005480824 systemd[1]: Started libpod-conmon-82144c443bf2f21189b0c2865af0e21e411cc89632ef13f174393cceb3854613.scope.
Oct 10 23:47:08 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:47:08 np0005480824 podman[268811]: 2025-10-11 03:47:08.034266056 +0000 UTC m=+0.111465931 container init 82144c443bf2f21189b0c2865af0e21e411cc89632ef13f174393cceb3854613 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_vaughan, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:47:08 np0005480824 podman[268811]: 2025-10-11 03:47:07.943965987 +0000 UTC m=+0.021165902 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:47:08 np0005480824 podman[268811]: 2025-10-11 03:47:08.039838727 +0000 UTC m=+0.117038602 container start 82144c443bf2f21189b0c2865af0e21e411cc89632ef13f174393cceb3854613 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:47:08 np0005480824 podman[268811]: 2025-10-11 03:47:08.042881219 +0000 UTC m=+0.120081104 container attach 82144c443bf2f21189b0c2865af0e21e411cc89632ef13f174393cceb3854613 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:47:08 np0005480824 amazing_vaughan[268827]: 167 167
Oct 10 23:47:08 np0005480824 systemd[1]: libpod-82144c443bf2f21189b0c2865af0e21e411cc89632ef13f174393cceb3854613.scope: Deactivated successfully.
Oct 10 23:47:08 np0005480824 conmon[268827]: conmon 82144c443bf2f21189b0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-82144c443bf2f21189b0c2865af0e21e411cc89632ef13f174393cceb3854613.scope/container/memory.events
Oct 10 23:47:08 np0005480824 podman[268811]: 2025-10-11 03:47:08.046946915 +0000 UTC m=+0.124146780 container died 82144c443bf2f21189b0c2865af0e21e411cc89632ef13f174393cceb3854613 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_vaughan, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:47:08 np0005480824 systemd[1]: var-lib-containers-storage-overlay-f8dc7f68d63d58a26afef5006c38b0db5a6be1d0318a71d9b7cccd8f7737128d-merged.mount: Deactivated successfully.
Oct 10 23:47:08 np0005480824 podman[268811]: 2025-10-11 03:47:08.080942547 +0000 UTC m=+0.158142412 container remove 82144c443bf2f21189b0c2865af0e21e411cc89632ef13f174393cceb3854613 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_vaughan, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:47:08 np0005480824 systemd[1]: libpod-conmon-82144c443bf2f21189b0c2865af0e21e411cc89632ef13f174393cceb3854613.scope: Deactivated successfully.
Oct 10 23:47:08 np0005480824 podman[268850]: 2025-10-11 03:47:08.241977495 +0000 UTC m=+0.040917436 container create b3c3b8fbd7318c72f331f70f0a2e1e69e68a53afc6e051ccafd980b94c409202 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_cerf, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:47:08 np0005480824 systemd[1]: Started libpod-conmon-b3c3b8fbd7318c72f331f70f0a2e1e69e68a53afc6e051ccafd980b94c409202.scope.
Oct 10 23:47:08 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:47:08 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/753baeb67d36741bd71eb299cbc9aceb5d18e429ff3a6d0fa70b76cd7cfd727b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:47:08 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/753baeb67d36741bd71eb299cbc9aceb5d18e429ff3a6d0fa70b76cd7cfd727b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:47:08 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/753baeb67d36741bd71eb299cbc9aceb5d18e429ff3a6d0fa70b76cd7cfd727b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:47:08 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/753baeb67d36741bd71eb299cbc9aceb5d18e429ff3a6d0fa70b76cd7cfd727b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:47:08 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/753baeb67d36741bd71eb299cbc9aceb5d18e429ff3a6d0fa70b76cd7cfd727b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 23:47:08 np0005480824 podman[268850]: 2025-10-11 03:47:08.225672751 +0000 UTC m=+0.024612732 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:47:08 np0005480824 podman[268850]: 2025-10-11 03:47:08.32400053 +0000 UTC m=+0.122940481 container init b3c3b8fbd7318c72f331f70f0a2e1e69e68a53afc6e051ccafd980b94c409202 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_cerf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 10 23:47:08 np0005480824 podman[268850]: 2025-10-11 03:47:08.333979505 +0000 UTC m=+0.132919446 container start b3c3b8fbd7318c72f331f70f0a2e1e69e68a53afc6e051ccafd980b94c409202 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_cerf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 10 23:47:08 np0005480824 podman[268850]: 2025-10-11 03:47:08.337411647 +0000 UTC m=+0.136351608 container attach b3c3b8fbd7318c72f331f70f0a2e1e69e68a53afc6e051ccafd980b94c409202 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:47:08 np0005480824 nova_compute[260089]: 2025-10-11 03:47:08.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:47:08 np0005480824 nova_compute[260089]: 2025-10-11 03:47:08.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:47:08 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v980: 321 pgs: 321 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 3.2 KiB/s wr, 73 op/s
Oct 10 23:47:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:47:09 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1393991732' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:47:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:47:09 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1393991732' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:47:09 np0005480824 clever_cerf[268867]: --> passed data devices: 0 physical, 3 LVM
Oct 10 23:47:09 np0005480824 clever_cerf[268867]: --> relative data size: 1.0
Oct 10 23:47:09 np0005480824 clever_cerf[268867]: --> All data devices are unavailable
Oct 10 23:47:09 np0005480824 systemd[1]: libpod-b3c3b8fbd7318c72f331f70f0a2e1e69e68a53afc6e051ccafd980b94c409202.scope: Deactivated successfully.
Oct 10 23:47:09 np0005480824 systemd[1]: libpod-b3c3b8fbd7318c72f331f70f0a2e1e69e68a53afc6e051ccafd980b94c409202.scope: Consumed 1.058s CPU time.
Oct 10 23:47:09 np0005480824 conmon[268867]: conmon b3c3b8fbd7318c72f331 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b3c3b8fbd7318c72f331f70f0a2e1e69e68a53afc6e051ccafd980b94c409202.scope/container/memory.events
Oct 10 23:47:09 np0005480824 podman[268850]: 2025-10-11 03:47:09.457363433 +0000 UTC m=+1.256303384 container died b3c3b8fbd7318c72f331f70f0a2e1e69e68a53afc6e051ccafd980b94c409202 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_cerf, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 10 23:47:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:47:09 np0005480824 systemd[1]: var-lib-containers-storage-overlay-753baeb67d36741bd71eb299cbc9aceb5d18e429ff3a6d0fa70b76cd7cfd727b-merged.mount: Deactivated successfully.
Oct 10 23:47:09 np0005480824 podman[268850]: 2025-10-11 03:47:09.509308479 +0000 UTC m=+1.308248420 container remove b3c3b8fbd7318c72f331f70f0a2e1e69e68a53afc6e051ccafd980b94c409202 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:47:09 np0005480824 systemd[1]: libpod-conmon-b3c3b8fbd7318c72f331f70f0a2e1e69e68a53afc6e051ccafd980b94c409202.scope: Deactivated successfully.
Oct 10 23:47:10 np0005480824 podman[269052]: 2025-10-11 03:47:10.119788469 +0000 UTC m=+0.049445698 container create 319d93180de4b0eba95d938412fd7f419dac6dff8a10a7dcdfe2e7c7af5c07c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_ride, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 10 23:47:10 np0005480824 systemd[1]: Started libpod-conmon-319d93180de4b0eba95d938412fd7f419dac6dff8a10a7dcdfe2e7c7af5c07c3.scope.
Oct 10 23:47:10 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:47:10 np0005480824 podman[269052]: 2025-10-11 03:47:10.185868797 +0000 UTC m=+0.115525986 container init 319d93180de4b0eba95d938412fd7f419dac6dff8a10a7dcdfe2e7c7af5c07c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_ride, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:47:10 np0005480824 podman[269052]: 2025-10-11 03:47:10.09358646 +0000 UTC m=+0.023243739 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:47:10 np0005480824 podman[269052]: 2025-10-11 03:47:10.191478929 +0000 UTC m=+0.121136118 container start 319d93180de4b0eba95d938412fd7f419dac6dff8a10a7dcdfe2e7c7af5c07c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_ride, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:47:10 np0005480824 podman[269052]: 2025-10-11 03:47:10.194674245 +0000 UTC m=+0.124331424 container attach 319d93180de4b0eba95d938412fd7f419dac6dff8a10a7dcdfe2e7c7af5c07c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:47:10 np0005480824 compassionate_ride[269068]: 167 167
Oct 10 23:47:10 np0005480824 systemd[1]: libpod-319d93180de4b0eba95d938412fd7f419dac6dff8a10a7dcdfe2e7c7af5c07c3.scope: Deactivated successfully.
Oct 10 23:47:10 np0005480824 conmon[269068]: conmon 319d93180de4b0eba95d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-319d93180de4b0eba95d938412fd7f419dac6dff8a10a7dcdfe2e7c7af5c07c3.scope/container/memory.events
Oct 10 23:47:10 np0005480824 podman[269052]: 2025-10-11 03:47:10.196807115 +0000 UTC m=+0.126464294 container died 319d93180de4b0eba95d938412fd7f419dac6dff8a10a7dcdfe2e7c7af5c07c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_ride, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 10 23:47:10 np0005480824 systemd[1]: var-lib-containers-storage-overlay-cd19d1168a17edce5e994da279cf6a2856615fdacc1ebc8a8aff6c02f5b85417-merged.mount: Deactivated successfully.
Oct 10 23:47:10 np0005480824 podman[269052]: 2025-10-11 03:47:10.23813784 +0000 UTC m=+0.167795019 container remove 319d93180de4b0eba95d938412fd7f419dac6dff8a10a7dcdfe2e7c7af5c07c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_ride, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 10 23:47:10 np0005480824 systemd[1]: libpod-conmon-319d93180de4b0eba95d938412fd7f419dac6dff8a10a7dcdfe2e7c7af5c07c3.scope: Deactivated successfully.
Oct 10 23:47:10 np0005480824 podman[269091]: 2025-10-11 03:47:10.400720225 +0000 UTC m=+0.040286972 container create af942bdbc21292d03977b2b1b5ad6178e89dd6b251fcdb899d77a300dc33838c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_poincare, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 10 23:47:10 np0005480824 systemd[1]: Started libpod-conmon-af942bdbc21292d03977b2b1b5ad6178e89dd6b251fcdb899d77a300dc33838c.scope.
Oct 10 23:47:10 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:47:10 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebb9b78a5a2a7071aa47fd2b7e5443b41d8ce479f041034769907aaeb7115e3b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:47:10 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebb9b78a5a2a7071aa47fd2b7e5443b41d8ce479f041034769907aaeb7115e3b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:47:10 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebb9b78a5a2a7071aa47fd2b7e5443b41d8ce479f041034769907aaeb7115e3b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:47:10 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebb9b78a5a2a7071aa47fd2b7e5443b41d8ce479f041034769907aaeb7115e3b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:47:10 np0005480824 podman[269091]: 2025-10-11 03:47:10.475068568 +0000 UTC m=+0.114635305 container init af942bdbc21292d03977b2b1b5ad6178e89dd6b251fcdb899d77a300dc33838c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_poincare, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507)
Oct 10 23:47:10 np0005480824 podman[269091]: 2025-10-11 03:47:10.381572803 +0000 UTC m=+0.021139540 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:47:10 np0005480824 podman[269091]: 2025-10-11 03:47:10.488227589 +0000 UTC m=+0.127794306 container start af942bdbc21292d03977b2b1b5ad6178e89dd6b251fcdb899d77a300dc33838c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_poincare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:47:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:47:10.486 162245 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:47:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:47:10.488 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:47:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:47:10.488 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:47:10 np0005480824 podman[269091]: 2025-10-11 03:47:10.491545557 +0000 UTC m=+0.131112294 container attach af942bdbc21292d03977b2b1b5ad6178e89dd6b251fcdb899d77a300dc33838c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 10 23:47:10 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v981: 321 pgs: 321 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 2.9 KiB/s wr, 66 op/s
Oct 10 23:47:11 np0005480824 festive_poincare[269108]: {
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:    "0": [
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:        {
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:            "devices": [
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:                "/dev/loop3"
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:            ],
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:            "lv_name": "ceph_lv0",
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:            "lv_size": "21470642176",
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0d82ce-20ea-470d-959e-f67202028a60,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:            "lv_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:            "name": "ceph_lv0",
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:            "tags": {
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:                "ceph.block_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:                "ceph.cluster_name": "ceph",
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:                "ceph.crush_device_class": "",
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:                "ceph.encrypted": "0",
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:                "ceph.osd_fsid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:                "ceph.osd_id": "0",
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:                "ceph.type": "block",
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:                "ceph.vdo": "0"
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:            },
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:            "type": "block",
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:            "vg_name": "ceph_vg0"
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:        }
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:    ],
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:    "1": [
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:        {
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:            "devices": [
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:                "/dev/loop4"
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:            ],
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:            "lv_name": "ceph_lv1",
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:            "lv_size": "21470642176",
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6875119e-c210-4ad1-aca9-6a8084a5ecc8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:            "lv_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:            "name": "ceph_lv1",
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:            "tags": {
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:                "ceph.block_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:                "ceph.cluster_name": "ceph",
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:                "ceph.crush_device_class": "",
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:                "ceph.encrypted": "0",
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:                "ceph.osd_fsid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:                "ceph.osd_id": "1",
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:                "ceph.type": "block",
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:                "ceph.vdo": "0"
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:            },
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:            "type": "block",
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:            "vg_name": "ceph_vg1"
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:        }
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:    ],
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:    "2": [
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:        {
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:            "devices": [
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:                "/dev/loop5"
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:            ],
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:            "lv_name": "ceph_lv2",
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:            "lv_size": "21470642176",
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e86945e8-6909-4584-9098-cee0dfe9add4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:            "lv_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:            "name": "ceph_lv2",
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:            "tags": {
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:                "ceph.block_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:                "ceph.cluster_name": "ceph",
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:                "ceph.crush_device_class": "",
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:                "ceph.encrypted": "0",
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:                "ceph.osd_fsid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:                "ceph.osd_id": "2",
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:                "ceph.type": "block",
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:                "ceph.vdo": "0"
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:            },
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:            "type": "block",
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:            "vg_name": "ceph_vg2"
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:        }
Oct 10 23:47:11 np0005480824 festive_poincare[269108]:    ]
Oct 10 23:47:11 np0005480824 festive_poincare[269108]: }
Oct 10 23:47:11 np0005480824 systemd[1]: libpod-af942bdbc21292d03977b2b1b5ad6178e89dd6b251fcdb899d77a300dc33838c.scope: Deactivated successfully.
Oct 10 23:47:11 np0005480824 podman[269091]: 2025-10-11 03:47:11.217178973 +0000 UTC m=+0.856745730 container died af942bdbc21292d03977b2b1b5ad6178e89dd6b251fcdb899d77a300dc33838c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 10 23:47:11 np0005480824 systemd[1]: var-lib-containers-storage-overlay-ebb9b78a5a2a7071aa47fd2b7e5443b41d8ce479f041034769907aaeb7115e3b-merged.mount: Deactivated successfully.
Oct 10 23:47:11 np0005480824 podman[269091]: 2025-10-11 03:47:11.279266898 +0000 UTC m=+0.918833615 container remove af942bdbc21292d03977b2b1b5ad6178e89dd6b251fcdb899d77a300dc33838c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 10 23:47:11 np0005480824 systemd[1]: libpod-conmon-af942bdbc21292d03977b2b1b5ad6178e89dd6b251fcdb899d77a300dc33838c.scope: Deactivated successfully.
Oct 10 23:47:11 np0005480824 podman[269274]: 2025-10-11 03:47:11.934487223 +0000 UTC m=+0.040301292 container create b4270abf6033cf0ea4fb9103dede2eb0c20f3884bf0a031835aac53a65fff197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 10 23:47:11 np0005480824 systemd[1]: Started libpod-conmon-b4270abf6033cf0ea4fb9103dede2eb0c20f3884bf0a031835aac53a65fff197.scope.
Oct 10 23:47:11 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:47:12 np0005480824 podman[269274]: 2025-10-11 03:47:12.002929606 +0000 UTC m=+0.108743695 container init b4270abf6033cf0ea4fb9103dede2eb0c20f3884bf0a031835aac53a65fff197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:47:12 np0005480824 podman[269274]: 2025-10-11 03:47:11.914680935 +0000 UTC m=+0.020495044 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:47:12 np0005480824 podman[269274]: 2025-10-11 03:47:12.012215076 +0000 UTC m=+0.118029155 container start b4270abf6033cf0ea4fb9103dede2eb0c20f3884bf0a031835aac53a65fff197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_brattain, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 10 23:47:12 np0005480824 podman[269274]: 2025-10-11 03:47:12.016554798 +0000 UTC m=+0.122368877 container attach b4270abf6033cf0ea4fb9103dede2eb0c20f3884bf0a031835aac53a65fff197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_brattain, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Oct 10 23:47:12 np0005480824 confident_brattain[269291]: 167 167
Oct 10 23:47:12 np0005480824 systemd[1]: libpod-b4270abf6033cf0ea4fb9103dede2eb0c20f3884bf0a031835aac53a65fff197.scope: Deactivated successfully.
Oct 10 23:47:12 np0005480824 conmon[269291]: conmon b4270abf6033cf0ea4fb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b4270abf6033cf0ea4fb9103dede2eb0c20f3884bf0a031835aac53a65fff197.scope/container/memory.events
Oct 10 23:47:12 np0005480824 podman[269274]: 2025-10-11 03:47:12.021532396 +0000 UTC m=+0.127346455 container died b4270abf6033cf0ea4fb9103dede2eb0c20f3884bf0a031835aac53a65fff197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:47:12 np0005480824 systemd[1]: var-lib-containers-storage-overlay-922789c47e5b0185fd8cf052e5a52d3515c69117ffb9e75bf86d7e88abb213dd-merged.mount: Deactivated successfully.
Oct 10 23:47:12 np0005480824 podman[269274]: 2025-10-11 03:47:12.054896013 +0000 UTC m=+0.160710072 container remove b4270abf6033cf0ea4fb9103dede2eb0c20f3884bf0a031835aac53a65fff197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_brattain, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 10 23:47:12 np0005480824 systemd[1]: libpod-conmon-b4270abf6033cf0ea4fb9103dede2eb0c20f3884bf0a031835aac53a65fff197.scope: Deactivated successfully.
Oct 10 23:47:12 np0005480824 podman[269314]: 2025-10-11 03:47:12.248585361 +0000 UTC m=+0.043777133 container create c9781a160013ed4af6ca90aa64313d4f46705659901bdc8d6d4f9b5f4cdae094 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_hellman, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:47:12 np0005480824 systemd[1]: Started libpod-conmon-c9781a160013ed4af6ca90aa64313d4f46705659901bdc8d6d4f9b5f4cdae094.scope.
Oct 10 23:47:12 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:47:12 np0005480824 podman[269314]: 2025-10-11 03:47:12.22648518 +0000 UTC m=+0.021676972 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:47:12 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43423dda7b736c309313f5c7710c851edb24a56bad834c6f685db92994dd85c8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:47:12 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43423dda7b736c309313f5c7710c851edb24a56bad834c6f685db92994dd85c8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:47:12 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43423dda7b736c309313f5c7710c851edb24a56bad834c6f685db92994dd85c8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:47:12 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43423dda7b736c309313f5c7710c851edb24a56bad834c6f685db92994dd85c8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:47:12 np0005480824 podman[269314]: 2025-10-11 03:47:12.336915125 +0000 UTC m=+0.132106927 container init c9781a160013ed4af6ca90aa64313d4f46705659901bdc8d6d4f9b5f4cdae094 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_hellman, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 10 23:47:12 np0005480824 podman[269314]: 2025-10-11 03:47:12.351662673 +0000 UTC m=+0.146854485 container start c9781a160013ed4af6ca90aa64313d4f46705659901bdc8d6d4f9b5f4cdae094 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_hellman, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 10 23:47:12 np0005480824 podman[269314]: 2025-10-11 03:47:12.357601793 +0000 UTC m=+0.152793565 container attach c9781a160013ed4af6ca90aa64313d4f46705659901bdc8d6d4f9b5f4cdae094 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_hellman, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:47:12 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v982: 321 pgs: 321 active+clean; 41 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s rd, 3.7 KiB/s wr, 81 op/s
Oct 10 23:47:13 np0005480824 happy_hellman[269330]: {
Oct 10 23:47:13 np0005480824 happy_hellman[269330]:    "1d0d82ce-20ea-470d-959e-f67202028a60": {
Oct 10 23:47:13 np0005480824 happy_hellman[269330]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:47:13 np0005480824 happy_hellman[269330]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 10 23:47:13 np0005480824 happy_hellman[269330]:        "osd_id": 0,
Oct 10 23:47:13 np0005480824 happy_hellman[269330]:        "osd_uuid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:47:13 np0005480824 happy_hellman[269330]:        "type": "bluestore"
Oct 10 23:47:13 np0005480824 happy_hellman[269330]:    },
Oct 10 23:47:13 np0005480824 happy_hellman[269330]:    "6875119e-c210-4ad1-aca9-6a8084a5ecc8": {
Oct 10 23:47:13 np0005480824 happy_hellman[269330]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:47:13 np0005480824 happy_hellman[269330]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 10 23:47:13 np0005480824 happy_hellman[269330]:        "osd_id": 1,
Oct 10 23:47:13 np0005480824 happy_hellman[269330]:        "osd_uuid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:47:13 np0005480824 happy_hellman[269330]:        "type": "bluestore"
Oct 10 23:47:13 np0005480824 happy_hellman[269330]:    },
Oct 10 23:47:13 np0005480824 happy_hellman[269330]:    "e86945e8-6909-4584-9098-cee0dfe9add4": {
Oct 10 23:47:13 np0005480824 happy_hellman[269330]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:47:13 np0005480824 happy_hellman[269330]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 10 23:47:13 np0005480824 happy_hellman[269330]:        "osd_id": 2,
Oct 10 23:47:13 np0005480824 happy_hellman[269330]:        "osd_uuid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:47:13 np0005480824 happy_hellman[269330]:        "type": "bluestore"
Oct 10 23:47:13 np0005480824 happy_hellman[269330]:    }
Oct 10 23:47:13 np0005480824 happy_hellman[269330]: }
Oct 10 23:47:13 np0005480824 systemd[1]: libpod-c9781a160013ed4af6ca90aa64313d4f46705659901bdc8d6d4f9b5f4cdae094.scope: Deactivated successfully.
Oct 10 23:47:13 np0005480824 podman[269314]: 2025-10-11 03:47:13.332123499 +0000 UTC m=+1.127315281 container died c9781a160013ed4af6ca90aa64313d4f46705659901bdc8d6d4f9b5f4cdae094 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 10 23:47:13 np0005480824 systemd[1]: var-lib-containers-storage-overlay-43423dda7b736c309313f5c7710c851edb24a56bad834c6f685db92994dd85c8-merged.mount: Deactivated successfully.
Oct 10 23:47:13 np0005480824 nova_compute[260089]: 2025-10-11 03:47:13.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:47:13 np0005480824 podman[269314]: 2025-10-11 03:47:13.398986287 +0000 UTC m=+1.194178069 container remove c9781a160013ed4af6ca90aa64313d4f46705659901bdc8d6d4f9b5f4cdae094 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_hellman, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:47:13 np0005480824 systemd[1]: libpod-conmon-c9781a160013ed4af6ca90aa64313d4f46705659901bdc8d6d4f9b5f4cdae094.scope: Deactivated successfully.
Oct 10 23:47:13 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 10 23:47:13 np0005480824 nova_compute[260089]: 2025-10-11 03:47:13.443 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:47:13 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:47:13 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 10 23:47:13 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:47:13 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev c3e1fd47-9d66-4142-910b-8606f03120cb does not exist
Oct 10 23:47:13 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 276805af-4be1-44a7-b5ba-fe89fafee2df does not exist
Oct 10 23:47:13 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e159 do_prune osdmap full prune enabled
Oct 10 23:47:13 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:47:13 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:47:13 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e160 e160: 3 total, 3 up, 3 in
Oct 10 23:47:13 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e160: 3 total, 3 up, 3 in
Oct 10 23:47:14 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:47:14 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v984: 321 pgs: 321 active+clean; 41 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 4.1 KiB/s wr, 79 op/s
Oct 10 23:47:14 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:47:14 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/192565074' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:47:14 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:47:14 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/192565074' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:47:15 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e160 do_prune osdmap full prune enabled
Oct 10 23:47:15 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e161 e161: 3 total, 3 up, 3 in
Oct 10 23:47:15 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e161: 3 total, 3 up, 3 in
Oct 10 23:47:16 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v986: 321 pgs: 321 active+clean; 41 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.7 KiB/s wr, 26 op/s
Oct 10 23:47:16 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:47:16 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4279179944' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:47:16 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:47:16 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4279179944' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:47:17 np0005480824 podman[269429]: 2025-10-11 03:47:17.035882091 +0000 UTC m=+0.087515025 container health_status 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 10 23:47:17 np0005480824 podman[269430]: 2025-10-11 03:47:17.038537733 +0000 UTC m=+0.094010348 container health_status f2b19cad22d0fbdd185b264c3c8b6443d09a02319dc9fb0585dc69a0f24a758d (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:47:18 np0005480824 nova_compute[260089]: 2025-10-11 03:47:18.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:47:18 np0005480824 nova_compute[260089]: 2025-10-11 03:47:18.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:47:18 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:47:18 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1855495836' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:47:18 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:47:18 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1855495836' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:47:18 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v987: 321 pgs: 321 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 66 KiB/s rd, 4.4 KiB/s wr, 92 op/s
Oct 10 23:47:19 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:47:19 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e161 do_prune osdmap full prune enabled
Oct 10 23:47:19 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e162 e162: 3 total, 3 up, 3 in
Oct 10 23:47:19 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e162: 3 total, 3 up, 3 in
Oct 10 23:47:20 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v989: 321 pgs: 321 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 3.1 KiB/s wr, 77 op/s
Oct 10 23:47:21 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:47:21.370 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:30:f4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'fe:89:7c:57:3f:71'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 23:47:21 np0005480824 nova_compute[260089]: 2025-10-11 03:47:21.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:47:21 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:47:21.372 162245 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 10 23:47:22 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v990: 321 pgs: 321 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 62 KiB/s rd, 3.0 KiB/s wr, 83 op/s
Oct 10 23:47:22 np0005480824 ovn_controller[152667]: 2025-10-11T03:47:22Z|00035|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Oct 10 23:47:23 np0005480824 nova_compute[260089]: 2025-10-11 03:47:23.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:47:23 np0005480824 nova_compute[260089]: 2025-10-11 03:47:23.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:47:24 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:47:24.374 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=14b06507-d00b-4e27-a47d-46a5c2644635, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:47:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:47:24 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v991: 321 pgs: 321 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s rd, 2.7 KiB/s wr, 78 op/s
Oct 10 23:47:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:47:24 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1394465583' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:47:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:47:24 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1394465583' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:47:25 np0005480824 podman[269467]: 2025-10-11 03:47:25.077908133 +0000 UTC m=+0.135075579 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 10 23:47:25 np0005480824 nova_compute[260089]: 2025-10-11 03:47:25.296 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:47:25 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:47:25 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3083931014' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:47:25 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:47:25 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3083931014' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:47:26 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v992: 321 pgs: 321 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 2.4 KiB/s wr, 68 op/s
Oct 10 23:47:27 np0005480824 nova_compute[260089]: 2025-10-11 03:47:27.293 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:47:27 np0005480824 nova_compute[260089]: 2025-10-11 03:47:27.296 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:47:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:47:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:47:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:47:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:47:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:47:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:47:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Optimize plan auto_2025-10-11_03:47:27
Oct 10 23:47:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 23:47:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] do_upmap
Oct 10 23:47:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] pools ['cephfs.cephfs.data', 'vms', 'default.rgw.log', 'volumes', 'cephfs.cephfs.meta', 'backups', 'default.rgw.meta', '.mgr', '.rgw.root', 'images', 'default.rgw.control']
Oct 10 23:47:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] prepared 0/10 changes
Oct 10 23:47:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 23:47:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:47:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 23:47:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:47:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:47:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:47:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:47:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:47:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:47:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:47:28 np0005480824 nova_compute[260089]: 2025-10-11 03:47:28.296 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:47:28 np0005480824 nova_compute[260089]: 2025-10-11 03:47:28.297 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 10 23:47:28 np0005480824 nova_compute[260089]: 2025-10-11 03:47:28.297 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 10 23:47:28 np0005480824 nova_compute[260089]: 2025-10-11 03:47:28.318 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 10 23:47:28 np0005480824 nova_compute[260089]: 2025-10-11 03:47:28.318 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:47:28 np0005480824 nova_compute[260089]: 2025-10-11 03:47:28.318 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:47:28 np0005480824 nova_compute[260089]: 2025-10-11 03:47:28.318 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 10 23:47:28 np0005480824 nova_compute[260089]: 2025-10-11 03:47:28.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:47:28 np0005480824 nova_compute[260089]: 2025-10-11 03:47:28.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:47:28 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v993: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 KiB/s wr, 35 op/s
Oct 10 23:47:28 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:47:28 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3665587072' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:47:28 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:47:28 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3665587072' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:47:29 np0005480824 nova_compute[260089]: 2025-10-11 03:47:29.297 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:47:29 np0005480824 nova_compute[260089]: 2025-10-11 03:47:29.314 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:47:29 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:47:29 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:47:29 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1189206280' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:47:29 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:47:29 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1189206280' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:47:30 np0005480824 nova_compute[260089]: 2025-10-11 03:47:30.296 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:47:30 np0005480824 nova_compute[260089]: 2025-10-11 03:47:30.332 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:47:30 np0005480824 nova_compute[260089]: 2025-10-11 03:47:30.333 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:47:30 np0005480824 nova_compute[260089]: 2025-10-11 03:47:30.333 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:47:30 np0005480824 nova_compute[260089]: 2025-10-11 03:47:30.334 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 10 23:47:30 np0005480824 nova_compute[260089]: 2025-10-11 03:47:30.334 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:47:30 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v994: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.6 KiB/s wr, 32 op/s
Oct 10 23:47:30 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:47:30 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2909613393' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:47:30 np0005480824 nova_compute[260089]: 2025-10-11 03:47:30.805 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:47:30 np0005480824 podman[269516]: 2025-10-11 03:47:30.942574514 +0000 UTC m=+0.088425657 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 10 23:47:31 np0005480824 nova_compute[260089]: 2025-10-11 03:47:31.037 2 WARNING nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 10 23:47:31 np0005480824 nova_compute[260089]: 2025-10-11 03:47:31.039 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4768MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 10 23:47:31 np0005480824 nova_compute[260089]: 2025-10-11 03:47:31.040 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:47:31 np0005480824 nova_compute[260089]: 2025-10-11 03:47:31.040 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:47:31 np0005480824 nova_compute[260089]: 2025-10-11 03:47:31.103 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 10 23:47:31 np0005480824 nova_compute[260089]: 2025-10-11 03:47:31.104 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 10 23:47:31 np0005480824 nova_compute[260089]: 2025-10-11 03:47:31.123 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:47:31 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:47:31 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3194349861' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:47:31 np0005480824 nova_compute[260089]: 2025-10-11 03:47:31.607 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:47:31 np0005480824 nova_compute[260089]: 2025-10-11 03:47:31.617 2 DEBUG nova.compute.provider_tree [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 23:47:31 np0005480824 nova_compute[260089]: 2025-10-11 03:47:31.636 2 DEBUG nova.scheduler.client.report [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 23:47:31 np0005480824 nova_compute[260089]: 2025-10-11 03:47:31.664 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 10 23:47:31 np0005480824 nova_compute[260089]: 2025-10-11 03:47:31.665 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.624s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:47:32 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v995: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 2.2 KiB/s wr, 57 op/s
Oct 10 23:47:33 np0005480824 nova_compute[260089]: 2025-10-11 03:47:33.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:47:33 np0005480824 nova_compute[260089]: 2025-10-11 03:47:33.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:47:33 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:47:33 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4030256640' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:47:33 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:47:33 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4030256640' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:47:33 np0005480824 nova_compute[260089]: 2025-10-11 03:47:33.666 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:47:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:47:34 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v996: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 2.1 KiB/s wr, 45 op/s
Oct 10 23:47:36 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v997: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 2.1 KiB/s wr, 44 op/s
Oct 10 23:47:37 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:47:37 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2582140032' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:47:37 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:47:37 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2582140032' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:47:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 23:47:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:47:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 23:47:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:47:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 23:47:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:47:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct 10 23:47:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:47:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:47:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:47:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 10 23:47:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:47:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 23:47:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:47:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:47:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:47:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 10 23:47:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:47:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 23:47:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:47:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:47:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:47:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 10 23:47:38 np0005480824 nova_compute[260089]: 2025-10-11 03:47:38.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:47:38 np0005480824 nova_compute[260089]: 2025-10-11 03:47:38.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:47:38 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v998: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 2.7 KiB/s wr, 58 op/s
Oct 10 23:47:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:47:40 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v999: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 1.4 KiB/s wr, 42 op/s
Oct 10 23:47:41 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e162 do_prune osdmap full prune enabled
Oct 10 23:47:41 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e163 e163: 3 total, 3 up, 3 in
Oct 10 23:47:41 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e163: 3 total, 3 up, 3 in
Oct 10 23:47:42 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1001: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 2.0 KiB/s wr, 36 op/s
Oct 10 23:47:43 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:47:43 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1313338492' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:47:43 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:47:43 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1313338492' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:47:43 np0005480824 nova_compute[260089]: 2025-10-11 03:47:43.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:47:43 np0005480824 nova_compute[260089]: 2025-10-11 03:47:43.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:47:43 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e163 do_prune osdmap full prune enabled
Oct 10 23:47:43 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e164 e164: 3 total, 3 up, 3 in
Oct 10 23:47:43 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e164: 3 total, 3 up, 3 in
Oct 10 23:47:44 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:47:44 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/54308313' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:47:44 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:47:44 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/54308313' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:47:44 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:47:44 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1003: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 3.2 KiB/s wr, 56 op/s
Oct 10 23:47:44 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:47:44 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1540663219' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:47:44 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:47:44 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1540663219' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:47:45 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:47:45 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1409722833' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:47:45 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:47:45 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1409722833' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:47:46 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1004: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 2.4 KiB/s wr, 34 op/s
Oct 10 23:47:46 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:47:46 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/263291716' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:47:46 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:47:46 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/263291716' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:47:48 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e164 do_prune osdmap full prune enabled
Oct 10 23:47:48 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e165 e165: 3 total, 3 up, 3 in
Oct 10 23:47:48 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e165: 3 total, 3 up, 3 in
Oct 10 23:47:48 np0005480824 podman[269558]: 2025-10-11 03:47:48.048183024 +0000 UTC m=+0.087001784 container health_status f2b19cad22d0fbdd185b264c3c8b6443d09a02319dc9fb0585dc69a0f24a758d (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=iscsid)
Oct 10 23:47:48 np0005480824 podman[269557]: 2025-10-11 03:47:48.086544849 +0000 UTC m=+0.131548784 container health_status 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=multipathd, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 10 23:47:48 np0005480824 nova_compute[260089]: 2025-10-11 03:47:48.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:47:48 np0005480824 nova_compute[260089]: 2025-10-11 03:47:48.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:47:48 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1006: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 104 KiB/s rd, 6.1 KiB/s wr, 143 op/s
Oct 10 23:47:49 np0005480824 ceph-mon[74326]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Oct 10 23:47:49 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:47:49.054438) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 23:47:49 np0005480824 ceph-mon[74326]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Oct 10 23:47:49 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154469054531, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1295, "num_deletes": 256, "total_data_size": 1641575, "memory_usage": 1675704, "flush_reason": "Manual Compaction"}
Oct 10 23:47:49 np0005480824 ceph-mon[74326]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Oct 10 23:47:49 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154469067461, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 1620553, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19918, "largest_seqno": 21212, "table_properties": {"data_size": 1614402, "index_size": 3354, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 14221, "raw_average_key_size": 20, "raw_value_size": 1601611, "raw_average_value_size": 2338, "num_data_blocks": 150, "num_entries": 685, "num_filter_entries": 685, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760154385, "oldest_key_time": 1760154385, "file_creation_time": 1760154469, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc2c00b6-74ab-4bd1-957a-6c6a75fb61ca", "db_session_id": "RJ9TM4FJNNQ2AWDFT4YB", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Oct 10 23:47:49 np0005480824 ceph-mon[74326]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 13063 microseconds, and 8131 cpu microseconds.
Oct 10 23:47:49 np0005480824 ceph-mon[74326]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 23:47:49 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:47:49.067518) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 1620553 bytes OK
Oct 10 23:47:49 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:47:49.067544) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Oct 10 23:47:49 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:47:49.069652) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Oct 10 23:47:49 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:47:49.069673) EVENT_LOG_v1 {"time_micros": 1760154469069667, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 23:47:49 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:47:49.069700) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 23:47:49 np0005480824 ceph-mon[74326]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 1635554, prev total WAL file size 1635554, number of live WAL files 2.
Oct 10 23:47:49 np0005480824 ceph-mon[74326]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 23:47:49 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:47:49.070649) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Oct 10 23:47:49 np0005480824 ceph-mon[74326]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 23:47:49 np0005480824 ceph-mon[74326]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(1582KB)], [47(7397KB)]
Oct 10 23:47:49 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154469071398, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 9195640, "oldest_snapshot_seqno": -1}
Oct 10 23:47:49 np0005480824 ceph-mon[74326]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4462 keys, 7441561 bytes, temperature: kUnknown
Oct 10 23:47:49 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154469125121, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 7441561, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7410301, "index_size": 18990, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11205, "raw_key_size": 110797, "raw_average_key_size": 24, "raw_value_size": 7328115, "raw_average_value_size": 1642, "num_data_blocks": 791, "num_entries": 4462, "num_filter_entries": 4462, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760152715, "oldest_key_time": 0, "file_creation_time": 1760154469, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc2c00b6-74ab-4bd1-957a-6c6a75fb61ca", "db_session_id": "RJ9TM4FJNNQ2AWDFT4YB", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Oct 10 23:47:49 np0005480824 ceph-mon[74326]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 23:47:49 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:47:49.125488) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 7441561 bytes
Oct 10 23:47:49 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:47:49.129816) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 170.9 rd, 138.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 7.2 +0.0 blob) out(7.1 +0.0 blob), read-write-amplify(10.3) write-amplify(4.6) OK, records in: 4986, records dropped: 524 output_compression: NoCompression
Oct 10 23:47:49 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:47:49.129840) EVENT_LOG_v1 {"time_micros": 1760154469129827, "job": 24, "event": "compaction_finished", "compaction_time_micros": 53819, "compaction_time_cpu_micros": 36103, "output_level": 6, "num_output_files": 1, "total_output_size": 7441561, "num_input_records": 4986, "num_output_records": 4462, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 23:47:49 np0005480824 ceph-mon[74326]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 23:47:49 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154469130357, "job": 24, "event": "table_file_deletion", "file_number": 49}
Oct 10 23:47:49 np0005480824 ceph-mon[74326]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 23:47:49 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154469132373, "job": 24, "event": "table_file_deletion", "file_number": 47}
Oct 10 23:47:49 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:47:49.070523) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:47:49 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:47:49.132497) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:47:49 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:47:49.132508) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:47:49 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:47:49.132511) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:47:49 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:47:49.132515) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:47:49 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:47:49.132518) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:47:49 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:47:49 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e165 do_prune osdmap full prune enabled
Oct 10 23:47:49 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e166 e166: 3 total, 3 up, 3 in
Oct 10 23:47:49 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e166: 3 total, 3 up, 3 in
Oct 10 23:47:50 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1008: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 91 KiB/s rd, 4.0 KiB/s wr, 123 op/s
Oct 10 23:47:51 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:47:51 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3719027620' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:47:51 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:47:51 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3719027620' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:47:52 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1009: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 93 KiB/s rd, 5.0 KiB/s wr, 127 op/s
Oct 10 23:47:52 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:47:52 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974413409' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:47:52 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:47:52 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974413409' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:47:53 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:47:53 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3564043064' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:47:53 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:47:53 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3564043064' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:47:53 np0005480824 nova_compute[260089]: 2025-10-11 03:47:53.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:47:53 np0005480824 nova_compute[260089]: 2025-10-11 03:47:53.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:47:54 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:47:54 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1010: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 108 KiB/s rd, 6.5 KiB/s wr, 146 op/s
Oct 10 23:47:54 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:47:54 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2228129631' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:47:54 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:47:54 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2228129631' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:47:56 np0005480824 podman[269597]: 2025-10-11 03:47:56.09683982 +0000 UTC m=+0.139006849 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller)
Oct 10 23:47:56 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1011: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 3.1 KiB/s wr, 42 op/s
Oct 10 23:47:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:47:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:47:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:47:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:47:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:47:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:47:58 np0005480824 nova_compute[260089]: 2025-10-11 03:47:58.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:47:58 np0005480824 nova_compute[260089]: 2025-10-11 03:47:58.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:47:58 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1012: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s rd, 3.6 KiB/s wr, 82 op/s
Oct 10 23:47:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:48:00 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1013: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 3.3 KiB/s wr, 74 op/s
Oct 10 23:48:02 np0005480824 podman[269625]: 2025-10-11 03:48:02.0529422 +0000 UTC m=+0.104727911 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 10 23:48:02 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1014: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.1 KiB/s wr, 75 op/s
Oct 10 23:48:03 np0005480824 nova_compute[260089]: 2025-10-11 03:48:03.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:03 np0005480824 nova_compute[260089]: 2025-10-11 03:48:03.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:04 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:48:04 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1015: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.9 KiB/s wr, 58 op/s
Oct 10 23:48:06 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1016: 321 pgs: 321 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 938 B/s wr, 45 op/s
Oct 10 23:48:08 np0005480824 nova_compute[260089]: 2025-10-11 03:48:08.181 2 DEBUG oslo_concurrency.lockutils [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Acquiring lock "95bebf7e-4285-4364-8951-6f2305250e86" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:48:08 np0005480824 nova_compute[260089]: 2025-10-11 03:48:08.182 2 DEBUG oslo_concurrency.lockutils [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Lock "95bebf7e-4285-4364-8951-6f2305250e86" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:48:08 np0005480824 nova_compute[260089]: 2025-10-11 03:48:08.208 2 DEBUG nova.compute.manager [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 95bebf7e-4285-4364-8951-6f2305250e86] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 10 23:48:08 np0005480824 nova_compute[260089]: 2025-10-11 03:48:08.285 2 DEBUG oslo_concurrency.lockutils [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:48:08 np0005480824 nova_compute[260089]: 2025-10-11 03:48:08.286 2 DEBUG oslo_concurrency.lockutils [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:48:08 np0005480824 nova_compute[260089]: 2025-10-11 03:48:08.297 2 DEBUG nova.virt.hardware [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 10 23:48:08 np0005480824 nova_compute[260089]: 2025-10-11 03:48:08.298 2 INFO nova.compute.claims [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 95bebf7e-4285-4364-8951-6f2305250e86] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 10 23:48:08 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:48:08 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2003711291' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:48:08 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:48:08 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2003711291' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:48:08 np0005480824 nova_compute[260089]: 2025-10-11 03:48:08.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:08 np0005480824 nova_compute[260089]: 2025-10-11 03:48:08.408 2 DEBUG oslo_concurrency.processutils [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:48:08 np0005480824 nova_compute[260089]: 2025-10-11 03:48:08.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:08 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1017: 321 pgs: 321 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 81 op/s
Oct 10 23:48:08 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:48:08 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4086671481' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:48:08 np0005480824 nova_compute[260089]: 2025-10-11 03:48:08.870 2 DEBUG oslo_concurrency.processutils [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:48:08 np0005480824 nova_compute[260089]: 2025-10-11 03:48:08.879 2 DEBUG nova.compute.provider_tree [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 23:48:08 np0005480824 nova_compute[260089]: 2025-10-11 03:48:08.905 2 DEBUG nova.scheduler.client.report [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 23:48:08 np0005480824 nova_compute[260089]: 2025-10-11 03:48:08.937 2 DEBUG oslo_concurrency.lockutils [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:48:08 np0005480824 nova_compute[260089]: 2025-10-11 03:48:08.939 2 DEBUG nova.compute.manager [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 95bebf7e-4285-4364-8951-6f2305250e86] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 10 23:48:08 np0005480824 nova_compute[260089]: 2025-10-11 03:48:08.989 2 DEBUG nova.compute.manager [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 95bebf7e-4285-4364-8951-6f2305250e86] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 10 23:48:08 np0005480824 nova_compute[260089]: 2025-10-11 03:48:08.990 2 DEBUG nova.network.neutron [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 95bebf7e-4285-4364-8951-6f2305250e86] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 10 23:48:09 np0005480824 nova_compute[260089]: 2025-10-11 03:48:09.017 2 INFO nova.virt.libvirt.driver [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 95bebf7e-4285-4364-8951-6f2305250e86] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 10 23:48:09 np0005480824 nova_compute[260089]: 2025-10-11 03:48:09.036 2 DEBUG nova.compute.manager [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 95bebf7e-4285-4364-8951-6f2305250e86] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 10 23:48:09 np0005480824 nova_compute[260089]: 2025-10-11 03:48:09.120 2 DEBUG nova.compute.manager [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 95bebf7e-4285-4364-8951-6f2305250e86] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 10 23:48:09 np0005480824 nova_compute[260089]: 2025-10-11 03:48:09.121 2 DEBUG nova.virt.libvirt.driver [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 95bebf7e-4285-4364-8951-6f2305250e86] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 10 23:48:09 np0005480824 nova_compute[260089]: 2025-10-11 03:48:09.122 2 INFO nova.virt.libvirt.driver [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 95bebf7e-4285-4364-8951-6f2305250e86] Creating image(s)#033[00m
Oct 10 23:48:09 np0005480824 nova_compute[260089]: 2025-10-11 03:48:09.150 2 DEBUG nova.storage.rbd_utils [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] rbd image 95bebf7e-4285-4364-8951-6f2305250e86_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:48:09 np0005480824 nova_compute[260089]: 2025-10-11 03:48:09.201 2 DEBUG nova.storage.rbd_utils [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] rbd image 95bebf7e-4285-4364-8951-6f2305250e86_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:48:09 np0005480824 nova_compute[260089]: 2025-10-11 03:48:09.248 2 DEBUG nova.storage.rbd_utils [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] rbd image 95bebf7e-4285-4364-8951-6f2305250e86_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:48:09 np0005480824 nova_compute[260089]: 2025-10-11 03:48:09.254 2 DEBUG oslo_concurrency.processutils [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cfffd1283a157d100c77a9cb8e3d536b83503a4e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:48:09 np0005480824 nova_compute[260089]: 2025-10-11 03:48:09.294 2 DEBUG nova.policy [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '94437cd815c640b094db68c0d14ae5c0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8004df44ba5045b6b3c7b5376587d790', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 10 23:48:09 np0005480824 nova_compute[260089]: 2025-10-11 03:48:09.350 2 DEBUG oslo_concurrency.processutils [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cfffd1283a157d100c77a9cb8e3d536b83503a4e --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:48:09 np0005480824 nova_compute[260089]: 2025-10-11 03:48:09.352 2 DEBUG oslo_concurrency.lockutils [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Acquiring lock "cfffd1283a157d100c77a9cb8e3d536b83503a4e" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:48:09 np0005480824 nova_compute[260089]: 2025-10-11 03:48:09.353 2 DEBUG oslo_concurrency.lockutils [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Lock "cfffd1283a157d100c77a9cb8e3d536b83503a4e" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:48:09 np0005480824 nova_compute[260089]: 2025-10-11 03:48:09.354 2 DEBUG oslo_concurrency.lockutils [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Lock "cfffd1283a157d100c77a9cb8e3d536b83503a4e" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:48:09 np0005480824 nova_compute[260089]: 2025-10-11 03:48:09.395 2 DEBUG nova.storage.rbd_utils [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] rbd image 95bebf7e-4285-4364-8951-6f2305250e86_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:48:09 np0005480824 nova_compute[260089]: 2025-10-11 03:48:09.404 2 DEBUG oslo_concurrency.processutils [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/cfffd1283a157d100c77a9cb8e3d536b83503a4e 95bebf7e-4285-4364-8951-6f2305250e86_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:48:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:48:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:48:09 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3570982148' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:48:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:48:09 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3570982148' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:48:09 np0005480824 nova_compute[260089]: 2025-10-11 03:48:09.700 2 DEBUG oslo_concurrency.processutils [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/cfffd1283a157d100c77a9cb8e3d536b83503a4e 95bebf7e-4285-4364-8951-6f2305250e86_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.296s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:48:09 np0005480824 nova_compute[260089]: 2025-10-11 03:48:09.771 2 DEBUG nova.storage.rbd_utils [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] resizing rbd image 95bebf7e-4285-4364-8951-6f2305250e86_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 10 23:48:09 np0005480824 nova_compute[260089]: 2025-10-11 03:48:09.876 2 DEBUG nova.objects.instance [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Lazy-loading 'migration_context' on Instance uuid 95bebf7e-4285-4364-8951-6f2305250e86 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:48:09 np0005480824 nova_compute[260089]: 2025-10-11 03:48:09.891 2 DEBUG nova.virt.libvirt.driver [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 95bebf7e-4285-4364-8951-6f2305250e86] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 10 23:48:09 np0005480824 nova_compute[260089]: 2025-10-11 03:48:09.891 2 DEBUG nova.virt.libvirt.driver [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 95bebf7e-4285-4364-8951-6f2305250e86] Ensure instance console log exists: /var/lib/nova/instances/95bebf7e-4285-4364-8951-6f2305250e86/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 10 23:48:09 np0005480824 nova_compute[260089]: 2025-10-11 03:48:09.892 2 DEBUG oslo_concurrency.lockutils [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:48:09 np0005480824 nova_compute[260089]: 2025-10-11 03:48:09.892 2 DEBUG oslo_concurrency.lockutils [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:48:09 np0005480824 nova_compute[260089]: 2025-10-11 03:48:09.892 2 DEBUG oslo_concurrency.lockutils [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:48:10 np0005480824 nova_compute[260089]: 2025-10-11 03:48:10.439 2 DEBUG nova.network.neutron [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 95bebf7e-4285-4364-8951-6f2305250e86] Successfully created port: e93e2cc3-a047-4935-9a80-20828ed03219 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 10 23:48:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:10.488 162245 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:48:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:10.489 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:48:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:10.489 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:48:10 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1018: 321 pgs: 321 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 43 op/s
Oct 10 23:48:12 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1019: 321 pgs: 321 active+clean; 77 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.6 MiB/s wr, 84 op/s
Oct 10 23:48:12 np0005480824 nova_compute[260089]: 2025-10-11 03:48:12.566 2 DEBUG nova.network.neutron [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 95bebf7e-4285-4364-8951-6f2305250e86] Successfully updated port: e93e2cc3-a047-4935-9a80-20828ed03219 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 10 23:48:12 np0005480824 nova_compute[260089]: 2025-10-11 03:48:12.583 2 DEBUG oslo_concurrency.lockutils [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Acquiring lock "refresh_cache-95bebf7e-4285-4364-8951-6f2305250e86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:48:12 np0005480824 nova_compute[260089]: 2025-10-11 03:48:12.583 2 DEBUG oslo_concurrency.lockutils [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Acquired lock "refresh_cache-95bebf7e-4285-4364-8951-6f2305250e86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:48:12 np0005480824 nova_compute[260089]: 2025-10-11 03:48:12.583 2 DEBUG nova.network.neutron [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 95bebf7e-4285-4364-8951-6f2305250e86] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 10 23:48:12 np0005480824 nova_compute[260089]: 2025-10-11 03:48:12.729 2 DEBUG nova.compute.manager [req-54d5a4f4-bae0-425f-b3a9-54746ee48b5e req-077ebbe7-33ac-43e7-b525-3e693fef47ac 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 95bebf7e-4285-4364-8951-6f2305250e86] Received event network-changed-e93e2cc3-a047-4935-9a80-20828ed03219 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:48:12 np0005480824 nova_compute[260089]: 2025-10-11 03:48:12.730 2 DEBUG nova.compute.manager [req-54d5a4f4-bae0-425f-b3a9-54746ee48b5e req-077ebbe7-33ac-43e7-b525-3e693fef47ac 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 95bebf7e-4285-4364-8951-6f2305250e86] Refreshing instance network info cache due to event network-changed-e93e2cc3-a047-4935-9a80-20828ed03219. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 10 23:48:12 np0005480824 nova_compute[260089]: 2025-10-11 03:48:12.731 2 DEBUG oslo_concurrency.lockutils [req-54d5a4f4-bae0-425f-b3a9-54746ee48b5e req-077ebbe7-33ac-43e7-b525-3e693fef47ac 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "refresh_cache-95bebf7e-4285-4364-8951-6f2305250e86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:48:12 np0005480824 nova_compute[260089]: 2025-10-11 03:48:12.831 2 DEBUG nova.network.neutron [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 95bebf7e-4285-4364-8951-6f2305250e86] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 10 23:48:13 np0005480824 nova_compute[260089]: 2025-10-11 03:48:13.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:13 np0005480824 nova_compute[260089]: 2025-10-11 03:48:13.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:14 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:48:14 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1020: 321 pgs: 321 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s rd, 3.5 MiB/s wr, 92 op/s
Oct 10 23:48:14 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:48:14 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:48:14 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 10 23:48:14 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:48:14 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 10 23:48:14 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:48:14 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 182c648d-42f7-41cc-8ae7-255f1641eb74 does not exist
Oct 10 23:48:14 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev df6fd135-35c1-45c0-8f82-be2405e76d04 does not exist
Oct 10 23:48:14 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 4f8bac72-ed9c-444b-8ad7-94ae76131067 does not exist
Oct 10 23:48:14 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 10 23:48:14 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 23:48:14 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 10 23:48:14 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:48:14 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:48:14 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:48:15 np0005480824 nova_compute[260089]: 2025-10-11 03:48:15.343 2 DEBUG nova.network.neutron [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 95bebf7e-4285-4364-8951-6f2305250e86] Updating instance_info_cache with network_info: [{"id": "e93e2cc3-a047-4935-9a80-20828ed03219", "address": "fa:16:3e:9d:c7:eb", "network": {"id": "a3ff9216-2127-4ae5-9ba6-73e4685bcdc7", "bridge": "br-int", "label": "tempest-VolumesActionsTest-418846116-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8004df44ba5045b6b3c7b5376587d790", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape93e2cc3-a0", "ovs_interfaceid": "e93e2cc3-a047-4935-9a80-20828ed03219", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:48:15 np0005480824 nova_compute[260089]: 2025-10-11 03:48:15.365 2 DEBUG oslo_concurrency.lockutils [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Releasing lock "refresh_cache-95bebf7e-4285-4364-8951-6f2305250e86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:48:15 np0005480824 nova_compute[260089]: 2025-10-11 03:48:15.366 2 DEBUG nova.compute.manager [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 95bebf7e-4285-4364-8951-6f2305250e86] Instance network_info: |[{"id": "e93e2cc3-a047-4935-9a80-20828ed03219", "address": "fa:16:3e:9d:c7:eb", "network": {"id": "a3ff9216-2127-4ae5-9ba6-73e4685bcdc7", "bridge": "br-int", "label": "tempest-VolumesActionsTest-418846116-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8004df44ba5045b6b3c7b5376587d790", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape93e2cc3-a0", "ovs_interfaceid": "e93e2cc3-a047-4935-9a80-20828ed03219", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 10 23:48:15 np0005480824 nova_compute[260089]: 2025-10-11 03:48:15.366 2 DEBUG oslo_concurrency.lockutils [req-54d5a4f4-bae0-425f-b3a9-54746ee48b5e req-077ebbe7-33ac-43e7-b525-3e693fef47ac 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquired lock "refresh_cache-95bebf7e-4285-4364-8951-6f2305250e86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:48:15 np0005480824 nova_compute[260089]: 2025-10-11 03:48:15.367 2 DEBUG nova.network.neutron [req-54d5a4f4-bae0-425f-b3a9-54746ee48b5e req-077ebbe7-33ac-43e7-b525-3e693fef47ac 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 95bebf7e-4285-4364-8951-6f2305250e86] Refreshing network info cache for port e93e2cc3-a047-4935-9a80-20828ed03219 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 10 23:48:15 np0005480824 nova_compute[260089]: 2025-10-11 03:48:15.373 2 DEBUG nova.virt.libvirt.driver [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 95bebf7e-4285-4364-8951-6f2305250e86] Start _get_guest_xml network_info=[{"id": "e93e2cc3-a047-4935-9a80-20828ed03219", "address": "fa:16:3e:9d:c7:eb", "network": {"id": "a3ff9216-2127-4ae5-9ba6-73e4685bcdc7", "bridge": "br-int", "label": "tempest-VolumesActionsTest-418846116-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8004df44ba5045b6b3c7b5376587d790", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape93e2cc3-a0", "ovs_interfaceid": "e93e2cc3-a047-4935-9a80-20828ed03219", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T03:44:59Z,direct_url=<?>,disk_format='qcow2',id=7caca022-7dcc-40a9-8bd8-eb7d91b29390,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a9b71164a3274fcfb966194e51cb4849',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T03:45:02Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'device_type': 'disk', 'image_id': '7caca022-7dcc-40a9-8bd8-eb7d91b29390'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 10 23:48:15 np0005480824 nova_compute[260089]: 2025-10-11 03:48:15.380 2 WARNING nova.virt.libvirt.driver [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 10 23:48:15 np0005480824 nova_compute[260089]: 2025-10-11 03:48:15.385 2 DEBUG nova.virt.libvirt.host [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 10 23:48:15 np0005480824 nova_compute[260089]: 2025-10-11 03:48:15.387 2 DEBUG nova.virt.libvirt.host [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 10 23:48:15 np0005480824 nova_compute[260089]: 2025-10-11 03:48:15.392 2 DEBUG nova.virt.libvirt.host [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 10 23:48:15 np0005480824 nova_compute[260089]: 2025-10-11 03:48:15.392 2 DEBUG nova.virt.libvirt.host [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 10 23:48:15 np0005480824 nova_compute[260089]: 2025-10-11 03:48:15.393 2 DEBUG nova.virt.libvirt.driver [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 10 23:48:15 np0005480824 nova_compute[260089]: 2025-10-11 03:48:15.394 2 DEBUG nova.virt.hardware [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T03:44:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6707ecae-2ae2-4c2d-86dc-409bac38f6a5',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T03:44:59Z,direct_url=<?>,disk_format='qcow2',id=7caca022-7dcc-40a9-8bd8-eb7d91b29390,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a9b71164a3274fcfb966194e51cb4849',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T03:45:02Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 10 23:48:15 np0005480824 nova_compute[260089]: 2025-10-11 03:48:15.395 2 DEBUG nova.virt.hardware [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 10 23:48:15 np0005480824 nova_compute[260089]: 2025-10-11 03:48:15.395 2 DEBUG nova.virt.hardware [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 10 23:48:15 np0005480824 nova_compute[260089]: 2025-10-11 03:48:15.396 2 DEBUG nova.virt.hardware [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 10 23:48:15 np0005480824 nova_compute[260089]: 2025-10-11 03:48:15.396 2 DEBUG nova.virt.hardware [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 10 23:48:15 np0005480824 nova_compute[260089]: 2025-10-11 03:48:15.397 2 DEBUG nova.virt.hardware [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 10 23:48:15 np0005480824 nova_compute[260089]: 2025-10-11 03:48:15.397 2 DEBUG nova.virt.hardware [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 10 23:48:15 np0005480824 nova_compute[260089]: 2025-10-11 03:48:15.398 2 DEBUG nova.virt.hardware [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 10 23:48:15 np0005480824 nova_compute[260089]: 2025-10-11 03:48:15.398 2 DEBUG nova.virt.hardware [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 10 23:48:15 np0005480824 nova_compute[260089]: 2025-10-11 03:48:15.399 2 DEBUG nova.virt.hardware [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 10 23:48:15 np0005480824 nova_compute[260089]: 2025-10-11 03:48:15.399 2 DEBUG nova.virt.hardware [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 10 23:48:15 np0005480824 nova_compute[260089]: 2025-10-11 03:48:15.405 2 DEBUG oslo_concurrency.processutils [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:48:15 np0005480824 podman[270113]: 2025-10-11 03:48:15.605936401 +0000 UTC m=+0.067711379 container create 32b7651176185ddb4a0186fcddc0c82ebee20d54d0a24ce64b9ef8aebe1640f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_mahavira, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 10 23:48:15 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:48:15 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:48:15 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:48:15 np0005480824 systemd[1]: Started libpod-conmon-32b7651176185ddb4a0186fcddc0c82ebee20d54d0a24ce64b9ef8aebe1640f9.scope.
Oct 10 23:48:15 np0005480824 podman[270113]: 2025-10-11 03:48:15.574515809 +0000 UTC m=+0.036290797 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:48:15 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:48:15 np0005480824 podman[270113]: 2025-10-11 03:48:15.70173858 +0000 UTC m=+0.163513608 container init 32b7651176185ddb4a0186fcddc0c82ebee20d54d0a24ce64b9ef8aebe1640f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_mahavira, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:48:15 np0005480824 podman[270113]: 2025-10-11 03:48:15.710827455 +0000 UTC m=+0.172602403 container start 32b7651176185ddb4a0186fcddc0c82ebee20d54d0a24ce64b9ef8aebe1640f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_mahavira, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:48:15 np0005480824 podman[270113]: 2025-10-11 03:48:15.714541552 +0000 UTC m=+0.176316580 container attach 32b7651176185ddb4a0186fcddc0c82ebee20d54d0a24ce64b9ef8aebe1640f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_mahavira, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:48:15 np0005480824 infallible_mahavira[270146]: 167 167
Oct 10 23:48:15 np0005480824 systemd[1]: libpod-32b7651176185ddb4a0186fcddc0c82ebee20d54d0a24ce64b9ef8aebe1640f9.scope: Deactivated successfully.
Oct 10 23:48:15 np0005480824 podman[270113]: 2025-10-11 03:48:15.718941896 +0000 UTC m=+0.180716914 container died 32b7651176185ddb4a0186fcddc0c82ebee20d54d0a24ce64b9ef8aebe1640f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 10 23:48:15 np0005480824 systemd[1]: var-lib-containers-storage-overlay-f17291711e42f581cbb3768305747434f508e0c5d7815fd8f748d3947474e083-merged.mount: Deactivated successfully.
Oct 10 23:48:15 np0005480824 podman[270113]: 2025-10-11 03:48:15.778260546 +0000 UTC m=+0.240035524 container remove 32b7651176185ddb4a0186fcddc0c82ebee20d54d0a24ce64b9ef8aebe1640f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_mahavira, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True)
Oct 10 23:48:15 np0005480824 systemd[1]: libpod-conmon-32b7651176185ddb4a0186fcddc0c82ebee20d54d0a24ce64b9ef8aebe1640f9.scope: Deactivated successfully.
Oct 10 23:48:15 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:48:15 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1786925438' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:48:15 np0005480824 nova_compute[260089]: 2025-10-11 03:48:15.877 2 DEBUG oslo_concurrency.processutils [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:48:15 np0005480824 nova_compute[260089]: 2025-10-11 03:48:15.905 2 DEBUG nova.storage.rbd_utils [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] rbd image 95bebf7e-4285-4364-8951-6f2305250e86_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:48:15 np0005480824 nova_compute[260089]: 2025-10-11 03:48:15.911 2 DEBUG oslo_concurrency.processutils [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:48:16 np0005480824 podman[270192]: 2025-10-11 03:48:16.004404899 +0000 UTC m=+0.057539248 container create 02ad5623394b991b16ab671fd91bfa4c9c69c66a5b7f738a1561b9673775c207 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_newton, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Oct 10 23:48:16 np0005480824 podman[270192]: 2025-10-11 03:48:15.9760114 +0000 UTC m=+0.029145849 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:48:16 np0005480824 systemd[1]: Started libpod-conmon-02ad5623394b991b16ab671fd91bfa4c9c69c66a5b7f738a1561b9673775c207.scope.
Oct 10 23:48:16 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:48:16 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eeaddeef257724eec3c8b9a705aba8593d65d17ac517973e2a1d115eaa46e64f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:48:16 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eeaddeef257724eec3c8b9a705aba8593d65d17ac517973e2a1d115eaa46e64f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:48:16 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eeaddeef257724eec3c8b9a705aba8593d65d17ac517973e2a1d115eaa46e64f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:48:16 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eeaddeef257724eec3c8b9a705aba8593d65d17ac517973e2a1d115eaa46e64f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:48:16 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eeaddeef257724eec3c8b9a705aba8593d65d17ac517973e2a1d115eaa46e64f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 23:48:16 np0005480824 podman[270192]: 2025-10-11 03:48:16.136153827 +0000 UTC m=+0.189288266 container init 02ad5623394b991b16ab671fd91bfa4c9c69c66a5b7f738a1561b9673775c207 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 10 23:48:16 np0005480824 podman[270192]: 2025-10-11 03:48:16.14474555 +0000 UTC m=+0.197879909 container start 02ad5623394b991b16ab671fd91bfa4c9c69c66a5b7f738a1561b9673775c207 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_newton, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:48:16 np0005480824 podman[270192]: 2025-10-11 03:48:16.148389996 +0000 UTC m=+0.201524415 container attach 02ad5623394b991b16ab671fd91bfa4c9c69c66a5b7f738a1561b9673775c207 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_newton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 10 23:48:16 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:48:16 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1610818163' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:48:16 np0005480824 nova_compute[260089]: 2025-10-11 03:48:16.368 2 DEBUG oslo_concurrency.processutils [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:48:16 np0005480824 nova_compute[260089]: 2025-10-11 03:48:16.372 2 DEBUG nova.virt.libvirt.vif [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T03:48:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-1927354948',display_name='tempest-VolumesActionsTest-instance-1927354948',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-1927354948',id=2,image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8004df44ba5045b6b3c7b5376587d790',ramdisk_id='',reservation_id='r-g0x57fch',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-1212390094',owner_user_name='tempest-VolumesActionsTest-1212390094-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T03:48:09Z,user_data=None,user_id='94437cd815c640b094db68c0d14ae5c0',uuid=95bebf7e-4285-4364-8951-6f2305250e86,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e93e2cc3-a047-4935-9a80-20828ed03219", "address": "fa:16:3e:9d:c7:eb", "network": {"id": "a3ff9216-2127-4ae5-9ba6-73e4685bcdc7", "bridge": "br-int", "label": "tempest-VolumesActionsTest-418846116-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8004df44ba5045b6b3c7b5376587d790", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape93e2cc3-a0", "ovs_interfaceid": "e93e2cc3-a047-4935-9a80-20828ed03219", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 10 23:48:16 np0005480824 nova_compute[260089]: 2025-10-11 03:48:16.373 2 DEBUG nova.network.os_vif_util [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Converting VIF {"id": "e93e2cc3-a047-4935-9a80-20828ed03219", "address": "fa:16:3e:9d:c7:eb", "network": {"id": "a3ff9216-2127-4ae5-9ba6-73e4685bcdc7", "bridge": "br-int", "label": "tempest-VolumesActionsTest-418846116-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8004df44ba5045b6b3c7b5376587d790", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape93e2cc3-a0", "ovs_interfaceid": "e93e2cc3-a047-4935-9a80-20828ed03219", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 23:48:16 np0005480824 nova_compute[260089]: 2025-10-11 03:48:16.375 2 DEBUG nova.network.os_vif_util [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:c7:eb,bridge_name='br-int',has_traffic_filtering=True,id=e93e2cc3-a047-4935-9a80-20828ed03219,network=Network(a3ff9216-2127-4ae5-9ba6-73e4685bcdc7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape93e2cc3-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 23:48:16 np0005480824 nova_compute[260089]: 2025-10-11 03:48:16.379 2 DEBUG nova.objects.instance [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Lazy-loading 'pci_devices' on Instance uuid 95bebf7e-4285-4364-8951-6f2305250e86 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:48:16 np0005480824 nova_compute[260089]: 2025-10-11 03:48:16.400 2 DEBUG nova.virt.libvirt.driver [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 95bebf7e-4285-4364-8951-6f2305250e86] End _get_guest_xml xml=<domain type="kvm">
Oct 10 23:48:16 np0005480824 nova_compute[260089]:  <uuid>95bebf7e-4285-4364-8951-6f2305250e86</uuid>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:  <name>instance-00000002</name>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:  <memory>131072</memory>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:  <vcpu>1</vcpu>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:  <metadata>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 10 23:48:16 np0005480824 nova_compute[260089]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:      <nova:name>tempest-VolumesActionsTest-instance-1927354948</nova:name>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:      <nova:creationTime>2025-10-11 03:48:15</nova:creationTime>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:      <nova:flavor name="m1.nano">
Oct 10 23:48:16 np0005480824 nova_compute[260089]:        <nova:memory>128</nova:memory>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:        <nova:disk>1</nova:disk>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:        <nova:swap>0</nova:swap>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:        <nova:ephemeral>0</nova:ephemeral>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:        <nova:vcpus>1</nova:vcpus>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:      </nova:flavor>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:      <nova:owner>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:        <nova:user uuid="94437cd815c640b094db68c0d14ae5c0">tempest-VolumesActionsTest-1212390094-project-member</nova:user>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:        <nova:project uuid="8004df44ba5045b6b3c7b5376587d790">tempest-VolumesActionsTest-1212390094</nova:project>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:      </nova:owner>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:      <nova:root type="image" uuid="7caca022-7dcc-40a9-8bd8-eb7d91b29390"/>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:      <nova:ports>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:        <nova:port uuid="e93e2cc3-a047-4935-9a80-20828ed03219">
Oct 10 23:48:16 np0005480824 nova_compute[260089]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:        </nova:port>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:      </nova:ports>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:    </nova:instance>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:  </metadata>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:  <sysinfo type="smbios">
Oct 10 23:48:16 np0005480824 nova_compute[260089]:    <system>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:      <entry name="manufacturer">RDO</entry>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:      <entry name="product">OpenStack Compute</entry>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:      <entry name="serial">95bebf7e-4285-4364-8951-6f2305250e86</entry>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:      <entry name="uuid">95bebf7e-4285-4364-8951-6f2305250e86</entry>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:      <entry name="family">Virtual Machine</entry>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:    </system>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:  </sysinfo>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:  <os>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:    <boot dev="hd"/>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:    <smbios mode="sysinfo"/>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:  </os>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:  <features>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:    <acpi/>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:    <apic/>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:    <vmcoreinfo/>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:  </features>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:  <clock offset="utc">
Oct 10 23:48:16 np0005480824 nova_compute[260089]:    <timer name="pit" tickpolicy="delay"/>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:    <timer name="hpet" present="no"/>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:  </clock>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:  <cpu mode="host-model" match="exact">
Oct 10 23:48:16 np0005480824 nova_compute[260089]:    <topology sockets="1" cores="1" threads="1"/>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:  </cpu>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:  <devices>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:    <disk type="network" device="disk">
Oct 10 23:48:16 np0005480824 nova_compute[260089]:      <driver type="raw" cache="none"/>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:      <source protocol="rbd" name="vms/95bebf7e-4285-4364-8951-6f2305250e86_disk">
Oct 10 23:48:16 np0005480824 nova_compute[260089]:        <host name="192.168.122.100" port="6789"/>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:      </source>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:      <auth username="openstack">
Oct 10 23:48:16 np0005480824 nova_compute[260089]:        <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:      </auth>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:      <target dev="vda" bus="virtio"/>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:    </disk>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:    <disk type="network" device="cdrom">
Oct 10 23:48:16 np0005480824 nova_compute[260089]:      <driver type="raw" cache="none"/>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:      <source protocol="rbd" name="vms/95bebf7e-4285-4364-8951-6f2305250e86_disk.config">
Oct 10 23:48:16 np0005480824 nova_compute[260089]:        <host name="192.168.122.100" port="6789"/>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:      </source>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:      <auth username="openstack">
Oct 10 23:48:16 np0005480824 nova_compute[260089]:        <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:      </auth>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:      <target dev="sda" bus="sata"/>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:    </disk>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:    <interface type="ethernet">
Oct 10 23:48:16 np0005480824 nova_compute[260089]:      <mac address="fa:16:3e:9d:c7:eb"/>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:      <model type="virtio"/>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:      <driver name="vhost" rx_queue_size="512"/>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:      <mtu size="1442"/>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:      <target dev="tape93e2cc3-a0"/>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:    </interface>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:    <serial type="pty">
Oct 10 23:48:16 np0005480824 nova_compute[260089]:      <log file="/var/lib/nova/instances/95bebf7e-4285-4364-8951-6f2305250e86/console.log" append="off"/>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:    </serial>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:    <video>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:      <model type="virtio"/>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:    </video>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:    <input type="tablet" bus="usb"/>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:    <rng model="virtio">
Oct 10 23:48:16 np0005480824 nova_compute[260089]:      <backend model="random">/dev/urandom</backend>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:    </rng>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root"/>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:    <controller type="usb" index="0"/>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:    <memballoon model="virtio">
Oct 10 23:48:16 np0005480824 nova_compute[260089]:      <stats period="10"/>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:    </memballoon>
Oct 10 23:48:16 np0005480824 nova_compute[260089]:  </devices>
Oct 10 23:48:16 np0005480824 nova_compute[260089]: </domain>
Oct 10 23:48:16 np0005480824 nova_compute[260089]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 10 23:48:16 np0005480824 nova_compute[260089]: 2025-10-11 03:48:16.403 2 DEBUG nova.compute.manager [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 95bebf7e-4285-4364-8951-6f2305250e86] Preparing to wait for external event network-vif-plugged-e93e2cc3-a047-4935-9a80-20828ed03219 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 10 23:48:16 np0005480824 nova_compute[260089]: 2025-10-11 03:48:16.403 2 DEBUG oslo_concurrency.lockutils [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Acquiring lock "95bebf7e-4285-4364-8951-6f2305250e86-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:48:16 np0005480824 nova_compute[260089]: 2025-10-11 03:48:16.404 2 DEBUG oslo_concurrency.lockutils [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Lock "95bebf7e-4285-4364-8951-6f2305250e86-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:48:16 np0005480824 nova_compute[260089]: 2025-10-11 03:48:16.404 2 DEBUG oslo_concurrency.lockutils [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Lock "95bebf7e-4285-4364-8951-6f2305250e86-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:48:16 np0005480824 nova_compute[260089]: 2025-10-11 03:48:16.405 2 DEBUG nova.virt.libvirt.vif [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T03:48:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-1927354948',display_name='tempest-VolumesActionsTest-instance-1927354948',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-1927354948',id=2,image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8004df44ba5045b6b3c7b5376587d790',ramdisk_id='',reservation_id='r-g0x57fch',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-1212390094',owner_user_name='tempest-VolumesActionsTest-12
12390094-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T03:48:09Z,user_data=None,user_id='94437cd815c640b094db68c0d14ae5c0',uuid=95bebf7e-4285-4364-8951-6f2305250e86,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e93e2cc3-a047-4935-9a80-20828ed03219", "address": "fa:16:3e:9d:c7:eb", "network": {"id": "a3ff9216-2127-4ae5-9ba6-73e4685bcdc7", "bridge": "br-int", "label": "tempest-VolumesActionsTest-418846116-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8004df44ba5045b6b3c7b5376587d790", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape93e2cc3-a0", "ovs_interfaceid": "e93e2cc3-a047-4935-9a80-20828ed03219", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 10 23:48:16 np0005480824 nova_compute[260089]: 2025-10-11 03:48:16.406 2 DEBUG nova.network.os_vif_util [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Converting VIF {"id": "e93e2cc3-a047-4935-9a80-20828ed03219", "address": "fa:16:3e:9d:c7:eb", "network": {"id": "a3ff9216-2127-4ae5-9ba6-73e4685bcdc7", "bridge": "br-int", "label": "tempest-VolumesActionsTest-418846116-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8004df44ba5045b6b3c7b5376587d790", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape93e2cc3-a0", "ovs_interfaceid": "e93e2cc3-a047-4935-9a80-20828ed03219", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 23:48:16 np0005480824 nova_compute[260089]: 2025-10-11 03:48:16.407 2 DEBUG nova.network.os_vif_util [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:c7:eb,bridge_name='br-int',has_traffic_filtering=True,id=e93e2cc3-a047-4935-9a80-20828ed03219,network=Network(a3ff9216-2127-4ae5-9ba6-73e4685bcdc7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape93e2cc3-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 23:48:16 np0005480824 nova_compute[260089]: 2025-10-11 03:48:16.408 2 DEBUG os_vif [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:c7:eb,bridge_name='br-int',has_traffic_filtering=True,id=e93e2cc3-a047-4935-9a80-20828ed03219,network=Network(a3ff9216-2127-4ae5-9ba6-73e4685bcdc7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape93e2cc3-a0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 10 23:48:16 np0005480824 nova_compute[260089]: 2025-10-11 03:48:16.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:16 np0005480824 nova_compute[260089]: 2025-10-11 03:48:16.410 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:48:16 np0005480824 nova_compute[260089]: 2025-10-11 03:48:16.411 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 10 23:48:16 np0005480824 nova_compute[260089]: 2025-10-11 03:48:16.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:16 np0005480824 nova_compute[260089]: 2025-10-11 03:48:16.417 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape93e2cc3-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:48:16 np0005480824 nova_compute[260089]: 2025-10-11 03:48:16.418 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape93e2cc3-a0, col_values=(('external_ids', {'iface-id': 'e93e2cc3-a047-4935-9a80-20828ed03219', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9d:c7:eb', 'vm-uuid': '95bebf7e-4285-4364-8951-6f2305250e86'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:48:16 np0005480824 nova_compute[260089]: 2025-10-11 03:48:16.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:16 np0005480824 NetworkManager[44969]: <info>  [1760154496.4224] manager: (tape93e2cc3-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Oct 10 23:48:16 np0005480824 nova_compute[260089]: 2025-10-11 03:48:16.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 10 23:48:16 np0005480824 nova_compute[260089]: 2025-10-11 03:48:16.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:16 np0005480824 nova_compute[260089]: 2025-10-11 03:48:16.436 2 INFO os_vif [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:c7:eb,bridge_name='br-int',has_traffic_filtering=True,id=e93e2cc3-a047-4935-9a80-20828ed03219,network=Network(a3ff9216-2127-4ae5-9ba6-73e4685bcdc7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape93e2cc3-a0')#033[00m
Oct 10 23:48:16 np0005480824 nova_compute[260089]: 2025-10-11 03:48:16.491 2 DEBUG nova.virt.libvirt.driver [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 23:48:16 np0005480824 nova_compute[260089]: 2025-10-11 03:48:16.491 2 DEBUG nova.virt.libvirt.driver [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 23:48:16 np0005480824 nova_compute[260089]: 2025-10-11 03:48:16.492 2 DEBUG nova.virt.libvirt.driver [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] No VIF found with MAC fa:16:3e:9d:c7:eb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 10 23:48:16 np0005480824 nova_compute[260089]: 2025-10-11 03:48:16.492 2 INFO nova.virt.libvirt.driver [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 95bebf7e-4285-4364-8951-6f2305250e86] Using config drive#033[00m
Oct 10 23:48:16 np0005480824 nova_compute[260089]: 2025-10-11 03:48:16.520 2 DEBUG nova.storage.rbd_utils [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] rbd image 95bebf7e-4285-4364-8951-6f2305250e86_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:48:16 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1021: 321 pgs: 321 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s rd, 3.5 MiB/s wr, 92 op/s
Oct 10 23:48:16 np0005480824 nova_compute[260089]: 2025-10-11 03:48:16.871 2 INFO nova.virt.libvirt.driver [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 95bebf7e-4285-4364-8951-6f2305250e86] Creating config drive at /var/lib/nova/instances/95bebf7e-4285-4364-8951-6f2305250e86/disk.config#033[00m
Oct 10 23:48:16 np0005480824 nova_compute[260089]: 2025-10-11 03:48:16.878 2 DEBUG oslo_concurrency.processutils [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/95bebf7e-4285-4364-8951-6f2305250e86/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu9tzer8i execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:48:17 np0005480824 nova_compute[260089]: 2025-10-11 03:48:17.012 2 DEBUG oslo_concurrency.processutils [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/95bebf7e-4285-4364-8951-6f2305250e86/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu9tzer8i" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:48:17 np0005480824 nova_compute[260089]: 2025-10-11 03:48:17.038 2 DEBUG nova.storage.rbd_utils [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] rbd image 95bebf7e-4285-4364-8951-6f2305250e86_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:48:17 np0005480824 nova_compute[260089]: 2025-10-11 03:48:17.042 2 DEBUG oslo_concurrency.processutils [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/95bebf7e-4285-4364-8951-6f2305250e86/disk.config 95bebf7e-4285-4364-8951-6f2305250e86_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:48:17 np0005480824 nova_compute[260089]: 2025-10-11 03:48:17.189 2 DEBUG oslo_concurrency.processutils [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/95bebf7e-4285-4364-8951-6f2305250e86/disk.config 95bebf7e-4285-4364-8951-6f2305250e86_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:48:17 np0005480824 nova_compute[260089]: 2025-10-11 03:48:17.190 2 INFO nova.virt.libvirt.driver [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 95bebf7e-4285-4364-8951-6f2305250e86] Deleting local config drive /var/lib/nova/instances/95bebf7e-4285-4364-8951-6f2305250e86/disk.config because it was imported into RBD.#033[00m
Oct 10 23:48:17 np0005480824 kernel: tape93e2cc3-a0: entered promiscuous mode
Oct 10 23:48:17 np0005480824 NetworkManager[44969]: <info>  [1760154497.2469] manager: (tape93e2cc3-a0): new Tun device (/org/freedesktop/NetworkManager/Devices/32)
Oct 10 23:48:17 np0005480824 ovn_controller[152667]: 2025-10-11T03:48:17Z|00036|binding|INFO|Claiming lport e93e2cc3-a047-4935-9a80-20828ed03219 for this chassis.
Oct 10 23:48:17 np0005480824 ovn_controller[152667]: 2025-10-11T03:48:17Z|00037|binding|INFO|e93e2cc3-a047-4935-9a80-20828ed03219: Claiming fa:16:3e:9d:c7:eb 10.100.0.7
Oct 10 23:48:17 np0005480824 nova_compute[260089]: 2025-10-11 03:48:17.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:17 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:17.265 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:c7:eb 10.100.0.7'], port_security=['fa:16:3e:9d:c7:eb 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '95bebf7e-4285-4364-8951-6f2305250e86', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a3ff9216-2127-4ae5-9ba6-73e4685bcdc7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8004df44ba5045b6b3c7b5376587d790', 'neutron:revision_number': '2', 'neutron:security_group_ids': '431a4db7-da14-43f2-a384-76fa67eaa106', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aab757e7-f51f-43c8-93d5-f3007ce26421, chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], logical_port=e93e2cc3-a047-4935-9a80-20828ed03219) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 23:48:17 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:17.266 162245 INFO neutron.agent.ovn.metadata.agent [-] Port e93e2cc3-a047-4935-9a80-20828ed03219 in datapath a3ff9216-2127-4ae5-9ba6-73e4685bcdc7 bound to our chassis#033[00m
Oct 10 23:48:17 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:17.267 162245 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a3ff9216-2127-4ae5-9ba6-73e4685bcdc7#033[00m
Oct 10 23:48:17 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:17.282 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[bfc1e090-3764-4282-872b-b5ce51014242]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:17 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:17.283 162245 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa3ff9216-21 in ovnmeta-a3ff9216-2127-4ae5-9ba6-73e4685bcdc7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 10 23:48:17 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:17.285 267859 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa3ff9216-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 10 23:48:17 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:17.285 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[4e7ba1ec-fb5c-4309-99db-68f3d8a4997d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:17 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:17.286 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[52a8daf7-e558-41a9-a31a-3189ef2b5434]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:17 np0005480824 systemd-machined[215071]: New machine qemu-2-instance-00000002.
Oct 10 23:48:17 np0005480824 festive_newton[270227]: --> passed data devices: 0 physical, 3 LVM
Oct 10 23:48:17 np0005480824 festive_newton[270227]: --> relative data size: 1.0
Oct 10 23:48:17 np0005480824 festive_newton[270227]: --> All data devices are unavailable
Oct 10 23:48:17 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:17.298 162666 DEBUG oslo.privsep.daemon [-] privsep: reply[044936ea-fcdd-4b8b-9712-492ffcb3c987]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:17 np0005480824 systemd-udevd[270333]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 23:48:17 np0005480824 NetworkManager[44969]: <info>  [1760154497.3219] device (tape93e2cc3-a0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 10 23:48:17 np0005480824 NetworkManager[44969]: <info>  [1760154497.3232] device (tape93e2cc3-a0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 10 23:48:17 np0005480824 nova_compute[260089]: 2025-10-11 03:48:17.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:17 np0005480824 systemd[1]: Started Virtual Machine qemu-2-instance-00000002.
Oct 10 23:48:17 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:17.326 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[3af95c4a-0e1e-437d-af7c-5199ad3ff716]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:17 np0005480824 nova_compute[260089]: 2025-10-11 03:48:17.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:17 np0005480824 ovn_controller[152667]: 2025-10-11T03:48:17Z|00038|binding|INFO|Setting lport e93e2cc3-a047-4935-9a80-20828ed03219 ovn-installed in OVS
Oct 10 23:48:17 np0005480824 ovn_controller[152667]: 2025-10-11T03:48:17Z|00039|binding|INFO|Setting lport e93e2cc3-a047-4935-9a80-20828ed03219 up in Southbound
Oct 10 23:48:17 np0005480824 nova_compute[260089]: 2025-10-11 03:48:17.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:17 np0005480824 systemd[1]: libpod-02ad5623394b991b16ab671fd91bfa4c9c69c66a5b7f738a1561b9673775c207.scope: Deactivated successfully.
Oct 10 23:48:17 np0005480824 systemd[1]: libpod-02ad5623394b991b16ab671fd91bfa4c9c69c66a5b7f738a1561b9673775c207.scope: Consumed 1.097s CPU time.
Oct 10 23:48:17 np0005480824 conmon[270227]: conmon 02ad5623394b991b16ab <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-02ad5623394b991b16ab671fd91bfa4c9c69c66a5b7f738a1561b9673775c207.scope/container/memory.events
Oct 10 23:48:17 np0005480824 podman[270192]: 2025-10-11 03:48:17.345574204 +0000 UTC m=+1.398708563 container died 02ad5623394b991b16ab671fd91bfa4c9c69c66a5b7f738a1561b9673775c207 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_newton, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Oct 10 23:48:17 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:17.370 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[84b5b028-47f6-45d1-99cd-1d2569a57986]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:17 np0005480824 systemd[1]: var-lib-containers-storage-overlay-eeaddeef257724eec3c8b9a705aba8593d65d17ac517973e2a1d115eaa46e64f-merged.mount: Deactivated successfully.
Oct 10 23:48:17 np0005480824 NetworkManager[44969]: <info>  [1760154497.3840] manager: (tapa3ff9216-20): new Veth device (/org/freedesktop/NetworkManager/Devices/33)
Oct 10 23:48:17 np0005480824 systemd-udevd[270336]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 23:48:17 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:17.383 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[d109e1fc-d2bc-485c-a89b-0f1c4a7a2478]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:17 np0005480824 podman[270192]: 2025-10-11 03:48:17.412117394 +0000 UTC m=+1.465251743 container remove 02ad5623394b991b16ab671fd91bfa4c9c69c66a5b7f738a1561b9673775c207 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:48:17 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:17.425 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[27018f89-20f3-428c-a2b1-36935580c693]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:17 np0005480824 systemd[1]: libpod-conmon-02ad5623394b991b16ab671fd91bfa4c9c69c66a5b7f738a1561b9673775c207.scope: Deactivated successfully.
Oct 10 23:48:17 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:17.429 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[48aef900-2506-4726-9d70-69456bda009c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:17 np0005480824 NetworkManager[44969]: <info>  [1760154497.4528] device (tapa3ff9216-20): carrier: link connected
Oct 10 23:48:17 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:17.460 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[161e3fdd-214d-4611-8921-083564b79a24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:17 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:17.482 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[27c69025-5907-4136-8f88-1e4df9a42690]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa3ff9216-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f5:b9:8e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 396511, 'reachable_time': 30936, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270380, 'error': None, 'target': 'ovnmeta-a3ff9216-2127-4ae5-9ba6-73e4685bcdc7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:17 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:17.501 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[5442edf8-42bf-4345-b7a8-e592cdb20bcf]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef5:b98e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 396511, 'tstamp': 396511}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 270396, 'error': None, 'target': 'ovnmeta-a3ff9216-2127-4ae5-9ba6-73e4685bcdc7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:17 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:17.521 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[ed92f6d3-d38d-4f61-9790-f4f5c01d14be]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa3ff9216-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f5:b9:8e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 396511, 'reachable_time': 30936, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 270399, 'error': None, 'target': 'ovnmeta-a3ff9216-2127-4ae5-9ba6-73e4685bcdc7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:17 np0005480824 nova_compute[260089]: 2025-10-11 03:48:17.549 2 DEBUG nova.compute.manager [req-c77781bd-a79c-41e0-9f85-fa46f1d9c810 req-d7403cf4-e56f-47d4-83aa-55e19519e034 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 95bebf7e-4285-4364-8951-6f2305250e86] Received event network-vif-plugged-e93e2cc3-a047-4935-9a80-20828ed03219 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:48:17 np0005480824 nova_compute[260089]: 2025-10-11 03:48:17.550 2 DEBUG oslo_concurrency.lockutils [req-c77781bd-a79c-41e0-9f85-fa46f1d9c810 req-d7403cf4-e56f-47d4-83aa-55e19519e034 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "95bebf7e-4285-4364-8951-6f2305250e86-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:48:17 np0005480824 nova_compute[260089]: 2025-10-11 03:48:17.550 2 DEBUG oslo_concurrency.lockutils [req-c77781bd-a79c-41e0-9f85-fa46f1d9c810 req-d7403cf4-e56f-47d4-83aa-55e19519e034 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "95bebf7e-4285-4364-8951-6f2305250e86-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:48:17 np0005480824 nova_compute[260089]: 2025-10-11 03:48:17.550 2 DEBUG oslo_concurrency.lockutils [req-c77781bd-a79c-41e0-9f85-fa46f1d9c810 req-d7403cf4-e56f-47d4-83aa-55e19519e034 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "95bebf7e-4285-4364-8951-6f2305250e86-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:48:17 np0005480824 nova_compute[260089]: 2025-10-11 03:48:17.551 2 DEBUG nova.compute.manager [req-c77781bd-a79c-41e0-9f85-fa46f1d9c810 req-d7403cf4-e56f-47d4-83aa-55e19519e034 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 95bebf7e-4285-4364-8951-6f2305250e86] Processing event network-vif-plugged-e93e2cc3-a047-4935-9a80-20828ed03219 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 10 23:48:17 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:17.562 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[537a6746-3d70-4968-83c5-6f53cf8d79d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:17 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:17.629 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[c040d38f-a219-4f69-ba93-c1649f7af3bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:17 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:17.631 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa3ff9216-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:48:17 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:17.631 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 10 23:48:17 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:17.632 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa3ff9216-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:48:17 np0005480824 nova_compute[260089]: 2025-10-11 03:48:17.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:17 np0005480824 NetworkManager[44969]: <info>  [1760154497.6716] manager: (tapa3ff9216-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/34)
Oct 10 23:48:17 np0005480824 kernel: tapa3ff9216-20: entered promiscuous mode
Oct 10 23:48:17 np0005480824 ovn_controller[152667]: 2025-10-11T03:48:17Z|00040|binding|INFO|Releasing lport 3db48d68-d901-4c48-99b4-f4d00f214317 from this chassis (sb_readonly=0)
Oct 10 23:48:17 np0005480824 nova_compute[260089]: 2025-10-11 03:48:17.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:17 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:17.674 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa3ff9216-20, col_values=(('external_ids', {'iface-id': '3db48d68-d901-4c48-99b4-f4d00f214317'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:48:17 np0005480824 nova_compute[260089]: 2025-10-11 03:48:17.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:17 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:17.701 162245 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a3ff9216-2127-4ae5-9ba6-73e4685bcdc7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a3ff9216-2127-4ae5-9ba6-73e4685bcdc7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 10 23:48:17 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:17.703 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[7b053508-0676-4f67-899b-d9817ddb9ccf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:17 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:17.704 162245 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 10 23:48:17 np0005480824 ovn_metadata_agent[162240]: global
Oct 10 23:48:17 np0005480824 ovn_metadata_agent[162240]:    log         /dev/log local0 debug
Oct 10 23:48:17 np0005480824 ovn_metadata_agent[162240]:    log-tag     haproxy-metadata-proxy-a3ff9216-2127-4ae5-9ba6-73e4685bcdc7
Oct 10 23:48:17 np0005480824 ovn_metadata_agent[162240]:    user        root
Oct 10 23:48:17 np0005480824 ovn_metadata_agent[162240]:    group       root
Oct 10 23:48:17 np0005480824 ovn_metadata_agent[162240]:    maxconn     1024
Oct 10 23:48:17 np0005480824 ovn_metadata_agent[162240]:    pidfile     /var/lib/neutron/external/pids/a3ff9216-2127-4ae5-9ba6-73e4685bcdc7.pid.haproxy
Oct 10 23:48:17 np0005480824 ovn_metadata_agent[162240]:    daemon
Oct 10 23:48:17 np0005480824 ovn_metadata_agent[162240]: 
Oct 10 23:48:17 np0005480824 ovn_metadata_agent[162240]: defaults
Oct 10 23:48:17 np0005480824 ovn_metadata_agent[162240]:    log global
Oct 10 23:48:17 np0005480824 ovn_metadata_agent[162240]:    mode http
Oct 10 23:48:17 np0005480824 ovn_metadata_agent[162240]:    option httplog
Oct 10 23:48:17 np0005480824 ovn_metadata_agent[162240]:    option dontlognull
Oct 10 23:48:17 np0005480824 ovn_metadata_agent[162240]:    option http-server-close
Oct 10 23:48:17 np0005480824 ovn_metadata_agent[162240]:    option forwardfor
Oct 10 23:48:17 np0005480824 ovn_metadata_agent[162240]:    retries                 3
Oct 10 23:48:17 np0005480824 ovn_metadata_agent[162240]:    timeout http-request    30s
Oct 10 23:48:17 np0005480824 ovn_metadata_agent[162240]:    timeout connect         30s
Oct 10 23:48:17 np0005480824 ovn_metadata_agent[162240]:    timeout client          32s
Oct 10 23:48:17 np0005480824 ovn_metadata_agent[162240]:    timeout server          32s
Oct 10 23:48:17 np0005480824 ovn_metadata_agent[162240]:    timeout http-keep-alive 30s
Oct 10 23:48:17 np0005480824 ovn_metadata_agent[162240]: 
Oct 10 23:48:17 np0005480824 ovn_metadata_agent[162240]: 
Oct 10 23:48:17 np0005480824 ovn_metadata_agent[162240]: listen listener
Oct 10 23:48:17 np0005480824 ovn_metadata_agent[162240]:    bind 169.254.169.254:80
Oct 10 23:48:17 np0005480824 ovn_metadata_agent[162240]:    server metadata /var/lib/neutron/metadata_proxy
Oct 10 23:48:17 np0005480824 ovn_metadata_agent[162240]:    http-request add-header X-OVN-Network-ID a3ff9216-2127-4ae5-9ba6-73e4685bcdc7
Oct 10 23:48:17 np0005480824 ovn_metadata_agent[162240]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 10 23:48:17 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:17.704 162245 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a3ff9216-2127-4ae5-9ba6-73e4685bcdc7', 'env', 'PROCESS_TAG=haproxy-a3ff9216-2127-4ae5-9ba6-73e4685bcdc7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a3ff9216-2127-4ae5-9ba6-73e4685bcdc7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 10 23:48:17 np0005480824 nova_compute[260089]: 2025-10-11 03:48:17.849 2 DEBUG nova.network.neutron [req-54d5a4f4-bae0-425f-b3a9-54746ee48b5e req-077ebbe7-33ac-43e7-b525-3e693fef47ac 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 95bebf7e-4285-4364-8951-6f2305250e86] Updated VIF entry in instance network info cache for port e93e2cc3-a047-4935-9a80-20828ed03219. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 10 23:48:17 np0005480824 nova_compute[260089]: 2025-10-11 03:48:17.850 2 DEBUG nova.network.neutron [req-54d5a4f4-bae0-425f-b3a9-54746ee48b5e req-077ebbe7-33ac-43e7-b525-3e693fef47ac 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 95bebf7e-4285-4364-8951-6f2305250e86] Updating instance_info_cache with network_info: [{"id": "e93e2cc3-a047-4935-9a80-20828ed03219", "address": "fa:16:3e:9d:c7:eb", "network": {"id": "a3ff9216-2127-4ae5-9ba6-73e4685bcdc7", "bridge": "br-int", "label": "tempest-VolumesActionsTest-418846116-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8004df44ba5045b6b3c7b5376587d790", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape93e2cc3-a0", "ovs_interfaceid": "e93e2cc3-a047-4935-9a80-20828ed03219", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:48:17 np0005480824 nova_compute[260089]: 2025-10-11 03:48:17.867 2 DEBUG oslo_concurrency.lockutils [req-54d5a4f4-bae0-425f-b3a9-54746ee48b5e req-077ebbe7-33ac-43e7-b525-3e693fef47ac 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Releasing lock "refresh_cache-95bebf7e-4285-4364-8951-6f2305250e86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:48:18 np0005480824 podman[270582]: 2025-10-11 03:48:18.119740005 +0000 UTC m=+0.064395110 container create 96da4644fb6dfe30f9626a8e001d11ae2eb1fabba218365bcde838ebe6e095fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3ff9216-2127-4ae5-9ba6-73e4685bcdc7, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 10 23:48:18 np0005480824 systemd[1]: Started libpod-conmon-96da4644fb6dfe30f9626a8e001d11ae2eb1fabba218365bcde838ebe6e095fd.scope.
Oct 10 23:48:18 np0005480824 nova_compute[260089]: 2025-10-11 03:48:18.160 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760154498.1597857, 95bebf7e-4285-4364-8951-6f2305250e86 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:48:18 np0005480824 nova_compute[260089]: 2025-10-11 03:48:18.161 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 95bebf7e-4285-4364-8951-6f2305250e86] VM Started (Lifecycle Event)#033[00m
Oct 10 23:48:18 np0005480824 nova_compute[260089]: 2025-10-11 03:48:18.166 2 DEBUG nova.compute.manager [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 95bebf7e-4285-4364-8951-6f2305250e86] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 10 23:48:18 np0005480824 nova_compute[260089]: 2025-10-11 03:48:18.183 2 DEBUG nova.virt.libvirt.driver [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 95bebf7e-4285-4364-8951-6f2305250e86] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 10 23:48:18 np0005480824 podman[270602]: 2025-10-11 03:48:18.185785343 +0000 UTC m=+0.068816014 container create 716a5365207fc09d1de9e4413b2aaeedf23eaad98a5b18f839cbd9c5f12f94d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_visvesvaraya, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:48:18 np0005480824 podman[270582]: 2025-10-11 03:48:18.094110261 +0000 UTC m=+0.038765396 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 10 23:48:18 np0005480824 nova_compute[260089]: 2025-10-11 03:48:18.189 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 95bebf7e-4285-4364-8951-6f2305250e86] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:48:18 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:48:18 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af94a27804c5413578cdcedb48dd1ce02f764d46372bac23033d9231bcf2f2a9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 10 23:48:18 np0005480824 nova_compute[260089]: 2025-10-11 03:48:18.207 2 INFO nova.virt.libvirt.driver [-] [instance: 95bebf7e-4285-4364-8951-6f2305250e86] Instance spawned successfully.#033[00m
Oct 10 23:48:18 np0005480824 nova_compute[260089]: 2025-10-11 03:48:18.208 2 DEBUG nova.virt.libvirt.driver [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 95bebf7e-4285-4364-8951-6f2305250e86] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 10 23:48:18 np0005480824 podman[270582]: 2025-10-11 03:48:18.213780583 +0000 UTC m=+0.158435708 container init 96da4644fb6dfe30f9626a8e001d11ae2eb1fabba218365bcde838ebe6e095fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3ff9216-2127-4ae5-9ba6-73e4685bcdc7, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 10 23:48:18 np0005480824 nova_compute[260089]: 2025-10-11 03:48:18.213 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 95bebf7e-4285-4364-8951-6f2305250e86] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 10 23:48:18 np0005480824 podman[270582]: 2025-10-11 03:48:18.221154328 +0000 UTC m=+0.165809433 container start 96da4644fb6dfe30f9626a8e001d11ae2eb1fabba218365bcde838ebe6e095fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3ff9216-2127-4ae5-9ba6-73e4685bcdc7, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 10 23:48:18 np0005480824 systemd[1]: Started libpod-conmon-716a5365207fc09d1de9e4413b2aaeedf23eaad98a5b18f839cbd9c5f12f94d9.scope.
Oct 10 23:48:18 np0005480824 nova_compute[260089]: 2025-10-11 03:48:18.238 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 95bebf7e-4285-4364-8951-6f2305250e86] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 10 23:48:18 np0005480824 nova_compute[260089]: 2025-10-11 03:48:18.238 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760154498.1599844, 95bebf7e-4285-4364-8951-6f2305250e86 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:48:18 np0005480824 nova_compute[260089]: 2025-10-11 03:48:18.238 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 95bebf7e-4285-4364-8951-6f2305250e86] VM Paused (Lifecycle Event)#033[00m
Oct 10 23:48:18 np0005480824 podman[270610]: 2025-10-11 03:48:18.242431149 +0000 UTC m=+0.083338427 container health_status 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 10 23:48:18 np0005480824 nova_compute[260089]: 2025-10-11 03:48:18.244 2 DEBUG nova.virt.libvirt.driver [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 95bebf7e-4285-4364-8951-6f2305250e86] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:48:18 np0005480824 nova_compute[260089]: 2025-10-11 03:48:18.245 2 DEBUG nova.virt.libvirt.driver [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 95bebf7e-4285-4364-8951-6f2305250e86] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:48:18 np0005480824 nova_compute[260089]: 2025-10-11 03:48:18.245 2 DEBUG nova.virt.libvirt.driver [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 95bebf7e-4285-4364-8951-6f2305250e86] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:48:18 np0005480824 nova_compute[260089]: 2025-10-11 03:48:18.245 2 DEBUG nova.virt.libvirt.driver [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 95bebf7e-4285-4364-8951-6f2305250e86] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:48:18 np0005480824 nova_compute[260089]: 2025-10-11 03:48:18.246 2 DEBUG nova.virt.libvirt.driver [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 95bebf7e-4285-4364-8951-6f2305250e86] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:48:18 np0005480824 nova_compute[260089]: 2025-10-11 03:48:18.246 2 DEBUG nova.virt.libvirt.driver [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 95bebf7e-4285-4364-8951-6f2305250e86] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:48:18 np0005480824 neutron-haproxy-ovnmeta-a3ff9216-2127-4ae5-9ba6-73e4685bcdc7[270621]: [NOTICE]   (270653) : New worker (270663) forked
Oct 10 23:48:18 np0005480824 neutron-haproxy-ovnmeta-a3ff9216-2127-4ae5-9ba6-73e4685bcdc7[270621]: [NOTICE]   (270653) : Loading success.
Oct 10 23:48:18 np0005480824 podman[270602]: 2025-10-11 03:48:18.166054327 +0000 UTC m=+0.049085018 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:48:18 np0005480824 podman[270613]: 2025-10-11 03:48:18.269293713 +0000 UTC m=+0.105926800 container health_status f2b19cad22d0fbdd185b264c3c8b6443d09a02319dc9fb0585dc69a0f24a758d (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=iscsid)
Oct 10 23:48:18 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:48:18 np0005480824 nova_compute[260089]: 2025-10-11 03:48:18.276 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 95bebf7e-4285-4364-8951-6f2305250e86] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:48:18 np0005480824 nova_compute[260089]: 2025-10-11 03:48:18.281 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760154498.174238, 95bebf7e-4285-4364-8951-6f2305250e86 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:48:18 np0005480824 nova_compute[260089]: 2025-10-11 03:48:18.281 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 95bebf7e-4285-4364-8951-6f2305250e86] VM Resumed (Lifecycle Event)#033[00m
Oct 10 23:48:18 np0005480824 podman[270602]: 2025-10-11 03:48:18.293877732 +0000 UTC m=+0.176908453 container init 716a5365207fc09d1de9e4413b2aaeedf23eaad98a5b18f839cbd9c5f12f94d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_visvesvaraya, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:48:18 np0005480824 podman[270602]: 2025-10-11 03:48:18.301928132 +0000 UTC m=+0.184958823 container start 716a5365207fc09d1de9e4413b2aaeedf23eaad98a5b18f839cbd9c5f12f94d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 10 23:48:18 np0005480824 podman[270602]: 2025-10-11 03:48:18.305333283 +0000 UTC m=+0.188363954 container attach 716a5365207fc09d1de9e4413b2aaeedf23eaad98a5b18f839cbd9c5f12f94d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_visvesvaraya, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 10 23:48:18 np0005480824 great_visvesvaraya[270661]: 167 167
Oct 10 23:48:18 np0005480824 nova_compute[260089]: 2025-10-11 03:48:18.307 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 95bebf7e-4285-4364-8951-6f2305250e86] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:48:18 np0005480824 systemd[1]: libpod-716a5365207fc09d1de9e4413b2aaeedf23eaad98a5b18f839cbd9c5f12f94d9.scope: Deactivated successfully.
Oct 10 23:48:18 np0005480824 conmon[270661]: conmon 716a5365207fc09d1de9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-716a5365207fc09d1de9e4413b2aaeedf23eaad98a5b18f839cbd9c5f12f94d9.scope/container/memory.events
Oct 10 23:48:18 np0005480824 podman[270602]: 2025-10-11 03:48:18.310342031 +0000 UTC m=+0.193372712 container died 716a5365207fc09d1de9e4413b2aaeedf23eaad98a5b18f839cbd9c5f12f94d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_visvesvaraya, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 10 23:48:18 np0005480824 nova_compute[260089]: 2025-10-11 03:48:18.313 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 95bebf7e-4285-4364-8951-6f2305250e86] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 10 23:48:18 np0005480824 nova_compute[260089]: 2025-10-11 03:48:18.316 2 INFO nova.compute.manager [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 95bebf7e-4285-4364-8951-6f2305250e86] Took 9.20 seconds to spawn the instance on the hypervisor.#033[00m
Oct 10 23:48:18 np0005480824 nova_compute[260089]: 2025-10-11 03:48:18.317 2 DEBUG nova.compute.manager [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 95bebf7e-4285-4364-8951-6f2305250e86] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:48:18 np0005480824 systemd[1]: var-lib-containers-storage-overlay-6eee22378d002f2493f08fd9ef86be7047d47544c5c7409904b8e4f5b5fd4504-merged.mount: Deactivated successfully.
Oct 10 23:48:18 np0005480824 nova_compute[260089]: 2025-10-11 03:48:18.346 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 95bebf7e-4285-4364-8951-6f2305250e86] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 10 23:48:18 np0005480824 podman[270602]: 2025-10-11 03:48:18.359852589 +0000 UTC m=+0.242883270 container remove 716a5365207fc09d1de9e4413b2aaeedf23eaad98a5b18f839cbd9c5f12f94d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_visvesvaraya, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 10 23:48:18 np0005480824 systemd[1]: libpod-conmon-716a5365207fc09d1de9e4413b2aaeedf23eaad98a5b18f839cbd9c5f12f94d9.scope: Deactivated successfully.
Oct 10 23:48:18 np0005480824 nova_compute[260089]: 2025-10-11 03:48:18.374 2 INFO nova.compute.manager [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 95bebf7e-4285-4364-8951-6f2305250e86] Took 10.13 seconds to build instance.#033[00m
Oct 10 23:48:18 np0005480824 nova_compute[260089]: 2025-10-11 03:48:18.390 2 DEBUG oslo_concurrency.lockutils [None req-072d7217-5973-4821-9f73-5177c88ffc81 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Lock "95bebf7e-4285-4364-8951-6f2305250e86" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.208s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:48:18 np0005480824 nova_compute[260089]: 2025-10-11 03:48:18.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:18 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1022: 321 pgs: 321 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s rd, 3.6 MiB/s wr, 93 op/s
Oct 10 23:48:18 np0005480824 podman[270695]: 2025-10-11 03:48:18.572544805 +0000 UTC m=+0.057144458 container create ee8f21815d49799e55afe5f778778b5c287a4eb9ee6304c5ca7bc4f399508dd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_wing, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 10 23:48:18 np0005480824 systemd[1]: Started libpod-conmon-ee8f21815d49799e55afe5f778778b5c287a4eb9ee6304c5ca7bc4f399508dd2.scope.
Oct 10 23:48:18 np0005480824 podman[270695]: 2025-10-11 03:48:18.556267682 +0000 UTC m=+0.040867335 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:48:18 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:48:18 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/724765beac81a95916d969dd679995807fce062eb90c54755c4e79bd3a0df5d4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:48:18 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/724765beac81a95916d969dd679995807fce062eb90c54755c4e79bd3a0df5d4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:48:18 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/724765beac81a95916d969dd679995807fce062eb90c54755c4e79bd3a0df5d4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:48:18 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/724765beac81a95916d969dd679995807fce062eb90c54755c4e79bd3a0df5d4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:48:18 np0005480824 podman[270695]: 2025-10-11 03:48:18.709559277 +0000 UTC m=+0.194158970 container init ee8f21815d49799e55afe5f778778b5c287a4eb9ee6304c5ca7bc4f399508dd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_wing, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:48:18 np0005480824 podman[270695]: 2025-10-11 03:48:18.727248165 +0000 UTC m=+0.211847858 container start ee8f21815d49799e55afe5f778778b5c287a4eb9ee6304c5ca7bc4f399508dd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_wing, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:48:18 np0005480824 podman[270695]: 2025-10-11 03:48:18.733583254 +0000 UTC m=+0.218182947 container attach ee8f21815d49799e55afe5f778778b5c287a4eb9ee6304c5ca7bc4f399508dd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_wing, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:48:19 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]: {
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:    "0": [
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:        {
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:            "devices": [
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:                "/dev/loop3"
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:            ],
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:            "lv_name": "ceph_lv0",
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:            "lv_size": "21470642176",
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0d82ce-20ea-470d-959e-f67202028a60,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:            "lv_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:            "name": "ceph_lv0",
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:            "tags": {
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:                "ceph.block_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:                "ceph.cluster_name": "ceph",
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:                "ceph.crush_device_class": "",
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:                "ceph.encrypted": "0",
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:                "ceph.osd_fsid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:                "ceph.osd_id": "0",
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:                "ceph.type": "block",
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:                "ceph.vdo": "0"
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:            },
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:            "type": "block",
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:            "vg_name": "ceph_vg0"
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:        }
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:    ],
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:    "1": [
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:        {
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:            "devices": [
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:                "/dev/loop4"
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:            ],
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:            "lv_name": "ceph_lv1",
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:            "lv_size": "21470642176",
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6875119e-c210-4ad1-aca9-6a8084a5ecc8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:            "lv_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:            "name": "ceph_lv1",
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:            "tags": {
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:                "ceph.block_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:                "ceph.cluster_name": "ceph",
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:                "ceph.crush_device_class": "",
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:                "ceph.encrypted": "0",
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:                "ceph.osd_fsid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:                "ceph.osd_id": "1",
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:                "ceph.type": "block",
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:                "ceph.vdo": "0"
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:            },
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:            "type": "block",
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:            "vg_name": "ceph_vg1"
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:        }
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:    ],
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:    "2": [
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:        {
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:            "devices": [
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:                "/dev/loop5"
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:            ],
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:            "lv_name": "ceph_lv2",
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:            "lv_size": "21470642176",
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e86945e8-6909-4584-9098-cee0dfe9add4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:            "lv_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:            "name": "ceph_lv2",
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:            "tags": {
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:                "ceph.block_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:                "ceph.cluster_name": "ceph",
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:                "ceph.crush_device_class": "",
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:                "ceph.encrypted": "0",
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:                "ceph.osd_fsid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:                "ceph.osd_id": "2",
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:                "ceph.type": "block",
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:                "ceph.vdo": "0"
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:            },
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:            "type": "block",
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:            "vg_name": "ceph_vg2"
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:        }
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]:    ]
Oct 10 23:48:19 np0005480824 relaxed_wing[270712]: }
Oct 10 23:48:19 np0005480824 systemd[1]: libpod-ee8f21815d49799e55afe5f778778b5c287a4eb9ee6304c5ca7bc4f399508dd2.scope: Deactivated successfully.
Oct 10 23:48:19 np0005480824 podman[270695]: 2025-10-11 03:48:19.590555888 +0000 UTC m=+1.075155561 container died ee8f21815d49799e55afe5f778778b5c287a4eb9ee6304c5ca7bc4f399508dd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_wing, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:48:19 np0005480824 systemd[1]: var-lib-containers-storage-overlay-724765beac81a95916d969dd679995807fce062eb90c54755c4e79bd3a0df5d4-merged.mount: Deactivated successfully.
Oct 10 23:48:19 np0005480824 nova_compute[260089]: 2025-10-11 03:48:19.636 2 DEBUG nova.compute.manager [req-8d12de05-6e16-4102-b7eb-05fa26103263 req-ca1dfbdf-7eb0-44ec-91be-28eef43073e7 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 95bebf7e-4285-4364-8951-6f2305250e86] Received event network-vif-plugged-e93e2cc3-a047-4935-9a80-20828ed03219 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:48:19 np0005480824 nova_compute[260089]: 2025-10-11 03:48:19.637 2 DEBUG oslo_concurrency.lockutils [req-8d12de05-6e16-4102-b7eb-05fa26103263 req-ca1dfbdf-7eb0-44ec-91be-28eef43073e7 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "95bebf7e-4285-4364-8951-6f2305250e86-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:48:19 np0005480824 nova_compute[260089]: 2025-10-11 03:48:19.637 2 DEBUG oslo_concurrency.lockutils [req-8d12de05-6e16-4102-b7eb-05fa26103263 req-ca1dfbdf-7eb0-44ec-91be-28eef43073e7 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "95bebf7e-4285-4364-8951-6f2305250e86-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:48:19 np0005480824 nova_compute[260089]: 2025-10-11 03:48:19.637 2 DEBUG oslo_concurrency.lockutils [req-8d12de05-6e16-4102-b7eb-05fa26103263 req-ca1dfbdf-7eb0-44ec-91be-28eef43073e7 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "95bebf7e-4285-4364-8951-6f2305250e86-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:48:19 np0005480824 nova_compute[260089]: 2025-10-11 03:48:19.637 2 DEBUG nova.compute.manager [req-8d12de05-6e16-4102-b7eb-05fa26103263 req-ca1dfbdf-7eb0-44ec-91be-28eef43073e7 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 95bebf7e-4285-4364-8951-6f2305250e86] No waiting events found dispatching network-vif-plugged-e93e2cc3-a047-4935-9a80-20828ed03219 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 10 23:48:19 np0005480824 nova_compute[260089]: 2025-10-11 03:48:19.637 2 WARNING nova.compute.manager [req-8d12de05-6e16-4102-b7eb-05fa26103263 req-ca1dfbdf-7eb0-44ec-91be-28eef43073e7 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 95bebf7e-4285-4364-8951-6f2305250e86] Received unexpected event network-vif-plugged-e93e2cc3-a047-4935-9a80-20828ed03219 for instance with vm_state active and task_state None.#033[00m
Oct 10 23:48:19 np0005480824 podman[270695]: 2025-10-11 03:48:19.662029604 +0000 UTC m=+1.146629267 container remove ee8f21815d49799e55afe5f778778b5c287a4eb9ee6304c5ca7bc4f399508dd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_wing, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:48:19 np0005480824 systemd[1]: libpod-conmon-ee8f21815d49799e55afe5f778778b5c287a4eb9ee6304c5ca7bc4f399508dd2.scope: Deactivated successfully.
Oct 10 23:48:20 np0005480824 podman[270878]: 2025-10-11 03:48:20.412996137 +0000 UTC m=+0.059814692 container create 8e27e41a4eb9fee708a384c6aa5193ffe4df4d940b4bebfd4b7d2238e25063df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_carson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 10 23:48:20 np0005480824 systemd[1]: Started libpod-conmon-8e27e41a4eb9fee708a384c6aa5193ffe4df4d940b4bebfd4b7d2238e25063df.scope.
Oct 10 23:48:20 np0005480824 podman[270878]: 2025-10-11 03:48:20.384651359 +0000 UTC m=+0.031469994 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:48:20 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:48:20 np0005480824 podman[270878]: 2025-10-11 03:48:20.519395837 +0000 UTC m=+0.166214472 container init 8e27e41a4eb9fee708a384c6aa5193ffe4df4d940b4bebfd4b7d2238e25063df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:48:20 np0005480824 podman[270878]: 2025-10-11 03:48:20.53305536 +0000 UTC m=+0.179873945 container start 8e27e41a4eb9fee708a384c6aa5193ffe4df4d940b4bebfd4b7d2238e25063df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 10 23:48:20 np0005480824 podman[270878]: 2025-10-11 03:48:20.537549275 +0000 UTC m=+0.184367900 container attach 8e27e41a4eb9fee708a384c6aa5193ffe4df4d940b4bebfd4b7d2238e25063df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_carson, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 10 23:48:20 np0005480824 hopeful_carson[270894]: 167 167
Oct 10 23:48:20 np0005480824 systemd[1]: libpod-8e27e41a4eb9fee708a384c6aa5193ffe4df4d940b4bebfd4b7d2238e25063df.scope: Deactivated successfully.
Oct 10 23:48:20 np0005480824 conmon[270894]: conmon 8e27e41a4eb9fee708a3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8e27e41a4eb9fee708a384c6aa5193ffe4df4d940b4bebfd4b7d2238e25063df.scope/container/memory.events
Oct 10 23:48:20 np0005480824 podman[270878]: 2025-10-11 03:48:20.543556247 +0000 UTC m=+0.190374822 container died 8e27e41a4eb9fee708a384c6aa5193ffe4df4d940b4bebfd4b7d2238e25063df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_carson, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:48:20 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1023: 321 pgs: 321 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 1.8 MiB/s wr, 57 op/s
Oct 10 23:48:20 np0005480824 systemd[1]: var-lib-containers-storage-overlay-eac485e88dce6778e439f49e4d243a29f52c7ae3ce03ccffb4ee4322e59c8a5b-merged.mount: Deactivated successfully.
Oct 10 23:48:20 np0005480824 podman[270878]: 2025-10-11 03:48:20.592032911 +0000 UTC m=+0.238851486 container remove 8e27e41a4eb9fee708a384c6aa5193ffe4df4d940b4bebfd4b7d2238e25063df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_carson, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 10 23:48:20 np0005480824 systemd[1]: libpod-conmon-8e27e41a4eb9fee708a384c6aa5193ffe4df4d940b4bebfd4b7d2238e25063df.scope: Deactivated successfully.
Oct 10 23:48:20 np0005480824 nova_compute[260089]: 2025-10-11 03:48:20.634 2 DEBUG oslo_concurrency.lockutils [None req-5726b4b8-3e2a-467d-ae1e-1ce170eb419b 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Acquiring lock "95bebf7e-4285-4364-8951-6f2305250e86" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:48:20 np0005480824 nova_compute[260089]: 2025-10-11 03:48:20.635 2 DEBUG oslo_concurrency.lockutils [None req-5726b4b8-3e2a-467d-ae1e-1ce170eb419b 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Lock "95bebf7e-4285-4364-8951-6f2305250e86" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:48:20 np0005480824 nova_compute[260089]: 2025-10-11 03:48:20.635 2 DEBUG oslo_concurrency.lockutils [None req-5726b4b8-3e2a-467d-ae1e-1ce170eb419b 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Acquiring lock "95bebf7e-4285-4364-8951-6f2305250e86-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:48:20 np0005480824 nova_compute[260089]: 2025-10-11 03:48:20.636 2 DEBUG oslo_concurrency.lockutils [None req-5726b4b8-3e2a-467d-ae1e-1ce170eb419b 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Lock "95bebf7e-4285-4364-8951-6f2305250e86-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:48:20 np0005480824 nova_compute[260089]: 2025-10-11 03:48:20.636 2 DEBUG oslo_concurrency.lockutils [None req-5726b4b8-3e2a-467d-ae1e-1ce170eb419b 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Lock "95bebf7e-4285-4364-8951-6f2305250e86-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:48:20 np0005480824 nova_compute[260089]: 2025-10-11 03:48:20.638 2 INFO nova.compute.manager [None req-5726b4b8-3e2a-467d-ae1e-1ce170eb419b 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 95bebf7e-4285-4364-8951-6f2305250e86] Terminating instance#033[00m
Oct 10 23:48:20 np0005480824 nova_compute[260089]: 2025-10-11 03:48:20.641 2 DEBUG nova.compute.manager [None req-5726b4b8-3e2a-467d-ae1e-1ce170eb419b 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 95bebf7e-4285-4364-8951-6f2305250e86] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 10 23:48:20 np0005480824 kernel: tape93e2cc3-a0 (unregistering): left promiscuous mode
Oct 10 23:48:20 np0005480824 NetworkManager[44969]: <info>  [1760154500.6955] device (tape93e2cc3-a0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 10 23:48:20 np0005480824 ovn_controller[152667]: 2025-10-11T03:48:20Z|00041|binding|INFO|Releasing lport e93e2cc3-a047-4935-9a80-20828ed03219 from this chassis (sb_readonly=0)
Oct 10 23:48:20 np0005480824 ovn_controller[152667]: 2025-10-11T03:48:20Z|00042|binding|INFO|Setting lport e93e2cc3-a047-4935-9a80-20828ed03219 down in Southbound
Oct 10 23:48:20 np0005480824 ovn_controller[152667]: 2025-10-11T03:48:20Z|00043|binding|INFO|Removing iface tape93e2cc3-a0 ovn-installed in OVS
Oct 10 23:48:20 np0005480824 nova_compute[260089]: 2025-10-11 03:48:20.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:20 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:20.766 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:c7:eb 10.100.0.7'], port_security=['fa:16:3e:9d:c7:eb 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '95bebf7e-4285-4364-8951-6f2305250e86', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a3ff9216-2127-4ae5-9ba6-73e4685bcdc7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8004df44ba5045b6b3c7b5376587d790', 'neutron:revision_number': '4', 'neutron:security_group_ids': '431a4db7-da14-43f2-a384-76fa67eaa106', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aab757e7-f51f-43c8-93d5-f3007ce26421, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], logical_port=e93e2cc3-a047-4935-9a80-20828ed03219) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 23:48:20 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:20.767 162245 INFO neutron.agent.ovn.metadata.agent [-] Port e93e2cc3-a047-4935-9a80-20828ed03219 in datapath a3ff9216-2127-4ae5-9ba6-73e4685bcdc7 unbound from our chassis#033[00m
Oct 10 23:48:20 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:20.768 162245 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a3ff9216-2127-4ae5-9ba6-73e4685bcdc7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 10 23:48:20 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:20.769 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[ec3e3d21-53d0-4b4e-adb9-4c8275575e4d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:20 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:20.770 162245 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a3ff9216-2127-4ae5-9ba6-73e4685bcdc7 namespace which is not needed anymore#033[00m
Oct 10 23:48:20 np0005480824 nova_compute[260089]: 2025-10-11 03:48:20.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:20 np0005480824 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Deactivated successfully.
Oct 10 23:48:20 np0005480824 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Consumed 3.214s CPU time.
Oct 10 23:48:20 np0005480824 systemd-machined[215071]: Machine qemu-2-instance-00000002 terminated.
Oct 10 23:48:20 np0005480824 nova_compute[260089]: 2025-10-11 03:48:20.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:20 np0005480824 nova_compute[260089]: 2025-10-11 03:48:20.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:20 np0005480824 nova_compute[260089]: 2025-10-11 03:48:20.884 2 INFO nova.virt.libvirt.driver [-] [instance: 95bebf7e-4285-4364-8951-6f2305250e86] Instance destroyed successfully.#033[00m
Oct 10 23:48:20 np0005480824 nova_compute[260089]: 2025-10-11 03:48:20.885 2 DEBUG nova.objects.instance [None req-5726b4b8-3e2a-467d-ae1e-1ce170eb419b 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Lazy-loading 'resources' on Instance uuid 95bebf7e-4285-4364-8951-6f2305250e86 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:48:20 np0005480824 podman[270927]: 2025-10-11 03:48:20.886848124 +0000 UTC m=+0.055414728 container create d64480ac43f98392f57be9e36c2c97757e38e298e5e6447719b541c1051f50bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_tu, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Oct 10 23:48:20 np0005480824 nova_compute[260089]: 2025-10-11 03:48:20.901 2 DEBUG nova.virt.libvirt.vif [None req-5726b4b8-3e2a-467d-ae1e-1ce170eb419b 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T03:48:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-1927354948',display_name='tempest-VolumesActionsTest-instance-1927354948',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-1927354948',id=2,image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T03:48:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8004df44ba5045b6b3c7b5376587d790',ramdisk_id='',reservation_id='r-g0x57fch',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk=
'1',image_min_ram='0',owner_project_name='tempest-VolumesActionsTest-1212390094',owner_user_name='tempest-VolumesActionsTest-1212390094-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T03:48:18Z,user_data=None,user_id='94437cd815c640b094db68c0d14ae5c0',uuid=95bebf7e-4285-4364-8951-6f2305250e86,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e93e2cc3-a047-4935-9a80-20828ed03219", "address": "fa:16:3e:9d:c7:eb", "network": {"id": "a3ff9216-2127-4ae5-9ba6-73e4685bcdc7", "bridge": "br-int", "label": "tempest-VolumesActionsTest-418846116-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8004df44ba5045b6b3c7b5376587d790", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape93e2cc3-a0", "ovs_interfaceid": "e93e2cc3-a047-4935-9a80-20828ed03219", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 10 23:48:20 np0005480824 nova_compute[260089]: 2025-10-11 03:48:20.902 2 DEBUG nova.network.os_vif_util [None req-5726b4b8-3e2a-467d-ae1e-1ce170eb419b 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Converting VIF {"id": "e93e2cc3-a047-4935-9a80-20828ed03219", "address": "fa:16:3e:9d:c7:eb", "network": {"id": "a3ff9216-2127-4ae5-9ba6-73e4685bcdc7", "bridge": "br-int", "label": "tempest-VolumesActionsTest-418846116-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8004df44ba5045b6b3c7b5376587d790", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape93e2cc3-a0", "ovs_interfaceid": "e93e2cc3-a047-4935-9a80-20828ed03219", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 23:48:20 np0005480824 nova_compute[260089]: 2025-10-11 03:48:20.903 2 DEBUG nova.network.os_vif_util [None req-5726b4b8-3e2a-467d-ae1e-1ce170eb419b 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:c7:eb,bridge_name='br-int',has_traffic_filtering=True,id=e93e2cc3-a047-4935-9a80-20828ed03219,network=Network(a3ff9216-2127-4ae5-9ba6-73e4685bcdc7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape93e2cc3-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 23:48:20 np0005480824 nova_compute[260089]: 2025-10-11 03:48:20.904 2 DEBUG os_vif [None req-5726b4b8-3e2a-467d-ae1e-1ce170eb419b 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:c7:eb,bridge_name='br-int',has_traffic_filtering=True,id=e93e2cc3-a047-4935-9a80-20828ed03219,network=Network(a3ff9216-2127-4ae5-9ba6-73e4685bcdc7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape93e2cc3-a0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 10 23:48:20 np0005480824 nova_compute[260089]: 2025-10-11 03:48:20.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:20 np0005480824 nova_compute[260089]: 2025-10-11 03:48:20.909 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape93e2cc3-a0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:48:20 np0005480824 nova_compute[260089]: 2025-10-11 03:48:20.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 10 23:48:20 np0005480824 nova_compute[260089]: 2025-10-11 03:48:20.918 2 INFO os_vif [None req-5726b4b8-3e2a-467d-ae1e-1ce170eb419b 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:c7:eb,bridge_name='br-int',has_traffic_filtering=True,id=e93e2cc3-a047-4935-9a80-20828ed03219,network=Network(a3ff9216-2127-4ae5-9ba6-73e4685bcdc7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape93e2cc3-a0')#033[00m
Oct 10 23:48:20 np0005480824 systemd[1]: Started libpod-conmon-d64480ac43f98392f57be9e36c2c97757e38e298e5e6447719b541c1051f50bf.scope.
Oct 10 23:48:20 np0005480824 podman[270927]: 2025-10-11 03:48:20.859267184 +0000 UTC m=+0.027833858 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:48:20 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:48:20 np0005480824 neutron-haproxy-ovnmeta-a3ff9216-2127-4ae5-9ba6-73e4685bcdc7[270621]: [NOTICE]   (270653) : haproxy version is 2.8.14-c23fe91
Oct 10 23:48:20 np0005480824 neutron-haproxy-ovnmeta-a3ff9216-2127-4ae5-9ba6-73e4685bcdc7[270621]: [NOTICE]   (270653) : path to executable is /usr/sbin/haproxy
Oct 10 23:48:20 np0005480824 neutron-haproxy-ovnmeta-a3ff9216-2127-4ae5-9ba6-73e4685bcdc7[270621]: [WARNING]  (270653) : Exiting Master process...
Oct 10 23:48:20 np0005480824 neutron-haproxy-ovnmeta-a3ff9216-2127-4ae5-9ba6-73e4685bcdc7[270621]: [WARNING]  (270653) : Exiting Master process...
Oct 10 23:48:20 np0005480824 neutron-haproxy-ovnmeta-a3ff9216-2127-4ae5-9ba6-73e4685bcdc7[270621]: [ALERT]    (270653) : Current worker (270663) exited with code 143 (Terminated)
Oct 10 23:48:20 np0005480824 neutron-haproxy-ovnmeta-a3ff9216-2127-4ae5-9ba6-73e4685bcdc7[270621]: [WARNING]  (270653) : All workers exited. Exiting... (0)
Oct 10 23:48:20 np0005480824 systemd[1]: libpod-96da4644fb6dfe30f9626a8e001d11ae2eb1fabba218365bcde838ebe6e095fd.scope: Deactivated successfully.
Oct 10 23:48:20 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/532cfd39f19b5b85615f820e633c052634066b1addfe9843f5574489102b33c4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:48:20 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/532cfd39f19b5b85615f820e633c052634066b1addfe9843f5574489102b33c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:48:20 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/532cfd39f19b5b85615f820e633c052634066b1addfe9843f5574489102b33c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:48:20 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/532cfd39f19b5b85615f820e633c052634066b1addfe9843f5574489102b33c4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:48:20 np0005480824 podman[270957]: 2025-10-11 03:48:20.984044417 +0000 UTC m=+0.076369633 container died 96da4644fb6dfe30f9626a8e001d11ae2eb1fabba218365bcde838ebe6e095fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3ff9216-2127-4ae5-9ba6-73e4685bcdc7, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 10 23:48:21 np0005480824 podman[270927]: 2025-10-11 03:48:21.000557847 +0000 UTC m=+0.169124451 container init d64480ac43f98392f57be9e36c2c97757e38e298e5e6447719b541c1051f50bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_tu, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:48:21 np0005480824 podman[270927]: 2025-10-11 03:48:21.011684909 +0000 UTC m=+0.180251503 container start d64480ac43f98392f57be9e36c2c97757e38e298e5e6447719b541c1051f50bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_tu, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 10 23:48:21 np0005480824 podman[270927]: 2025-10-11 03:48:21.015655202 +0000 UTC m=+0.184221796 container attach d64480ac43f98392f57be9e36c2c97757e38e298e5e6447719b541c1051f50bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 10 23:48:21 np0005480824 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-96da4644fb6dfe30f9626a8e001d11ae2eb1fabba218365bcde838ebe6e095fd-userdata-shm.mount: Deactivated successfully.
Oct 10 23:48:21 np0005480824 systemd[1]: var-lib-containers-storage-overlay-af94a27804c5413578cdcedb48dd1ce02f764d46372bac23033d9231bcf2f2a9-merged.mount: Deactivated successfully.
Oct 10 23:48:21 np0005480824 podman[270957]: 2025-10-11 03:48:21.034361534 +0000 UTC m=+0.126686750 container cleanup 96da4644fb6dfe30f9626a8e001d11ae2eb1fabba218365bcde838ebe6e095fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3ff9216-2127-4ae5-9ba6-73e4685bcdc7, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:48:21 np0005480824 systemd[1]: libpod-conmon-96da4644fb6dfe30f9626a8e001d11ae2eb1fabba218365bcde838ebe6e095fd.scope: Deactivated successfully.
Oct 10 23:48:21 np0005480824 podman[271021]: 2025-10-11 03:48:21.147942373 +0000 UTC m=+0.072491141 container remove 96da4644fb6dfe30f9626a8e001d11ae2eb1fabba218365bcde838ebe6e095fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3ff9216-2127-4ae5-9ba6-73e4685bcdc7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 10 23:48:21 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:21.155 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[0708d736-555d-4fd1-a86f-5e072f1ca86d]: (4, ('Sat Oct 11 03:48:20 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a3ff9216-2127-4ae5-9ba6-73e4685bcdc7 (96da4644fb6dfe30f9626a8e001d11ae2eb1fabba218365bcde838ebe6e095fd)\n96da4644fb6dfe30f9626a8e001d11ae2eb1fabba218365bcde838ebe6e095fd\nSat Oct 11 03:48:21 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a3ff9216-2127-4ae5-9ba6-73e4685bcdc7 (96da4644fb6dfe30f9626a8e001d11ae2eb1fabba218365bcde838ebe6e095fd)\n96da4644fb6dfe30f9626a8e001d11ae2eb1fabba218365bcde838ebe6e095fd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:21 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:21.159 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[0a992931-86df-4dc2-9545-0cc880b4631c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:21 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:21.160 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa3ff9216-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:48:21 np0005480824 nova_compute[260089]: 2025-10-11 03:48:21.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:21 np0005480824 kernel: tapa3ff9216-20: left promiscuous mode
Oct 10 23:48:21 np0005480824 nova_compute[260089]: 2025-10-11 03:48:21.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:21 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:21.181 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[5e0256b2-fd9c-4af3-bed8-5c3c33d88b06]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:21 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:21.203 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[34bba873-a2c6-445a-950d-15615b8db0c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:21 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:21.205 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[34d4b341-0caf-4fd7-a0dc-e7bcbb5bad81]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:21 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:21.222 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[3efdfdbd-bea6-472d-aaa1-39260a479c73]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 396502, 'reachable_time': 16547, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271035, 'error': None, 'target': 'ovnmeta-a3ff9216-2127-4ae5-9ba6-73e4685bcdc7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:21 np0005480824 systemd[1]: run-netns-ovnmeta\x2da3ff9216\x2d2127\x2d4ae5\x2d9ba6\x2d73e4685bcdc7.mount: Deactivated successfully.
Oct 10 23:48:21 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:21.233 162666 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a3ff9216-2127-4ae5-9ba6-73e4685bcdc7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 10 23:48:21 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:21.233 162666 DEBUG oslo.privsep.daemon [-] privsep: reply[436b8dff-e1e5-4e72-a62e-953ced07c086]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:21 np0005480824 nova_compute[260089]: 2025-10-11 03:48:21.378 2 INFO nova.virt.libvirt.driver [None req-5726b4b8-3e2a-467d-ae1e-1ce170eb419b 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 95bebf7e-4285-4364-8951-6f2305250e86] Deleting instance files /var/lib/nova/instances/95bebf7e-4285-4364-8951-6f2305250e86_del#033[00m
Oct 10 23:48:21 np0005480824 nova_compute[260089]: 2025-10-11 03:48:21.379 2 INFO nova.virt.libvirt.driver [None req-5726b4b8-3e2a-467d-ae1e-1ce170eb419b 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 95bebf7e-4285-4364-8951-6f2305250e86] Deletion of /var/lib/nova/instances/95bebf7e-4285-4364-8951-6f2305250e86_del complete#033[00m
Oct 10 23:48:21 np0005480824 nova_compute[260089]: 2025-10-11 03:48:21.547 2 INFO nova.compute.manager [None req-5726b4b8-3e2a-467d-ae1e-1ce170eb419b 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 95bebf7e-4285-4364-8951-6f2305250e86] Took 0.91 seconds to destroy the instance on the hypervisor.#033[00m
Oct 10 23:48:21 np0005480824 nova_compute[260089]: 2025-10-11 03:48:21.547 2 DEBUG oslo.service.loopingcall [None req-5726b4b8-3e2a-467d-ae1e-1ce170eb419b 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 10 23:48:21 np0005480824 nova_compute[260089]: 2025-10-11 03:48:21.548 2 DEBUG nova.compute.manager [-] [instance: 95bebf7e-4285-4364-8951-6f2305250e86] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 10 23:48:21 np0005480824 nova_compute[260089]: 2025-10-11 03:48:21.549 2 DEBUG nova.network.neutron [-] [instance: 95bebf7e-4285-4364-8951-6f2305250e86] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 10 23:48:22 np0005480824 kind_tu[270987]: {
Oct 10 23:48:22 np0005480824 kind_tu[270987]:    "1d0d82ce-20ea-470d-959e-f67202028a60": {
Oct 10 23:48:22 np0005480824 kind_tu[270987]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:48:22 np0005480824 kind_tu[270987]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 10 23:48:22 np0005480824 kind_tu[270987]:        "osd_id": 0,
Oct 10 23:48:22 np0005480824 kind_tu[270987]:        "osd_uuid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:48:22 np0005480824 kind_tu[270987]:        "type": "bluestore"
Oct 10 23:48:22 np0005480824 kind_tu[270987]:    },
Oct 10 23:48:22 np0005480824 kind_tu[270987]:    "6875119e-c210-4ad1-aca9-6a8084a5ecc8": {
Oct 10 23:48:22 np0005480824 kind_tu[270987]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:48:22 np0005480824 kind_tu[270987]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 10 23:48:22 np0005480824 kind_tu[270987]:        "osd_id": 1,
Oct 10 23:48:22 np0005480824 kind_tu[270987]:        "osd_uuid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:48:22 np0005480824 kind_tu[270987]:        "type": "bluestore"
Oct 10 23:48:22 np0005480824 kind_tu[270987]:    },
Oct 10 23:48:22 np0005480824 kind_tu[270987]:    "e86945e8-6909-4584-9098-cee0dfe9add4": {
Oct 10 23:48:22 np0005480824 kind_tu[270987]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:48:22 np0005480824 kind_tu[270987]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 10 23:48:22 np0005480824 kind_tu[270987]:        "osd_id": 2,
Oct 10 23:48:22 np0005480824 kind_tu[270987]:        "osd_uuid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:48:22 np0005480824 kind_tu[270987]:        "type": "bluestore"
Oct 10 23:48:22 np0005480824 kind_tu[270987]:    }
Oct 10 23:48:22 np0005480824 kind_tu[270987]: }
Oct 10 23:48:22 np0005480824 systemd[1]: libpod-d64480ac43f98392f57be9e36c2c97757e38e298e5e6447719b541c1051f50bf.scope: Deactivated successfully.
Oct 10 23:48:22 np0005480824 systemd[1]: libpod-d64480ac43f98392f57be9e36c2c97757e38e298e5e6447719b541c1051f50bf.scope: Consumed 1.157s CPU time.
Oct 10 23:48:22 np0005480824 conmon[270987]: conmon d64480ac43f98392f57b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d64480ac43f98392f57be9e36c2c97757e38e298e5e6447719b541c1051f50bf.scope/container/memory.events
Oct 10 23:48:22 np0005480824 podman[270927]: 2025-10-11 03:48:22.197165221 +0000 UTC m=+1.365731855 container died d64480ac43f98392f57be9e36c2c97757e38e298e5e6447719b541c1051f50bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_tu, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 10 23:48:22 np0005480824 systemd[1]: var-lib-containers-storage-overlay-532cfd39f19b5b85615f820e633c052634066b1addfe9843f5574489102b33c4-merged.mount: Deactivated successfully.
Oct 10 23:48:22 np0005480824 podman[270927]: 2025-10-11 03:48:22.263376404 +0000 UTC m=+1.431943008 container remove d64480ac43f98392f57be9e36c2c97757e38e298e5e6447719b541c1051f50bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_tu, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:48:22 np0005480824 systemd[1]: libpod-conmon-d64480ac43f98392f57be9e36c2c97757e38e298e5e6447719b541c1051f50bf.scope: Deactivated successfully.
Oct 10 23:48:22 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 10 23:48:22 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:48:22 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 10 23:48:22 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:48:22 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 110a5402-9c69-4176-9e04-60280d038637 does not exist
Oct 10 23:48:22 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 78d3f86b-8ae5-4360-ac4d-6a71ff0273c1 does not exist
Oct 10 23:48:22 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:22.533 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:30:f4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'fe:89:7c:57:3f:71'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 23:48:22 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:22.535 162245 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 10 23:48:22 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1024: 321 pgs: 321 active+clean; 66 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 1001 KiB/s rd, 1.8 MiB/s wr, 121 op/s
Oct 10 23:48:22 np0005480824 nova_compute[260089]: 2025-10-11 03:48:22.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:22 np0005480824 nova_compute[260089]: 2025-10-11 03:48:22.596 2 DEBUG nova.network.neutron [-] [instance: 95bebf7e-4285-4364-8951-6f2305250e86] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:48:22 np0005480824 nova_compute[260089]: 2025-10-11 03:48:22.639 2 INFO nova.compute.manager [-] [instance: 95bebf7e-4285-4364-8951-6f2305250e86] Took 1.09 seconds to deallocate network for instance.#033[00m
Oct 10 23:48:22 np0005480824 nova_compute[260089]: 2025-10-11 03:48:22.676 2 DEBUG nova.compute.manager [req-aa4a6f22-91d9-43ef-8d10-fb598759cd2f req-9195844b-01a9-484f-8f8c-f8c6db05a43b 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 95bebf7e-4285-4364-8951-6f2305250e86] Received event network-vif-deleted-e93e2cc3-a047-4935-9a80-20828ed03219 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:48:22 np0005480824 nova_compute[260089]: 2025-10-11 03:48:22.687 2 DEBUG oslo_concurrency.lockutils [None req-5726b4b8-3e2a-467d-ae1e-1ce170eb419b 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:48:22 np0005480824 nova_compute[260089]: 2025-10-11 03:48:22.688 2 DEBUG oslo_concurrency.lockutils [None req-5726b4b8-3e2a-467d-ae1e-1ce170eb419b 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:48:22 np0005480824 nova_compute[260089]: 2025-10-11 03:48:22.764 2 DEBUG oslo_concurrency.processutils [None req-5726b4b8-3e2a-467d-ae1e-1ce170eb419b 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:48:23 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:48:23 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2330128805' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:48:23 np0005480824 nova_compute[260089]: 2025-10-11 03:48:23.201 2 DEBUG oslo_concurrency.processutils [None req-5726b4b8-3e2a-467d-ae1e-1ce170eb419b 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:48:23 np0005480824 nova_compute[260089]: 2025-10-11 03:48:23.212 2 DEBUG nova.compute.provider_tree [None req-5726b4b8-3e2a-467d-ae1e-1ce170eb419b 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 23:48:23 np0005480824 nova_compute[260089]: 2025-10-11 03:48:23.229 2 DEBUG nova.scheduler.client.report [None req-5726b4b8-3e2a-467d-ae1e-1ce170eb419b 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 23:48:23 np0005480824 nova_compute[260089]: 2025-10-11 03:48:23.257 2 DEBUG oslo_concurrency.lockutils [None req-5726b4b8-3e2a-467d-ae1e-1ce170eb419b 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.569s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:48:23 np0005480824 nova_compute[260089]: 2025-10-11 03:48:23.283 2 INFO nova.scheduler.client.report [None req-5726b4b8-3e2a-467d-ae1e-1ce170eb419b 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Deleted allocations for instance 95bebf7e-4285-4364-8951-6f2305250e86#033[00m
Oct 10 23:48:23 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:48:23 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:48:23 np0005480824 nova_compute[260089]: 2025-10-11 03:48:23.340 2 DEBUG oslo_concurrency.lockutils [None req-5726b4b8-3e2a-467d-ae1e-1ce170eb419b 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Lock "95bebf7e-4285-4364-8951-6f2305250e86" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:48:23 np0005480824 nova_compute[260089]: 2025-10-11 03:48:23.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:48:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:48:24 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/854004832' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:48:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:48:24 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/854004832' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:48:24 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1025: 321 pgs: 321 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1017 KiB/s wr, 115 op/s
Oct 10 23:48:25 np0005480824 nova_compute[260089]: 2025-10-11 03:48:25.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:26 np0005480824 nova_compute[260089]: 2025-10-11 03:48:26.039 2 DEBUG oslo_concurrency.lockutils [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Acquiring lock "1c478ad7-214b-4e9c-be93-5b836a13b5f3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:48:26 np0005480824 nova_compute[260089]: 2025-10-11 03:48:26.040 2 DEBUG oslo_concurrency.lockutils [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Lock "1c478ad7-214b-4e9c-be93-5b836a13b5f3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:48:26 np0005480824 nova_compute[260089]: 2025-10-11 03:48:26.059 2 DEBUG nova.compute.manager [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 10 23:48:26 np0005480824 nova_compute[260089]: 2025-10-11 03:48:26.118 2 DEBUG oslo_concurrency.lockutils [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:48:26 np0005480824 nova_compute[260089]: 2025-10-11 03:48:26.119 2 DEBUG oslo_concurrency.lockutils [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:48:26 np0005480824 nova_compute[260089]: 2025-10-11 03:48:26.126 2 DEBUG nova.virt.hardware [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 10 23:48:26 np0005480824 nova_compute[260089]: 2025-10-11 03:48:26.126 2 INFO nova.compute.claims [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 10 23:48:26 np0005480824 nova_compute[260089]: 2025-10-11 03:48:26.250 2 DEBUG oslo_concurrency.processutils [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:48:26 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1026: 321 pgs: 321 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Oct 10 23:48:26 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e166 do_prune osdmap full prune enabled
Oct 10 23:48:26 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e167 e167: 3 total, 3 up, 3 in
Oct 10 23:48:26 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e167: 3 total, 3 up, 3 in
Oct 10 23:48:26 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:48:26 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3139894113' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:48:26 np0005480824 nova_compute[260089]: 2025-10-11 03:48:26.727 2 DEBUG oslo_concurrency.processutils [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:48:26 np0005480824 nova_compute[260089]: 2025-10-11 03:48:26.736 2 DEBUG nova.compute.provider_tree [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 23:48:27 np0005480824 podman[271174]: 2025-10-11 03:48:27.100030157 +0000 UTC m=+0.148832091 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Oct 10 23:48:27 np0005480824 nova_compute[260089]: 2025-10-11 03:48:27.292 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:48:27 np0005480824 nova_compute[260089]: 2025-10-11 03:48:27.296 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:48:27 np0005480824 nova_compute[260089]: 2025-10-11 03:48:27.296 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:48:27 np0005480824 nova_compute[260089]: 2025-10-11 03:48:27.462 2 DEBUG nova.scheduler.client.report [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 23:48:27 np0005480824 nova_compute[260089]: 2025-10-11 03:48:27.590 2 DEBUG oslo_concurrency.lockutils [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.471s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:48:27 np0005480824 nova_compute[260089]: 2025-10-11 03:48:27.591 2 DEBUG nova.compute.manager [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 10 23:48:27 np0005480824 nova_compute[260089]: 2025-10-11 03:48:27.656 2 DEBUG nova.compute.manager [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 10 23:48:27 np0005480824 nova_compute[260089]: 2025-10-11 03:48:27.657 2 DEBUG nova.network.neutron [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 10 23:48:27 np0005480824 nova_compute[260089]: 2025-10-11 03:48:27.689 2 INFO nova.virt.libvirt.driver [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 10 23:48:27 np0005480824 nova_compute[260089]: 2025-10-11 03:48:27.704 2 DEBUG nova.compute.manager [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 10 23:48:27 np0005480824 nova_compute[260089]: 2025-10-11 03:48:27.824 2 DEBUG nova.compute.manager [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 10 23:48:27 np0005480824 nova_compute[260089]: 2025-10-11 03:48:27.825 2 DEBUG nova.virt.libvirt.driver [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 10 23:48:27 np0005480824 nova_compute[260089]: 2025-10-11 03:48:27.826 2 INFO nova.virt.libvirt.driver [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] Creating image(s)
Oct 10 23:48:27 np0005480824 nova_compute[260089]: 2025-10-11 03:48:27.853 2 DEBUG nova.storage.rbd_utils [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] rbd image 1c478ad7-214b-4e9c-be93-5b836a13b5f3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 23:48:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:48:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:48:27 np0005480824 nova_compute[260089]: 2025-10-11 03:48:27.883 2 DEBUG nova.storage.rbd_utils [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] rbd image 1c478ad7-214b-4e9c-be93-5b836a13b5f3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 23:48:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:48:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:48:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:48:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:48:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Optimize plan auto_2025-10-11_03:48:27
Oct 10 23:48:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 23:48:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] do_upmap
Oct 10 23:48:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] pools ['images', 'default.rgw.log', 'default.rgw.control', 'backups', 'volumes', '.rgw.root', '.mgr', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.meta', 'vms']
Oct 10 23:48:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] prepared 0/10 changes
Oct 10 23:48:27 np0005480824 nova_compute[260089]: 2025-10-11 03:48:27.914 2 DEBUG nova.storage.rbd_utils [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] rbd image 1c478ad7-214b-4e9c-be93-5b836a13b5f3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 23:48:27 np0005480824 nova_compute[260089]: 2025-10-11 03:48:27.918 2 DEBUG oslo_concurrency.processutils [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cfffd1283a157d100c77a9cb8e3d536b83503a4e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 23:48:27 np0005480824 nova_compute[260089]: 2025-10-11 03:48:27.949 2 DEBUG nova.policy [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '94437cd815c640b094db68c0d14ae5c0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8004df44ba5045b6b3c7b5376587d790', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 10 23:48:28 np0005480824 nova_compute[260089]: 2025-10-11 03:48:28.007 2 DEBUG oslo_concurrency.processutils [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cfffd1283a157d100c77a9cb8e3d536b83503a4e --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 23:48:28 np0005480824 nova_compute[260089]: 2025-10-11 03:48:28.008 2 DEBUG oslo_concurrency.lockutils [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Acquiring lock "cfffd1283a157d100c77a9cb8e3d536b83503a4e" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 23:48:28 np0005480824 nova_compute[260089]: 2025-10-11 03:48:28.009 2 DEBUG oslo_concurrency.lockutils [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Lock "cfffd1283a157d100c77a9cb8e3d536b83503a4e" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 23:48:28 np0005480824 nova_compute[260089]: 2025-10-11 03:48:28.009 2 DEBUG oslo_concurrency.lockutils [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Lock "cfffd1283a157d100c77a9cb8e3d536b83503a4e" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 23:48:28 np0005480824 nova_compute[260089]: 2025-10-11 03:48:28.041 2 DEBUG nova.storage.rbd_utils [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] rbd image 1c478ad7-214b-4e9c-be93-5b836a13b5f3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 23:48:28 np0005480824 nova_compute[260089]: 2025-10-11 03:48:28.048 2 DEBUG oslo_concurrency.processutils [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/cfffd1283a157d100c77a9cb8e3d536b83503a4e 1c478ad7-214b-4e9c-be93-5b836a13b5f3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 23:48:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 23:48:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:48:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 23:48:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:48:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:48:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:48:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:48:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:48:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:48:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:48:28 np0005480824 nova_compute[260089]: 2025-10-11 03:48:28.296 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 23:48:28 np0005480824 nova_compute[260089]: 2025-10-11 03:48:28.297 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 10 23:48:28 np0005480824 nova_compute[260089]: 2025-10-11 03:48:28.358 2 DEBUG oslo_concurrency.processutils [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/cfffd1283a157d100c77a9cb8e3d536b83503a4e 1c478ad7-214b-4e9c-be93-5b836a13b5f3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.311s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 23:48:28 np0005480824 nova_compute[260089]: 2025-10-11 03:48:28.416 2 DEBUG nova.storage.rbd_utils [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] resizing rbd image 1c478ad7-214b-4e9c-be93-5b836a13b5f3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 10 23:48:28 np0005480824 nova_compute[260089]: 2025-10-11 03:48:28.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 23:48:28 np0005480824 nova_compute[260089]: 2025-10-11 03:48:28.515 2 DEBUG oslo_concurrency.lockutils [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] Acquiring lock "aefb31cf-337d-446e-a617-c82e2e9b4809" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 23:48:28 np0005480824 nova_compute[260089]: 2025-10-11 03:48:28.515 2 DEBUG oslo_concurrency.lockutils [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] Lock "aefb31cf-337d-446e-a617-c82e2e9b4809" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 23:48:28 np0005480824 nova_compute[260089]: 2025-10-11 03:48:28.521 2 DEBUG nova.objects.instance [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Lazy-loading 'migration_context' on Instance uuid 1c478ad7-214b-4e9c-be93-5b836a13b5f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 10 23:48:28 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:28.538 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=14b06507-d00b-4e27-a47d-46a5c2644635, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 23:48:28 np0005480824 nova_compute[260089]: 2025-10-11 03:48:28.543 2 DEBUG nova.compute.manager [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 10 23:48:28 np0005480824 nova_compute[260089]: 2025-10-11 03:48:28.548 2 DEBUG nova.virt.libvirt.driver [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 10 23:48:28 np0005480824 nova_compute[260089]: 2025-10-11 03:48:28.548 2 DEBUG nova.virt.libvirt.driver [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] Ensure instance console log exists: /var/lib/nova/instances/1c478ad7-214b-4e9c-be93-5b836a13b5f3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 10 23:48:28 np0005480824 nova_compute[260089]: 2025-10-11 03:48:28.549 2 DEBUG oslo_concurrency.lockutils [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 23:48:28 np0005480824 nova_compute[260089]: 2025-10-11 03:48:28.549 2 DEBUG oslo_concurrency.lockutils [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 23:48:28 np0005480824 nova_compute[260089]: 2025-10-11 03:48:28.549 2 DEBUG oslo_concurrency.lockutils [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 23:48:28 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1028: 321 pgs: 321 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.7 KiB/s wr, 128 op/s
Oct 10 23:48:28 np0005480824 nova_compute[260089]: 2025-10-11 03:48:28.594 2 DEBUG nova.network.neutron [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] Successfully created port: cb54de0c-b523-42e1-a4c5-0e3f477d8960 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 10 23:48:28 np0005480824 nova_compute[260089]: 2025-10-11 03:48:28.610 2 DEBUG oslo_concurrency.lockutils [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 23:48:28 np0005480824 nova_compute[260089]: 2025-10-11 03:48:28.610 2 DEBUG oslo_concurrency.lockutils [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 23:48:28 np0005480824 nova_compute[260089]: 2025-10-11 03:48:28.619 2 DEBUG nova.virt.hardware [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 10 23:48:28 np0005480824 nova_compute[260089]: 2025-10-11 03:48:28.619 2 INFO nova.compute.claims [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] Claim successful on node compute-0.ctlplane.example.com
Oct 10 23:48:28 np0005480824 nova_compute[260089]: 2025-10-11 03:48:28.751 2 DEBUG oslo_concurrency.processutils [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 23:48:29 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:48:29 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/696992725' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:48:29 np0005480824 nova_compute[260089]: 2025-10-11 03:48:29.193 2 DEBUG oslo_concurrency.processutils [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 23:48:29 np0005480824 nova_compute[260089]: 2025-10-11 03:48:29.203 2 DEBUG nova.compute.provider_tree [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 23:48:29 np0005480824 nova_compute[260089]: 2025-10-11 03:48:29.228 2 DEBUG nova.scheduler.client.report [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 23:48:29 np0005480824 nova_compute[260089]: 2025-10-11 03:48:29.254 2 DEBUG oslo_concurrency.lockutils [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 23:48:29 np0005480824 nova_compute[260089]: 2025-10-11 03:48:29.255 2 DEBUG nova.compute.manager [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 10 23:48:29 np0005480824 nova_compute[260089]: 2025-10-11 03:48:29.298 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 23:48:29 np0005480824 nova_compute[260089]: 2025-10-11 03:48:29.299 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 10 23:48:29 np0005480824 nova_compute[260089]: 2025-10-11 03:48:29.299 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 10 23:48:29 np0005480824 nova_compute[260089]: 2025-10-11 03:48:29.311 2 DEBUG nova.compute.manager [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 10 23:48:29 np0005480824 nova_compute[260089]: 2025-10-11 03:48:29.311 2 DEBUG nova.network.neutron [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 10 23:48:29 np0005480824 nova_compute[260089]: 2025-10-11 03:48:29.322 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 10 23:48:29 np0005480824 nova_compute[260089]: 2025-10-11 03:48:29.322 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 10 23:48:29 np0005480824 nova_compute[260089]: 2025-10-11 03:48:29.323 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 10 23:48:29 np0005480824 nova_compute[260089]: 2025-10-11 03:48:29.325 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 23:48:29 np0005480824 nova_compute[260089]: 2025-10-11 03:48:29.331 2 INFO nova.virt.libvirt.driver [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 10 23:48:29 np0005480824 nova_compute[260089]: 2025-10-11 03:48:29.348 2 DEBUG nova.compute.manager [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 10 23:48:29 np0005480824 nova_compute[260089]: 2025-10-11 03:48:29.456 2 DEBUG nova.compute.manager [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 10 23:48:29 np0005480824 nova_compute[260089]: 2025-10-11 03:48:29.459 2 DEBUG nova.virt.libvirt.driver [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 10 23:48:29 np0005480824 nova_compute[260089]: 2025-10-11 03:48:29.459 2 INFO nova.virt.libvirt.driver [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] Creating image(s)
Oct 10 23:48:29 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:48:29 np0005480824 nova_compute[260089]: 2025-10-11 03:48:29.490 2 DEBUG nova.storage.rbd_utils [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] rbd image aefb31cf-337d-446e-a617-c82e2e9b4809_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 23:48:29 np0005480824 nova_compute[260089]: 2025-10-11 03:48:29.521 2 DEBUG nova.storage.rbd_utils [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] rbd image aefb31cf-337d-446e-a617-c82e2e9b4809_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 23:48:29 np0005480824 nova_compute[260089]: 2025-10-11 03:48:29.556 2 DEBUG nova.storage.rbd_utils [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] rbd image aefb31cf-337d-446e-a617-c82e2e9b4809_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 23:48:29 np0005480824 nova_compute[260089]: 2025-10-11 03:48:29.560 2 DEBUG oslo_concurrency.processutils [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cfffd1283a157d100c77a9cb8e3d536b83503a4e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 23:48:29 np0005480824 nova_compute[260089]: 2025-10-11 03:48:29.614 2 DEBUG nova.policy [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2619b09d11614c958f6b7a5b9db7b922', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5825b55787104735a580132059839426', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 10 23:48:29 np0005480824 nova_compute[260089]: 2025-10-11 03:48:29.620 2 DEBUG oslo_concurrency.processutils [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cfffd1283a157d100c77a9cb8e3d536b83503a4e --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 23:48:29 np0005480824 nova_compute[260089]: 2025-10-11 03:48:29.621 2 DEBUG oslo_concurrency.lockutils [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] Acquiring lock "cfffd1283a157d100c77a9cb8e3d536b83503a4e" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 23:48:29 np0005480824 nova_compute[260089]: 2025-10-11 03:48:29.622 2 DEBUG oslo_concurrency.lockutils [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] Lock "cfffd1283a157d100c77a9cb8e3d536b83503a4e" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 23:48:29 np0005480824 nova_compute[260089]: 2025-10-11 03:48:29.622 2 DEBUG oslo_concurrency.lockutils [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] Lock "cfffd1283a157d100c77a9cb8e3d536b83503a4e" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 23:48:29 np0005480824 nova_compute[260089]: 2025-10-11 03:48:29.654 2 DEBUG nova.storage.rbd_utils [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] rbd image aefb31cf-337d-446e-a617-c82e2e9b4809_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 23:48:29 np0005480824 nova_compute[260089]: 2025-10-11 03:48:29.659 2 DEBUG oslo_concurrency.processutils [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/cfffd1283a157d100c77a9cb8e3d536b83503a4e aefb31cf-337d-446e-a617-c82e2e9b4809_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 23:48:29 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:48:29 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3910894025' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:48:29 np0005480824 nova_compute[260089]: 2025-10-11 03:48:29.938 2 DEBUG oslo_concurrency.processutils [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/cfffd1283a157d100c77a9cb8e3d536b83503a4e aefb31cf-337d-446e-a617-c82e2e9b4809_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.279s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 23:48:30 np0005480824 nova_compute[260089]: 2025-10-11 03:48:30.021 2 DEBUG nova.storage.rbd_utils [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] resizing rbd image aefb31cf-337d-446e-a617-c82e2e9b4809_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 10 23:48:30 np0005480824 nova_compute[260089]: 2025-10-11 03:48:30.141 2 DEBUG nova.objects.instance [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] Lazy-loading 'migration_context' on Instance uuid aefb31cf-337d-446e-a617-c82e2e9b4809 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 10 23:48:30 np0005480824 nova_compute[260089]: 2025-10-11 03:48:30.154 2 DEBUG nova.virt.libvirt.driver [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 10 23:48:30 np0005480824 nova_compute[260089]: 2025-10-11 03:48:30.155 2 DEBUG nova.virt.libvirt.driver [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] Ensure instance console log exists: /var/lib/nova/instances/aefb31cf-337d-446e-a617-c82e2e9b4809/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 10 23:48:30 np0005480824 nova_compute[260089]: 2025-10-11 03:48:30.155 2 DEBUG oslo_concurrency.lockutils [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 23:48:30 np0005480824 nova_compute[260089]: 2025-10-11 03:48:30.155 2 DEBUG oslo_concurrency.lockutils [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 23:48:30 np0005480824 nova_compute[260089]: 2025-10-11 03:48:30.156 2 DEBUG oslo_concurrency.lockutils [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 23:48:30 np0005480824 nova_compute[260089]: 2025-10-11 03:48:30.296 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:48:30 np0005480824 nova_compute[260089]: 2025-10-11 03:48:30.372 2 DEBUG nova.network.neutron [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] Successfully updated port: cb54de0c-b523-42e1-a4c5-0e3f477d8960 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 10 23:48:30 np0005480824 nova_compute[260089]: 2025-10-11 03:48:30.389 2 DEBUG oslo_concurrency.lockutils [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Acquiring lock "refresh_cache-1c478ad7-214b-4e9c-be93-5b836a13b5f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:48:30 np0005480824 nova_compute[260089]: 2025-10-11 03:48:30.389 2 DEBUG oslo_concurrency.lockutils [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Acquired lock "refresh_cache-1c478ad7-214b-4e9c-be93-5b836a13b5f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:48:30 np0005480824 nova_compute[260089]: 2025-10-11 03:48:30.389 2 DEBUG nova.network.neutron [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 10 23:48:30 np0005480824 nova_compute[260089]: 2025-10-11 03:48:30.498 2 DEBUG nova.network.neutron [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] Successfully created port: 66fcb192-9003-491b-a694-25e0f0feccd1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 10 23:48:30 np0005480824 nova_compute[260089]: 2025-10-11 03:48:30.547 2 DEBUG nova.network.neutron [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 10 23:48:30 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1029: 321 pgs: 321 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.7 KiB/s wr, 128 op/s
Oct 10 23:48:30 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e167 do_prune osdmap full prune enabled
Oct 10 23:48:30 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e168 e168: 3 total, 3 up, 3 in
Oct 10 23:48:30 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e168: 3 total, 3 up, 3 in
Oct 10 23:48:30 np0005480824 nova_compute[260089]: 2025-10-11 03:48:30.798 2 DEBUG nova.compute.manager [req-2e9e0e44-bb71-4978-9b8e-083e41cb5895 req-34d9098d-51c7-4eff-aea4-52a61cc4f523 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] Received event network-changed-cb54de0c-b523-42e1-a4c5-0e3f477d8960 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:48:30 np0005480824 nova_compute[260089]: 2025-10-11 03:48:30.799 2 DEBUG nova.compute.manager [req-2e9e0e44-bb71-4978-9b8e-083e41cb5895 req-34d9098d-51c7-4eff-aea4-52a61cc4f523 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] Refreshing instance network info cache due to event network-changed-cb54de0c-b523-42e1-a4c5-0e3f477d8960. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 10 23:48:30 np0005480824 nova_compute[260089]: 2025-10-11 03:48:30.799 2 DEBUG oslo_concurrency.lockutils [req-2e9e0e44-bb71-4978-9b8e-083e41cb5895 req-34d9098d-51c7-4eff-aea4-52a61cc4f523 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "refresh_cache-1c478ad7-214b-4e9c-be93-5b836a13b5f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:48:30 np0005480824 nova_compute[260089]: 2025-10-11 03:48:30.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:31 np0005480824 nova_compute[260089]: 2025-10-11 03:48:31.252 2 DEBUG nova.network.neutron [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] Updating instance_info_cache with network_info: [{"id": "cb54de0c-b523-42e1-a4c5-0e3f477d8960", "address": "fa:16:3e:9f:da:cd", "network": {"id": "a3ff9216-2127-4ae5-9ba6-73e4685bcdc7", "bridge": "br-int", "label": "tempest-VolumesActionsTest-418846116-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8004df44ba5045b6b3c7b5376587d790", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcb54de0c-b5", "ovs_interfaceid": "cb54de0c-b523-42e1-a4c5-0e3f477d8960", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:48:31 np0005480824 nova_compute[260089]: 2025-10-11 03:48:31.266 2 DEBUG nova.network.neutron [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] Successfully updated port: 66fcb192-9003-491b-a694-25e0f0feccd1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 10 23:48:31 np0005480824 nova_compute[260089]: 2025-10-11 03:48:31.269 2 DEBUG oslo_concurrency.lockutils [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Releasing lock "refresh_cache-1c478ad7-214b-4e9c-be93-5b836a13b5f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:48:31 np0005480824 nova_compute[260089]: 2025-10-11 03:48:31.270 2 DEBUG nova.compute.manager [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] Instance network_info: |[{"id": "cb54de0c-b523-42e1-a4c5-0e3f477d8960", "address": "fa:16:3e:9f:da:cd", "network": {"id": "a3ff9216-2127-4ae5-9ba6-73e4685bcdc7", "bridge": "br-int", "label": "tempest-VolumesActionsTest-418846116-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8004df44ba5045b6b3c7b5376587d790", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcb54de0c-b5", "ovs_interfaceid": "cb54de0c-b523-42e1-a4c5-0e3f477d8960", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 10 23:48:31 np0005480824 nova_compute[260089]: 2025-10-11 03:48:31.270 2 DEBUG oslo_concurrency.lockutils [req-2e9e0e44-bb71-4978-9b8e-083e41cb5895 req-34d9098d-51c7-4eff-aea4-52a61cc4f523 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquired lock "refresh_cache-1c478ad7-214b-4e9c-be93-5b836a13b5f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:48:31 np0005480824 nova_compute[260089]: 2025-10-11 03:48:31.271 2 DEBUG nova.network.neutron [req-2e9e0e44-bb71-4978-9b8e-083e41cb5895 req-34d9098d-51c7-4eff-aea4-52a61cc4f523 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] Refreshing network info cache for port cb54de0c-b523-42e1-a4c5-0e3f477d8960 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 10 23:48:31 np0005480824 nova_compute[260089]: 2025-10-11 03:48:31.277 2 DEBUG nova.virt.libvirt.driver [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] Start _get_guest_xml network_info=[{"id": "cb54de0c-b523-42e1-a4c5-0e3f477d8960", "address": "fa:16:3e:9f:da:cd", "network": {"id": "a3ff9216-2127-4ae5-9ba6-73e4685bcdc7", "bridge": "br-int", "label": "tempest-VolumesActionsTest-418846116-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8004df44ba5045b6b3c7b5376587d790", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcb54de0c-b5", "ovs_interfaceid": "cb54de0c-b523-42e1-a4c5-0e3f477d8960", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T03:44:59Z,direct_url=<?>,disk_format='qcow2',id=7caca022-7dcc-40a9-8bd8-eb7d91b29390,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a9b71164a3274fcfb966194e51cb4849',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T03:45:02Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'device_type': 'disk', 'image_id': '7caca022-7dcc-40a9-8bd8-eb7d91b29390'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 10 23:48:31 np0005480824 nova_compute[260089]: 2025-10-11 03:48:31.280 2 DEBUG oslo_concurrency.lockutils [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] Acquiring lock "refresh_cache-aefb31cf-337d-446e-a617-c82e2e9b4809" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:48:31 np0005480824 nova_compute[260089]: 2025-10-11 03:48:31.280 2 DEBUG oslo_concurrency.lockutils [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] Acquired lock "refresh_cache-aefb31cf-337d-446e-a617-c82e2e9b4809" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:48:31 np0005480824 nova_compute[260089]: 2025-10-11 03:48:31.280 2 DEBUG nova.network.neutron [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 10 23:48:31 np0005480824 nova_compute[260089]: 2025-10-11 03:48:31.298 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:48:31 np0005480824 nova_compute[260089]: 2025-10-11 03:48:31.299 2 WARNING nova.virt.libvirt.driver [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 10 23:48:31 np0005480824 nova_compute[260089]: 2025-10-11 03:48:31.307 2 DEBUG nova.virt.libvirt.host [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 10 23:48:31 np0005480824 nova_compute[260089]: 2025-10-11 03:48:31.308 2 DEBUG nova.virt.libvirt.host [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 10 23:48:31 np0005480824 nova_compute[260089]: 2025-10-11 03:48:31.312 2 DEBUG nova.virt.libvirt.host [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 10 23:48:31 np0005480824 nova_compute[260089]: 2025-10-11 03:48:31.313 2 DEBUG nova.virt.libvirt.host [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 10 23:48:31 np0005480824 nova_compute[260089]: 2025-10-11 03:48:31.313 2 DEBUG nova.virt.libvirt.driver [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 10 23:48:31 np0005480824 nova_compute[260089]: 2025-10-11 03:48:31.314 2 DEBUG nova.virt.hardware [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T03:44:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6707ecae-2ae2-4c2d-86dc-409bac38f6a5',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T03:44:59Z,direct_url=<?>,disk_format='qcow2',id=7caca022-7dcc-40a9-8bd8-eb7d91b29390,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a9b71164a3274fcfb966194e51cb4849',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T03:45:02Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 10 23:48:31 np0005480824 nova_compute[260089]: 2025-10-11 03:48:31.315 2 DEBUG nova.virt.hardware [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 10 23:48:31 np0005480824 nova_compute[260089]: 2025-10-11 03:48:31.315 2 DEBUG nova.virt.hardware [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 10 23:48:31 np0005480824 nova_compute[260089]: 2025-10-11 03:48:31.315 2 DEBUG nova.virt.hardware [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 10 23:48:31 np0005480824 nova_compute[260089]: 2025-10-11 03:48:31.316 2 DEBUG nova.virt.hardware [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 10 23:48:31 np0005480824 nova_compute[260089]: 2025-10-11 03:48:31.316 2 DEBUG nova.virt.hardware [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 10 23:48:31 np0005480824 nova_compute[260089]: 2025-10-11 03:48:31.316 2 DEBUG nova.virt.hardware [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 10 23:48:31 np0005480824 nova_compute[260089]: 2025-10-11 03:48:31.317 2 DEBUG nova.virt.hardware [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 10 23:48:31 np0005480824 nova_compute[260089]: 2025-10-11 03:48:31.317 2 DEBUG nova.virt.hardware [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 10 23:48:31 np0005480824 nova_compute[260089]: 2025-10-11 03:48:31.317 2 DEBUG nova.virt.hardware [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 10 23:48:31 np0005480824 nova_compute[260089]: 2025-10-11 03:48:31.318 2 DEBUG nova.virt.hardware [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 10 23:48:31 np0005480824 nova_compute[260089]: 2025-10-11 03:48:31.322 2 DEBUG oslo_concurrency.processutils [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:48:31 np0005480824 nova_compute[260089]: 2025-10-11 03:48:31.357 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:48:31 np0005480824 nova_compute[260089]: 2025-10-11 03:48:31.358 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:48:31 np0005480824 nova_compute[260089]: 2025-10-11 03:48:31.358 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:48:31 np0005480824 nova_compute[260089]: 2025-10-11 03:48:31.359 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 10 23:48:31 np0005480824 nova_compute[260089]: 2025-10-11 03:48:31.359 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:48:31 np0005480824 nova_compute[260089]: 2025-10-11 03:48:31.411 2 DEBUG nova.network.neutron [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 10 23:48:31 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e168 do_prune osdmap full prune enabled
Oct 10 23:48:31 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e169 e169: 3 total, 3 up, 3 in
Oct 10 23:48:31 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e169: 3 total, 3 up, 3 in
Oct 10 23:48:31 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:48:31 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/522167663' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:48:31 np0005480824 nova_compute[260089]: 2025-10-11 03:48:31.808 2 DEBUG oslo_concurrency.processutils [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:48:31 np0005480824 nova_compute[260089]: 2025-10-11 03:48:31.839 2 DEBUG nova.storage.rbd_utils [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] rbd image 1c478ad7-214b-4e9c-be93-5b836a13b5f3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:48:31 np0005480824 nova_compute[260089]: 2025-10-11 03:48:31.846 2 DEBUG oslo_concurrency.processutils [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:48:31 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:48:31 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3884406572' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:48:31 np0005480824 nova_compute[260089]: 2025-10-11 03:48:31.878 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.155 2 WARNING nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.158 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4706MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.158 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.158 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.261 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Instance 1c478ad7-214b-4e9c-be93-5b836a13b5f3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.261 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Instance aefb31cf-337d-446e-a617-c82e2e9b4809 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.261 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.262 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.328 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:48:32 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:48:32 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2231028779' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.366 2 DEBUG oslo_concurrency.processutils [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.368 2 DEBUG nova.virt.libvirt.vif [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T03:48:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-1799060481',display_name='tempest-VolumesActionsTest-instance-1799060481',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-1799060481',id=3,image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8004df44ba5045b6b3c7b5376587d790',ramdisk_id='',reservation_id='r-mhqoqoa7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-1212390094',owner_user_name='tempest-VolumesActionsTest-1212390094-p
roject-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T03:48:27Z,user_data=None,user_id='94437cd815c640b094db68c0d14ae5c0',uuid=1c478ad7-214b-4e9c-be93-5b836a13b5f3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cb54de0c-b523-42e1-a4c5-0e3f477d8960", "address": "fa:16:3e:9f:da:cd", "network": {"id": "a3ff9216-2127-4ae5-9ba6-73e4685bcdc7", "bridge": "br-int", "label": "tempest-VolumesActionsTest-418846116-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8004df44ba5045b6b3c7b5376587d790", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcb54de0c-b5", "ovs_interfaceid": "cb54de0c-b523-42e1-a4c5-0e3f477d8960", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.369 2 DEBUG nova.network.os_vif_util [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Converting VIF {"id": "cb54de0c-b523-42e1-a4c5-0e3f477d8960", "address": "fa:16:3e:9f:da:cd", "network": {"id": "a3ff9216-2127-4ae5-9ba6-73e4685bcdc7", "bridge": "br-int", "label": "tempest-VolumesActionsTest-418846116-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8004df44ba5045b6b3c7b5376587d790", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcb54de0c-b5", "ovs_interfaceid": "cb54de0c-b523-42e1-a4c5-0e3f477d8960", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.370 2 DEBUG nova.network.os_vif_util [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9f:da:cd,bridge_name='br-int',has_traffic_filtering=True,id=cb54de0c-b523-42e1-a4c5-0e3f477d8960,network=Network(a3ff9216-2127-4ae5-9ba6-73e4685bcdc7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcb54de0c-b5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.372 2 DEBUG nova.objects.instance [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1c478ad7-214b-4e9c-be93-5b836a13b5f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.392 2 DEBUG nova.virt.libvirt.driver [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] End _get_guest_xml xml=<domain type="kvm">
Oct 10 23:48:32 np0005480824 nova_compute[260089]:  <uuid>1c478ad7-214b-4e9c-be93-5b836a13b5f3</uuid>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:  <name>instance-00000003</name>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:  <memory>131072</memory>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:  <vcpu>1</vcpu>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:  <metadata>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 10 23:48:32 np0005480824 nova_compute[260089]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:      <nova:name>tempest-VolumesActionsTest-instance-1799060481</nova:name>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:      <nova:creationTime>2025-10-11 03:48:31</nova:creationTime>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:      <nova:flavor name="m1.nano">
Oct 10 23:48:32 np0005480824 nova_compute[260089]:        <nova:memory>128</nova:memory>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:        <nova:disk>1</nova:disk>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:        <nova:swap>0</nova:swap>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:        <nova:ephemeral>0</nova:ephemeral>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:        <nova:vcpus>1</nova:vcpus>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:      </nova:flavor>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:      <nova:owner>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:        <nova:user uuid="94437cd815c640b094db68c0d14ae5c0">tempest-VolumesActionsTest-1212390094-project-member</nova:user>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:        <nova:project uuid="8004df44ba5045b6b3c7b5376587d790">tempest-VolumesActionsTest-1212390094</nova:project>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:      </nova:owner>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:      <nova:root type="image" uuid="7caca022-7dcc-40a9-8bd8-eb7d91b29390"/>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:      <nova:ports>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:        <nova:port uuid="cb54de0c-b523-42e1-a4c5-0e3f477d8960">
Oct 10 23:48:32 np0005480824 nova_compute[260089]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:        </nova:port>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:      </nova:ports>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:    </nova:instance>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:  </metadata>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:  <sysinfo type="smbios">
Oct 10 23:48:32 np0005480824 nova_compute[260089]:    <system>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:      <entry name="manufacturer">RDO</entry>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:      <entry name="product">OpenStack Compute</entry>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:      <entry name="serial">1c478ad7-214b-4e9c-be93-5b836a13b5f3</entry>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:      <entry name="uuid">1c478ad7-214b-4e9c-be93-5b836a13b5f3</entry>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:      <entry name="family">Virtual Machine</entry>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:    </system>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:  </sysinfo>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:  <os>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:    <boot dev="hd"/>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:    <smbios mode="sysinfo"/>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:  </os>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:  <features>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:    <acpi/>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:    <apic/>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:    <vmcoreinfo/>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:  </features>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:  <clock offset="utc">
Oct 10 23:48:32 np0005480824 nova_compute[260089]:    <timer name="pit" tickpolicy="delay"/>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:    <timer name="hpet" present="no"/>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:  </clock>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:  <cpu mode="host-model" match="exact">
Oct 10 23:48:32 np0005480824 nova_compute[260089]:    <topology sockets="1" cores="1" threads="1"/>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:  </cpu>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:  <devices>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:    <disk type="network" device="disk">
Oct 10 23:48:32 np0005480824 nova_compute[260089]:      <driver type="raw" cache="none"/>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:      <source protocol="rbd" name="vms/1c478ad7-214b-4e9c-be93-5b836a13b5f3_disk">
Oct 10 23:48:32 np0005480824 nova_compute[260089]:        <host name="192.168.122.100" port="6789"/>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:      </source>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:      <auth username="openstack">
Oct 10 23:48:32 np0005480824 nova_compute[260089]:        <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:      </auth>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:      <target dev="vda" bus="virtio"/>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:    </disk>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:    <disk type="network" device="cdrom">
Oct 10 23:48:32 np0005480824 nova_compute[260089]:      <driver type="raw" cache="none"/>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:      <source protocol="rbd" name="vms/1c478ad7-214b-4e9c-be93-5b836a13b5f3_disk.config">
Oct 10 23:48:32 np0005480824 nova_compute[260089]:        <host name="192.168.122.100" port="6789"/>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:      </source>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:      <auth username="openstack">
Oct 10 23:48:32 np0005480824 nova_compute[260089]:        <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:      </auth>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:      <target dev="sda" bus="sata"/>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:    </disk>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:    <interface type="ethernet">
Oct 10 23:48:32 np0005480824 nova_compute[260089]:      <mac address="fa:16:3e:9f:da:cd"/>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:      <model type="virtio"/>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:      <driver name="vhost" rx_queue_size="512"/>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:      <mtu size="1442"/>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:      <target dev="tapcb54de0c-b5"/>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:    </interface>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:    <serial type="pty">
Oct 10 23:48:32 np0005480824 nova_compute[260089]:      <log file="/var/lib/nova/instances/1c478ad7-214b-4e9c-be93-5b836a13b5f3/console.log" append="off"/>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:    </serial>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:    <video>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:      <model type="virtio"/>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:    </video>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:    <input type="tablet" bus="usb"/>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:    <rng model="virtio">
Oct 10 23:48:32 np0005480824 nova_compute[260089]:      <backend model="random">/dev/urandom</backend>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:    </rng>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root"/>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:    <controller type="usb" index="0"/>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:    <memballoon model="virtio">
Oct 10 23:48:32 np0005480824 nova_compute[260089]:      <stats period="10"/>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:    </memballoon>
Oct 10 23:48:32 np0005480824 nova_compute[260089]:  </devices>
Oct 10 23:48:32 np0005480824 nova_compute[260089]: </domain>
Oct 10 23:48:32 np0005480824 nova_compute[260089]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.393 2 DEBUG nova.compute.manager [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] Preparing to wait for external event network-vif-plugged-cb54de0c-b523-42e1-a4c5-0e3f477d8960 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.394 2 DEBUG oslo_concurrency.lockutils [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Acquiring lock "1c478ad7-214b-4e9c-be93-5b836a13b5f3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.394 2 DEBUG oslo_concurrency.lockutils [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Lock "1c478ad7-214b-4e9c-be93-5b836a13b5f3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.394 2 DEBUG oslo_concurrency.lockutils [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Lock "1c478ad7-214b-4e9c-be93-5b836a13b5f3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.395 2 DEBUG nova.virt.libvirt.vif [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T03:48:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-1799060481',display_name='tempest-VolumesActionsTest-instance-1799060481',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-1799060481',id=3,image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8004df44ba5045b6b3c7b5376587d790',ramdisk_id='',reservation_id='r-mhqoqoa7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-1212390094',owner_user_name='tempest-VolumesActionsTest-12
12390094-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T03:48:27Z,user_data=None,user_id='94437cd815c640b094db68c0d14ae5c0',uuid=1c478ad7-214b-4e9c-be93-5b836a13b5f3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cb54de0c-b523-42e1-a4c5-0e3f477d8960", "address": "fa:16:3e:9f:da:cd", "network": {"id": "a3ff9216-2127-4ae5-9ba6-73e4685bcdc7", "bridge": "br-int", "label": "tempest-VolumesActionsTest-418846116-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8004df44ba5045b6b3c7b5376587d790", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcb54de0c-b5", "ovs_interfaceid": "cb54de0c-b523-42e1-a4c5-0e3f477d8960", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.396 2 DEBUG nova.network.os_vif_util [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Converting VIF {"id": "cb54de0c-b523-42e1-a4c5-0e3f477d8960", "address": "fa:16:3e:9f:da:cd", "network": {"id": "a3ff9216-2127-4ae5-9ba6-73e4685bcdc7", "bridge": "br-int", "label": "tempest-VolumesActionsTest-418846116-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8004df44ba5045b6b3c7b5376587d790", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcb54de0c-b5", "ovs_interfaceid": "cb54de0c-b523-42e1-a4c5-0e3f477d8960", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.396 2 DEBUG nova.network.os_vif_util [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9f:da:cd,bridge_name='br-int',has_traffic_filtering=True,id=cb54de0c-b523-42e1-a4c5-0e3f477d8960,network=Network(a3ff9216-2127-4ae5-9ba6-73e4685bcdc7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcb54de0c-b5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.397 2 DEBUG os_vif [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9f:da:cd,bridge_name='br-int',has_traffic_filtering=True,id=cb54de0c-b523-42e1-a4c5-0e3f477d8960,network=Network(a3ff9216-2127-4ae5-9ba6-73e4685bcdc7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcb54de0c-b5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.398 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.399 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.403 2 DEBUG nova.network.neutron [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] Updating instance_info_cache with network_info: [{"id": "66fcb192-9003-491b-a694-25e0f0feccd1", "address": "fa:16:3e:03:f2:ae", "network": {"id": "96e29220-0426-44b4-b5aa-c255f37e21b7", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1345267436-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5825b55787104735a580132059839426", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66fcb192-90", "ovs_interfaceid": "66fcb192-9003-491b-a694-25e0f0feccd1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.406 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcb54de0c-b5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.407 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcb54de0c-b5, col_values=(('external_ids', {'iface-id': 'cb54de0c-b523-42e1-a4c5-0e3f477d8960', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9f:da:cd', 'vm-uuid': '1c478ad7-214b-4e9c-be93-5b836a13b5f3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:48:32 np0005480824 NetworkManager[44969]: <info>  [1760154512.4091] manager: (tapcb54de0c-b5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/35)
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.418 2 INFO os_vif [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9f:da:cd,bridge_name='br-int',has_traffic_filtering=True,id=cb54de0c-b523-42e1-a4c5-0e3f477d8960,network=Network(a3ff9216-2127-4ae5-9ba6-73e4685bcdc7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcb54de0c-b5')#033[00m
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.427 2 DEBUG oslo_concurrency.lockutils [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] Releasing lock "refresh_cache-aefb31cf-337d-446e-a617-c82e2e9b4809" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.427 2 DEBUG nova.compute.manager [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] Instance network_info: |[{"id": "66fcb192-9003-491b-a694-25e0f0feccd1", "address": "fa:16:3e:03:f2:ae", "network": {"id": "96e29220-0426-44b4-b5aa-c255f37e21b7", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1345267436-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5825b55787104735a580132059839426", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66fcb192-90", "ovs_interfaceid": "66fcb192-9003-491b-a694-25e0f0feccd1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.430 2 DEBUG nova.virt.libvirt.driver [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] Start _get_guest_xml network_info=[{"id": "66fcb192-9003-491b-a694-25e0f0feccd1", "address": "fa:16:3e:03:f2:ae", "network": {"id": "96e29220-0426-44b4-b5aa-c255f37e21b7", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1345267436-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5825b55787104735a580132059839426", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66fcb192-90", "ovs_interfaceid": "66fcb192-9003-491b-a694-25e0f0feccd1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T03:44:59Z,direct_url=<?>,disk_format='qcow2',id=7caca022-7dcc-40a9-8bd8-eb7d91b29390,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a9b71164a3274fcfb966194e51cb4849',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T03:45:02Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'device_type': 'disk', 'image_id': '7caca022-7dcc-40a9-8bd8-eb7d91b29390'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.435 2 WARNING nova.virt.libvirt.driver [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.441 2 DEBUG nova.virt.libvirt.host [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.442 2 DEBUG nova.virt.libvirt.host [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.448 2 DEBUG nova.virt.libvirt.host [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.449 2 DEBUG nova.virt.libvirt.host [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.449 2 DEBUG nova.virt.libvirt.driver [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.449 2 DEBUG nova.virt.hardware [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T03:44:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6707ecae-2ae2-4c2d-86dc-409bac38f6a5',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T03:44:59Z,direct_url=<?>,disk_format='qcow2',id=7caca022-7dcc-40a9-8bd8-eb7d91b29390,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a9b71164a3274fcfb966194e51cb4849',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T03:45:02Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.450 2 DEBUG nova.virt.hardware [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.450 2 DEBUG nova.virt.hardware [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.450 2 DEBUG nova.virt.hardware [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.450 2 DEBUG nova.virt.hardware [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.451 2 DEBUG nova.virt.hardware [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.451 2 DEBUG nova.virt.hardware [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.451 2 DEBUG nova.virt.hardware [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.451 2 DEBUG nova.virt.hardware [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.451 2 DEBUG nova.virt.hardware [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.452 2 DEBUG nova.virt.hardware [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.454 2 DEBUG oslo_concurrency.processutils [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.496 2 DEBUG nova.virt.libvirt.driver [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.497 2 DEBUG nova.virt.libvirt.driver [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.497 2 DEBUG nova.virt.libvirt.driver [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] No VIF found with MAC fa:16:3e:9f:da:cd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.498 2 INFO nova.virt.libvirt.driver [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] Using config drive#033[00m
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.529 2 DEBUG nova.storage.rbd_utils [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] rbd image 1c478ad7-214b-4e9c-be93-5b836a13b5f3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.539 2 DEBUG nova.network.neutron [req-2e9e0e44-bb71-4978-9b8e-083e41cb5895 req-34d9098d-51c7-4eff-aea4-52a61cc4f523 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] Updated VIF entry in instance network info cache for port cb54de0c-b523-42e1-a4c5-0e3f477d8960. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.539 2 DEBUG nova.network.neutron [req-2e9e0e44-bb71-4978-9b8e-083e41cb5895 req-34d9098d-51c7-4eff-aea4-52a61cc4f523 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] Updating instance_info_cache with network_info: [{"id": "cb54de0c-b523-42e1-a4c5-0e3f477d8960", "address": "fa:16:3e:9f:da:cd", "network": {"id": "a3ff9216-2127-4ae5-9ba6-73e4685bcdc7", "bridge": "br-int", "label": "tempest-VolumesActionsTest-418846116-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8004df44ba5045b6b3c7b5376587d790", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcb54de0c-b5", "ovs_interfaceid": "cb54de0c-b523-42e1-a4c5-0e3f477d8960", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:48:32 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1032: 321 pgs: 321 active+clean; 145 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 90 KiB/s rd, 10 MiB/s wr, 137 op/s
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.573 2 DEBUG oslo_concurrency.lockutils [req-2e9e0e44-bb71-4978-9b8e-083e41cb5895 req-34d9098d-51c7-4eff-aea4-52a61cc4f523 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Releasing lock "refresh_cache-1c478ad7-214b-4e9c-be93-5b836a13b5f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:48:32 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:48:32 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2730251701' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.791 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.796 2 DEBUG nova.compute.provider_tree [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.809 2 INFO nova.virt.libvirt.driver [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] Creating config drive at /var/lib/nova/instances/1c478ad7-214b-4e9c-be93-5b836a13b5f3/disk.config#033[00m
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.815 2 DEBUG oslo_concurrency.processutils [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1c478ad7-214b-4e9c-be93-5b836a13b5f3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2w19ucuf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.848 2 DEBUG nova.scheduler.client.report [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.872 2 DEBUG nova.compute.manager [req-09225cec-c8c8-4fab-a30f-08c3bbb26c03 req-4232762c-9a3d-4f80-bb58-aeebdee8df07 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] Received event network-changed-66fcb192-9003-491b-a694-25e0f0feccd1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.872 2 DEBUG nova.compute.manager [req-09225cec-c8c8-4fab-a30f-08c3bbb26c03 req-4232762c-9a3d-4f80-bb58-aeebdee8df07 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] Refreshing instance network info cache due to event network-changed-66fcb192-9003-491b-a694-25e0f0feccd1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.873 2 DEBUG oslo_concurrency.lockutils [req-09225cec-c8c8-4fab-a30f-08c3bbb26c03 req-4232762c-9a3d-4f80-bb58-aeebdee8df07 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "refresh_cache-aefb31cf-337d-446e-a617-c82e2e9b4809" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.873 2 DEBUG oslo_concurrency.lockutils [req-09225cec-c8c8-4fab-a30f-08c3bbb26c03 req-4232762c-9a3d-4f80-bb58-aeebdee8df07 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquired lock "refresh_cache-aefb31cf-337d-446e-a617-c82e2e9b4809" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.873 2 DEBUG nova.network.neutron [req-09225cec-c8c8-4fab-a30f-08c3bbb26c03 req-4232762c-9a3d-4f80-bb58-aeebdee8df07 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] Refreshing network info cache for port 66fcb192-9003-491b-a694-25e0f0feccd1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.874 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.874 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.716s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.950 2 DEBUG oslo_concurrency.processutils [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1c478ad7-214b-4e9c-be93-5b836a13b5f3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2w19ucuf" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:48:32 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:48:32 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1540701913' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:48:32 np0005480824 nova_compute[260089]: 2025-10-11 03:48:32.995 2 DEBUG nova.storage.rbd_utils [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] rbd image 1c478ad7-214b-4e9c-be93-5b836a13b5f3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:48:33 np0005480824 nova_compute[260089]: 2025-10-11 03:48:33.001 2 DEBUG oslo_concurrency.processutils [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1c478ad7-214b-4e9c-be93-5b836a13b5f3/disk.config 1c478ad7-214b-4e9c-be93-5b836a13b5f3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:48:33 np0005480824 nova_compute[260089]: 2025-10-11 03:48:33.030 2 DEBUG oslo_concurrency.processutils [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:48:33 np0005480824 podman[271709]: 2025-10-11 03:48:33.032202623 +0000 UTC m=+0.086370759 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, 
io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:48:33 np0005480824 nova_compute[260089]: 2025-10-11 03:48:33.063 2 DEBUG nova.storage.rbd_utils [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] rbd image aefb31cf-337d-446e-a617-c82e2e9b4809_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:48:33 np0005480824 nova_compute[260089]: 2025-10-11 03:48:33.072 2 DEBUG oslo_concurrency.processutils [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:48:33 np0005480824 nova_compute[260089]: 2025-10-11 03:48:33.216 2 DEBUG oslo_concurrency.processutils [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1c478ad7-214b-4e9c-be93-5b836a13b5f3/disk.config 1c478ad7-214b-4e9c-be93-5b836a13b5f3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.215s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:48:33 np0005480824 nova_compute[260089]: 2025-10-11 03:48:33.218 2 INFO nova.virt.libvirt.driver [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] Deleting local config drive /var/lib/nova/instances/1c478ad7-214b-4e9c-be93-5b836a13b5f3/disk.config because it was imported into RBD.#033[00m
Oct 10 23:48:33 np0005480824 kernel: tapcb54de0c-b5: entered promiscuous mode
Oct 10 23:48:33 np0005480824 NetworkManager[44969]: <info>  [1760154513.3223] manager: (tapcb54de0c-b5): new Tun device (/org/freedesktop/NetworkManager/Devices/36)
Oct 10 23:48:33 np0005480824 ovn_controller[152667]: 2025-10-11T03:48:33Z|00044|binding|INFO|Claiming lport cb54de0c-b523-42e1-a4c5-0e3f477d8960 for this chassis.
Oct 10 23:48:33 np0005480824 ovn_controller[152667]: 2025-10-11T03:48:33Z|00045|binding|INFO|cb54de0c-b523-42e1-a4c5-0e3f477d8960: Claiming fa:16:3e:9f:da:cd 10.100.0.3
Oct 10 23:48:33 np0005480824 nova_compute[260089]: 2025-10-11 03:48:33.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:33 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:33.335 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9f:da:cd 10.100.0.3'], port_security=['fa:16:3e:9f:da:cd 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '1c478ad7-214b-4e9c-be93-5b836a13b5f3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a3ff9216-2127-4ae5-9ba6-73e4685bcdc7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8004df44ba5045b6b3c7b5376587d790', 'neutron:revision_number': '2', 'neutron:security_group_ids': '431a4db7-da14-43f2-a384-76fa67eaa106', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aab757e7-f51f-43c8-93d5-f3007ce26421, chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], logical_port=cb54de0c-b523-42e1-a4c5-0e3f477d8960) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 23:48:33 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:33.337 162245 INFO neutron.agent.ovn.metadata.agent [-] Port cb54de0c-b523-42e1-a4c5-0e3f477d8960 in datapath a3ff9216-2127-4ae5-9ba6-73e4685bcdc7 bound to our chassis
Oct 10 23:48:33 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:33.339 162245 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a3ff9216-2127-4ae5-9ba6-73e4685bcdc7
Oct 10 23:48:33 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:33.364 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[0a23031a-d7a5-4f1e-b302-e31e3a5d9fa2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 23:48:33 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:33.366 162245 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa3ff9216-21 in ovnmeta-a3ff9216-2127-4ae5-9ba6-73e4685bcdc7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 10 23:48:33 np0005480824 ovn_controller[152667]: 2025-10-11T03:48:33Z|00046|binding|INFO|Setting lport cb54de0c-b523-42e1-a4c5-0e3f477d8960 up in Southbound
Oct 10 23:48:33 np0005480824 ovn_controller[152667]: 2025-10-11T03:48:33Z|00047|binding|INFO|Setting lport cb54de0c-b523-42e1-a4c5-0e3f477d8960 ovn-installed in OVS
Oct 10 23:48:33 np0005480824 nova_compute[260089]: 2025-10-11 03:48:33.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 23:48:33 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:33.371 267859 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa3ff9216-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 10 23:48:33 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:33.371 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[0f08781e-1c71-453d-9dcc-6f96d3b35587]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 23:48:33 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:33.374 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[3af88bcb-2888-4dcb-87b4-833f430d26d6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 23:48:33 np0005480824 nova_compute[260089]: 2025-10-11 03:48:33.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 23:48:33 np0005480824 systemd-udevd[271820]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 23:48:33 np0005480824 systemd-machined[215071]: New machine qemu-3-instance-00000003.
Oct 10 23:48:33 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:33.394 162666 DEBUG oslo.privsep.daemon [-] privsep: reply[6837194a-5a1b-4786-8b98-6fc2d9e92619]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 23:48:33 np0005480824 systemd[1]: Started Virtual Machine qemu-3-instance-00000003.
Oct 10 23:48:33 np0005480824 NetworkManager[44969]: <info>  [1760154513.4144] device (tapcb54de0c-b5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 10 23:48:33 np0005480824 NetworkManager[44969]: <info>  [1760154513.4154] device (tapcb54de0c-b5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 10 23:48:33 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:33.418 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[6bc438c0-6ab5-4d18-beea-57bd0be0cc86]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 23:48:33 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:33.457 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[53a76c9d-e4c3-4494-8026-9675af17ba61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 23:48:33 np0005480824 NetworkManager[44969]: <info>  [1760154513.4668] manager: (tapa3ff9216-20): new Veth device (/org/freedesktop/NetworkManager/Devices/37)
Oct 10 23:48:33 np0005480824 systemd-udevd[271823]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 23:48:33 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:33.469 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[ce791a3c-9234-452c-88a3-88da5336c003]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 23:48:33 np0005480824 nova_compute[260089]: 2025-10-11 03:48:33.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 23:48:33 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:33.528 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[15323881-2ae8-4dd7-9c2e-07dc7a4138e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 23:48:33 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:33.531 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[c22be7d9-bc59-4604-a707-c0640cf93701]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 23:48:33 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:48:33 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4227421390' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:48:33 np0005480824 NetworkManager[44969]: <info>  [1760154513.5662] device (tapa3ff9216-20): carrier: link connected
Oct 10 23:48:33 np0005480824 nova_compute[260089]: 2025-10-11 03:48:33.570 2 DEBUG oslo_concurrency.processutils [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 23:48:33 np0005480824 nova_compute[260089]: 2025-10-11 03:48:33.572 2 DEBUG nova.virt.libvirt.vif [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T03:48:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-2075905536',display_name='tempest-VolumesActionsTest-instance-2075905536',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-2075905536',id=4,image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5825b55787104735a580132059839426',ramdisk_id='',reservation_id='r-2xi2qcn0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-671828259',owner_user_name='tempest-VolumesActionsTest-671828259-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T03:48:29Z,user_data=None,user_id='2619b09d11614c958f6b7a5b9db7b922',uuid=aefb31cf-337d-446e-a617-c82e2e9b4809,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "66fcb192-9003-491b-a694-25e0f0feccd1", "address": "fa:16:3e:03:f2:ae", "network": {"id": "96e29220-0426-44b4-b5aa-c255f37e21b7", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1345267436-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5825b55787104735a580132059839426", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66fcb192-90", "ovs_interfaceid": "66fcb192-9003-491b-a694-25e0f0feccd1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 10 23:48:33 np0005480824 nova_compute[260089]: 2025-10-11 03:48:33.572 2 DEBUG nova.network.os_vif_util [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] Converting VIF {"id": "66fcb192-9003-491b-a694-25e0f0feccd1", "address": "fa:16:3e:03:f2:ae", "network": {"id": "96e29220-0426-44b4-b5aa-c255f37e21b7", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1345267436-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5825b55787104735a580132059839426", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66fcb192-90", "ovs_interfaceid": "66fcb192-9003-491b-a694-25e0f0feccd1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 10 23:48:33 np0005480824 nova_compute[260089]: 2025-10-11 03:48:33.573 2 DEBUG nova.network.os_vif_util [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:03:f2:ae,bridge_name='br-int',has_traffic_filtering=True,id=66fcb192-9003-491b-a694-25e0f0feccd1,network=Network(96e29220-0426-44b4-b5aa-c255f37e21b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66fcb192-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 10 23:48:33 np0005480824 nova_compute[260089]: 2025-10-11 03:48:33.574 2 DEBUG nova.objects.instance [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] Lazy-loading 'pci_devices' on Instance uuid aefb31cf-337d-446e-a617-c82e2e9b4809 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 10 23:48:33 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:33.574 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[32636102-42eb-48d1-a360-35101215a78a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 23:48:33 np0005480824 nova_compute[260089]: 2025-10-11 03:48:33.590 2 DEBUG nova.virt.libvirt.driver [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] End _get_guest_xml xml=<domain type="kvm">
Oct 10 23:48:33 np0005480824 nova_compute[260089]:  <uuid>aefb31cf-337d-446e-a617-c82e2e9b4809</uuid>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:  <name>instance-00000004</name>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:  <memory>131072</memory>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:  <vcpu>1</vcpu>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:  <metadata>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 10 23:48:33 np0005480824 nova_compute[260089]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:      <nova:name>tempest-VolumesActionsTest-instance-2075905536</nova:name>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:      <nova:creationTime>2025-10-11 03:48:32</nova:creationTime>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:      <nova:flavor name="m1.nano">
Oct 10 23:48:33 np0005480824 nova_compute[260089]:        <nova:memory>128</nova:memory>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:        <nova:disk>1</nova:disk>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:        <nova:swap>0</nova:swap>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:        <nova:ephemeral>0</nova:ephemeral>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:        <nova:vcpus>1</nova:vcpus>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:      </nova:flavor>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:      <nova:owner>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:        <nova:user uuid="2619b09d11614c958f6b7a5b9db7b922">tempest-VolumesActionsTest-671828259-project-member</nova:user>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:        <nova:project uuid="5825b55787104735a580132059839426">tempest-VolumesActionsTest-671828259</nova:project>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:      </nova:owner>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:      <nova:root type="image" uuid="7caca022-7dcc-40a9-8bd8-eb7d91b29390"/>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:      <nova:ports>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:        <nova:port uuid="66fcb192-9003-491b-a694-25e0f0feccd1">
Oct 10 23:48:33 np0005480824 nova_compute[260089]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:        </nova:port>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:      </nova:ports>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:    </nova:instance>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:  </metadata>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:  <sysinfo type="smbios">
Oct 10 23:48:33 np0005480824 nova_compute[260089]:    <system>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:      <entry name="manufacturer">RDO</entry>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:      <entry name="product">OpenStack Compute</entry>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:      <entry name="serial">aefb31cf-337d-446e-a617-c82e2e9b4809</entry>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:      <entry name="uuid">aefb31cf-337d-446e-a617-c82e2e9b4809</entry>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:      <entry name="family">Virtual Machine</entry>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:    </system>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:  </sysinfo>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:  <os>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:    <boot dev="hd"/>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:    <smbios mode="sysinfo"/>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:  </os>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:  <features>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:    <acpi/>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:    <apic/>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:    <vmcoreinfo/>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:  </features>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:  <clock offset="utc">
Oct 10 23:48:33 np0005480824 nova_compute[260089]:    <timer name="pit" tickpolicy="delay"/>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:    <timer name="hpet" present="no"/>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:  </clock>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:  <cpu mode="host-model" match="exact">
Oct 10 23:48:33 np0005480824 nova_compute[260089]:    <topology sockets="1" cores="1" threads="1"/>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:  </cpu>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:  <devices>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:    <disk type="network" device="disk">
Oct 10 23:48:33 np0005480824 nova_compute[260089]:      <driver type="raw" cache="none"/>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:      <source protocol="rbd" name="vms/aefb31cf-337d-446e-a617-c82e2e9b4809_disk">
Oct 10 23:48:33 np0005480824 nova_compute[260089]:        <host name="192.168.122.100" port="6789"/>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:      </source>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:      <auth username="openstack">
Oct 10 23:48:33 np0005480824 nova_compute[260089]:        <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:      </auth>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:      <target dev="vda" bus="virtio"/>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:    </disk>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:    <disk type="network" device="cdrom">
Oct 10 23:48:33 np0005480824 nova_compute[260089]:      <driver type="raw" cache="none"/>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:      <source protocol="rbd" name="vms/aefb31cf-337d-446e-a617-c82e2e9b4809_disk.config">
Oct 10 23:48:33 np0005480824 nova_compute[260089]:        <host name="192.168.122.100" port="6789"/>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:      </source>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:      <auth username="openstack">
Oct 10 23:48:33 np0005480824 nova_compute[260089]:        <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:      </auth>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:      <target dev="sda" bus="sata"/>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:    </disk>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:    <interface type="ethernet">
Oct 10 23:48:33 np0005480824 nova_compute[260089]:      <mac address="fa:16:3e:03:f2:ae"/>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:      <model type="virtio"/>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:      <driver name="vhost" rx_queue_size="512"/>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:      <mtu size="1442"/>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:      <target dev="tap66fcb192-90"/>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:    </interface>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:    <serial type="pty">
Oct 10 23:48:33 np0005480824 nova_compute[260089]:      <log file="/var/lib/nova/instances/aefb31cf-337d-446e-a617-c82e2e9b4809/console.log" append="off"/>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:    </serial>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:    <video>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:      <model type="virtio"/>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:    </video>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:    <input type="tablet" bus="usb"/>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:    <rng model="virtio">
Oct 10 23:48:33 np0005480824 nova_compute[260089]:      <backend model="random">/dev/urandom</backend>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:    </rng>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root"/>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:    <controller type="usb" index="0"/>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:    <memballoon model="virtio">
Oct 10 23:48:33 np0005480824 nova_compute[260089]:      <stats period="10"/>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:    </memballoon>
Oct 10 23:48:33 np0005480824 nova_compute[260089]:  </devices>
Oct 10 23:48:33 np0005480824 nova_compute[260089]: </domain>
Oct 10 23:48:33 np0005480824 nova_compute[260089]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 10 23:48:33 np0005480824 nova_compute[260089]: 2025-10-11 03:48:33.590 2 DEBUG nova.compute.manager [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] Preparing to wait for external event network-vif-plugged-66fcb192-9003-491b-a694-25e0f0feccd1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 10 23:48:33 np0005480824 nova_compute[260089]: 2025-10-11 03:48:33.591 2 DEBUG oslo_concurrency.lockutils [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] Acquiring lock "aefb31cf-337d-446e-a617-c82e2e9b4809-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 23:48:33 np0005480824 nova_compute[260089]: 2025-10-11 03:48:33.591 2 DEBUG oslo_concurrency.lockutils [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] Lock "aefb31cf-337d-446e-a617-c82e2e9b4809-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 23:48:33 np0005480824 nova_compute[260089]: 2025-10-11 03:48:33.591 2 DEBUG oslo_concurrency.lockutils [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] Lock "aefb31cf-337d-446e-a617-c82e2e9b4809-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 23:48:33 np0005480824 nova_compute[260089]: 2025-10-11 03:48:33.592 2 DEBUG nova.virt.libvirt.vif [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T03:48:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-2075905536',display_name='tempest-VolumesActionsTest-instance-2075905536',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-2075905536',id=4,image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5825b55787104735a580132059839426',ramdisk_id='',reservation_id='r-2xi2qcn0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-671828259',owner_user_name='tempest-VolumesActionsTest-671828259-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T03:48:29Z,user_data=None,user_id='2619b09d11614c958f6b7a5b9db7b922',uuid=aefb31cf-337d-446e-a617-c82e2e9b4809,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "66fcb192-9003-491b-a694-25e0f0feccd1", "address": "fa:16:3e:03:f2:ae", "network": {"id": "96e29220-0426-44b4-b5aa-c255f37e21b7", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1345267436-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5825b55787104735a580132059839426", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66fcb192-90", "ovs_interfaceid": "66fcb192-9003-491b-a694-25e0f0feccd1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 10 23:48:33 np0005480824 nova_compute[260089]: 2025-10-11 03:48:33.592 2 DEBUG nova.network.os_vif_util [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] Converting VIF {"id": "66fcb192-9003-491b-a694-25e0f0feccd1", "address": "fa:16:3e:03:f2:ae", "network": {"id": "96e29220-0426-44b4-b5aa-c255f37e21b7", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1345267436-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5825b55787104735a580132059839426", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66fcb192-90", "ovs_interfaceid": "66fcb192-9003-491b-a694-25e0f0feccd1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 10 23:48:33 np0005480824 nova_compute[260089]: 2025-10-11 03:48:33.592 2 DEBUG nova.network.os_vif_util [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:03:f2:ae,bridge_name='br-int',has_traffic_filtering=True,id=66fcb192-9003-491b-a694-25e0f0feccd1,network=Network(96e29220-0426-44b4-b5aa-c255f37e21b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66fcb192-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 10 23:48:33 np0005480824 nova_compute[260089]: 2025-10-11 03:48:33.593 2 DEBUG os_vif [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:f2:ae,bridge_name='br-int',has_traffic_filtering=True,id=66fcb192-9003-491b-a694-25e0f0feccd1,network=Network(96e29220-0426-44b4-b5aa-c255f37e21b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66fcb192-90') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 10 23:48:33 np0005480824 nova_compute[260089]: 2025-10-11 03:48:33.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:33 np0005480824 nova_compute[260089]: 2025-10-11 03:48:33.593 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:48:33 np0005480824 nova_compute[260089]: 2025-10-11 03:48:33.594 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 10 23:48:33 np0005480824 nova_compute[260089]: 2025-10-11 03:48:33.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:33 np0005480824 nova_compute[260089]: 2025-10-11 03:48:33.597 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap66fcb192-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:48:33 np0005480824 nova_compute[260089]: 2025-10-11 03:48:33.598 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap66fcb192-90, col_values=(('external_ids', {'iface-id': '66fcb192-9003-491b-a694-25e0f0feccd1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:03:f2:ae', 'vm-uuid': 'aefb31cf-337d-446e-a617-c82e2e9b4809'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:48:33 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:33.598 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[28b854e7-761b-44d2-ad0d-4474bbef0349]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa3ff9216-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f5:b9:8e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 398122, 'reachable_time': 41500, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271853, 'error': None, 'target': 'ovnmeta-a3ff9216-2127-4ae5-9ba6-73e4685bcdc7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:33 np0005480824 nova_compute[260089]: 2025-10-11 03:48:33.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:33 np0005480824 NetworkManager[44969]: <info>  [1760154513.6011] manager: (tap66fcb192-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/38)
Oct 10 23:48:33 np0005480824 nova_compute[260089]: 2025-10-11 03:48:33.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 10 23:48:33 np0005480824 nova_compute[260089]: 2025-10-11 03:48:33.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:33 np0005480824 nova_compute[260089]: 2025-10-11 03:48:33.608 2 INFO os_vif [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:f2:ae,bridge_name='br-int',has_traffic_filtering=True,id=66fcb192-9003-491b-a694-25e0f0feccd1,network=Network(96e29220-0426-44b4-b5aa-c255f37e21b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66fcb192-90')#033[00m
Oct 10 23:48:33 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:33.622 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[172e12ed-1c28-4626-9778-092339d2915e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef5:b98e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 398122, 'tstamp': 398122}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271855, 'error': None, 'target': 'ovnmeta-a3ff9216-2127-4ae5-9ba6-73e4685bcdc7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:33 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:33.639 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[18270ff9-eb1a-4bf2-bb62-9464318f5b11]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa3ff9216-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f5:b9:8e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 398122, 'reachable_time': 41500, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 271856, 'error': None, 'target': 'ovnmeta-a3ff9216-2127-4ae5-9ba6-73e4685bcdc7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:33 np0005480824 nova_compute[260089]: 2025-10-11 03:48:33.659 2 DEBUG nova.virt.libvirt.driver [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 23:48:33 np0005480824 nova_compute[260089]: 2025-10-11 03:48:33.660 2 DEBUG nova.virt.libvirt.driver [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 23:48:33 np0005480824 nova_compute[260089]: 2025-10-11 03:48:33.660 2 DEBUG nova.virt.libvirt.driver [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] No VIF found with MAC fa:16:3e:03:f2:ae, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 10 23:48:33 np0005480824 nova_compute[260089]: 2025-10-11 03:48:33.661 2 INFO nova.virt.libvirt.driver [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] Using config drive#033[00m
Oct 10 23:48:33 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:33.674 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[b9f503c2-5b4d-4d85-a1ab-fc25ba3eabed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:33 np0005480824 nova_compute[260089]: 2025-10-11 03:48:33.709 2 DEBUG nova.storage.rbd_utils [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] rbd image aefb31cf-337d-446e-a617-c82e2e9b4809_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:48:33 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:33.748 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[c102245e-05e7-4004-a98a-95d155c1039d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:33 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:33.750 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa3ff9216-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:48:33 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:33.750 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 10 23:48:33 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:33.751 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa3ff9216-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:48:33 np0005480824 NetworkManager[44969]: <info>  [1760154513.7547] manager: (tapa3ff9216-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/39)
Oct 10 23:48:33 np0005480824 kernel: tapa3ff9216-20: entered promiscuous mode
Oct 10 23:48:33 np0005480824 nova_compute[260089]: 2025-10-11 03:48:33.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:33 np0005480824 nova_compute[260089]: 2025-10-11 03:48:33.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:33 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:33.764 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa3ff9216-20, col_values=(('external_ids', {'iface-id': '3db48d68-d901-4c48-99b4-f4d00f214317'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:48:33 np0005480824 nova_compute[260089]: 2025-10-11 03:48:33.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:33 np0005480824 ovn_controller[152667]: 2025-10-11T03:48:33Z|00048|binding|INFO|Releasing lport 3db48d68-d901-4c48-99b4-f4d00f214317 from this chassis (sb_readonly=0)
Oct 10 23:48:33 np0005480824 nova_compute[260089]: 2025-10-11 03:48:33.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:33 np0005480824 nova_compute[260089]: 2025-10-11 03:48:33.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:33 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:33.791 162245 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a3ff9216-2127-4ae5-9ba6-73e4685bcdc7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a3ff9216-2127-4ae5-9ba6-73e4685bcdc7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 10 23:48:33 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:33.792 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[fa2bd265-6b21-4062-85e3-3aada6ec51e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:33 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:33.792 162245 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 10 23:48:33 np0005480824 ovn_metadata_agent[162240]: global
Oct 10 23:48:33 np0005480824 ovn_metadata_agent[162240]:    log         /dev/log local0 debug
Oct 10 23:48:33 np0005480824 ovn_metadata_agent[162240]:    log-tag     haproxy-metadata-proxy-a3ff9216-2127-4ae5-9ba6-73e4685bcdc7
Oct 10 23:48:33 np0005480824 ovn_metadata_agent[162240]:    user        root
Oct 10 23:48:33 np0005480824 ovn_metadata_agent[162240]:    group       root
Oct 10 23:48:33 np0005480824 ovn_metadata_agent[162240]:    maxconn     1024
Oct 10 23:48:33 np0005480824 ovn_metadata_agent[162240]:    pidfile     /var/lib/neutron/external/pids/a3ff9216-2127-4ae5-9ba6-73e4685bcdc7.pid.haproxy
Oct 10 23:48:33 np0005480824 ovn_metadata_agent[162240]:    daemon
Oct 10 23:48:33 np0005480824 ovn_metadata_agent[162240]: 
Oct 10 23:48:33 np0005480824 ovn_metadata_agent[162240]: defaults
Oct 10 23:48:33 np0005480824 ovn_metadata_agent[162240]:    log global
Oct 10 23:48:33 np0005480824 ovn_metadata_agent[162240]:    mode http
Oct 10 23:48:33 np0005480824 ovn_metadata_agent[162240]:    option httplog
Oct 10 23:48:33 np0005480824 ovn_metadata_agent[162240]:    option dontlognull
Oct 10 23:48:33 np0005480824 ovn_metadata_agent[162240]:    option http-server-close
Oct 10 23:48:33 np0005480824 ovn_metadata_agent[162240]:    option forwardfor
Oct 10 23:48:33 np0005480824 ovn_metadata_agent[162240]:    retries                 3
Oct 10 23:48:33 np0005480824 ovn_metadata_agent[162240]:    timeout http-request    30s
Oct 10 23:48:33 np0005480824 ovn_metadata_agent[162240]:    timeout connect         30s
Oct 10 23:48:33 np0005480824 ovn_metadata_agent[162240]:    timeout client          32s
Oct 10 23:48:33 np0005480824 ovn_metadata_agent[162240]:    timeout server          32s
Oct 10 23:48:33 np0005480824 ovn_metadata_agent[162240]:    timeout http-keep-alive 30s
Oct 10 23:48:33 np0005480824 ovn_metadata_agent[162240]: 
Oct 10 23:48:33 np0005480824 ovn_metadata_agent[162240]: 
Oct 10 23:48:33 np0005480824 ovn_metadata_agent[162240]: listen listener
Oct 10 23:48:33 np0005480824 ovn_metadata_agent[162240]:    bind 169.254.169.254:80
Oct 10 23:48:33 np0005480824 ovn_metadata_agent[162240]:    server metadata /var/lib/neutron/metadata_proxy
Oct 10 23:48:33 np0005480824 ovn_metadata_agent[162240]:    http-request add-header X-OVN-Network-ID a3ff9216-2127-4ae5-9ba6-73e4685bcdc7
Oct 10 23:48:33 np0005480824 ovn_metadata_agent[162240]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 10 23:48:33 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:33.793 162245 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a3ff9216-2127-4ae5-9ba6-73e4685bcdc7', 'env', 'PROCESS_TAG=haproxy-a3ff9216-2127-4ae5-9ba6-73e4685bcdc7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a3ff9216-2127-4ae5-9ba6-73e4685bcdc7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 10 23:48:33 np0005480824 nova_compute[260089]: 2025-10-11 03:48:33.906 2 DEBUG nova.compute.manager [req-4221fe85-3bfd-429a-b553-d326963e2a1b req-43dc073c-fcc5-4c3d-8c76-201f446bf09b 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] Received event network-vif-plugged-cb54de0c-b523-42e1-a4c5-0e3f477d8960 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:48:33 np0005480824 nova_compute[260089]: 2025-10-11 03:48:33.907 2 DEBUG oslo_concurrency.lockutils [req-4221fe85-3bfd-429a-b553-d326963e2a1b req-43dc073c-fcc5-4c3d-8c76-201f446bf09b 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "1c478ad7-214b-4e9c-be93-5b836a13b5f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:48:33 np0005480824 nova_compute[260089]: 2025-10-11 03:48:33.907 2 DEBUG oslo_concurrency.lockutils [req-4221fe85-3bfd-429a-b553-d326963e2a1b req-43dc073c-fcc5-4c3d-8c76-201f446bf09b 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "1c478ad7-214b-4e9c-be93-5b836a13b5f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:48:33 np0005480824 nova_compute[260089]: 2025-10-11 03:48:33.907 2 DEBUG oslo_concurrency.lockutils [req-4221fe85-3bfd-429a-b553-d326963e2a1b req-43dc073c-fcc5-4c3d-8c76-201f446bf09b 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "1c478ad7-214b-4e9c-be93-5b836a13b5f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:48:33 np0005480824 nova_compute[260089]: 2025-10-11 03:48:33.907 2 DEBUG nova.compute.manager [req-4221fe85-3bfd-429a-b553-d326963e2a1b req-43dc073c-fcc5-4c3d-8c76-201f446bf09b 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] Processing event network-vif-plugged-cb54de0c-b523-42e1-a4c5-0e3f477d8960 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 10 23:48:34 np0005480824 nova_compute[260089]: 2025-10-11 03:48:34.240 2 INFO nova.virt.libvirt.driver [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] Creating config drive at /var/lib/nova/instances/aefb31cf-337d-446e-a617-c82e2e9b4809/disk.config#033[00m
Oct 10 23:48:34 np0005480824 nova_compute[260089]: 2025-10-11 03:48:34.246 2 DEBUG oslo_concurrency.processutils [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/aefb31cf-337d-446e-a617-c82e2e9b4809/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvbf562_8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:48:34 np0005480824 podman[271909]: 2025-10-11 03:48:34.262930113 +0000 UTC m=+0.115063395 container create 7163dc490c36092a9f2183e4e707bf6d635b36671ac19d0e3204cc4a62b0a507 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3ff9216-2127-4ae5-9ba6-73e4685bcdc7, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 10 23:48:34 np0005480824 nova_compute[260089]: 2025-10-11 03:48:34.276 2 DEBUG nova.network.neutron [req-09225cec-c8c8-4fab-a30f-08c3bbb26c03 req-4232762c-9a3d-4f80-bb58-aeebdee8df07 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] Updated VIF entry in instance network info cache for port 66fcb192-9003-491b-a694-25e0f0feccd1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 10 23:48:34 np0005480824 nova_compute[260089]: 2025-10-11 03:48:34.277 2 DEBUG nova.network.neutron [req-09225cec-c8c8-4fab-a30f-08c3bbb26c03 req-4232762c-9a3d-4f80-bb58-aeebdee8df07 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] Updating instance_info_cache with network_info: [{"id": "66fcb192-9003-491b-a694-25e0f0feccd1", "address": "fa:16:3e:03:f2:ae", "network": {"id": "96e29220-0426-44b4-b5aa-c255f37e21b7", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1345267436-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5825b55787104735a580132059839426", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66fcb192-90", "ovs_interfaceid": "66fcb192-9003-491b-a694-25e0f0feccd1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:48:34 np0005480824 podman[271909]: 2025-10-11 03:48:34.196928616 +0000 UTC m=+0.049061918 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 10 23:48:34 np0005480824 nova_compute[260089]: 2025-10-11 03:48:34.294 2 DEBUG oslo_concurrency.lockutils [req-09225cec-c8c8-4fab-a30f-08c3bbb26c03 req-4232762c-9a3d-4f80-bb58-aeebdee8df07 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Releasing lock "refresh_cache-aefb31cf-337d-446e-a617-c82e2e9b4809" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:48:34 np0005480824 systemd[1]: Started libpod-conmon-7163dc490c36092a9f2183e4e707bf6d635b36671ac19d0e3204cc4a62b0a507.scope.
Oct 10 23:48:34 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:48:34 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e077f07819796831bef950a08fd6e38ab216a4d23629bdfa298c64e9d95d718/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 10 23:48:34 np0005480824 podman[271909]: 2025-10-11 03:48:34.368500522 +0000 UTC m=+0.220633824 container init 7163dc490c36092a9f2183e4e707bf6d635b36671ac19d0e3204cc4a62b0a507 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3ff9216-2127-4ae5-9ba6-73e4685bcdc7, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 10 23:48:34 np0005480824 podman[271909]: 2025-10-11 03:48:34.377210218 +0000 UTC m=+0.229343500 container start 7163dc490c36092a9f2183e4e707bf6d635b36671ac19d0e3204cc4a62b0a507 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3ff9216-2127-4ae5-9ba6-73e4685bcdc7, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009)
Oct 10 23:48:34 np0005480824 nova_compute[260089]: 2025-10-11 03:48:34.399 2 DEBUG oslo_concurrency.processutils [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/aefb31cf-337d-446e-a617-c82e2e9b4809/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvbf562_8" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:48:34 np0005480824 neutron-haproxy-ovnmeta-a3ff9216-2127-4ae5-9ba6-73e4685bcdc7[271963]: [NOTICE]   (271974) : New worker (271984) forked
Oct 10 23:48:34 np0005480824 neutron-haproxy-ovnmeta-a3ff9216-2127-4ae5-9ba6-73e4685bcdc7[271963]: [NOTICE]   (271974) : Loading success.
Oct 10 23:48:34 np0005480824 nova_compute[260089]: 2025-10-11 03:48:34.432 2 DEBUG nova.storage.rbd_utils [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] rbd image aefb31cf-337d-446e-a617-c82e2e9b4809_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:48:34 np0005480824 nova_compute[260089]: 2025-10-11 03:48:34.435 2 DEBUG oslo_concurrency.processutils [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/aefb31cf-337d-446e-a617-c82e2e9b4809/disk.config aefb31cf-337d-446e-a617-c82e2e9b4809_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:48:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:48:34 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1033: 321 pgs: 321 active+clean; 178 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 82 KiB/s rd, 11 MiB/s wr, 124 op/s
Oct 10 23:48:34 np0005480824 nova_compute[260089]: 2025-10-11 03:48:34.589 2 DEBUG oslo_concurrency.processutils [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/aefb31cf-337d-446e-a617-c82e2e9b4809/disk.config aefb31cf-337d-446e-a617-c82e2e9b4809_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:48:34 np0005480824 nova_compute[260089]: 2025-10-11 03:48:34.590 2 INFO nova.virt.libvirt.driver [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] Deleting local config drive /var/lib/nova/instances/aefb31cf-337d-446e-a617-c82e2e9b4809/disk.config because it was imported into RBD.#033[00m
Oct 10 23:48:34 np0005480824 kernel: tap66fcb192-90: entered promiscuous mode
Oct 10 23:48:34 np0005480824 NetworkManager[44969]: <info>  [1760154514.6459] manager: (tap66fcb192-90): new Tun device (/org/freedesktop/NetworkManager/Devices/40)
Oct 10 23:48:34 np0005480824 ovn_controller[152667]: 2025-10-11T03:48:34Z|00049|binding|INFO|Claiming lport 66fcb192-9003-491b-a694-25e0f0feccd1 for this chassis.
Oct 10 23:48:34 np0005480824 ovn_controller[152667]: 2025-10-11T03:48:34Z|00050|binding|INFO|66fcb192-9003-491b-a694-25e0f0feccd1: Claiming fa:16:3e:03:f2:ae 10.100.0.8
Oct 10 23:48:34 np0005480824 nova_compute[260089]: 2025-10-11 03:48:34.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:34 np0005480824 nova_compute[260089]: 2025-10-11 03:48:34.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:34 np0005480824 NetworkManager[44969]: <info>  [1760154514.6684] device (tap66fcb192-90): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 10 23:48:34 np0005480824 NetworkManager[44969]: <info>  [1760154514.6696] device (tap66fcb192-90): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 10 23:48:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:34.675 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:03:f2:ae 10.100.0.8'], port_security=['fa:16:3e:03:f2:ae 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'aefb31cf-337d-446e-a617-c82e2e9b4809', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-96e29220-0426-44b4-b5aa-c255f37e21b7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5825b55787104735a580132059839426', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f2cedaad-1af4-4cf5-8d25-c0107ac10f73', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fbe50a8a-95d0-45bf-b049-b26465cb5972, chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], logical_port=66fcb192-9003-491b-a694-25e0f0feccd1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 23:48:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:34.676 162245 INFO neutron.agent.ovn.metadata.agent [-] Port 66fcb192-9003-491b-a694-25e0f0feccd1 in datapath 96e29220-0426-44b4-b5aa-c255f37e21b7 bound to our chassis#033[00m
Oct 10 23:48:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:34.677 162245 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 96e29220-0426-44b4-b5aa-c255f37e21b7#033[00m
Oct 10 23:48:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:34.688 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[1269010e-d074-4b2d-b209-7bb41dcb141e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:34.689 162245 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap96e29220-01 in ovnmeta-96e29220-0426-44b4-b5aa-c255f37e21b7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 10 23:48:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:34.691 267859 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap96e29220-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 10 23:48:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:34.691 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[63a38e81-0865-47b5-8dc2-01dda530db85]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:34.692 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[393ee854-5940-48da-8a9f-0603cf841980]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:34.709 162666 DEBUG oslo.privsep.daemon [-] privsep: reply[b7605930-8d41-489d-ab45-2ed48cdde7f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:34 np0005480824 systemd-machined[215071]: New machine qemu-4-instance-00000004.
Oct 10 23:48:34 np0005480824 systemd[1]: Started Virtual Machine qemu-4-instance-00000004.
Oct 10 23:48:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:34.734 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[4045c515-17eb-40b9-b913-a62e460ac710]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:34 np0005480824 nova_compute[260089]: 2025-10-11 03:48:34.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:34 np0005480824 ovn_controller[152667]: 2025-10-11T03:48:34Z|00051|binding|INFO|Setting lport 66fcb192-9003-491b-a694-25e0f0feccd1 ovn-installed in OVS
Oct 10 23:48:34 np0005480824 ovn_controller[152667]: 2025-10-11T03:48:34Z|00052|binding|INFO|Setting lport 66fcb192-9003-491b-a694-25e0f0feccd1 up in Southbound
Oct 10 23:48:34 np0005480824 nova_compute[260089]: 2025-10-11 03:48:34.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:34.760 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[644adc63-535e-4c01-9593-ed84d964eb13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:34 np0005480824 NetworkManager[44969]: <info>  [1760154514.7667] manager: (tap96e29220-00): new Veth device (/org/freedesktop/NetworkManager/Devices/41)
Oct 10 23:48:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:34.765 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[2911e7f5-9daf-41bd-838d-9e718085c33b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:34.799 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[de9c9111-ae0f-4fb7-883a-32060ed7c2db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:34.803 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[438fd219-3327-43e3-bfb8-4a9b905e7f8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:34 np0005480824 NetworkManager[44969]: <info>  [1760154514.8286] device (tap96e29220-00): carrier: link connected
Oct 10 23:48:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:34.836 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[170966d1-533a-4063-a1d1-bfe614ae9176]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:34.852 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[5402ff6a-5c95-48b4-b681-66309745a68e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap96e29220-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:37:09:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 398248, 'reachable_time': 27354, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272055, 'error': None, 'target': 'ovnmeta-96e29220-0426-44b4-b5aa-c255f37e21b7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:34.867 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[fbcddaed-1dfb-4b2c-8dc1-579a49c0fa87]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe37:90d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 398248, 'tstamp': 398248}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 272056, 'error': None, 'target': 'ovnmeta-96e29220-0426-44b4-b5aa-c255f37e21b7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:34.886 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[1b8edc98-e6a9-4736-8778-498d07de2beb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap96e29220-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:37:09:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 398248, 'reachable_time': 27354, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 272057, 'error': None, 'target': 'ovnmeta-96e29220-0426-44b4-b5aa-c255f37e21b7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:34.918 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[eacaff80-5552-42a9-9c24-010f1dc2b4d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:34.991 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[4df5057d-b7e6-4788-9928-e14b9560ae4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:34.993 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap96e29220-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:48:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:34.994 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 10 23:48:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:34.994 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap96e29220-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:48:34 np0005480824 nova_compute[260089]: 2025-10-11 03:48:34.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:34 np0005480824 NetworkManager[44969]: <info>  [1760154514.9980] manager: (tap96e29220-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/42)
Oct 10 23:48:34 np0005480824 kernel: tap96e29220-00: entered promiscuous mode
Oct 10 23:48:35 np0005480824 nova_compute[260089]: 2025-10-11 03:48:35.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:35 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:35.002 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap96e29220-00, col_values=(('external_ids', {'iface-id': 'e58e17e5-9240-4652-9b87-7ecc4df4bf3a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:48:35 np0005480824 nova_compute[260089]: 2025-10-11 03:48:35.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:35 np0005480824 ovn_controller[152667]: 2025-10-11T03:48:35Z|00053|binding|INFO|Releasing lport e58e17e5-9240-4652-9b87-7ecc4df4bf3a from this chassis (sb_readonly=0)
Oct 10 23:48:35 np0005480824 nova_compute[260089]: 2025-10-11 03:48:35.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:35 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:35.006 162245 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/96e29220-0426-44b4-b5aa-c255f37e21b7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/96e29220-0426-44b4-b5aa-c255f37e21b7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 10 23:48:35 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:35.009 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[9a2bf6b8-9d03-4c9a-bd52-9e34c3d67da3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:35 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:35.010 162245 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 10 23:48:35 np0005480824 ovn_metadata_agent[162240]: global
Oct 10 23:48:35 np0005480824 ovn_metadata_agent[162240]:    log         /dev/log local0 debug
Oct 10 23:48:35 np0005480824 ovn_metadata_agent[162240]:    log-tag     haproxy-metadata-proxy-96e29220-0426-44b4-b5aa-c255f37e21b7
Oct 10 23:48:35 np0005480824 ovn_metadata_agent[162240]:    user        root
Oct 10 23:48:35 np0005480824 ovn_metadata_agent[162240]:    group       root
Oct 10 23:48:35 np0005480824 ovn_metadata_agent[162240]:    maxconn     1024
Oct 10 23:48:35 np0005480824 ovn_metadata_agent[162240]:    pidfile     /var/lib/neutron/external/pids/96e29220-0426-44b4-b5aa-c255f37e21b7.pid.haproxy
Oct 10 23:48:35 np0005480824 ovn_metadata_agent[162240]:    daemon
Oct 10 23:48:35 np0005480824 ovn_metadata_agent[162240]: 
Oct 10 23:48:35 np0005480824 ovn_metadata_agent[162240]: defaults
Oct 10 23:48:35 np0005480824 ovn_metadata_agent[162240]:    log global
Oct 10 23:48:35 np0005480824 ovn_metadata_agent[162240]:    mode http
Oct 10 23:48:35 np0005480824 ovn_metadata_agent[162240]:    option httplog
Oct 10 23:48:35 np0005480824 ovn_metadata_agent[162240]:    option dontlognull
Oct 10 23:48:35 np0005480824 ovn_metadata_agent[162240]:    option http-server-close
Oct 10 23:48:35 np0005480824 ovn_metadata_agent[162240]:    option forwardfor
Oct 10 23:48:35 np0005480824 ovn_metadata_agent[162240]:    retries                 3
Oct 10 23:48:35 np0005480824 ovn_metadata_agent[162240]:    timeout http-request    30s
Oct 10 23:48:35 np0005480824 ovn_metadata_agent[162240]:    timeout connect         30s
Oct 10 23:48:35 np0005480824 ovn_metadata_agent[162240]:    timeout client          32s
Oct 10 23:48:35 np0005480824 ovn_metadata_agent[162240]:    timeout server          32s
Oct 10 23:48:35 np0005480824 ovn_metadata_agent[162240]:    timeout http-keep-alive 30s
Oct 10 23:48:35 np0005480824 ovn_metadata_agent[162240]: 
Oct 10 23:48:35 np0005480824 ovn_metadata_agent[162240]: 
Oct 10 23:48:35 np0005480824 ovn_metadata_agent[162240]: listen listener
Oct 10 23:48:35 np0005480824 ovn_metadata_agent[162240]:    bind 169.254.169.254:80
Oct 10 23:48:35 np0005480824 ovn_metadata_agent[162240]:    server metadata /var/lib/neutron/metadata_proxy
Oct 10 23:48:35 np0005480824 ovn_metadata_agent[162240]:    http-request add-header X-OVN-Network-ID 96e29220-0426-44b4-b5aa-c255f37e21b7
Oct 10 23:48:35 np0005480824 ovn_metadata_agent[162240]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 10 23:48:35 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:35.012 162245 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-96e29220-0426-44b4-b5aa-c255f37e21b7', 'env', 'PROCESS_TAG=haproxy-96e29220-0426-44b4-b5aa-c255f37e21b7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/96e29220-0426-44b4-b5aa-c255f37e21b7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 10 23:48:35 np0005480824 nova_compute[260089]: 2025-10-11 03:48:35.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:35 np0005480824 nova_compute[260089]: 2025-10-11 03:48:35.069 2 DEBUG nova.compute.manager [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 10 23:48:35 np0005480824 nova_compute[260089]: 2025-10-11 03:48:35.069 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760154515.0687451, 1c478ad7-214b-4e9c-be93-5b836a13b5f3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:48:35 np0005480824 nova_compute[260089]: 2025-10-11 03:48:35.070 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] VM Started (Lifecycle Event)
Oct 10 23:48:35 np0005480824 nova_compute[260089]: 2025-10-11 03:48:35.082 2 DEBUG nova.virt.libvirt.driver [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 10 23:48:35 np0005480824 nova_compute[260089]: 2025-10-11 03:48:35.086 2 INFO nova.virt.libvirt.driver [-] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] Instance spawned successfully.
Oct 10 23:48:35 np0005480824 nova_compute[260089]: 2025-10-11 03:48:35.086 2 DEBUG nova.virt.libvirt.driver [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 10 23:48:35 np0005480824 nova_compute[260089]: 2025-10-11 03:48:35.102 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 23:48:35 np0005480824 nova_compute[260089]: 2025-10-11 03:48:35.105 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 10 23:48:35 np0005480824 nova_compute[260089]: 2025-10-11 03:48:35.118 2 DEBUG nova.virt.libvirt.driver [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 23:48:35 np0005480824 nova_compute[260089]: 2025-10-11 03:48:35.118 2 DEBUG nova.virt.libvirt.driver [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 23:48:35 np0005480824 nova_compute[260089]: 2025-10-11 03:48:35.118 2 DEBUG nova.virt.libvirt.driver [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 23:48:35 np0005480824 nova_compute[260089]: 2025-10-11 03:48:35.119 2 DEBUG nova.virt.libvirt.driver [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 23:48:35 np0005480824 nova_compute[260089]: 2025-10-11 03:48:35.119 2 DEBUG nova.virt.libvirt.driver [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 23:48:35 np0005480824 nova_compute[260089]: 2025-10-11 03:48:35.119 2 DEBUG nova.virt.libvirt.driver [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 23:48:35 np0005480824 nova_compute[260089]: 2025-10-11 03:48:35.122 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 10 23:48:35 np0005480824 nova_compute[260089]: 2025-10-11 03:48:35.122 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760154515.0689044, 1c478ad7-214b-4e9c-be93-5b836a13b5f3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 10 23:48:35 np0005480824 nova_compute[260089]: 2025-10-11 03:48:35.123 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] VM Paused (Lifecycle Event)
Oct 10 23:48:35 np0005480824 nova_compute[260089]: 2025-10-11 03:48:35.160 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 23:48:35 np0005480824 nova_compute[260089]: 2025-10-11 03:48:35.171 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760154515.0720587, 1c478ad7-214b-4e9c-be93-5b836a13b5f3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 10 23:48:35 np0005480824 nova_compute[260089]: 2025-10-11 03:48:35.172 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] VM Resumed (Lifecycle Event)
Oct 10 23:48:35 np0005480824 nova_compute[260089]: 2025-10-11 03:48:35.176 2 INFO nova.compute.manager [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] Took 7.35 seconds to spawn the instance on the hypervisor.
Oct 10 23:48:35 np0005480824 nova_compute[260089]: 2025-10-11 03:48:35.177 2 DEBUG nova.compute.manager [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 23:48:35 np0005480824 nova_compute[260089]: 2025-10-11 03:48:35.204 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 23:48:35 np0005480824 nova_compute[260089]: 2025-10-11 03:48:35.208 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 10 23:48:35 np0005480824 nova_compute[260089]: 2025-10-11 03:48:35.238 2 INFO nova.compute.manager [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] Took 9.14 seconds to build instance.
Oct 10 23:48:35 np0005480824 nova_compute[260089]: 2025-10-11 03:48:35.258 2 DEBUG oslo_concurrency.lockutils [None req-234acdf2-c10f-4f35-bd03-efb707999418 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Lock "1c478ad7-214b-4e9c-be93-5b836a13b5f3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.218s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 23:48:35 np0005480824 podman[272131]: 2025-10-11 03:48:35.473875986 +0000 UTC m=+0.097883950 container create a66dd4630433d2d0796adcc8cec0fb69045665cf8b85c3cfd9f06c04ac42385a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-96e29220-0426-44b4-b5aa-c255f37e21b7, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 10 23:48:35 np0005480824 podman[272131]: 2025-10-11 03:48:35.405879652 +0000 UTC m=+0.029887606 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 10 23:48:35 np0005480824 systemd[1]: Started libpod-conmon-a66dd4630433d2d0796adcc8cec0fb69045665cf8b85c3cfd9f06c04ac42385a.scope.
Oct 10 23:48:35 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:48:35 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/331b4600df57944982afb8c6b6b676ae78e65e3481f51d77a0fd1b8f615f1c08/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 10 23:48:35 np0005480824 podman[272131]: 2025-10-11 03:48:35.58044221 +0000 UTC m=+0.204450184 container init a66dd4630433d2d0796adcc8cec0fb69045665cf8b85c3cfd9f06c04ac42385a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-96e29220-0426-44b4-b5aa-c255f37e21b7, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 10 23:48:35 np0005480824 podman[272131]: 2025-10-11 03:48:35.593975438 +0000 UTC m=+0.217983372 container start a66dd4630433d2d0796adcc8cec0fb69045665cf8b85c3cfd9f06c04ac42385a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-96e29220-0426-44b4-b5aa-c255f37e21b7, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3)
Oct 10 23:48:35 np0005480824 neutron-haproxy-ovnmeta-96e29220-0426-44b4-b5aa-c255f37e21b7[272144]: [NOTICE]   (272148) : New worker (272150) forked
Oct 10 23:48:35 np0005480824 neutron-haproxy-ovnmeta-96e29220-0426-44b4-b5aa-c255f37e21b7[272144]: [NOTICE]   (272148) : Loading success.
Oct 10 23:48:35 np0005480824 nova_compute[260089]: 2025-10-11 03:48:35.868 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760154515.866697, aefb31cf-337d-446e-a617-c82e2e9b4809 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 10 23:48:35 np0005480824 nova_compute[260089]: 2025-10-11 03:48:35.868 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] VM Started (Lifecycle Event)
Oct 10 23:48:35 np0005480824 nova_compute[260089]: 2025-10-11 03:48:35.874 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 23:48:35 np0005480824 nova_compute[260089]: 2025-10-11 03:48:35.876 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760154500.8740067, 95bebf7e-4285-4364-8951-6f2305250e86 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 10 23:48:35 np0005480824 nova_compute[260089]: 2025-10-11 03:48:35.877 2 INFO nova.compute.manager [-] [instance: 95bebf7e-4285-4364-8951-6f2305250e86] VM Stopped (Lifecycle Event)
Oct 10 23:48:35 np0005480824 nova_compute[260089]: 2025-10-11 03:48:35.892 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 23:48:35 np0005480824 nova_compute[260089]: 2025-10-11 03:48:35.898 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760154515.867093, aefb31cf-337d-446e-a617-c82e2e9b4809 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 10 23:48:35 np0005480824 nova_compute[260089]: 2025-10-11 03:48:35.898 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] VM Paused (Lifecycle Event)
Oct 10 23:48:35 np0005480824 nova_compute[260089]: 2025-10-11 03:48:35.911 2 DEBUG nova.compute.manager [None req-f094e145-9383-45e9-84ca-01bcf10efa7e - - - - - -] [instance: 95bebf7e-4285-4364-8951-6f2305250e86] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 23:48:35 np0005480824 nova_compute[260089]: 2025-10-11 03:48:35.922 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 23:48:35 np0005480824 nova_compute[260089]: 2025-10-11 03:48:35.928 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 10 23:48:35 np0005480824 nova_compute[260089]: 2025-10-11 03:48:35.948 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 10 23:48:36 np0005480824 nova_compute[260089]: 2025-10-11 03:48:36.044 2 DEBUG nova.compute.manager [req-318a9d17-56d6-4825-9f5c-b5a0f9e9b1c9 req-53c96283-1442-4017-a9eb-8c367963cd42 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] Received event network-vif-plugged-cb54de0c-b523-42e1-a4c5-0e3f477d8960 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 23:48:36 np0005480824 nova_compute[260089]: 2025-10-11 03:48:36.044 2 DEBUG oslo_concurrency.lockutils [req-318a9d17-56d6-4825-9f5c-b5a0f9e9b1c9 req-53c96283-1442-4017-a9eb-8c367963cd42 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "1c478ad7-214b-4e9c-be93-5b836a13b5f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 23:48:36 np0005480824 nova_compute[260089]: 2025-10-11 03:48:36.045 2 DEBUG oslo_concurrency.lockutils [req-318a9d17-56d6-4825-9f5c-b5a0f9e9b1c9 req-53c96283-1442-4017-a9eb-8c367963cd42 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "1c478ad7-214b-4e9c-be93-5b836a13b5f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 23:48:36 np0005480824 nova_compute[260089]: 2025-10-11 03:48:36.049 2 DEBUG oslo_concurrency.lockutils [req-318a9d17-56d6-4825-9f5c-b5a0f9e9b1c9 req-53c96283-1442-4017-a9eb-8c367963cd42 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "1c478ad7-214b-4e9c-be93-5b836a13b5f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 23:48:36 np0005480824 nova_compute[260089]: 2025-10-11 03:48:36.049 2 DEBUG nova.compute.manager [req-318a9d17-56d6-4825-9f5c-b5a0f9e9b1c9 req-53c96283-1442-4017-a9eb-8c367963cd42 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] No waiting events found dispatching network-vif-plugged-cb54de0c-b523-42e1-a4c5-0e3f477d8960 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 10 23:48:36 np0005480824 nova_compute[260089]: 2025-10-11 03:48:36.049 2 WARNING nova.compute.manager [req-318a9d17-56d6-4825-9f5c-b5a0f9e9b1c9 req-53c96283-1442-4017-a9eb-8c367963cd42 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] Received unexpected event network-vif-plugged-cb54de0c-b523-42e1-a4c5-0e3f477d8960 for instance with vm_state active and task_state None.
Oct 10 23:48:36 np0005480824 nova_compute[260089]: 2025-10-11 03:48:36.050 2 DEBUG nova.compute.manager [req-318a9d17-56d6-4825-9f5c-b5a0f9e9b1c9 req-53c96283-1442-4017-a9eb-8c367963cd42 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] Received event network-vif-plugged-66fcb192-9003-491b-a694-25e0f0feccd1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 23:48:36 np0005480824 nova_compute[260089]: 2025-10-11 03:48:36.050 2 DEBUG oslo_concurrency.lockutils [req-318a9d17-56d6-4825-9f5c-b5a0f9e9b1c9 req-53c96283-1442-4017-a9eb-8c367963cd42 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "aefb31cf-337d-446e-a617-c82e2e9b4809-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 23:48:36 np0005480824 nova_compute[260089]: 2025-10-11 03:48:36.050 2 DEBUG oslo_concurrency.lockutils [req-318a9d17-56d6-4825-9f5c-b5a0f9e9b1c9 req-53c96283-1442-4017-a9eb-8c367963cd42 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "aefb31cf-337d-446e-a617-c82e2e9b4809-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 23:48:36 np0005480824 nova_compute[260089]: 2025-10-11 03:48:36.051 2 DEBUG oslo_concurrency.lockutils [req-318a9d17-56d6-4825-9f5c-b5a0f9e9b1c9 req-53c96283-1442-4017-a9eb-8c367963cd42 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "aefb31cf-337d-446e-a617-c82e2e9b4809-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 23:48:36 np0005480824 nova_compute[260089]: 2025-10-11 03:48:36.051 2 DEBUG nova.compute.manager [req-318a9d17-56d6-4825-9f5c-b5a0f9e9b1c9 req-53c96283-1442-4017-a9eb-8c367963cd42 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] Processing event network-vif-plugged-66fcb192-9003-491b-a694-25e0f0feccd1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 10 23:48:36 np0005480824 nova_compute[260089]: 2025-10-11 03:48:36.051 2 DEBUG nova.compute.manager [req-318a9d17-56d6-4825-9f5c-b5a0f9e9b1c9 req-53c96283-1442-4017-a9eb-8c367963cd42 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] Received event network-vif-plugged-66fcb192-9003-491b-a694-25e0f0feccd1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 23:48:36 np0005480824 nova_compute[260089]: 2025-10-11 03:48:36.052 2 DEBUG oslo_concurrency.lockutils [req-318a9d17-56d6-4825-9f5c-b5a0f9e9b1c9 req-53c96283-1442-4017-a9eb-8c367963cd42 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "aefb31cf-337d-446e-a617-c82e2e9b4809-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 23:48:36 np0005480824 nova_compute[260089]: 2025-10-11 03:48:36.053 2 DEBUG oslo_concurrency.lockutils [req-318a9d17-56d6-4825-9f5c-b5a0f9e9b1c9 req-53c96283-1442-4017-a9eb-8c367963cd42 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "aefb31cf-337d-446e-a617-c82e2e9b4809-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 23:48:36 np0005480824 nova_compute[260089]: 2025-10-11 03:48:36.053 2 DEBUG oslo_concurrency.lockutils [req-318a9d17-56d6-4825-9f5c-b5a0f9e9b1c9 req-53c96283-1442-4017-a9eb-8c367963cd42 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "aefb31cf-337d-446e-a617-c82e2e9b4809-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 23:48:36 np0005480824 nova_compute[260089]: 2025-10-11 03:48:36.053 2 DEBUG nova.compute.manager [req-318a9d17-56d6-4825-9f5c-b5a0f9e9b1c9 req-53c96283-1442-4017-a9eb-8c367963cd42 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] No waiting events found dispatching network-vif-plugged-66fcb192-9003-491b-a694-25e0f0feccd1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 10 23:48:36 np0005480824 nova_compute[260089]: 2025-10-11 03:48:36.053 2 WARNING nova.compute.manager [req-318a9d17-56d6-4825-9f5c-b5a0f9e9b1c9 req-53c96283-1442-4017-a9eb-8c367963cd42 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] Received unexpected event network-vif-plugged-66fcb192-9003-491b-a694-25e0f0feccd1 for instance with vm_state building and task_state spawning.
Oct 10 23:48:36 np0005480824 nova_compute[260089]: 2025-10-11 03:48:36.055 2 DEBUG nova.compute.manager [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 10 23:48:36 np0005480824 nova_compute[260089]: 2025-10-11 03:48:36.064 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760154516.062769, aefb31cf-337d-446e-a617-c82e2e9b4809 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 10 23:48:36 np0005480824 nova_compute[260089]: 2025-10-11 03:48:36.065 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] VM Resumed (Lifecycle Event)
Oct 10 23:48:36 np0005480824 nova_compute[260089]: 2025-10-11 03:48:36.067 2 DEBUG nova.virt.libvirt.driver [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 10 23:48:36 np0005480824 nova_compute[260089]: 2025-10-11 03:48:36.071 2 INFO nova.virt.libvirt.driver [-] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] Instance spawned successfully.
Oct 10 23:48:36 np0005480824 nova_compute[260089]: 2025-10-11 03:48:36.072 2 DEBUG nova.virt.libvirt.driver [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 10 23:48:36 np0005480824 nova_compute[260089]: 2025-10-11 03:48:36.093 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 23:48:36 np0005480824 nova_compute[260089]: 2025-10-11 03:48:36.101 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 10 23:48:36 np0005480824 nova_compute[260089]: 2025-10-11 03:48:36.107 2 DEBUG nova.virt.libvirt.driver [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 23:48:36 np0005480824 nova_compute[260089]: 2025-10-11 03:48:36.108 2 DEBUG nova.virt.libvirt.driver [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 23:48:36 np0005480824 nova_compute[260089]: 2025-10-11 03:48:36.108 2 DEBUG nova.virt.libvirt.driver [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 23:48:36 np0005480824 nova_compute[260089]: 2025-10-11 03:48:36.109 2 DEBUG nova.virt.libvirt.driver [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 23:48:36 np0005480824 nova_compute[260089]: 2025-10-11 03:48:36.110 2 DEBUG nova.virt.libvirt.driver [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 23:48:36 np0005480824 nova_compute[260089]: 2025-10-11 03:48:36.111 2 DEBUG nova.virt.libvirt.driver [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 23:48:36 np0005480824 nova_compute[260089]: 2025-10-11 03:48:36.123 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 10 23:48:36 np0005480824 nova_compute[260089]: 2025-10-11 03:48:36.168 2 INFO nova.compute.manager [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] Took 6.71 seconds to spawn the instance on the hypervisor.
Oct 10 23:48:36 np0005480824 nova_compute[260089]: 2025-10-11 03:48:36.168 2 DEBUG nova.compute.manager [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 23:48:36 np0005480824 nova_compute[260089]: 2025-10-11 03:48:36.246 2 INFO nova.compute.manager [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] Took 7.66 seconds to build instance.
Oct 10 23:48:36 np0005480824 nova_compute[260089]: 2025-10-11 03:48:36.262 2 DEBUG oslo_concurrency.lockutils [None req-58af43c6-5c76-4b83-9e29-53b818a5fbcb 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] Lock "aefb31cf-337d-446e-a617-c82e2e9b4809" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.747s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 23:48:36 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1034: 321 pgs: 321 active+clean; 178 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 74 KiB/s rd, 11 MiB/s wr, 111 op/s
Oct 10 23:48:37 np0005480824 nova_compute[260089]: 2025-10-11 03:48:37.653 2 DEBUG oslo_concurrency.lockutils [None req-9831d161-4be8-4e29-bff3-3cf22efe2f53 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Acquiring lock "1c478ad7-214b-4e9c-be93-5b836a13b5f3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 23:48:37 np0005480824 nova_compute[260089]: 2025-10-11 03:48:37.654 2 DEBUG oslo_concurrency.lockutils [None req-9831d161-4be8-4e29-bff3-3cf22efe2f53 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Lock "1c478ad7-214b-4e9c-be93-5b836a13b5f3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 23:48:37 np0005480824 nova_compute[260089]: 2025-10-11 03:48:37.654 2 DEBUG oslo_concurrency.lockutils [None req-9831d161-4be8-4e29-bff3-3cf22efe2f53 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Acquiring lock "1c478ad7-214b-4e9c-be93-5b836a13b5f3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 23:48:37 np0005480824 nova_compute[260089]: 2025-10-11 03:48:37.654 2 DEBUG oslo_concurrency.lockutils [None req-9831d161-4be8-4e29-bff3-3cf22efe2f53 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Lock "1c478ad7-214b-4e9c-be93-5b836a13b5f3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 23:48:37 np0005480824 nova_compute[260089]: 2025-10-11 03:48:37.655 2 DEBUG oslo_concurrency.lockutils [None req-9831d161-4be8-4e29-bff3-3cf22efe2f53 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Lock "1c478ad7-214b-4e9c-be93-5b836a13b5f3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 23:48:37 np0005480824 nova_compute[260089]: 2025-10-11 03:48:37.656 2 INFO nova.compute.manager [None req-9831d161-4be8-4e29-bff3-3cf22efe2f53 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] Terminating instance
Oct 10 23:48:37 np0005480824 nova_compute[260089]: 2025-10-11 03:48:37.657 2 DEBUG nova.compute.manager [None req-9831d161-4be8-4e29-bff3-3cf22efe2f53 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 10 23:48:37 np0005480824 kernel: tapcb54de0c-b5 (unregistering): left promiscuous mode
Oct 10 23:48:37 np0005480824 NetworkManager[44969]: <info>  [1760154517.7033] device (tapcb54de0c-b5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 10 23:48:37 np0005480824 ovn_controller[152667]: 2025-10-11T03:48:37Z|00054|binding|INFO|Releasing lport cb54de0c-b523-42e1-a4c5-0e3f477d8960 from this chassis (sb_readonly=0)
Oct 10 23:48:37 np0005480824 ovn_controller[152667]: 2025-10-11T03:48:37Z|00055|binding|INFO|Setting lport cb54de0c-b523-42e1-a4c5-0e3f477d8960 down in Southbound
Oct 10 23:48:37 np0005480824 ovn_controller[152667]: 2025-10-11T03:48:37Z|00056|binding|INFO|Removing iface tapcb54de0c-b5 ovn-installed in OVS
Oct 10 23:48:37 np0005480824 nova_compute[260089]: 2025-10-11 03:48:37.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:37 np0005480824 nova_compute[260089]: 2025-10-11 03:48:37.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:37 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:37.771 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9f:da:cd 10.100.0.3'], port_security=['fa:16:3e:9f:da:cd 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '1c478ad7-214b-4e9c-be93-5b836a13b5f3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a3ff9216-2127-4ae5-9ba6-73e4685bcdc7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8004df44ba5045b6b3c7b5376587d790', 'neutron:revision_number': '4', 'neutron:security_group_ids': '431a4db7-da14-43f2-a384-76fa67eaa106', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aab757e7-f51f-43c8-93d5-f3007ce26421, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], logical_port=cb54de0c-b523-42e1-a4c5-0e3f477d8960) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 23:48:37 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:37.772 162245 INFO neutron.agent.ovn.metadata.agent [-] Port cb54de0c-b523-42e1-a4c5-0e3f477d8960 in datapath a3ff9216-2127-4ae5-9ba6-73e4685bcdc7 unbound from our chassis#033[00m
Oct 10 23:48:37 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:37.773 162245 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a3ff9216-2127-4ae5-9ba6-73e4685bcdc7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 10 23:48:37 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:37.775 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[74586c48-2b5a-4bad-a7c5-fbe8d4fa3ed0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:37 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:37.775 162245 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a3ff9216-2127-4ae5-9ba6-73e4685bcdc7 namespace which is not needed anymore#033[00m
Oct 10 23:48:37 np0005480824 nova_compute[260089]: 2025-10-11 03:48:37.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:37 np0005480824 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Deactivated successfully.
Oct 10 23:48:37 np0005480824 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Consumed 3.933s CPU time.
Oct 10 23:48:37 np0005480824 systemd-machined[215071]: Machine qemu-3-instance-00000003 terminated.
Oct 10 23:48:37 np0005480824 nova_compute[260089]: 2025-10-11 03:48:37.896 2 INFO nova.virt.libvirt.driver [-] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] Instance destroyed successfully.#033[00m
Oct 10 23:48:37 np0005480824 nova_compute[260089]: 2025-10-11 03:48:37.896 2 DEBUG nova.objects.instance [None req-9831d161-4be8-4e29-bff3-3cf22efe2f53 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Lazy-loading 'resources' on Instance uuid 1c478ad7-214b-4e9c-be93-5b836a13b5f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:48:37 np0005480824 nova_compute[260089]: 2025-10-11 03:48:37.909 2 DEBUG nova.virt.libvirt.vif [None req-9831d161-4be8-4e29-bff3-3cf22efe2f53 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T03:48:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-1799060481',display_name='tempest-VolumesActionsTest-instance-1799060481',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-1799060481',id=3,image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T03:48:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8004df44ba5045b6b3c7b5376587d790',ramdisk_id='',reservation_id='r-mhqoqoa7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk=
'1',image_min_ram='0',owner_project_name='tempest-VolumesActionsTest-1212390094',owner_user_name='tempest-VolumesActionsTest-1212390094-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T03:48:35Z,user_data=None,user_id='94437cd815c640b094db68c0d14ae5c0',uuid=1c478ad7-214b-4e9c-be93-5b836a13b5f3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cb54de0c-b523-42e1-a4c5-0e3f477d8960", "address": "fa:16:3e:9f:da:cd", "network": {"id": "a3ff9216-2127-4ae5-9ba6-73e4685bcdc7", "bridge": "br-int", "label": "tempest-VolumesActionsTest-418846116-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8004df44ba5045b6b3c7b5376587d790", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcb54de0c-b5", "ovs_interfaceid": "cb54de0c-b523-42e1-a4c5-0e3f477d8960", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 10 23:48:37 np0005480824 nova_compute[260089]: 2025-10-11 03:48:37.909 2 DEBUG nova.network.os_vif_util [None req-9831d161-4be8-4e29-bff3-3cf22efe2f53 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Converting VIF {"id": "cb54de0c-b523-42e1-a4c5-0e3f477d8960", "address": "fa:16:3e:9f:da:cd", "network": {"id": "a3ff9216-2127-4ae5-9ba6-73e4685bcdc7", "bridge": "br-int", "label": "tempest-VolumesActionsTest-418846116-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8004df44ba5045b6b3c7b5376587d790", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcb54de0c-b5", "ovs_interfaceid": "cb54de0c-b523-42e1-a4c5-0e3f477d8960", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 23:48:37 np0005480824 nova_compute[260089]: 2025-10-11 03:48:37.910 2 DEBUG nova.network.os_vif_util [None req-9831d161-4be8-4e29-bff3-3cf22efe2f53 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9f:da:cd,bridge_name='br-int',has_traffic_filtering=True,id=cb54de0c-b523-42e1-a4c5-0e3f477d8960,network=Network(a3ff9216-2127-4ae5-9ba6-73e4685bcdc7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcb54de0c-b5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 23:48:37 np0005480824 nova_compute[260089]: 2025-10-11 03:48:37.911 2 DEBUG os_vif [None req-9831d161-4be8-4e29-bff3-3cf22efe2f53 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9f:da:cd,bridge_name='br-int',has_traffic_filtering=True,id=cb54de0c-b523-42e1-a4c5-0e3f477d8960,network=Network(a3ff9216-2127-4ae5-9ba6-73e4685bcdc7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcb54de0c-b5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 10 23:48:37 np0005480824 nova_compute[260089]: 2025-10-11 03:48:37.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:37 np0005480824 nova_compute[260089]: 2025-10-11 03:48:37.913 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcb54de0c-b5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:48:37 np0005480824 nova_compute[260089]: 2025-10-11 03:48:37.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:37 np0005480824 nova_compute[260089]: 2025-10-11 03:48:37.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 10 23:48:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 23:48:37 np0005480824 nova_compute[260089]: 2025-10-11 03:48:37.925 2 INFO os_vif [None req-9831d161-4be8-4e29-bff3-3cf22efe2f53 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9f:da:cd,bridge_name='br-int',has_traffic_filtering=True,id=cb54de0c-b523-42e1-a4c5-0e3f477d8960,network=Network(a3ff9216-2127-4ae5-9ba6-73e4685bcdc7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcb54de0c-b5')#033[00m
Oct 10 23:48:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:48:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 23:48:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:48:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0006919304917952725 of space, bias 1.0, pg target 0.20757914753858175 quantized to 32 (current 32)
Oct 10 23:48:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:48:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 8.902699094875301e-07 of space, bias 1.0, pg target 0.00026708097284625906 quantized to 32 (current 32)
Oct 10 23:48:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:48:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.001042060929055154 of space, bias 1.0, pg target 0.3126182787165462 quantized to 32 (current 32)
Oct 10 23:48:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:48:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 10 23:48:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:48:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 23:48:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:48:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:48:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:48:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 10 23:48:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:48:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 23:48:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:48:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:48:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:48:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 10 23:48:37 np0005480824 neutron-haproxy-ovnmeta-a3ff9216-2127-4ae5-9ba6-73e4685bcdc7[271963]: [NOTICE]   (271974) : haproxy version is 2.8.14-c23fe91
Oct 10 23:48:37 np0005480824 neutron-haproxy-ovnmeta-a3ff9216-2127-4ae5-9ba6-73e4685bcdc7[271963]: [NOTICE]   (271974) : path to executable is /usr/sbin/haproxy
Oct 10 23:48:37 np0005480824 neutron-haproxy-ovnmeta-a3ff9216-2127-4ae5-9ba6-73e4685bcdc7[271963]: [WARNING]  (271974) : Exiting Master process...
Oct 10 23:48:37 np0005480824 neutron-haproxy-ovnmeta-a3ff9216-2127-4ae5-9ba6-73e4685bcdc7[271963]: [WARNING]  (271974) : Exiting Master process...
Oct 10 23:48:37 np0005480824 neutron-haproxy-ovnmeta-a3ff9216-2127-4ae5-9ba6-73e4685bcdc7[271963]: [ALERT]    (271974) : Current worker (271984) exited with code 143 (Terminated)
Oct 10 23:48:37 np0005480824 neutron-haproxy-ovnmeta-a3ff9216-2127-4ae5-9ba6-73e4685bcdc7[271963]: [WARNING]  (271974) : All workers exited. Exiting... (0)
Oct 10 23:48:37 np0005480824 systemd[1]: libpod-7163dc490c36092a9f2183e4e707bf6d635b36671ac19d0e3204cc4a62b0a507.scope: Deactivated successfully.
Oct 10 23:48:37 np0005480824 podman[272183]: 2025-10-11 03:48:37.945644269 +0000 UTC m=+0.062169937 container died 7163dc490c36092a9f2183e4e707bf6d635b36671ac19d0e3204cc4a62b0a507 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3ff9216-2127-4ae5-9ba6-73e4685bcdc7, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 10 23:48:37 np0005480824 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7163dc490c36092a9f2183e4e707bf6d635b36671ac19d0e3204cc4a62b0a507-userdata-shm.mount: Deactivated successfully.
Oct 10 23:48:37 np0005480824 systemd[1]: var-lib-containers-storage-overlay-9e077f07819796831bef950a08fd6e38ab216a4d23629bdfa298c64e9d95d718-merged.mount: Deactivated successfully.
Oct 10 23:48:37 np0005480824 podman[272183]: 2025-10-11 03:48:37.994074672 +0000 UTC m=+0.110600340 container cleanup 7163dc490c36092a9f2183e4e707bf6d635b36671ac19d0e3204cc4a62b0a507 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3ff9216-2127-4ae5-9ba6-73e4685bcdc7, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 10 23:48:38 np0005480824 systemd[1]: libpod-conmon-7163dc490c36092a9f2183e4e707bf6d635b36671ac19d0e3204cc4a62b0a507.scope: Deactivated successfully.
Oct 10 23:48:38 np0005480824 podman[272239]: 2025-10-11 03:48:38.086814599 +0000 UTC m=+0.069410249 container remove 7163dc490c36092a9f2183e4e707bf6d635b36671ac19d0e3204cc4a62b0a507 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3ff9216-2127-4ae5-9ba6-73e4685bcdc7, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 10 23:48:38 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:38.095 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[7a4b8ee0-e54c-4c4a-9e68-b8977d558cf3]: (4, ('Sat Oct 11 03:48:37 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a3ff9216-2127-4ae5-9ba6-73e4685bcdc7 (7163dc490c36092a9f2183e4e707bf6d635b36671ac19d0e3204cc4a62b0a507)\n7163dc490c36092a9f2183e4e707bf6d635b36671ac19d0e3204cc4a62b0a507\nSat Oct 11 03:48:38 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a3ff9216-2127-4ae5-9ba6-73e4685bcdc7 (7163dc490c36092a9f2183e4e707bf6d635b36671ac19d0e3204cc4a62b0a507)\n7163dc490c36092a9f2183e4e707bf6d635b36671ac19d0e3204cc4a62b0a507\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:38 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:38.098 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[7f3c2c69-51c1-46df-9e3a-98fc434e3b7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:38 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:38.099 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa3ff9216-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:48:38 np0005480824 kernel: tapa3ff9216-20: left promiscuous mode
Oct 10 23:48:38 np0005480824 nova_compute[260089]: 2025-10-11 03:48:38.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:38 np0005480824 nova_compute[260089]: 2025-10-11 03:48:38.116 2 DEBUG nova.compute.manager [req-66142d1f-ca07-4bd5-bb6a-953187bb5236 req-454d96e2-93f1-4374-b56f-4123185aaaee 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] Received event network-vif-unplugged-cb54de0c-b523-42e1-a4c5-0e3f477d8960 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:48:38 np0005480824 nova_compute[260089]: 2025-10-11 03:48:38.116 2 DEBUG oslo_concurrency.lockutils [req-66142d1f-ca07-4bd5-bb6a-953187bb5236 req-454d96e2-93f1-4374-b56f-4123185aaaee 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "1c478ad7-214b-4e9c-be93-5b836a13b5f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:48:38 np0005480824 nova_compute[260089]: 2025-10-11 03:48:38.119 2 DEBUG oslo_concurrency.lockutils [req-66142d1f-ca07-4bd5-bb6a-953187bb5236 req-454d96e2-93f1-4374-b56f-4123185aaaee 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "1c478ad7-214b-4e9c-be93-5b836a13b5f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:48:38 np0005480824 nova_compute[260089]: 2025-10-11 03:48:38.119 2 DEBUG oslo_concurrency.lockutils [req-66142d1f-ca07-4bd5-bb6a-953187bb5236 req-454d96e2-93f1-4374-b56f-4123185aaaee 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "1c478ad7-214b-4e9c-be93-5b836a13b5f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:48:38 np0005480824 nova_compute[260089]: 2025-10-11 03:48:38.120 2 DEBUG nova.compute.manager [req-66142d1f-ca07-4bd5-bb6a-953187bb5236 req-454d96e2-93f1-4374-b56f-4123185aaaee 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] No waiting events found dispatching network-vif-unplugged-cb54de0c-b523-42e1-a4c5-0e3f477d8960 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 10 23:48:38 np0005480824 nova_compute[260089]: 2025-10-11 03:48:38.120 2 DEBUG nova.compute.manager [req-66142d1f-ca07-4bd5-bb6a-953187bb5236 req-454d96e2-93f1-4374-b56f-4123185aaaee 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] Received event network-vif-unplugged-cb54de0c-b523-42e1-a4c5-0e3f477d8960 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 10 23:48:38 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:38.120 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[4c6f0ee3-de20-4f51-98be-e6e96b1ebc66]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:38 np0005480824 nova_compute[260089]: 2025-10-11 03:48:38.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:38 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:38.151 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[dfaac584-d5a9-49cd-a36e-36e82bebf43a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:38 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:38.153 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[c2b32a7d-45ef-4177-b92a-ff700ba2c6f8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:38 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:38.183 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[289f42d2-b2e2-4369-b795-e7e696f8483d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 398111, 'reachable_time': 37326, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272254, 'error': None, 'target': 'ovnmeta-a3ff9216-2127-4ae5-9ba6-73e4685bcdc7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:38 np0005480824 systemd[1]: run-netns-ovnmeta\x2da3ff9216\x2d2127\x2d4ae5\x2d9ba6\x2d73e4685bcdc7.mount: Deactivated successfully.
Oct 10 23:48:38 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:38.193 162666 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a3ff9216-2127-4ae5-9ba6-73e4685bcdc7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 10 23:48:38 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:38.193 162666 DEBUG oslo.privsep.daemon [-] privsep: reply[b1f66e7d-fd1d-45fa-8887-cc609238e6be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:38 np0005480824 ceph-mon[74326]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 10 23:48:38 np0005480824 ceph-mon[74326]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.0 total, 600.0 interval#012Cumulative writes: 4796 writes, 21K keys, 4796 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s#012Cumulative WAL: 4796 writes, 4796 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1510 writes, 7067 keys, 1510 commit groups, 1.0 writes per commit group, ingest: 9.41 MB, 0.02 MB/s#012Interval WAL: 1510 writes, 1510 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    116.8      0.21              0.09        12    0.017       0      0       0.0       0.0#012  L6      1/0    7.10 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.2    168.1    137.8      0.56              0.31        11    0.051     48K   5795       0.0       0.0#012 Sum      1/0    7.10 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.2    123.0    132.2      0.77              0.40        23    0.033     48K   5795       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   5.3    136.4    136.3      0.39              0.22        12    0.033     28K   3615       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    168.1    137.8      0.56              0.31        11    0.051     48K   5795       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    119.0      0.20              0.09        11    0.018       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.2      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1800.0 total, 600.0 interval#012Flush(GB): cumulative 0.024, interval 0.010#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.10 GB write, 0.06 MB/s write, 0.09 GB read, 0.05 MB/s read, 0.8 seconds#012Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.09 MB/s read, 0.4 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5617dbc851f0#2 capacity: 304.00 MB usage: 8.55 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000146 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(555,8.15 MB,2.67952%) FilterBlock(24,142.55 KB,0.0457914%) IndexBlock(24,269.39 KB,0.0865384%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct 10 23:48:38 np0005480824 nova_compute[260089]: 2025-10-11 03:48:38.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:38 np0005480824 nova_compute[260089]: 2025-10-11 03:48:38.553 2 INFO nova.virt.libvirt.driver [None req-9831d161-4be8-4e29-bff3-3cf22efe2f53 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] Deleting instance files /var/lib/nova/instances/1c478ad7-214b-4e9c-be93-5b836a13b5f3_del#033[00m
Oct 10 23:48:38 np0005480824 nova_compute[260089]: 2025-10-11 03:48:38.555 2 INFO nova.virt.libvirt.driver [None req-9831d161-4be8-4e29-bff3-3cf22efe2f53 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] Deletion of /var/lib/nova/instances/1c478ad7-214b-4e9c-be93-5b836a13b5f3_del complete#033[00m
Oct 10 23:48:38 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1035: 321 pgs: 321 active+clean; 1.1 GiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 130 MiB/s wr, 536 op/s
Oct 10 23:48:38 np0005480824 nova_compute[260089]: 2025-10-11 03:48:38.630 2 INFO nova.compute.manager [None req-9831d161-4be8-4e29-bff3-3cf22efe2f53 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] Took 0.97 seconds to destroy the instance on the hypervisor.#033[00m
Oct 10 23:48:38 np0005480824 nova_compute[260089]: 2025-10-11 03:48:38.631 2 DEBUG oslo.service.loopingcall [None req-9831d161-4be8-4e29-bff3-3cf22efe2f53 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 10 23:48:38 np0005480824 nova_compute[260089]: 2025-10-11 03:48:38.631 2 DEBUG nova.compute.manager [-] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 10 23:48:38 np0005480824 nova_compute[260089]: 2025-10-11 03:48:38.631 2 DEBUG nova.network.neutron [-] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 10 23:48:39 np0005480824 nova_compute[260089]: 2025-10-11 03:48:39.185 2 DEBUG nova.network.neutron [-] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:48:39 np0005480824 nova_compute[260089]: 2025-10-11 03:48:39.203 2 INFO nova.compute.manager [-] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] Took 0.57 seconds to deallocate network for instance.#033[00m
Oct 10 23:48:39 np0005480824 nova_compute[260089]: 2025-10-11 03:48:39.246 2 DEBUG oslo_concurrency.lockutils [None req-9831d161-4be8-4e29-bff3-3cf22efe2f53 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:48:39 np0005480824 nova_compute[260089]: 2025-10-11 03:48:39.247 2 DEBUG oslo_concurrency.lockutils [None req-9831d161-4be8-4e29-bff3-3cf22efe2f53 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:48:39 np0005480824 nova_compute[260089]: 2025-10-11 03:48:39.260 2 DEBUG nova.compute.manager [req-7e1ff54e-d6e3-4acc-b238-8ad12333bf19 req-03cc05da-b816-4bb7-a0b7-cc8070fec115 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] Received event network-vif-deleted-cb54de0c-b523-42e1-a4c5-0e3f477d8960 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:48:39 np0005480824 nova_compute[260089]: 2025-10-11 03:48:39.335 2 DEBUG oslo_concurrency.processutils [None req-9831d161-4be8-4e29-bff3-3cf22efe2f53 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:48:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:48:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e169 do_prune osdmap full prune enabled
Oct 10 23:48:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e170 e170: 3 total, 3 up, 3 in
Oct 10 23:48:39 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e170: 3 total, 3 up, 3 in
Oct 10 23:48:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:48:39 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2503342897' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:48:39 np0005480824 nova_compute[260089]: 2025-10-11 03:48:39.841 2 DEBUG oslo_concurrency.processutils [None req-9831d161-4be8-4e29-bff3-3cf22efe2f53 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:48:39 np0005480824 nova_compute[260089]: 2025-10-11 03:48:39.853 2 DEBUG nova.compute.provider_tree [None req-9831d161-4be8-4e29-bff3-3cf22efe2f53 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 23:48:39 np0005480824 nova_compute[260089]: 2025-10-11 03:48:39.874 2 DEBUG nova.scheduler.client.report [None req-9831d161-4be8-4e29-bff3-3cf22efe2f53 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 23:48:39 np0005480824 nova_compute[260089]: 2025-10-11 03:48:39.906 2 DEBUG oslo_concurrency.lockutils [None req-9831d161-4be8-4e29-bff3-3cf22efe2f53 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.659s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:48:39 np0005480824 nova_compute[260089]: 2025-10-11 03:48:39.933 2 INFO nova.scheduler.client.report [None req-9831d161-4be8-4e29-bff3-3cf22efe2f53 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Deleted allocations for instance 1c478ad7-214b-4e9c-be93-5b836a13b5f3#033[00m
Oct 10 23:48:40 np0005480824 nova_compute[260089]: 2025-10-11 03:48:40.026 2 DEBUG oslo_concurrency.lockutils [None req-9831d161-4be8-4e29-bff3-3cf22efe2f53 94437cd815c640b094db68c0d14ae5c0 8004df44ba5045b6b3c7b5376587d790 - - default default] Lock "1c478ad7-214b-4e9c-be93-5b836a13b5f3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.373s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:48:40 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:48:40 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/806412281' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:48:40 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:48:40 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/806412281' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:48:40 np0005480824 nova_compute[260089]: 2025-10-11 03:48:40.188 2 DEBUG nova.compute.manager [req-06b8c60f-5c86-4462-9571-47a26c92a936 req-e2acaeb9-0bbd-42b2-86ee-fd7e6235ff14 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] Received event network-vif-plugged-cb54de0c-b523-42e1-a4c5-0e3f477d8960 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:48:40 np0005480824 nova_compute[260089]: 2025-10-11 03:48:40.188 2 DEBUG oslo_concurrency.lockutils [req-06b8c60f-5c86-4462-9571-47a26c92a936 req-e2acaeb9-0bbd-42b2-86ee-fd7e6235ff14 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "1c478ad7-214b-4e9c-be93-5b836a13b5f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:48:40 np0005480824 nova_compute[260089]: 2025-10-11 03:48:40.188 2 DEBUG oslo_concurrency.lockutils [req-06b8c60f-5c86-4462-9571-47a26c92a936 req-e2acaeb9-0bbd-42b2-86ee-fd7e6235ff14 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "1c478ad7-214b-4e9c-be93-5b836a13b5f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:48:40 np0005480824 nova_compute[260089]: 2025-10-11 03:48:40.189 2 DEBUG oslo_concurrency.lockutils [req-06b8c60f-5c86-4462-9571-47a26c92a936 req-e2acaeb9-0bbd-42b2-86ee-fd7e6235ff14 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "1c478ad7-214b-4e9c-be93-5b836a13b5f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:48:40 np0005480824 nova_compute[260089]: 2025-10-11 03:48:40.189 2 DEBUG nova.compute.manager [req-06b8c60f-5c86-4462-9571-47a26c92a936 req-e2acaeb9-0bbd-42b2-86ee-fd7e6235ff14 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] No waiting events found dispatching network-vif-plugged-cb54de0c-b523-42e1-a4c5-0e3f477d8960 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 10 23:48:40 np0005480824 nova_compute[260089]: 2025-10-11 03:48:40.189 2 WARNING nova.compute.manager [req-06b8c60f-5c86-4462-9571-47a26c92a936 req-e2acaeb9-0bbd-42b2-86ee-fd7e6235ff14 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] Received unexpected event network-vif-plugged-cb54de0c-b523-42e1-a4c5-0e3f477d8960 for instance with vm_state deleted and task_state None.#033[00m
Oct 10 23:48:40 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1037: 321 pgs: 321 active+clean; 1.1 GiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 117 MiB/s wr, 481 op/s
Oct 10 23:48:40 np0005480824 nova_compute[260089]: 2025-10-11 03:48:40.996 2 DEBUG oslo_concurrency.lockutils [None req-351f51aa-d81a-48fc-a806-dc646ee59daa 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] Acquiring lock "aefb31cf-337d-446e-a617-c82e2e9b4809" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:48:40 np0005480824 nova_compute[260089]: 2025-10-11 03:48:40.997 2 DEBUG oslo_concurrency.lockutils [None req-351f51aa-d81a-48fc-a806-dc646ee59daa 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] Lock "aefb31cf-337d-446e-a617-c82e2e9b4809" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:48:40 np0005480824 nova_compute[260089]: 2025-10-11 03:48:40.997 2 DEBUG oslo_concurrency.lockutils [None req-351f51aa-d81a-48fc-a806-dc646ee59daa 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] Acquiring lock "aefb31cf-337d-446e-a617-c82e2e9b4809-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:48:40 np0005480824 nova_compute[260089]: 2025-10-11 03:48:40.998 2 DEBUG oslo_concurrency.lockutils [None req-351f51aa-d81a-48fc-a806-dc646ee59daa 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] Lock "aefb31cf-337d-446e-a617-c82e2e9b4809-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:48:40 np0005480824 nova_compute[260089]: 2025-10-11 03:48:40.998 2 DEBUG oslo_concurrency.lockutils [None req-351f51aa-d81a-48fc-a806-dc646ee59daa 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] Lock "aefb31cf-337d-446e-a617-c82e2e9b4809-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:48:41 np0005480824 nova_compute[260089]: 2025-10-11 03:48:41.000 2 INFO nova.compute.manager [None req-351f51aa-d81a-48fc-a806-dc646ee59daa 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] Terminating instance#033[00m
Oct 10 23:48:41 np0005480824 nova_compute[260089]: 2025-10-11 03:48:41.002 2 DEBUG nova.compute.manager [None req-351f51aa-d81a-48fc-a806-dc646ee59daa 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 10 23:48:41 np0005480824 kernel: tap66fcb192-90 (unregistering): left promiscuous mode
Oct 10 23:48:41 np0005480824 NetworkManager[44969]: <info>  [1760154521.0581] device (tap66fcb192-90): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 10 23:48:41 np0005480824 ovn_controller[152667]: 2025-10-11T03:48:41Z|00057|binding|INFO|Releasing lport 66fcb192-9003-491b-a694-25e0f0feccd1 from this chassis (sb_readonly=0)
Oct 10 23:48:41 np0005480824 ovn_controller[152667]: 2025-10-11T03:48:41Z|00058|binding|INFO|Setting lport 66fcb192-9003-491b-a694-25e0f0feccd1 down in Southbound
Oct 10 23:48:41 np0005480824 ovn_controller[152667]: 2025-10-11T03:48:41Z|00059|binding|INFO|Removing iface tap66fcb192-90 ovn-installed in OVS
Oct 10 23:48:41 np0005480824 nova_compute[260089]: 2025-10-11 03:48:41.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:41 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:41.079 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:03:f2:ae 10.100.0.8'], port_security=['fa:16:3e:03:f2:ae 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'aefb31cf-337d-446e-a617-c82e2e9b4809', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-96e29220-0426-44b4-b5aa-c255f37e21b7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5825b55787104735a580132059839426', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f2cedaad-1af4-4cf5-8d25-c0107ac10f73', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fbe50a8a-95d0-45bf-b049-b26465cb5972, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], logical_port=66fcb192-9003-491b-a694-25e0f0feccd1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 23:48:41 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:41.080 162245 INFO neutron.agent.ovn.metadata.agent [-] Port 66fcb192-9003-491b-a694-25e0f0feccd1 in datapath 96e29220-0426-44b4-b5aa-c255f37e21b7 unbound from our chassis#033[00m
Oct 10 23:48:41 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:41.081 162245 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 96e29220-0426-44b4-b5aa-c255f37e21b7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 10 23:48:41 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:41.082 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[bd3e443a-f964-4330-9cd8-0445b2d8b02f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:41 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:41.082 162245 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-96e29220-0426-44b4-b5aa-c255f37e21b7 namespace which is not needed anymore#033[00m
Oct 10 23:48:41 np0005480824 nova_compute[260089]: 2025-10-11 03:48:41.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:41 np0005480824 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Deactivated successfully.
Oct 10 23:48:41 np0005480824 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Consumed 5.928s CPU time.
Oct 10 23:48:41 np0005480824 systemd-machined[215071]: Machine qemu-4-instance-00000004 terminated.
Oct 10 23:48:41 np0005480824 nova_compute[260089]: 2025-10-11 03:48:41.251 2 INFO nova.virt.libvirt.driver [-] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] Instance destroyed successfully.#033[00m
Oct 10 23:48:41 np0005480824 nova_compute[260089]: 2025-10-11 03:48:41.252 2 DEBUG nova.objects.instance [None req-351f51aa-d81a-48fc-a806-dc646ee59daa 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] Lazy-loading 'resources' on Instance uuid aefb31cf-337d-446e-a617-c82e2e9b4809 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:48:41 np0005480824 nova_compute[260089]: 2025-10-11 03:48:41.267 2 DEBUG nova.virt.libvirt.vif [None req-351f51aa-d81a-48fc-a806-dc646ee59daa 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T03:48:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-2075905536',display_name='tempest-VolumesActionsTest-instance-2075905536',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-2075905536',id=4,image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T03:48:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5825b55787104735a580132059839426',ramdisk_id='',reservation_id='r-2xi2qcn0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk=
'1',image_min_ram='0',owner_project_name='tempest-VolumesActionsTest-671828259',owner_user_name='tempest-VolumesActionsTest-671828259-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T03:48:36Z,user_data=None,user_id='2619b09d11614c958f6b7a5b9db7b922',uuid=aefb31cf-337d-446e-a617-c82e2e9b4809,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "66fcb192-9003-491b-a694-25e0f0feccd1", "address": "fa:16:3e:03:f2:ae", "network": {"id": "96e29220-0426-44b4-b5aa-c255f37e21b7", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1345267436-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5825b55787104735a580132059839426", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66fcb192-90", "ovs_interfaceid": "66fcb192-9003-491b-a694-25e0f0feccd1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 10 23:48:41 np0005480824 nova_compute[260089]: 2025-10-11 03:48:41.268 2 DEBUG nova.network.os_vif_util [None req-351f51aa-d81a-48fc-a806-dc646ee59daa 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] Converting VIF {"id": "66fcb192-9003-491b-a694-25e0f0feccd1", "address": "fa:16:3e:03:f2:ae", "network": {"id": "96e29220-0426-44b4-b5aa-c255f37e21b7", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1345267436-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5825b55787104735a580132059839426", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66fcb192-90", "ovs_interfaceid": "66fcb192-9003-491b-a694-25e0f0feccd1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 23:48:41 np0005480824 nova_compute[260089]: 2025-10-11 03:48:41.270 2 DEBUG nova.network.os_vif_util [None req-351f51aa-d81a-48fc-a806-dc646ee59daa 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:03:f2:ae,bridge_name='br-int',has_traffic_filtering=True,id=66fcb192-9003-491b-a694-25e0f0feccd1,network=Network(96e29220-0426-44b4-b5aa-c255f37e21b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66fcb192-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 23:48:41 np0005480824 nova_compute[260089]: 2025-10-11 03:48:41.271 2 DEBUG os_vif [None req-351f51aa-d81a-48fc-a806-dc646ee59daa 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:f2:ae,bridge_name='br-int',has_traffic_filtering=True,id=66fcb192-9003-491b-a694-25e0f0feccd1,network=Network(96e29220-0426-44b4-b5aa-c255f37e21b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66fcb192-90') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 10 23:48:41 np0005480824 nova_compute[260089]: 2025-10-11 03:48:41.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:41 np0005480824 nova_compute[260089]: 2025-10-11 03:48:41.274 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap66fcb192-90, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:48:41 np0005480824 nova_compute[260089]: 2025-10-11 03:48:41.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:41 np0005480824 nova_compute[260089]: 2025-10-11 03:48:41.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:41 np0005480824 neutron-haproxy-ovnmeta-96e29220-0426-44b4-b5aa-c255f37e21b7[272144]: [NOTICE]   (272148) : haproxy version is 2.8.14-c23fe91
Oct 10 23:48:41 np0005480824 neutron-haproxy-ovnmeta-96e29220-0426-44b4-b5aa-c255f37e21b7[272144]: [NOTICE]   (272148) : path to executable is /usr/sbin/haproxy
Oct 10 23:48:41 np0005480824 neutron-haproxy-ovnmeta-96e29220-0426-44b4-b5aa-c255f37e21b7[272144]: [WARNING]  (272148) : Exiting Master process...
Oct 10 23:48:41 np0005480824 nova_compute[260089]: 2025-10-11 03:48:41.283 2 INFO os_vif [None req-351f51aa-d81a-48fc-a806-dc646ee59daa 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:f2:ae,bridge_name='br-int',has_traffic_filtering=True,id=66fcb192-9003-491b-a694-25e0f0feccd1,network=Network(96e29220-0426-44b4-b5aa-c255f37e21b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66fcb192-90')#033[00m
Oct 10 23:48:41 np0005480824 neutron-haproxy-ovnmeta-96e29220-0426-44b4-b5aa-c255f37e21b7[272144]: [ALERT]    (272148) : Current worker (272150) exited with code 143 (Terminated)
Oct 10 23:48:41 np0005480824 neutron-haproxy-ovnmeta-96e29220-0426-44b4-b5aa-c255f37e21b7[272144]: [WARNING]  (272148) : All workers exited. Exiting... (0)
Oct 10 23:48:41 np0005480824 systemd[1]: libpod-a66dd4630433d2d0796adcc8cec0fb69045665cf8b85c3cfd9f06c04ac42385a.scope: Deactivated successfully.
Oct 10 23:48:41 np0005480824 podman[272302]: 2025-10-11 03:48:41.294471029 +0000 UTC m=+0.076603298 container died a66dd4630433d2d0796adcc8cec0fb69045665cf8b85c3cfd9f06c04ac42385a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-96e29220-0426-44b4-b5aa-c255f37e21b7, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 10 23:48:41 np0005480824 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a66dd4630433d2d0796adcc8cec0fb69045665cf8b85c3cfd9f06c04ac42385a-userdata-shm.mount: Deactivated successfully.
Oct 10 23:48:41 np0005480824 systemd[1]: var-lib-containers-storage-overlay-331b4600df57944982afb8c6b6b676ae78e65e3481f51d77a0fd1b8f615f1c08-merged.mount: Deactivated successfully.
Oct 10 23:48:41 np0005480824 podman[272302]: 2025-10-11 03:48:41.354871414 +0000 UTC m=+0.137003683 container cleanup a66dd4630433d2d0796adcc8cec0fb69045665cf8b85c3cfd9f06c04ac42385a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-96e29220-0426-44b4-b5aa-c255f37e21b7, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3)
Oct 10 23:48:41 np0005480824 systemd[1]: libpod-conmon-a66dd4630433d2d0796adcc8cec0fb69045665cf8b85c3cfd9f06c04ac42385a.scope: Deactivated successfully.
Oct 10 23:48:41 np0005480824 podman[272357]: 2025-10-11 03:48:41.468514354 +0000 UTC m=+0.077840286 container remove a66dd4630433d2d0796adcc8cec0fb69045665cf8b85c3cfd9f06c04ac42385a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-96e29220-0426-44b4-b5aa-c255f37e21b7, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 10 23:48:41 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:41.479 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[22f5188b-dc96-4037-a745-a7d2bef50d67]: (4, ('Sat Oct 11 03:48:41 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-96e29220-0426-44b4-b5aa-c255f37e21b7 (a66dd4630433d2d0796adcc8cec0fb69045665cf8b85c3cfd9f06c04ac42385a)\na66dd4630433d2d0796adcc8cec0fb69045665cf8b85c3cfd9f06c04ac42385a\nSat Oct 11 03:48:41 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-96e29220-0426-44b4-b5aa-c255f37e21b7 (a66dd4630433d2d0796adcc8cec0fb69045665cf8b85c3cfd9f06c04ac42385a)\na66dd4630433d2d0796adcc8cec0fb69045665cf8b85c3cfd9f06c04ac42385a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:41 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:41.482 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[be5311f6-cfd2-4533-856e-38eaab26653e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:41 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:41.487 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap96e29220-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:48:41 np0005480824 nova_compute[260089]: 2025-10-11 03:48:41.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:41 np0005480824 kernel: tap96e29220-00: left promiscuous mode
Oct 10 23:48:41 np0005480824 nova_compute[260089]: 2025-10-11 03:48:41.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:41 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:41.569 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[7f47d11f-abbf-4f41-81f1-12beea437900]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:41 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:41.592 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[310aef48-a1a9-49cc-88e7-8359d7c40e01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:41 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:41.593 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[39ab4dbc-3042-45eb-b230-d7f6c0a8d27c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:41 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:41.622 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[b20c34e0-efa6-4776-88b0-f9b38fe949b6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 398241, 'reachable_time': 36019, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272372, 'error': None, 'target': 'ovnmeta-96e29220-0426-44b4-b5aa-c255f37e21b7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:41 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:41.628 162666 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-96e29220-0426-44b4-b5aa-c255f37e21b7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 10 23:48:41 np0005480824 systemd[1]: run-netns-ovnmeta\x2d96e29220\x2d0426\x2d44b4\x2db5aa\x2dc255f37e21b7.mount: Deactivated successfully.
Oct 10 23:48:41 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:41.628 162666 DEBUG oslo.privsep.daemon [-] privsep: reply[c3d053d4-2608-4898-b8a2-6716370fb815]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:41 np0005480824 nova_compute[260089]: 2025-10-11 03:48:41.786 2 INFO nova.virt.libvirt.driver [None req-351f51aa-d81a-48fc-a806-dc646ee59daa 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] Deleting instance files /var/lib/nova/instances/aefb31cf-337d-446e-a617-c82e2e9b4809_del#033[00m
Oct 10 23:48:41 np0005480824 nova_compute[260089]: 2025-10-11 03:48:41.786 2 INFO nova.virt.libvirt.driver [None req-351f51aa-d81a-48fc-a806-dc646ee59daa 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] Deletion of /var/lib/nova/instances/aefb31cf-337d-446e-a617-c82e2e9b4809_del complete#033[00m
Oct 10 23:48:41 np0005480824 nova_compute[260089]: 2025-10-11 03:48:41.845 2 INFO nova.compute.manager [None req-351f51aa-d81a-48fc-a806-dc646ee59daa 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] Took 0.84 seconds to destroy the instance on the hypervisor.#033[00m
Oct 10 23:48:41 np0005480824 nova_compute[260089]: 2025-10-11 03:48:41.847 2 DEBUG oslo.service.loopingcall [None req-351f51aa-d81a-48fc-a806-dc646ee59daa 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 10 23:48:41 np0005480824 nova_compute[260089]: 2025-10-11 03:48:41.847 2 DEBUG nova.compute.manager [-] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 10 23:48:41 np0005480824 nova_compute[260089]: 2025-10-11 03:48:41.847 2 DEBUG nova.network.neutron [-] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 10 23:48:42 np0005480824 nova_compute[260089]: 2025-10-11 03:48:42.263 2 DEBUG nova.compute.manager [req-d16d0014-92d2-4d0d-9467-944c92960241 req-8d6b29cb-2d5d-4fcf-bed2-1994613ee559 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] Received event network-vif-unplugged-66fcb192-9003-491b-a694-25e0f0feccd1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:48:42 np0005480824 nova_compute[260089]: 2025-10-11 03:48:42.263 2 DEBUG oslo_concurrency.lockutils [req-d16d0014-92d2-4d0d-9467-944c92960241 req-8d6b29cb-2d5d-4fcf-bed2-1994613ee559 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "aefb31cf-337d-446e-a617-c82e2e9b4809-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:48:42 np0005480824 nova_compute[260089]: 2025-10-11 03:48:42.264 2 DEBUG oslo_concurrency.lockutils [req-d16d0014-92d2-4d0d-9467-944c92960241 req-8d6b29cb-2d5d-4fcf-bed2-1994613ee559 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "aefb31cf-337d-446e-a617-c82e2e9b4809-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:48:42 np0005480824 nova_compute[260089]: 2025-10-11 03:48:42.264 2 DEBUG oslo_concurrency.lockutils [req-d16d0014-92d2-4d0d-9467-944c92960241 req-8d6b29cb-2d5d-4fcf-bed2-1994613ee559 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "aefb31cf-337d-446e-a617-c82e2e9b4809-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:48:42 np0005480824 nova_compute[260089]: 2025-10-11 03:48:42.265 2 DEBUG nova.compute.manager [req-d16d0014-92d2-4d0d-9467-944c92960241 req-8d6b29cb-2d5d-4fcf-bed2-1994613ee559 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] No waiting events found dispatching network-vif-unplugged-66fcb192-9003-491b-a694-25e0f0feccd1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 10 23:48:42 np0005480824 nova_compute[260089]: 2025-10-11 03:48:42.265 2 DEBUG nova.compute.manager [req-d16d0014-92d2-4d0d-9467-944c92960241 req-8d6b29cb-2d5d-4fcf-bed2-1994613ee559 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] Received event network-vif-unplugged-66fcb192-9003-491b-a694-25e0f0feccd1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 10 23:48:42 np0005480824 nova_compute[260089]: 2025-10-11 03:48:42.266 2 DEBUG nova.compute.manager [req-d16d0014-92d2-4d0d-9467-944c92960241 req-8d6b29cb-2d5d-4fcf-bed2-1994613ee559 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] Received event network-vif-plugged-66fcb192-9003-491b-a694-25e0f0feccd1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:48:42 np0005480824 nova_compute[260089]: 2025-10-11 03:48:42.266 2 DEBUG oslo_concurrency.lockutils [req-d16d0014-92d2-4d0d-9467-944c92960241 req-8d6b29cb-2d5d-4fcf-bed2-1994613ee559 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "aefb31cf-337d-446e-a617-c82e2e9b4809-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:48:42 np0005480824 nova_compute[260089]: 2025-10-11 03:48:42.267 2 DEBUG oslo_concurrency.lockutils [req-d16d0014-92d2-4d0d-9467-944c92960241 req-8d6b29cb-2d5d-4fcf-bed2-1994613ee559 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "aefb31cf-337d-446e-a617-c82e2e9b4809-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:48:42 np0005480824 nova_compute[260089]: 2025-10-11 03:48:42.267 2 DEBUG oslo_concurrency.lockutils [req-d16d0014-92d2-4d0d-9467-944c92960241 req-8d6b29cb-2d5d-4fcf-bed2-1994613ee559 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "aefb31cf-337d-446e-a617-c82e2e9b4809-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:48:42 np0005480824 nova_compute[260089]: 2025-10-11 03:48:42.268 2 DEBUG nova.compute.manager [req-d16d0014-92d2-4d0d-9467-944c92960241 req-8d6b29cb-2d5d-4fcf-bed2-1994613ee559 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] No waiting events found dispatching network-vif-plugged-66fcb192-9003-491b-a694-25e0f0feccd1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 10 23:48:42 np0005480824 nova_compute[260089]: 2025-10-11 03:48:42.268 2 WARNING nova.compute.manager [req-d16d0014-92d2-4d0d-9467-944c92960241 req-8d6b29cb-2d5d-4fcf-bed2-1994613ee559 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] Received unexpected event network-vif-plugged-66fcb192-9003-491b-a694-25e0f0feccd1 for instance with vm_state active and task_state deleting.#033[00m
Oct 10 23:48:42 np0005480824 nova_compute[260089]: 2025-10-11 03:48:42.386 2 DEBUG nova.network.neutron [-] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:48:42 np0005480824 nova_compute[260089]: 2025-10-11 03:48:42.414 2 INFO nova.compute.manager [-] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] Took 0.57 seconds to deallocate network for instance.#033[00m
Oct 10 23:48:42 np0005480824 nova_compute[260089]: 2025-10-11 03:48:42.468 2 DEBUG nova.compute.manager [req-2b0a75e8-4a16-46e9-a692-79640d91db7b req-8675ef8b-cf77-456c-bb93-0530dc771c0b 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] Received event network-vif-deleted-66fcb192-9003-491b-a694-25e0f0feccd1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:48:42 np0005480824 nova_compute[260089]: 2025-10-11 03:48:42.481 2 DEBUG oslo_concurrency.lockutils [None req-351f51aa-d81a-48fc-a806-dc646ee59daa 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:48:42 np0005480824 nova_compute[260089]: 2025-10-11 03:48:42.481 2 DEBUG oslo_concurrency.lockutils [None req-351f51aa-d81a-48fc-a806-dc646ee59daa 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:48:42 np0005480824 nova_compute[260089]: 2025-10-11 03:48:42.537 2 DEBUG oslo_concurrency.processutils [None req-351f51aa-d81a-48fc-a806-dc646ee59daa 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:48:42 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1038: 321 pgs: 321 active+clean; 330 MiB data, 555 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 100 MiB/s wr, 554 op/s
Oct 10 23:48:43 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:48:43 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2552301500' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:48:43 np0005480824 nova_compute[260089]: 2025-10-11 03:48:43.050 2 DEBUG oslo_concurrency.processutils [None req-351f51aa-d81a-48fc-a806-dc646ee59daa 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:48:43 np0005480824 nova_compute[260089]: 2025-10-11 03:48:43.060 2 DEBUG nova.compute.provider_tree [None req-351f51aa-d81a-48fc-a806-dc646ee59daa 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 23:48:43 np0005480824 nova_compute[260089]: 2025-10-11 03:48:43.084 2 DEBUG nova.scheduler.client.report [None req-351f51aa-d81a-48fc-a806-dc646ee59daa 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 23:48:43 np0005480824 nova_compute[260089]: 2025-10-11 03:48:43.109 2 DEBUG oslo_concurrency.lockutils [None req-351f51aa-d81a-48fc-a806-dc646ee59daa 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:48:43 np0005480824 nova_compute[260089]: 2025-10-11 03:48:43.131 2 INFO nova.scheduler.client.report [None req-351f51aa-d81a-48fc-a806-dc646ee59daa 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] Deleted allocations for instance aefb31cf-337d-446e-a617-c82e2e9b4809#033[00m
Oct 10 23:48:43 np0005480824 nova_compute[260089]: 2025-10-11 03:48:43.193 2 DEBUG oslo_concurrency.lockutils [None req-351f51aa-d81a-48fc-a806-dc646ee59daa 2619b09d11614c958f6b7a5b9db7b922 5825b55787104735a580132059839426 - - default default] Lock "aefb31cf-337d-446e-a617-c82e2e9b4809" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.196s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:48:43 np0005480824 nova_compute[260089]: 2025-10-11 03:48:43.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:44 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:48:44 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1039: 321 pgs: 321 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 4.8 MiB/s rd, 98 MiB/s wr, 564 op/s
Oct 10 23:48:46 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:48:46 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/445633857' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:48:46 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:48:46 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/445633857' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:48:46 np0005480824 nova_compute[260089]: 2025-10-11 03:48:46.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:46 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1040: 321 pgs: 321 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 4.8 MiB/s rd, 98 MiB/s wr, 564 op/s
Oct 10 23:48:48 np0005480824 nova_compute[260089]: 2025-10-11 03:48:48.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:48 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1041: 321 pgs: 321 active+clean; 296 MiB data, 446 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 26 MiB/s wr, 336 op/s
Oct 10 23:48:48 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:48:48 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1066096293' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:48:48 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:48:48 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1066096293' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:48:49 np0005480824 podman[272395]: 2025-10-11 03:48:49.071042468 +0000 UTC m=+0.103417940 container health_status 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 10 23:48:49 np0005480824 podman[272396]: 2025-10-11 03:48:49.071211522 +0000 UTC m=+0.103392400 container health_status f2b19cad22d0fbdd185b264c3c8b6443d09a02319dc9fb0585dc69a0f24a758d (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251009)
Oct 10 23:48:49 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:48:49 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e170 do_prune osdmap full prune enabled
Oct 10 23:48:49 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e171 e171: 3 total, 3 up, 3 in
Oct 10 23:48:49 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e171: 3 total, 3 up, 3 in
Oct 10 23:48:50 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1043: 321 pgs: 321 active+clean; 296 MiB data, 446 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 26 MiB/s wr, 336 op/s
Oct 10 23:48:51 np0005480824 nova_compute[260089]: 2025-10-11 03:48:51.327 2 DEBUG oslo_concurrency.lockutils [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Acquiring lock "c44627c6-7bd8-4e1a-b32f-a79f70a179c7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:48:51 np0005480824 nova_compute[260089]: 2025-10-11 03:48:51.327 2 DEBUG oslo_concurrency.lockutils [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Lock "c44627c6-7bd8-4e1a-b32f-a79f70a179c7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:48:51 np0005480824 nova_compute[260089]: 2025-10-11 03:48:51.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:51 np0005480824 nova_compute[260089]: 2025-10-11 03:48:51.344 2 DEBUG nova.compute.manager [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 10 23:48:51 np0005480824 nova_compute[260089]: 2025-10-11 03:48:51.416 2 DEBUG oslo_concurrency.lockutils [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:48:51 np0005480824 nova_compute[260089]: 2025-10-11 03:48:51.417 2 DEBUG oslo_concurrency.lockutils [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:48:51 np0005480824 nova_compute[260089]: 2025-10-11 03:48:51.426 2 DEBUG nova.virt.hardware [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 10 23:48:51 np0005480824 nova_compute[260089]: 2025-10-11 03:48:51.427 2 INFO nova.compute.claims [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 10 23:48:51 np0005480824 nova_compute[260089]: 2025-10-11 03:48:51.552 2 DEBUG oslo_concurrency.processutils [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:48:51 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:48:51 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/396853047' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:48:51 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:48:51 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/396853047' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:48:51 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:48:51 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1689499997' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:48:51 np0005480824 nova_compute[260089]: 2025-10-11 03:48:51.981 2 DEBUG oslo_concurrency.processutils [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:48:51 np0005480824 nova_compute[260089]: 2025-10-11 03:48:51.992 2 DEBUG nova.compute.provider_tree [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 23:48:52 np0005480824 nova_compute[260089]: 2025-10-11 03:48:52.015 2 DEBUG nova.scheduler.client.report [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 23:48:52 np0005480824 nova_compute[260089]: 2025-10-11 03:48:52.044 2 DEBUG oslo_concurrency.lockutils [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:48:52 np0005480824 nova_compute[260089]: 2025-10-11 03:48:52.046 2 DEBUG nova.compute.manager [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 10 23:48:52 np0005480824 nova_compute[260089]: 2025-10-11 03:48:52.112 2 DEBUG nova.compute.manager [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 10 23:48:52 np0005480824 nova_compute[260089]: 2025-10-11 03:48:52.113 2 DEBUG nova.network.neutron [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 10 23:48:52 np0005480824 nova_compute[260089]: 2025-10-11 03:48:52.137 2 INFO nova.virt.libvirt.driver [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 10 23:48:52 np0005480824 nova_compute[260089]: 2025-10-11 03:48:52.158 2 DEBUG nova.compute.manager [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 10 23:48:52 np0005480824 nova_compute[260089]: 2025-10-11 03:48:52.261 2 DEBUG nova.compute.manager [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 10 23:48:52 np0005480824 nova_compute[260089]: 2025-10-11 03:48:52.264 2 DEBUG nova.virt.libvirt.driver [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 10 23:48:52 np0005480824 nova_compute[260089]: 2025-10-11 03:48:52.265 2 INFO nova.virt.libvirt.driver [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Creating image(s)#033[00m
Oct 10 23:48:52 np0005480824 nova_compute[260089]: 2025-10-11 03:48:52.303 2 DEBUG nova.storage.rbd_utils [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] rbd image c44627c6-7bd8-4e1a-b32f-a79f70a179c7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:48:52 np0005480824 nova_compute[260089]: 2025-10-11 03:48:52.348 2 DEBUG nova.storage.rbd_utils [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] rbd image c44627c6-7bd8-4e1a-b32f-a79f70a179c7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:48:52 np0005480824 nova_compute[260089]: 2025-10-11 03:48:52.382 2 DEBUG nova.storage.rbd_utils [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] rbd image c44627c6-7bd8-4e1a-b32f-a79f70a179c7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:48:52 np0005480824 nova_compute[260089]: 2025-10-11 03:48:52.388 2 DEBUG oslo_concurrency.processutils [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cfffd1283a157d100c77a9cb8e3d536b83503a4e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:48:52 np0005480824 nova_compute[260089]: 2025-10-11 03:48:52.463 2 DEBUG oslo_concurrency.processutils [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cfffd1283a157d100c77a9cb8e3d536b83503a4e --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:48:52 np0005480824 nova_compute[260089]: 2025-10-11 03:48:52.465 2 DEBUG oslo_concurrency.lockutils [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Acquiring lock "cfffd1283a157d100c77a9cb8e3d536b83503a4e" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:48:52 np0005480824 nova_compute[260089]: 2025-10-11 03:48:52.465 2 DEBUG oslo_concurrency.lockutils [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Lock "cfffd1283a157d100c77a9cb8e3d536b83503a4e" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:48:52 np0005480824 nova_compute[260089]: 2025-10-11 03:48:52.466 2 DEBUG oslo_concurrency.lockutils [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Lock "cfffd1283a157d100c77a9cb8e3d536b83503a4e" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:48:52 np0005480824 nova_compute[260089]: 2025-10-11 03:48:52.497 2 DEBUG nova.storage.rbd_utils [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] rbd image c44627c6-7bd8-4e1a-b32f-a79f70a179c7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:48:52 np0005480824 nova_compute[260089]: 2025-10-11 03:48:52.503 2 DEBUG oslo_concurrency.processutils [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/cfffd1283a157d100c77a9cb8e3d536b83503a4e c44627c6-7bd8-4e1a-b32f-a79f70a179c7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:48:52 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1044: 321 pgs: 321 active+clean; 504 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 44 MiB/s wr, 207 op/s
Oct 10 23:48:52 np0005480824 nova_compute[260089]: 2025-10-11 03:48:52.615 2 DEBUG nova.policy [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd2bb1c00b7ba4686bb710314548ea5af', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '633027d5948949cdb842dbb20e321e57', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 10 23:48:52 np0005480824 nova_compute[260089]: 2025-10-11 03:48:52.818 2 DEBUG oslo_concurrency.processutils [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/cfffd1283a157d100c77a9cb8e3d536b83503a4e c44627c6-7bd8-4e1a-b32f-a79f70a179c7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.315s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:48:52 np0005480824 nova_compute[260089]: 2025-10-11 03:48:52.902 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760154517.890264, 1c478ad7-214b-4e9c-be93-5b836a13b5f3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:48:52 np0005480824 nova_compute[260089]: 2025-10-11 03:48:52.903 2 INFO nova.compute.manager [-] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] VM Stopped (Lifecycle Event)#033[00m
Oct 10 23:48:52 np0005480824 nova_compute[260089]: 2025-10-11 03:48:52.913 2 DEBUG nova.storage.rbd_utils [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] resizing rbd image c44627c6-7bd8-4e1a-b32f-a79f70a179c7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 10 23:48:52 np0005480824 nova_compute[260089]: 2025-10-11 03:48:52.956 2 DEBUG nova.compute.manager [None req-20b1b7dc-d3db-46a8-bc85-e44106201071 - - - - - -] [instance: 1c478ad7-214b-4e9c-be93-5b836a13b5f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:48:53 np0005480824 nova_compute[260089]: 2025-10-11 03:48:53.028 2 DEBUG nova.objects.instance [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Lazy-loading 'migration_context' on Instance uuid c44627c6-7bd8-4e1a-b32f-a79f70a179c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:48:53 np0005480824 nova_compute[260089]: 2025-10-11 03:48:53.043 2 DEBUG nova.virt.libvirt.driver [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 10 23:48:53 np0005480824 nova_compute[260089]: 2025-10-11 03:48:53.044 2 DEBUG nova.virt.libvirt.driver [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Ensure instance console log exists: /var/lib/nova/instances/c44627c6-7bd8-4e1a-b32f-a79f70a179c7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 10 23:48:53 np0005480824 nova_compute[260089]: 2025-10-11 03:48:53.045 2 DEBUG oslo_concurrency.lockutils [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:48:53 np0005480824 nova_compute[260089]: 2025-10-11 03:48:53.046 2 DEBUG oslo_concurrency.lockutils [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:48:53 np0005480824 nova_compute[260089]: 2025-10-11 03:48:53.047 2 DEBUG oslo_concurrency.lockutils [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:48:53 np0005480824 nova_compute[260089]: 2025-10-11 03:48:53.224 2 DEBUG nova.network.neutron [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Successfully created port: 55e4f905-eb1b-4b14-ab56-c88a38fe3b3d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 10 23:48:53 np0005480824 nova_compute[260089]: 2025-10-11 03:48:53.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:54 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:48:54 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1045: 321 pgs: 321 active+clean; 608 MiB data, 765 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 54 MiB/s wr, 184 op/s
Oct 10 23:48:54 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:48:54 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3139784716' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:48:54 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:48:54 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3139784716' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:48:54 np0005480824 nova_compute[260089]: 2025-10-11 03:48:54.819 2 DEBUG nova.network.neutron [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Successfully updated port: 55e4f905-eb1b-4b14-ab56-c88a38fe3b3d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 10 23:48:54 np0005480824 nova_compute[260089]: 2025-10-11 03:48:54.833 2 DEBUG oslo_concurrency.lockutils [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Acquiring lock "refresh_cache-c44627c6-7bd8-4e1a-b32f-a79f70a179c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:48:54 np0005480824 nova_compute[260089]: 2025-10-11 03:48:54.834 2 DEBUG oslo_concurrency.lockutils [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Acquired lock "refresh_cache-c44627c6-7bd8-4e1a-b32f-a79f70a179c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:48:54 np0005480824 nova_compute[260089]: 2025-10-11 03:48:54.834 2 DEBUG nova.network.neutron [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 10 23:48:54 np0005480824 nova_compute[260089]: 2025-10-11 03:48:54.933 2 DEBUG nova.compute.manager [req-71eb2c55-8174-4b66-aed2-a32d9dd3430d req-3cf5ada6-16c4-44a6-ae28-1b0cd710905a 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Received event network-changed-55e4f905-eb1b-4b14-ab56-c88a38fe3b3d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:48:54 np0005480824 nova_compute[260089]: 2025-10-11 03:48:54.933 2 DEBUG nova.compute.manager [req-71eb2c55-8174-4b66-aed2-a32d9dd3430d req-3cf5ada6-16c4-44a6-ae28-1b0cd710905a 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Refreshing instance network info cache due to event network-changed-55e4f905-eb1b-4b14-ab56-c88a38fe3b3d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 10 23:48:54 np0005480824 nova_compute[260089]: 2025-10-11 03:48:54.934 2 DEBUG oslo_concurrency.lockutils [req-71eb2c55-8174-4b66-aed2-a32d9dd3430d req-3cf5ada6-16c4-44a6-ae28-1b0cd710905a 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "refresh_cache-c44627c6-7bd8-4e1a-b32f-a79f70a179c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:48:54 np0005480824 nova_compute[260089]: 2025-10-11 03:48:54.977 2 DEBUG nova.network.neutron [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 10 23:48:55 np0005480824 nova_compute[260089]: 2025-10-11 03:48:55.862 2 DEBUG nova.network.neutron [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Updating instance_info_cache with network_info: [{"id": "55e4f905-eb1b-4b14-ab56-c88a38fe3b3d", "address": "fa:16:3e:ba:24:f3", "network": {"id": "ea784d9f-5fea-4b2f-8a0a-4232f32d0fff", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1590358830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "633027d5948949cdb842dbb20e321e57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55e4f905-eb", "ovs_interfaceid": "55e4f905-eb1b-4b14-ab56-c88a38fe3b3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:48:55 np0005480824 nova_compute[260089]: 2025-10-11 03:48:55.890 2 DEBUG oslo_concurrency.lockutils [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Releasing lock "refresh_cache-c44627c6-7bd8-4e1a-b32f-a79f70a179c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:48:55 np0005480824 nova_compute[260089]: 2025-10-11 03:48:55.890 2 DEBUG nova.compute.manager [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Instance network_info: |[{"id": "55e4f905-eb1b-4b14-ab56-c88a38fe3b3d", "address": "fa:16:3e:ba:24:f3", "network": {"id": "ea784d9f-5fea-4b2f-8a0a-4232f32d0fff", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1590358830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "633027d5948949cdb842dbb20e321e57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55e4f905-eb", "ovs_interfaceid": "55e4f905-eb1b-4b14-ab56-c88a38fe3b3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 10 23:48:55 np0005480824 nova_compute[260089]: 2025-10-11 03:48:55.891 2 DEBUG oslo_concurrency.lockutils [req-71eb2c55-8174-4b66-aed2-a32d9dd3430d req-3cf5ada6-16c4-44a6-ae28-1b0cd710905a 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquired lock "refresh_cache-c44627c6-7bd8-4e1a-b32f-a79f70a179c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:48:55 np0005480824 nova_compute[260089]: 2025-10-11 03:48:55.891 2 DEBUG nova.network.neutron [req-71eb2c55-8174-4b66-aed2-a32d9dd3430d req-3cf5ada6-16c4-44a6-ae28-1b0cd710905a 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Refreshing network info cache for port 55e4f905-eb1b-4b14-ab56-c88a38fe3b3d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 10 23:48:55 np0005480824 nova_compute[260089]: 2025-10-11 03:48:55.895 2 DEBUG nova.virt.libvirt.driver [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Start _get_guest_xml network_info=[{"id": "55e4f905-eb1b-4b14-ab56-c88a38fe3b3d", "address": "fa:16:3e:ba:24:f3", "network": {"id": "ea784d9f-5fea-4b2f-8a0a-4232f32d0fff", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1590358830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "633027d5948949cdb842dbb20e321e57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55e4f905-eb", "ovs_interfaceid": "55e4f905-eb1b-4b14-ab56-c88a38fe3b3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T03:44:59Z,direct_url=<?>,disk_format='qcow2',id=7caca022-7dcc-40a9-8bd8-eb7d91b29390,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a9b71164a3274fcfb966194e51cb4849',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T03:45:02Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'device_type': 'disk', 'image_id': '7caca022-7dcc-40a9-8bd8-eb7d91b29390'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 10 23:48:55 np0005480824 nova_compute[260089]: 2025-10-11 03:48:55.901 2 WARNING nova.virt.libvirt.driver [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 10 23:48:55 np0005480824 nova_compute[260089]: 2025-10-11 03:48:55.906 2 DEBUG nova.virt.libvirt.host [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 10 23:48:55 np0005480824 nova_compute[260089]: 2025-10-11 03:48:55.906 2 DEBUG nova.virt.libvirt.host [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 10 23:48:55 np0005480824 nova_compute[260089]: 2025-10-11 03:48:55.910 2 DEBUG nova.virt.libvirt.host [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 10 23:48:55 np0005480824 nova_compute[260089]: 2025-10-11 03:48:55.910 2 DEBUG nova.virt.libvirt.host [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 10 23:48:55 np0005480824 nova_compute[260089]: 2025-10-11 03:48:55.911 2 DEBUG nova.virt.libvirt.driver [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 10 23:48:55 np0005480824 nova_compute[260089]: 2025-10-11 03:48:55.911 2 DEBUG nova.virt.hardware [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T03:44:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6707ecae-2ae2-4c2d-86dc-409bac38f6a5',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T03:44:59Z,direct_url=<?>,disk_format='qcow2',id=7caca022-7dcc-40a9-8bd8-eb7d91b29390,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a9b71164a3274fcfb966194e51cb4849',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T03:45:02Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 10 23:48:55 np0005480824 nova_compute[260089]: 2025-10-11 03:48:55.912 2 DEBUG nova.virt.hardware [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 10 23:48:55 np0005480824 nova_compute[260089]: 2025-10-11 03:48:55.912 2 DEBUG nova.virt.hardware [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 10 23:48:55 np0005480824 nova_compute[260089]: 2025-10-11 03:48:55.912 2 DEBUG nova.virt.hardware [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 10 23:48:55 np0005480824 nova_compute[260089]: 2025-10-11 03:48:55.912 2 DEBUG nova.virt.hardware [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 10 23:48:55 np0005480824 nova_compute[260089]: 2025-10-11 03:48:55.913 2 DEBUG nova.virt.hardware [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 10 23:48:55 np0005480824 nova_compute[260089]: 2025-10-11 03:48:55.913 2 DEBUG nova.virt.hardware [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 10 23:48:55 np0005480824 nova_compute[260089]: 2025-10-11 03:48:55.914 2 DEBUG nova.virt.hardware [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 10 23:48:55 np0005480824 nova_compute[260089]: 2025-10-11 03:48:55.914 2 DEBUG nova.virt.hardware [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 10 23:48:55 np0005480824 nova_compute[260089]: 2025-10-11 03:48:55.915 2 DEBUG nova.virt.hardware [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 10 23:48:55 np0005480824 nova_compute[260089]: 2025-10-11 03:48:55.915 2 DEBUG nova.virt.hardware [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 10 23:48:55 np0005480824 nova_compute[260089]: 2025-10-11 03:48:55.920 2 DEBUG oslo_concurrency.processutils [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:48:56 np0005480824 nova_compute[260089]: 2025-10-11 03:48:56.248 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760154521.2460647, aefb31cf-337d-446e-a617-c82e2e9b4809 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:48:56 np0005480824 nova_compute[260089]: 2025-10-11 03:48:56.248 2 INFO nova.compute.manager [-] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] VM Stopped (Lifecycle Event)#033[00m
Oct 10 23:48:56 np0005480824 nova_compute[260089]: 2025-10-11 03:48:56.285 2 DEBUG nova.compute.manager [None req-15387c4e-9c1d-412b-8b23-cd143901f7e4 - - - - - -] [instance: aefb31cf-337d-446e-a617-c82e2e9b4809] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:48:56 np0005480824 nova_compute[260089]: 2025-10-11 03:48:56.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:56 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:48:56 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/809668750' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:48:56 np0005480824 nova_compute[260089]: 2025-10-11 03:48:56.499 2 DEBUG oslo_concurrency.processutils [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:48:56 np0005480824 nova_compute[260089]: 2025-10-11 03:48:56.534 2 DEBUG nova.storage.rbd_utils [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] rbd image c44627c6-7bd8-4e1a-b32f-a79f70a179c7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:48:56 np0005480824 nova_compute[260089]: 2025-10-11 03:48:56.539 2 DEBUG oslo_concurrency.processutils [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:48:56 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1046: 321 pgs: 321 active+clean; 608 MiB data, 765 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 54 MiB/s wr, 184 op/s
Oct 10 23:48:57 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:48:57 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3030675878' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:48:57 np0005480824 nova_compute[260089]: 2025-10-11 03:48:57.082 2 DEBUG oslo_concurrency.processutils [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:48:57 np0005480824 nova_compute[260089]: 2025-10-11 03:48:57.085 2 DEBUG nova.virt.libvirt.vif [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T03:48:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-1672646305',display_name='tempest-VolumesSnapshotTestJSON-instance-1672646305',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-1672646305',id=5,image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGjSEbonl6tZjtw1C/AABmPqvkeq5PpV9hTO7gHpXSefMJGvfTYI4QKUmM4JngFk81DlCC3Tw4aEvHRSSap1ox2HtHhGxo+WU8LAjNe6fep/hQC/OtxUQtC8mrtkIIwvag==',key_name='tempest-keypair-2023164038',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='633027d5948949cdb842dbb20e321e57',ramdisk_id='',reservation_id='r-gfwbfpiz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesSnapshotTestJSON-62208921',owner_user_name='tempest-VolumesSnapshotTestJSON-62208921-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T03:48:52Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d2bb1c00b7ba4686bb710314548ea5af',uuid=c44627c6-7bd8-4e1a-b32f-a79f70a179c7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "55e4f905-eb1b-4b14-ab56-c88a38fe3b3d", "address": "fa:16:3e:ba:24:f3", "network": {"id": "ea784d9f-5fea-4b2f-8a0a-4232f32d0fff", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1590358830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "633027d5948949cdb842dbb20e321e57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55e4f905-eb", "ovs_interfaceid": "55e4f905-eb1b-4b14-ab56-c88a38fe3b3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 10 23:48:57 np0005480824 nova_compute[260089]: 2025-10-11 03:48:57.086 2 DEBUG nova.network.os_vif_util [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Converting VIF {"id": "55e4f905-eb1b-4b14-ab56-c88a38fe3b3d", "address": "fa:16:3e:ba:24:f3", "network": {"id": "ea784d9f-5fea-4b2f-8a0a-4232f32d0fff", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1590358830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "633027d5948949cdb842dbb20e321e57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55e4f905-eb", "ovs_interfaceid": "55e4f905-eb1b-4b14-ab56-c88a38fe3b3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 23:48:57 np0005480824 nova_compute[260089]: 2025-10-11 03:48:57.088 2 DEBUG nova.network.os_vif_util [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:24:f3,bridge_name='br-int',has_traffic_filtering=True,id=55e4f905-eb1b-4b14-ab56-c88a38fe3b3d,network=Network(ea784d9f-5fea-4b2f-8a0a-4232f32d0fff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55e4f905-eb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 23:48:57 np0005480824 nova_compute[260089]: 2025-10-11 03:48:57.090 2 DEBUG nova.objects.instance [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Lazy-loading 'pci_devices' on Instance uuid c44627c6-7bd8-4e1a-b32f-a79f70a179c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:48:57 np0005480824 nova_compute[260089]: 2025-10-11 03:48:57.116 2 DEBUG nova.virt.libvirt.driver [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] End _get_guest_xml xml=<domain type="kvm">
Oct 10 23:48:57 np0005480824 nova_compute[260089]:  <uuid>c44627c6-7bd8-4e1a-b32f-a79f70a179c7</uuid>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:  <name>instance-00000005</name>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:  <memory>131072</memory>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:  <vcpu>1</vcpu>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:  <metadata>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 10 23:48:57 np0005480824 nova_compute[260089]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:      <nova:name>tempest-VolumesSnapshotTestJSON-instance-1672646305</nova:name>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:      <nova:creationTime>2025-10-11 03:48:55</nova:creationTime>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:      <nova:flavor name="m1.nano">
Oct 10 23:48:57 np0005480824 nova_compute[260089]:        <nova:memory>128</nova:memory>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:        <nova:disk>1</nova:disk>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:        <nova:swap>0</nova:swap>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:        <nova:ephemeral>0</nova:ephemeral>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:        <nova:vcpus>1</nova:vcpus>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:      </nova:flavor>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:      <nova:owner>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:        <nova:user uuid="d2bb1c00b7ba4686bb710314548ea5af">tempest-VolumesSnapshotTestJSON-62208921-project-member</nova:user>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:        <nova:project uuid="633027d5948949cdb842dbb20e321e57">tempest-VolumesSnapshotTestJSON-62208921</nova:project>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:      </nova:owner>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:      <nova:root type="image" uuid="7caca022-7dcc-40a9-8bd8-eb7d91b29390"/>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:      <nova:ports>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:        <nova:port uuid="55e4f905-eb1b-4b14-ab56-c88a38fe3b3d">
Oct 10 23:48:57 np0005480824 nova_compute[260089]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:        </nova:port>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:      </nova:ports>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:    </nova:instance>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:  </metadata>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:  <sysinfo type="smbios">
Oct 10 23:48:57 np0005480824 nova_compute[260089]:    <system>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:      <entry name="manufacturer">RDO</entry>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:      <entry name="product">OpenStack Compute</entry>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:      <entry name="serial">c44627c6-7bd8-4e1a-b32f-a79f70a179c7</entry>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:      <entry name="uuid">c44627c6-7bd8-4e1a-b32f-a79f70a179c7</entry>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:      <entry name="family">Virtual Machine</entry>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:    </system>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:  </sysinfo>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:  <os>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:    <boot dev="hd"/>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:    <smbios mode="sysinfo"/>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:  </os>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:  <features>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:    <acpi/>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:    <apic/>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:    <vmcoreinfo/>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:  </features>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:  <clock offset="utc">
Oct 10 23:48:57 np0005480824 nova_compute[260089]:    <timer name="pit" tickpolicy="delay"/>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:    <timer name="hpet" present="no"/>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:  </clock>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:  <cpu mode="host-model" match="exact">
Oct 10 23:48:57 np0005480824 nova_compute[260089]:    <topology sockets="1" cores="1" threads="1"/>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:  </cpu>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:  <devices>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:    <disk type="network" device="disk">
Oct 10 23:48:57 np0005480824 nova_compute[260089]:      <driver type="raw" cache="none"/>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:      <source protocol="rbd" name="vms/c44627c6-7bd8-4e1a-b32f-a79f70a179c7_disk">
Oct 10 23:48:57 np0005480824 nova_compute[260089]:        <host name="192.168.122.100" port="6789"/>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:      </source>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:      <auth username="openstack">
Oct 10 23:48:57 np0005480824 nova_compute[260089]:        <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:      </auth>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:      <target dev="vda" bus="virtio"/>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:    </disk>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:    <disk type="network" device="cdrom">
Oct 10 23:48:57 np0005480824 nova_compute[260089]:      <driver type="raw" cache="none"/>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:      <source protocol="rbd" name="vms/c44627c6-7bd8-4e1a-b32f-a79f70a179c7_disk.config">
Oct 10 23:48:57 np0005480824 nova_compute[260089]:        <host name="192.168.122.100" port="6789"/>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:      </source>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:      <auth username="openstack">
Oct 10 23:48:57 np0005480824 nova_compute[260089]:        <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:      </auth>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:      <target dev="sda" bus="sata"/>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:    </disk>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:    <interface type="ethernet">
Oct 10 23:48:57 np0005480824 nova_compute[260089]:      <mac address="fa:16:3e:ba:24:f3"/>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:      <model type="virtio"/>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:      <driver name="vhost" rx_queue_size="512"/>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:      <mtu size="1442"/>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:      <target dev="tap55e4f905-eb"/>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:    </interface>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:    <serial type="pty">
Oct 10 23:48:57 np0005480824 nova_compute[260089]:      <log file="/var/lib/nova/instances/c44627c6-7bd8-4e1a-b32f-a79f70a179c7/console.log" append="off"/>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:    </serial>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:    <video>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:      <model type="virtio"/>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:    </video>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:    <input type="tablet" bus="usb"/>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:    <rng model="virtio">
Oct 10 23:48:57 np0005480824 nova_compute[260089]:      <backend model="random">/dev/urandom</backend>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:    </rng>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root"/>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:    <controller type="usb" index="0"/>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:    <memballoon model="virtio">
Oct 10 23:48:57 np0005480824 nova_compute[260089]:      <stats period="10"/>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:    </memballoon>
Oct 10 23:48:57 np0005480824 nova_compute[260089]:  </devices>
Oct 10 23:48:57 np0005480824 nova_compute[260089]: </domain>
Oct 10 23:48:57 np0005480824 nova_compute[260089]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 10 23:48:57 np0005480824 nova_compute[260089]: 2025-10-11 03:48:57.117 2 DEBUG nova.compute.manager [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Preparing to wait for external event network-vif-plugged-55e4f905-eb1b-4b14-ab56-c88a38fe3b3d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 10 23:48:57 np0005480824 nova_compute[260089]: 2025-10-11 03:48:57.118 2 DEBUG oslo_concurrency.lockutils [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Acquiring lock "c44627c6-7bd8-4e1a-b32f-a79f70a179c7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:48:57 np0005480824 nova_compute[260089]: 2025-10-11 03:48:57.118 2 DEBUG oslo_concurrency.lockutils [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Lock "c44627c6-7bd8-4e1a-b32f-a79f70a179c7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:48:57 np0005480824 nova_compute[260089]: 2025-10-11 03:48:57.118 2 DEBUG oslo_concurrency.lockutils [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Lock "c44627c6-7bd8-4e1a-b32f-a79f70a179c7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:48:57 np0005480824 nova_compute[260089]: 2025-10-11 03:48:57.119 2 DEBUG nova.virt.libvirt.vif [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T03:48:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-1672646305',display_name='tempest-VolumesSnapshotTestJSON-instance-1672646305',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-1672646305',id=5,image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGjSEbonl6tZjtw1C/AABmPqvkeq5PpV9hTO7gHpXSefMJGvfTYI4QKUmM4JngFk81DlCC3Tw4aEvHRSSap1ox2HtHhGxo+WU8LAjNe6fep/hQC/OtxUQtC8mrtkIIwvag==',key_name='tempest-keypair-2023164038',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='633027d5948949cdb842dbb20e321e57',ramdisk_id='',reservation_id='r-gfwbfpiz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesSnapshotTestJSON-62208921',owner_user_name='tempest-VolumesSnapshotTestJSON-62208921-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T03:48:52Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d2bb1c00b7ba4686bb710314548ea5af',uuid=c44627c6-7bd8-4e1a-b32f-a79f70a179c7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "55e4f905-eb1b-4b14-ab56-c88a38fe3b3d", "address": "fa:16:3e:ba:24:f3", "network": {"id": "ea784d9f-5fea-4b2f-8a0a-4232f32d0fff", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1590358830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "633027d5948949cdb842dbb20e321e57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55e4f905-eb", "ovs_interfaceid": "55e4f905-eb1b-4b14-ab56-c88a38fe3b3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 10 23:48:57 np0005480824 nova_compute[260089]: 2025-10-11 03:48:57.120 2 DEBUG nova.network.os_vif_util [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Converting VIF {"id": "55e4f905-eb1b-4b14-ab56-c88a38fe3b3d", "address": "fa:16:3e:ba:24:f3", "network": {"id": "ea784d9f-5fea-4b2f-8a0a-4232f32d0fff", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1590358830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "633027d5948949cdb842dbb20e321e57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55e4f905-eb", "ovs_interfaceid": "55e4f905-eb1b-4b14-ab56-c88a38fe3b3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 23:48:57 np0005480824 nova_compute[260089]: 2025-10-11 03:48:57.121 2 DEBUG nova.network.os_vif_util [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:24:f3,bridge_name='br-int',has_traffic_filtering=True,id=55e4f905-eb1b-4b14-ab56-c88a38fe3b3d,network=Network(ea784d9f-5fea-4b2f-8a0a-4232f32d0fff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55e4f905-eb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 23:48:57 np0005480824 nova_compute[260089]: 2025-10-11 03:48:57.121 2 DEBUG os_vif [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:24:f3,bridge_name='br-int',has_traffic_filtering=True,id=55e4f905-eb1b-4b14-ab56-c88a38fe3b3d,network=Network(ea784d9f-5fea-4b2f-8a0a-4232f32d0fff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55e4f905-eb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 10 23:48:57 np0005480824 nova_compute[260089]: 2025-10-11 03:48:57.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:57 np0005480824 nova_compute[260089]: 2025-10-11 03:48:57.123 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:48:57 np0005480824 nova_compute[260089]: 2025-10-11 03:48:57.124 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 10 23:48:57 np0005480824 nova_compute[260089]: 2025-10-11 03:48:57.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:57 np0005480824 nova_compute[260089]: 2025-10-11 03:48:57.129 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap55e4f905-eb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:48:57 np0005480824 nova_compute[260089]: 2025-10-11 03:48:57.130 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap55e4f905-eb, col_values=(('external_ids', {'iface-id': '55e4f905-eb1b-4b14-ab56-c88a38fe3b3d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ba:24:f3', 'vm-uuid': 'c44627c6-7bd8-4e1a-b32f-a79f70a179c7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:48:57 np0005480824 NetworkManager[44969]: <info>  [1760154537.1329] manager: (tap55e4f905-eb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/43)
Oct 10 23:48:57 np0005480824 nova_compute[260089]: 2025-10-11 03:48:57.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 10 23:48:57 np0005480824 nova_compute[260089]: 2025-10-11 03:48:57.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:57 np0005480824 nova_compute[260089]: 2025-10-11 03:48:57.143 2 INFO os_vif [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:24:f3,bridge_name='br-int',has_traffic_filtering=True,id=55e4f905-eb1b-4b14-ab56-c88a38fe3b3d,network=Network(ea784d9f-5fea-4b2f-8a0a-4232f32d0fff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55e4f905-eb')#033[00m
Oct 10 23:48:57 np0005480824 nova_compute[260089]: 2025-10-11 03:48:57.242 2 DEBUG nova.virt.libvirt.driver [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 23:48:57 np0005480824 nova_compute[260089]: 2025-10-11 03:48:57.243 2 DEBUG nova.virt.libvirt.driver [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 23:48:57 np0005480824 nova_compute[260089]: 2025-10-11 03:48:57.243 2 DEBUG nova.virt.libvirt.driver [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] No VIF found with MAC fa:16:3e:ba:24:f3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 10 23:48:57 np0005480824 nova_compute[260089]: 2025-10-11 03:48:57.244 2 INFO nova.virt.libvirt.driver [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Using config drive#033[00m
Oct 10 23:48:57 np0005480824 nova_compute[260089]: 2025-10-11 03:48:57.294 2 DEBUG nova.storage.rbd_utils [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] rbd image c44627c6-7bd8-4e1a-b32f-a79f70a179c7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:48:57 np0005480824 podman[272690]: 2025-10-11 03:48:57.338226679 +0000 UTC m=+0.145159494 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 10 23:48:57 np0005480824 nova_compute[260089]: 2025-10-11 03:48:57.451 2 DEBUG nova.network.neutron [req-71eb2c55-8174-4b66-aed2-a32d9dd3430d req-3cf5ada6-16c4-44a6-ae28-1b0cd710905a 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Updated VIF entry in instance network info cache for port 55e4f905-eb1b-4b14-ab56-c88a38fe3b3d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 10 23:48:57 np0005480824 nova_compute[260089]: 2025-10-11 03:48:57.452 2 DEBUG nova.network.neutron [req-71eb2c55-8174-4b66-aed2-a32d9dd3430d req-3cf5ada6-16c4-44a6-ae28-1b0cd710905a 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Updating instance_info_cache with network_info: [{"id": "55e4f905-eb1b-4b14-ab56-c88a38fe3b3d", "address": "fa:16:3e:ba:24:f3", "network": {"id": "ea784d9f-5fea-4b2f-8a0a-4232f32d0fff", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1590358830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "633027d5948949cdb842dbb20e321e57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55e4f905-eb", "ovs_interfaceid": "55e4f905-eb1b-4b14-ab56-c88a38fe3b3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:48:57 np0005480824 nova_compute[260089]: 2025-10-11 03:48:57.469 2 DEBUG oslo_concurrency.lockutils [req-71eb2c55-8174-4b66-aed2-a32d9dd3430d req-3cf5ada6-16c4-44a6-ae28-1b0cd710905a 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Releasing lock "refresh_cache-c44627c6-7bd8-4e1a-b32f-a79f70a179c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:48:57 np0005480824 nova_compute[260089]: 2025-10-11 03:48:57.823 2 INFO nova.virt.libvirt.driver [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Creating config drive at /var/lib/nova/instances/c44627c6-7bd8-4e1a-b32f-a79f70a179c7/disk.config#033[00m
Oct 10 23:48:57 np0005480824 nova_compute[260089]: 2025-10-11 03:48:57.840 2 DEBUG oslo_concurrency.processutils [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c44627c6-7bd8-4e1a-b32f-a79f70a179c7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv52i9asx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:48:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:48:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:48:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:48:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:48:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:48:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:48:57 np0005480824 nova_compute[260089]: 2025-10-11 03:48:57.991 2 DEBUG oslo_concurrency.processutils [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c44627c6-7bd8-4e1a-b32f-a79f70a179c7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv52i9asx" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:48:58 np0005480824 nova_compute[260089]: 2025-10-11 03:48:58.035 2 DEBUG nova.storage.rbd_utils [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] rbd image c44627c6-7bd8-4e1a-b32f-a79f70a179c7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:48:58 np0005480824 nova_compute[260089]: 2025-10-11 03:48:58.041 2 DEBUG oslo_concurrency.processutils [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c44627c6-7bd8-4e1a-b32f-a79f70a179c7/disk.config c44627c6-7bd8-4e1a-b32f-a79f70a179c7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:48:58 np0005480824 nova_compute[260089]: 2025-10-11 03:48:58.256 2 DEBUG oslo_concurrency.processutils [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c44627c6-7bd8-4e1a-b32f-a79f70a179c7/disk.config c44627c6-7bd8-4e1a-b32f-a79f70a179c7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.215s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:48:58 np0005480824 nova_compute[260089]: 2025-10-11 03:48:58.257 2 INFO nova.virt.libvirt.driver [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Deleting local config drive /var/lib/nova/instances/c44627c6-7bd8-4e1a-b32f-a79f70a179c7/disk.config because it was imported into RBD.#033[00m
Oct 10 23:48:58 np0005480824 NetworkManager[44969]: <info>  [1760154538.3168] manager: (tap55e4f905-eb): new Tun device (/org/freedesktop/NetworkManager/Devices/44)
Oct 10 23:48:58 np0005480824 kernel: tap55e4f905-eb: entered promiscuous mode
Oct 10 23:48:58 np0005480824 nova_compute[260089]: 2025-10-11 03:48:58.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:58 np0005480824 ovn_controller[152667]: 2025-10-11T03:48:58Z|00060|binding|INFO|Claiming lport 55e4f905-eb1b-4b14-ab56-c88a38fe3b3d for this chassis.
Oct 10 23:48:58 np0005480824 ovn_controller[152667]: 2025-10-11T03:48:58Z|00061|binding|INFO|55e4f905-eb1b-4b14-ab56-c88a38fe3b3d: Claiming fa:16:3e:ba:24:f3 10.100.0.14
Oct 10 23:48:58 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:58.338 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:24:f3 10.100.0.14'], port_security=['fa:16:3e:ba:24:f3 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'c44627c6-7bd8-4e1a-b32f-a79f70a179c7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ea784d9f-5fea-4b2f-8a0a-4232f32d0fff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '633027d5948949cdb842dbb20e321e57', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5471dc17-cb49-4ef7-8622-745d4a93a7ff', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8373a8b6-48b7-4c53-8c59-c606fca3db1d, chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], logical_port=55e4f905-eb1b-4b14-ab56-c88a38fe3b3d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 23:48:58 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:58.339 162245 INFO neutron.agent.ovn.metadata.agent [-] Port 55e4f905-eb1b-4b14-ab56-c88a38fe3b3d in datapath ea784d9f-5fea-4b2f-8a0a-4232f32d0fff bound to our chassis#033[00m
Oct 10 23:48:58 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:58.341 162245 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ea784d9f-5fea-4b2f-8a0a-4232f32d0fff#033[00m
Oct 10 23:48:58 np0005480824 systemd-udevd[272788]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 23:48:58 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:58.361 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[85c60c32-62bb-4973-8d20-b7da3dd4176f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:58 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:58.363 162245 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapea784d9f-51 in ovnmeta-ea784d9f-5fea-4b2f-8a0a-4232f32d0fff namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 10 23:48:58 np0005480824 systemd-machined[215071]: New machine qemu-5-instance-00000005.
Oct 10 23:48:58 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:58.365 267859 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapea784d9f-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 10 23:48:58 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:58.365 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[b92b2d28-b452-44fe-905e-6c7abc1966b4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:58 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:58.366 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[c7a7d876-0286-469c-8771-7ac58a34a2dc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:58 np0005480824 NetworkManager[44969]: <info>  [1760154538.3844] device (tap55e4f905-eb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 10 23:48:58 np0005480824 NetworkManager[44969]: <info>  [1760154538.3862] device (tap55e4f905-eb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 10 23:48:58 np0005480824 systemd[1]: Started Virtual Machine qemu-5-instance-00000005.
Oct 10 23:48:58 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:48:58 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2103900778' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:48:58 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:58.393 162666 DEBUG oslo.privsep.daemon [-] privsep: reply[1f0560c8-7a84-494e-8a2e-ce1eae1c1223]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:58 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:48:58 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2103900778' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:48:58 np0005480824 nova_compute[260089]: 2025-10-11 03:48:58.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:58 np0005480824 ovn_controller[152667]: 2025-10-11T03:48:58Z|00062|binding|INFO|Setting lport 55e4f905-eb1b-4b14-ab56-c88a38fe3b3d ovn-installed in OVS
Oct 10 23:48:58 np0005480824 ovn_controller[152667]: 2025-10-11T03:48:58Z|00063|binding|INFO|Setting lport 55e4f905-eb1b-4b14-ab56-c88a38fe3b3d up in Southbound
Oct 10 23:48:58 np0005480824 nova_compute[260089]: 2025-10-11 03:48:58.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:58 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:58.424 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[738db814-bd70-4ac9-b55f-05cf0d16d227]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:58 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:58.457 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[4eff28d3-6dbf-42a0-979b-84bc910a62d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:58 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:58.462 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[9edc476c-098a-4ba5-916d-be988291b25c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:58 np0005480824 NetworkManager[44969]: <info>  [1760154538.4629] manager: (tapea784d9f-50): new Veth device (/org/freedesktop/NetworkManager/Devices/45)
Oct 10 23:48:58 np0005480824 systemd-udevd[272791]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 23:48:58 np0005480824 nova_compute[260089]: 2025-10-11 03:48:58.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:58 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:58.506 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[123c2aa5-19fc-4c00-a247-700cf4362778]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:58 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:58.509 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[b5a7f68b-d5fc-4a8d-b3cd-d6bb1ae5531a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:58 np0005480824 NetworkManager[44969]: <info>  [1760154538.5338] device (tapea784d9f-50): carrier: link connected
Oct 10 23:48:58 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:58.546 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[8c584eca-199a-4e79-bf61-faaf8bc7602e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:58 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:58.569 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[e1852ed1-bfef-438d-bc2d-091541a88976]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapea784d9f-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e3:b1:6f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 400619, 'reachable_time': 23565, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272820, 'error': None, 'target': 'ovnmeta-ea784d9f-5fea-4b2f-8a0a-4232f32d0fff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:58 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:58.583 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[9ab564ba-e0d5-4122-adb7-9f4cd515c8d2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee3:b16f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 400619, 'tstamp': 400619}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 272822, 'error': None, 'target': 'ovnmeta-ea784d9f-5fea-4b2f-8a0a-4232f32d0fff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:58 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1047: 321 pgs: 321 active+clean; 950 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 100 KiB/s rd, 63 MiB/s wr, 157 op/s
Oct 10 23:48:58 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:58.604 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[15bb6969-28c8-4225-9a1b-64668fa03edc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapea784d9f-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e3:b1:6f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 400619, 'reachable_time': 23565, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 272823, 'error': None, 'target': 'ovnmeta-ea784d9f-5fea-4b2f-8a0a-4232f32d0fff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:58 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:58.636 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[31f63db7-7c54-4de0-9045-e5d93392642b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:58 np0005480824 nova_compute[260089]: 2025-10-11 03:48:58.683 2 DEBUG nova.compute.manager [req-6a3b1792-1e70-4ccd-951e-544c6025d26f req-07c5b911-0e9f-42e5-ba2c-a1013156e437 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Received event network-vif-plugged-55e4f905-eb1b-4b14-ab56-c88a38fe3b3d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:48:58 np0005480824 nova_compute[260089]: 2025-10-11 03:48:58.684 2 DEBUG oslo_concurrency.lockutils [req-6a3b1792-1e70-4ccd-951e-544c6025d26f req-07c5b911-0e9f-42e5-ba2c-a1013156e437 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "c44627c6-7bd8-4e1a-b32f-a79f70a179c7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:48:58 np0005480824 nova_compute[260089]: 2025-10-11 03:48:58.684 2 DEBUG oslo_concurrency.lockutils [req-6a3b1792-1e70-4ccd-951e-544c6025d26f req-07c5b911-0e9f-42e5-ba2c-a1013156e437 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "c44627c6-7bd8-4e1a-b32f-a79f70a179c7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:48:58 np0005480824 nova_compute[260089]: 2025-10-11 03:48:58.685 2 DEBUG oslo_concurrency.lockutils [req-6a3b1792-1e70-4ccd-951e-544c6025d26f req-07c5b911-0e9f-42e5-ba2c-a1013156e437 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "c44627c6-7bd8-4e1a-b32f-a79f70a179c7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:48:58 np0005480824 nova_compute[260089]: 2025-10-11 03:48:58.685 2 DEBUG nova.compute.manager [req-6a3b1792-1e70-4ccd-951e-544c6025d26f req-07c5b911-0e9f-42e5-ba2c-a1013156e437 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Processing event network-vif-plugged-55e4f905-eb1b-4b14-ab56-c88a38fe3b3d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 10 23:48:58 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:58.696 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[1d29f3b5-9d0b-4a08-b80d-04e468f589be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:58 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:58.697 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapea784d9f-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:48:58 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:58.698 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 10 23:48:58 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:58.698 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapea784d9f-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:48:58 np0005480824 nova_compute[260089]: 2025-10-11 03:48:58.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:58 np0005480824 NetworkManager[44969]: <info>  [1760154538.7010] manager: (tapea784d9f-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/46)
Oct 10 23:48:58 np0005480824 kernel: tapea784d9f-50: entered promiscuous mode
Oct 10 23:48:58 np0005480824 nova_compute[260089]: 2025-10-11 03:48:58.704 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:58 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:58.705 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapea784d9f-50, col_values=(('external_ids', {'iface-id': 'd4ae273e-67ce-457d-b09c-bdc58cb85b9a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:48:58 np0005480824 nova_compute[260089]: 2025-10-11 03:48:58.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:58 np0005480824 ovn_controller[152667]: 2025-10-11T03:48:58Z|00064|binding|INFO|Releasing lport d4ae273e-67ce-457d-b09c-bdc58cb85b9a from this chassis (sb_readonly=0)
Oct 10 23:48:58 np0005480824 nova_compute[260089]: 2025-10-11 03:48:58.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:48:58 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:58.725 162245 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ea784d9f-5fea-4b2f-8a0a-4232f32d0fff.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ea784d9f-5fea-4b2f-8a0a-4232f32d0fff.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 10 23:48:58 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:58.726 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[73376ae7-7a18-4a8d-b0e0-cdb55673610b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:48:58 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:58.727 162245 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 10 23:48:58 np0005480824 ovn_metadata_agent[162240]: global
Oct 10 23:48:58 np0005480824 ovn_metadata_agent[162240]:    log         /dev/log local0 debug
Oct 10 23:48:58 np0005480824 ovn_metadata_agent[162240]:    log-tag     haproxy-metadata-proxy-ea784d9f-5fea-4b2f-8a0a-4232f32d0fff
Oct 10 23:48:58 np0005480824 ovn_metadata_agent[162240]:    user        root
Oct 10 23:48:58 np0005480824 ovn_metadata_agent[162240]:    group       root
Oct 10 23:48:58 np0005480824 ovn_metadata_agent[162240]:    maxconn     1024
Oct 10 23:48:58 np0005480824 ovn_metadata_agent[162240]:    pidfile     /var/lib/neutron/external/pids/ea784d9f-5fea-4b2f-8a0a-4232f32d0fff.pid.haproxy
Oct 10 23:48:58 np0005480824 ovn_metadata_agent[162240]:    daemon
Oct 10 23:48:58 np0005480824 ovn_metadata_agent[162240]: 
Oct 10 23:48:58 np0005480824 ovn_metadata_agent[162240]: defaults
Oct 10 23:48:58 np0005480824 ovn_metadata_agent[162240]:    log global
Oct 10 23:48:58 np0005480824 ovn_metadata_agent[162240]:    mode http
Oct 10 23:48:58 np0005480824 ovn_metadata_agent[162240]:    option httplog
Oct 10 23:48:58 np0005480824 ovn_metadata_agent[162240]:    option dontlognull
Oct 10 23:48:58 np0005480824 ovn_metadata_agent[162240]:    option http-server-close
Oct 10 23:48:58 np0005480824 ovn_metadata_agent[162240]:    option forwardfor
Oct 10 23:48:58 np0005480824 ovn_metadata_agent[162240]:    retries                 3
Oct 10 23:48:58 np0005480824 ovn_metadata_agent[162240]:    timeout http-request    30s
Oct 10 23:48:58 np0005480824 ovn_metadata_agent[162240]:    timeout connect         30s
Oct 10 23:48:58 np0005480824 ovn_metadata_agent[162240]:    timeout client          32s
Oct 10 23:48:58 np0005480824 ovn_metadata_agent[162240]:    timeout server          32s
Oct 10 23:48:58 np0005480824 ovn_metadata_agent[162240]:    timeout http-keep-alive 30s
Oct 10 23:48:58 np0005480824 ovn_metadata_agent[162240]: 
Oct 10 23:48:58 np0005480824 ovn_metadata_agent[162240]: 
Oct 10 23:48:58 np0005480824 ovn_metadata_agent[162240]: listen listener
Oct 10 23:48:58 np0005480824 ovn_metadata_agent[162240]:    bind 169.254.169.254:80
Oct 10 23:48:58 np0005480824 ovn_metadata_agent[162240]:    server metadata /var/lib/neutron/metadata_proxy
Oct 10 23:48:58 np0005480824 ovn_metadata_agent[162240]:    http-request add-header X-OVN-Network-ID ea784d9f-5fea-4b2f-8a0a-4232f32d0fff
Oct 10 23:48:58 np0005480824 ovn_metadata_agent[162240]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 10 23:48:58 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:48:58.727 162245 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ea784d9f-5fea-4b2f-8a0a-4232f32d0fff', 'env', 'PROCESS_TAG=haproxy-ea784d9f-5fea-4b2f-8a0a-4232f32d0fff', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ea784d9f-5fea-4b2f-8a0a-4232f32d0fff.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 10 23:48:59 np0005480824 podman[272855]: 2025-10-11 03:48:59.223512829 +0000 UTC m=+0.086497222 container create 1b52ff80222ba0400e3b1c1b6eed85363af90dd1dae8e9e1b97acce16fb2444d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ea784d9f-5fea-4b2f-8a0a-4232f32d0fff, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 10 23:48:59 np0005480824 podman[272855]: 2025-10-11 03:48:59.18287568 +0000 UTC m=+0.045860113 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 10 23:48:59 np0005480824 systemd[1]: Started libpod-conmon-1b52ff80222ba0400e3b1c1b6eed85363af90dd1dae8e9e1b97acce16fb2444d.scope.
Oct 10 23:48:59 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:48:59 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d9ebb4bb005ca34de47d84318748148eda7b0ad425ee23099cc8c280fb9c89b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 10 23:48:59 np0005480824 podman[272855]: 2025-10-11 03:48:59.37113223 +0000 UTC m=+0.234116623 container init 1b52ff80222ba0400e3b1c1b6eed85363af90dd1dae8e9e1b97acce16fb2444d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ea784d9f-5fea-4b2f-8a0a-4232f32d0fff, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 10 23:48:59 np0005480824 podman[272855]: 2025-10-11 03:48:59.381650579 +0000 UTC m=+0.244634912 container start 1b52ff80222ba0400e3b1c1b6eed85363af90dd1dae8e9e1b97acce16fb2444d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ea784d9f-5fea-4b2f-8a0a-4232f32d0fff, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 10 23:48:59 np0005480824 neutron-haproxy-ovnmeta-ea784d9f-5fea-4b2f-8a0a-4232f32d0fff[272870]: [NOTICE]   (272874) : New worker (272876) forked
Oct 10 23:48:59 np0005480824 neutron-haproxy-ovnmeta-ea784d9f-5fea-4b2f-8a0a-4232f32d0fff[272870]: [NOTICE]   (272874) : Loading success.
Oct 10 23:48:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:49:00 np0005480824 nova_compute[260089]: 2025-10-11 03:49:00.299 2 DEBUG nova.compute.manager [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 10 23:49:00 np0005480824 nova_compute[260089]: 2025-10-11 03:49:00.301 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760154540.2987874, c44627c6-7bd8-4e1a-b32f-a79f70a179c7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:49:00 np0005480824 nova_compute[260089]: 2025-10-11 03:49:00.301 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] VM Started (Lifecycle Event)#033[00m
Oct 10 23:49:00 np0005480824 nova_compute[260089]: 2025-10-11 03:49:00.311 2 DEBUG nova.virt.libvirt.driver [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 10 23:49:00 np0005480824 nova_compute[260089]: 2025-10-11 03:49:00.317 2 INFO nova.virt.libvirt.driver [-] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Instance spawned successfully.#033[00m
Oct 10 23:49:00 np0005480824 nova_compute[260089]: 2025-10-11 03:49:00.318 2 DEBUG nova.virt.libvirt.driver [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 10 23:49:00 np0005480824 nova_compute[260089]: 2025-10-11 03:49:00.321 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:49:00 np0005480824 nova_compute[260089]: 2025-10-11 03:49:00.326 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 10 23:49:00 np0005480824 nova_compute[260089]: 2025-10-11 03:49:00.346 2 DEBUG nova.virt.libvirt.driver [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:49:00 np0005480824 nova_compute[260089]: 2025-10-11 03:49:00.346 2 DEBUG nova.virt.libvirt.driver [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:49:00 np0005480824 nova_compute[260089]: 2025-10-11 03:49:00.347 2 DEBUG nova.virt.libvirt.driver [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:49:00 np0005480824 nova_compute[260089]: 2025-10-11 03:49:00.348 2 DEBUG nova.virt.libvirt.driver [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:49:00 np0005480824 nova_compute[260089]: 2025-10-11 03:49:00.349 2 DEBUG nova.virt.libvirt.driver [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:49:00 np0005480824 nova_compute[260089]: 2025-10-11 03:49:00.349 2 DEBUG nova.virt.libvirt.driver [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:49:00 np0005480824 nova_compute[260089]: 2025-10-11 03:49:00.355 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 10 23:49:00 np0005480824 nova_compute[260089]: 2025-10-11 03:49:00.356 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760154540.2992415, c44627c6-7bd8-4e1a-b32f-a79f70a179c7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:49:00 np0005480824 nova_compute[260089]: 2025-10-11 03:49:00.356 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] VM Paused (Lifecycle Event)#033[00m
Oct 10 23:49:00 np0005480824 nova_compute[260089]: 2025-10-11 03:49:00.387 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:49:00 np0005480824 nova_compute[260089]: 2025-10-11 03:49:00.391 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760154540.311128, c44627c6-7bd8-4e1a-b32f-a79f70a179c7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:49:00 np0005480824 nova_compute[260089]: 2025-10-11 03:49:00.392 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] VM Resumed (Lifecycle Event)#033[00m
Oct 10 23:49:00 np0005480824 nova_compute[260089]: 2025-10-11 03:49:00.413 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:49:00 np0005480824 nova_compute[260089]: 2025-10-11 03:49:00.417 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 10 23:49:00 np0005480824 nova_compute[260089]: 2025-10-11 03:49:00.420 2 INFO nova.compute.manager [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Took 8.16 seconds to spawn the instance on the hypervisor.#033[00m
Oct 10 23:49:00 np0005480824 nova_compute[260089]: 2025-10-11 03:49:00.420 2 DEBUG nova.compute.manager [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:49:00 np0005480824 nova_compute[260089]: 2025-10-11 03:49:00.445 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 10 23:49:00 np0005480824 nova_compute[260089]: 2025-10-11 03:49:00.545 2 INFO nova.compute.manager [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Took 9.16 seconds to build instance.#033[00m
Oct 10 23:49:00 np0005480824 nova_compute[260089]: 2025-10-11 03:49:00.572 2 DEBUG oslo_concurrency.lockutils [None req-7961b1ac-207d-4115-9eab-fd552827cf23 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Lock "c44627c6-7bd8-4e1a-b32f-a79f70a179c7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.245s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:49:00 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1048: 321 pgs: 321 active+clean; 950 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 90 KiB/s rd, 57 MiB/s wr, 142 op/s
Oct 10 23:49:00 np0005480824 nova_compute[260089]: 2025-10-11 03:49:00.771 2 DEBUG nova.compute.manager [req-ae4a50d1-31a8-4a50-a3aa-ff329aa38b87 req-0078c264-f227-4849-87ea-d04219ec2585 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Received event network-vif-plugged-55e4f905-eb1b-4b14-ab56-c88a38fe3b3d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:49:00 np0005480824 nova_compute[260089]: 2025-10-11 03:49:00.771 2 DEBUG oslo_concurrency.lockutils [req-ae4a50d1-31a8-4a50-a3aa-ff329aa38b87 req-0078c264-f227-4849-87ea-d04219ec2585 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "c44627c6-7bd8-4e1a-b32f-a79f70a179c7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:49:00 np0005480824 nova_compute[260089]: 2025-10-11 03:49:00.772 2 DEBUG oslo_concurrency.lockutils [req-ae4a50d1-31a8-4a50-a3aa-ff329aa38b87 req-0078c264-f227-4849-87ea-d04219ec2585 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "c44627c6-7bd8-4e1a-b32f-a79f70a179c7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:49:00 np0005480824 nova_compute[260089]: 2025-10-11 03:49:00.772 2 DEBUG oslo_concurrency.lockutils [req-ae4a50d1-31a8-4a50-a3aa-ff329aa38b87 req-0078c264-f227-4849-87ea-d04219ec2585 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "c44627c6-7bd8-4e1a-b32f-a79f70a179c7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:49:00 np0005480824 nova_compute[260089]: 2025-10-11 03:49:00.772 2 DEBUG nova.compute.manager [req-ae4a50d1-31a8-4a50-a3aa-ff329aa38b87 req-0078c264-f227-4849-87ea-d04219ec2585 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] No waiting events found dispatching network-vif-plugged-55e4f905-eb1b-4b14-ab56-c88a38fe3b3d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 10 23:49:00 np0005480824 nova_compute[260089]: 2025-10-11 03:49:00.773 2 WARNING nova.compute.manager [req-ae4a50d1-31a8-4a50-a3aa-ff329aa38b87 req-0078c264-f227-4849-87ea-d04219ec2585 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Received unexpected event network-vif-plugged-55e4f905-eb1b-4b14-ab56-c88a38fe3b3d for instance with vm_state active and task_state None.#033[00m
Oct 10 23:49:01 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e171 do_prune osdmap full prune enabled
Oct 10 23:49:01 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e172 e172: 3 total, 3 up, 3 in
Oct 10 23:49:01 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e172: 3 total, 3 up, 3 in
Oct 10 23:49:02 np0005480824 nova_compute[260089]: 2025-10-11 03:49:02.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:49:02 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1050: 321 pgs: 321 active+clean; 1.0 GiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 53 MiB/s wr, 180 op/s
Oct 10 23:49:03 np0005480824 nova_compute[260089]: 2025-10-11 03:49:03.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:49:03 np0005480824 ovn_controller[152667]: 2025-10-11T03:49:03Z|00065|binding|INFO|Releasing lport d4ae273e-67ce-457d-b09c-bdc58cb85b9a from this chassis (sb_readonly=0)
Oct 10 23:49:03 np0005480824 nova_compute[260089]: 2025-10-11 03:49:03.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:49:04 np0005480824 podman[272927]: 2025-10-11 03:49:04.054628582 +0000 UTC m=+0.083276234 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 10 23:49:04 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e172 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:49:04 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1051: 321 pgs: 321 active+clean; 1.1 GiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 53 MiB/s wr, 218 op/s
Oct 10 23:49:04 np0005480824 NetworkManager[44969]: <info>  [1760154544.9127] manager: (patch-provnet-e62e0ad0-b027-41f2-91f0-70373ec97251-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/47)
Oct 10 23:49:04 np0005480824 nova_compute[260089]: 2025-10-11 03:49:04.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:49:04 np0005480824 NetworkManager[44969]: <info>  [1760154544.9145] manager: (patch-br-int-to-provnet-e62e0ad0-b027-41f2-91f0-70373ec97251): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Oct 10 23:49:05 np0005480824 nova_compute[260089]: 2025-10-11 03:49:05.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:49:05 np0005480824 ovn_controller[152667]: 2025-10-11T03:49:05Z|00066|binding|INFO|Releasing lport d4ae273e-67ce-457d-b09c-bdc58cb85b9a from this chassis (sb_readonly=0)
Oct 10 23:49:05 np0005480824 nova_compute[260089]: 2025-10-11 03:49:05.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:49:05 np0005480824 nova_compute[260089]: 2025-10-11 03:49:05.248 2 DEBUG nova.compute.manager [req-b909f2f8-2c77-4eea-b047-84d7996a2485 req-b2c4b233-d8e2-4f74-b2e8-0cc02510aebf 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Received event network-changed-55e4f905-eb1b-4b14-ab56-c88a38fe3b3d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:49:05 np0005480824 nova_compute[260089]: 2025-10-11 03:49:05.248 2 DEBUG nova.compute.manager [req-b909f2f8-2c77-4eea-b047-84d7996a2485 req-b2c4b233-d8e2-4f74-b2e8-0cc02510aebf 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Refreshing instance network info cache due to event network-changed-55e4f905-eb1b-4b14-ab56-c88a38fe3b3d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 10 23:49:05 np0005480824 nova_compute[260089]: 2025-10-11 03:49:05.249 2 DEBUG oslo_concurrency.lockutils [req-b909f2f8-2c77-4eea-b047-84d7996a2485 req-b2c4b233-d8e2-4f74-b2e8-0cc02510aebf 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "refresh_cache-c44627c6-7bd8-4e1a-b32f-a79f70a179c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:49:05 np0005480824 nova_compute[260089]: 2025-10-11 03:49:05.249 2 DEBUG oslo_concurrency.lockutils [req-b909f2f8-2c77-4eea-b047-84d7996a2485 req-b2c4b233-d8e2-4f74-b2e8-0cc02510aebf 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquired lock "refresh_cache-c44627c6-7bd8-4e1a-b32f-a79f70a179c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:49:05 np0005480824 nova_compute[260089]: 2025-10-11 03:49:05.249 2 DEBUG nova.network.neutron [req-b909f2f8-2c77-4eea-b047-84d7996a2485 req-b2c4b233-d8e2-4f74-b2e8-0cc02510aebf 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Refreshing network info cache for port 55e4f905-eb1b-4b14-ab56-c88a38fe3b3d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 10 23:49:05 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e172 do_prune osdmap full prune enabled
Oct 10 23:49:06 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e173 e173: 3 total, 3 up, 3 in
Oct 10 23:49:06 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e173: 3 total, 3 up, 3 in
Oct 10 23:49:06 np0005480824 nova_compute[260089]: 2025-10-11 03:49:06.421 2 DEBUG nova.network.neutron [req-b909f2f8-2c77-4eea-b047-84d7996a2485 req-b2c4b233-d8e2-4f74-b2e8-0cc02510aebf 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Updated VIF entry in instance network info cache for port 55e4f905-eb1b-4b14-ab56-c88a38fe3b3d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 10 23:49:06 np0005480824 nova_compute[260089]: 2025-10-11 03:49:06.422 2 DEBUG nova.network.neutron [req-b909f2f8-2c77-4eea-b047-84d7996a2485 req-b2c4b233-d8e2-4f74-b2e8-0cc02510aebf 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Updating instance_info_cache with network_info: [{"id": "55e4f905-eb1b-4b14-ab56-c88a38fe3b3d", "address": "fa:16:3e:ba:24:f3", "network": {"id": "ea784d9f-5fea-4b2f-8a0a-4232f32d0fff", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1590358830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "633027d5948949cdb842dbb20e321e57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55e4f905-eb", "ovs_interfaceid": "55e4f905-eb1b-4b14-ab56-c88a38fe3b3d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:49:06 np0005480824 nova_compute[260089]: 2025-10-11 03:49:06.443 2 DEBUG oslo_concurrency.lockutils [req-b909f2f8-2c77-4eea-b047-84d7996a2485 req-b2c4b233-d8e2-4f74-b2e8-0cc02510aebf 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Releasing lock "refresh_cache-c44627c6-7bd8-4e1a-b32f-a79f70a179c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:49:06 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1053: 321 pgs: 321 active+clean; 1.1 GiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 26 MiB/s wr, 167 op/s
Oct 10 23:49:06 np0005480824 ovn_controller[152667]: 2025-10-11T03:49:06Z|00067|binding|INFO|Releasing lport d4ae273e-67ce-457d-b09c-bdc58cb85b9a from this chassis (sb_readonly=0)
Oct 10 23:49:06 np0005480824 nova_compute[260089]: 2025-10-11 03:49:06.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:49:07 np0005480824 nova_compute[260089]: 2025-10-11 03:49:07.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:49:07 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:49:07 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2623527418' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:49:07 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:49:07 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2623527418' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:49:08 np0005480824 nova_compute[260089]: 2025-10-11 03:49:08.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:49:08 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1054: 321 pgs: 321 active+clean; 134 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 26 MiB/s wr, 221 op/s
Oct 10 23:49:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:49:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e173 do_prune osdmap full prune enabled
Oct 10 23:49:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e174 e174: 3 total, 3 up, 3 in
Oct 10 23:49:09 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e174: 3 total, 3 up, 3 in
Oct 10 23:49:10 np0005480824 ovn_controller[152667]: 2025-10-11T03:49:10Z|00068|binding|INFO|Releasing lport d4ae273e-67ce-457d-b09c-bdc58cb85b9a from this chassis (sb_readonly=0)
Oct 10 23:49:10 np0005480824 nova_compute[260089]: 2025-10-11 03:49:10.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:49:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:10.489 162245 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:49:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:10.490 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:49:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:10.490 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:49:10 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1056: 321 pgs: 321 active+clean; 134 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 12 MiB/s wr, 106 op/s
Oct 10 23:49:12 np0005480824 nova_compute[260089]: 2025-10-11 03:49:12.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:49:12 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1057: 321 pgs: 321 active+clean; 149 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 133 KiB/s rd, 2.0 MiB/s wr, 103 op/s
Oct 10 23:49:12 np0005480824 ovn_controller[152667]: 2025-10-11T03:49:12Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ba:24:f3 10.100.0.14
Oct 10 23:49:12 np0005480824 ovn_controller[152667]: 2025-10-11T03:49:12Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ba:24:f3 10.100.0.14
Oct 10 23:49:13 np0005480824 nova_compute[260089]: 2025-10-11 03:49:13.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:49:14 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:49:14 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1694679829' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:49:14 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:49:14 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1058: 321 pgs: 321 active+clean; 159 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 210 KiB/s rd, 2.9 MiB/s wr, 117 op/s
Oct 10 23:49:15 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e174 do_prune osdmap full prune enabled
Oct 10 23:49:15 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e175 e175: 3 total, 3 up, 3 in
Oct 10 23:49:15 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e175: 3 total, 3 up, 3 in
Oct 10 23:49:16 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e175 do_prune osdmap full prune enabled
Oct 10 23:49:16 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e176 e176: 3 total, 3 up, 3 in
Oct 10 23:49:16 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e176: 3 total, 3 up, 3 in
Oct 10 23:49:16 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1061: 321 pgs: 321 active+clean; 159 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 223 KiB/s rd, 3.5 MiB/s wr, 81 op/s
Oct 10 23:49:16 np0005480824 nova_compute[260089]: 2025-10-11 03:49:16.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:49:17 np0005480824 nova_compute[260089]: 2025-10-11 03:49:17.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:49:18 np0005480824 nova_compute[260089]: 2025-10-11 03:49:18.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:49:18 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1062: 321 pgs: 321 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 342 KiB/s rd, 3.2 MiB/s wr, 127 op/s
Oct 10 23:49:19 np0005480824 nova_compute[260089]: 2025-10-11 03:49:19.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:49:19 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:49:19 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:49:19 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3297980807' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:49:20 np0005480824 podman[272947]: 2025-10-11 03:49:20.025860192 +0000 UTC m=+0.082448365 container health_status 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 10 23:49:20 np0005480824 podman[272948]: 2025-10-11 03:49:20.025832182 +0000 UTC m=+0.081273438 container health_status f2b19cad22d0fbdd185b264c3c8b6443d09a02319dc9fb0585dc69a0f24a758d (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 10 23:49:20 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e176 do_prune osdmap full prune enabled
Oct 10 23:49:20 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e177 e177: 3 total, 3 up, 3 in
Oct 10 23:49:20 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e177: 3 total, 3 up, 3 in
Oct 10 23:49:20 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1064: 321 pgs: 321 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 193 KiB/s rd, 191 KiB/s wr, 74 op/s
Oct 10 23:49:20 np0005480824 nova_compute[260089]: 2025-10-11 03:49:20.699 2 DEBUG oslo_concurrency.lockutils [None req-1e4ae729-819f-4e82-9e24-ffee6c280e6e d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Acquiring lock "c44627c6-7bd8-4e1a-b32f-a79f70a179c7" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:49:20 np0005480824 nova_compute[260089]: 2025-10-11 03:49:20.699 2 DEBUG oslo_concurrency.lockutils [None req-1e4ae729-819f-4e82-9e24-ffee6c280e6e d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Lock "c44627c6-7bd8-4e1a-b32f-a79f70a179c7" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:49:20 np0005480824 nova_compute[260089]: 2025-10-11 03:49:20.714 2 DEBUG nova.objects.instance [None req-1e4ae729-819f-4e82-9e24-ffee6c280e6e d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Lazy-loading 'flavor' on Instance uuid c44627c6-7bd8-4e1a-b32f-a79f70a179c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:49:20 np0005480824 nova_compute[260089]: 2025-10-11 03:49:20.735 2 INFO nova.virt.libvirt.driver [None req-1e4ae729-819f-4e82-9e24-ffee6c280e6e d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Ignoring supplied device name: /dev/vdb#033[00m
Oct 10 23:49:20 np0005480824 nova_compute[260089]: 2025-10-11 03:49:20.750 2 DEBUG oslo_concurrency.lockutils [None req-1e4ae729-819f-4e82-9e24-ffee6c280e6e d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Lock "c44627c6-7bd8-4e1a-b32f-a79f70a179c7" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.051s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:49:20 np0005480824 nova_compute[260089]: 2025-10-11 03:49:20.957 2 DEBUG oslo_concurrency.lockutils [None req-1e4ae729-819f-4e82-9e24-ffee6c280e6e d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Acquiring lock "c44627c6-7bd8-4e1a-b32f-a79f70a179c7" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:49:20 np0005480824 nova_compute[260089]: 2025-10-11 03:49:20.958 2 DEBUG oslo_concurrency.lockutils [None req-1e4ae729-819f-4e82-9e24-ffee6c280e6e d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Lock "c44627c6-7bd8-4e1a-b32f-a79f70a179c7" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:49:20 np0005480824 nova_compute[260089]: 2025-10-11 03:49:20.958 2 INFO nova.compute.manager [None req-1e4ae729-819f-4e82-9e24-ffee6c280e6e d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Attaching volume baf2a9e9-b0d1-4c7c-8981-330d1e617a3e to /dev/vdb#033[00m
Oct 10 23:49:21 np0005480824 nova_compute[260089]: 2025-10-11 03:49:21.087 2 DEBUG os_brick.utils [None req-1e4ae729-819f-4e82-9e24-ffee6c280e6e d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct 10 23:49:21 np0005480824 nova_compute[260089]: 2025-10-11 03:49:21.089 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:49:21 np0005480824 nova_compute[260089]: 2025-10-11 03:49:21.105 676 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:49:21 np0005480824 nova_compute[260089]: 2025-10-11 03:49:21.105 676 DEBUG oslo.privsep.daemon [-] privsep: reply[9744b67f-880f-47ff-acc8-09611784fc2f]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:49:21 np0005480824 nova_compute[260089]: 2025-10-11 03:49:21.107 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:49:21 np0005480824 nova_compute[260089]: 2025-10-11 03:49:21.116 676 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:49:21 np0005480824 nova_compute[260089]: 2025-10-11 03:49:21.116 676 DEBUG oslo.privsep.daemon [-] privsep: reply[34d0414f-89a5-4b80-94ac-5523d98f4b40]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d5d671ddab5a', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:49:21 np0005480824 nova_compute[260089]: 2025-10-11 03:49:21.118 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:49:21 np0005480824 nova_compute[260089]: 2025-10-11 03:49:21.128 676 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:49:21 np0005480824 nova_compute[260089]: 2025-10-11 03:49:21.129 676 DEBUG oslo.privsep.daemon [-] privsep: reply[5b6f9caf-d221-4298-bfe7-951a18eef9a4]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:49:21 np0005480824 nova_compute[260089]: 2025-10-11 03:49:21.130 676 DEBUG oslo.privsep.daemon [-] privsep: reply[c755e8ca-55f7-4285-9771-cdd402f25bf2]: (4, 'fb3a2fb1-9efa-43f0-a057-bf422ac6b8d7') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:49:21 np0005480824 nova_compute[260089]: 2025-10-11 03:49:21.132 2 DEBUG oslo_concurrency.processutils [None req-1e4ae729-819f-4e82-9e24-ffee6c280e6e d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:49:21 np0005480824 nova_compute[260089]: 2025-10-11 03:49:21.155 2 DEBUG oslo_concurrency.processutils [None req-1e4ae729-819f-4e82-9e24-ffee6c280e6e d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] CMD "nvme version" returned: 0 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:49:21 np0005480824 nova_compute[260089]: 2025-10-11 03:49:21.158 2 DEBUG os_brick.initiator.connectors.lightos [None req-1e4ae729-819f-4e82-9e24-ffee6c280e6e d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct 10 23:49:21 np0005480824 nova_compute[260089]: 2025-10-11 03:49:21.159 2 DEBUG os_brick.initiator.connectors.lightos [None req-1e4ae729-819f-4e82-9e24-ffee6c280e6e d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct 10 23:49:21 np0005480824 nova_compute[260089]: 2025-10-11 03:49:21.159 2 DEBUG os_brick.initiator.connectors.lightos [None req-1e4ae729-819f-4e82-9e24-ffee6c280e6e d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct 10 23:49:21 np0005480824 nova_compute[260089]: 2025-10-11 03:49:21.160 2 DEBUG os_brick.utils [None req-1e4ae729-819f-4e82-9e24-ffee6c280e6e d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] <== get_connector_properties: return (72ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d5d671ddab5a', 'do_local_attach': False, 'nvme_hostid': '83042a20-0f72-4c47-8453-e72ead378624', 'system uuid': 'fb3a2fb1-9efa-43f0-a057-bf422ac6b8d7', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct 10 23:49:21 np0005480824 nova_compute[260089]: 2025-10-11 03:49:21.160 2 DEBUG nova.virt.block_device [None req-1e4ae729-819f-4e82-9e24-ffee6c280e6e d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Updating existing volume attachment record: c9d88990-51fd-4de1-aca3-7a985c24aad7 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct 10 23:49:21 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:49:21 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2363356626' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:49:21 np0005480824 nova_compute[260089]: 2025-10-11 03:49:21.867 2 DEBUG nova.objects.instance [None req-1e4ae729-819f-4e82-9e24-ffee6c280e6e d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Lazy-loading 'flavor' on Instance uuid c44627c6-7bd8-4e1a-b32f-a79f70a179c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:49:21 np0005480824 nova_compute[260089]: 2025-10-11 03:49:21.889 2 DEBUG nova.virt.libvirt.driver [None req-1e4ae729-819f-4e82-9e24-ffee6c280e6e d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Attempting to attach volume baf2a9e9-b0d1-4c7c-8981-330d1e617a3e with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Oct 10 23:49:21 np0005480824 nova_compute[260089]: 2025-10-11 03:49:21.893 2 DEBUG nova.virt.libvirt.guest [None req-1e4ae729-819f-4e82-9e24-ffee6c280e6e d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] attach device xml: <disk type="network" device="disk">
Oct 10 23:49:21 np0005480824 nova_compute[260089]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 10 23:49:21 np0005480824 nova_compute[260089]:  <source protocol="rbd" name="volumes/volume-baf2a9e9-b0d1-4c7c-8981-330d1e617a3e">
Oct 10 23:49:21 np0005480824 nova_compute[260089]:    <host name="192.168.122.100" port="6789"/>
Oct 10 23:49:21 np0005480824 nova_compute[260089]:  </source>
Oct 10 23:49:21 np0005480824 nova_compute[260089]:  <auth username="openstack">
Oct 10 23:49:21 np0005480824 nova_compute[260089]:    <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 10 23:49:21 np0005480824 nova_compute[260089]:  </auth>
Oct 10 23:49:21 np0005480824 nova_compute[260089]:  <target dev="vdb" bus="virtio"/>
Oct 10 23:49:21 np0005480824 nova_compute[260089]:  <serial>baf2a9e9-b0d1-4c7c-8981-330d1e617a3e</serial>
Oct 10 23:49:21 np0005480824 nova_compute[260089]: </disk>
Oct 10 23:49:21 np0005480824 nova_compute[260089]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct 10 23:49:22 np0005480824 nova_compute[260089]: 2025-10-11 03:49:22.025 2 DEBUG nova.virt.libvirt.driver [None req-1e4ae729-819f-4e82-9e24-ffee6c280e6e d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 23:49:22 np0005480824 nova_compute[260089]: 2025-10-11 03:49:22.025 2 DEBUG nova.virt.libvirt.driver [None req-1e4ae729-819f-4e82-9e24-ffee6c280e6e d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 23:49:22 np0005480824 nova_compute[260089]: 2025-10-11 03:49:22.026 2 DEBUG nova.virt.libvirt.driver [None req-1e4ae729-819f-4e82-9e24-ffee6c280e6e d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 23:49:22 np0005480824 nova_compute[260089]: 2025-10-11 03:49:22.026 2 DEBUG nova.virt.libvirt.driver [None req-1e4ae729-819f-4e82-9e24-ffee6c280e6e d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] No VIF found with MAC fa:16:3e:ba:24:f3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 10 23:49:22 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e177 do_prune osdmap full prune enabled
Oct 10 23:49:22 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e178 e178: 3 total, 3 up, 3 in
Oct 10 23:49:22 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e178: 3 total, 3 up, 3 in
Oct 10 23:49:22 np0005480824 nova_compute[260089]: 2025-10-11 03:49:22.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:49:22 np0005480824 nova_compute[260089]: 2025-10-11 03:49:22.281 2 DEBUG oslo_concurrency.lockutils [None req-1e4ae729-819f-4e82-9e24-ffee6c280e6e d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Lock "c44627c6-7bd8-4e1a-b32f-a79f70a179c7" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.323s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:49:22 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1066: 321 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 318 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 207 KiB/s rd, 201 KiB/s wr, 107 op/s
Oct 10 23:49:22 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:49:22 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/426653333' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:49:22 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:49:22 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/426653333' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:49:23 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e178 do_prune osdmap full prune enabled
Oct 10 23:49:23 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e179 e179: 3 total, 3 up, 3 in
Oct 10 23:49:23 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e179: 3 total, 3 up, 3 in
Oct 10 23:49:23 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:49:23 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:49:23 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 10 23:49:23 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:49:23 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 10 23:49:23 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:49:23 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 96351554-f401-4b27-b13d-8c55b3c97873 does not exist
Oct 10 23:49:23 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 431a033b-e6ec-4db2-9a37-ba0cb244a010 does not exist
Oct 10 23:49:23 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev c00f778c-ecd0-4754-b368-3fee99313ccf does not exist
Oct 10 23:49:23 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 10 23:49:23 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 23:49:23 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 10 23:49:23 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:49:23 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:49:23 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:49:23 np0005480824 nova_compute[260089]: 2025-10-11 03:49:23.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:49:24 np0005480824 nova_compute[260089]: 2025-10-11 03:49:24.140 2 DEBUG oslo_concurrency.lockutils [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Acquiring lock "266aeb27-7f54-4255-9018-0b6092629b80" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:49:24 np0005480824 nova_compute[260089]: 2025-10-11 03:49:24.141 2 DEBUG oslo_concurrency.lockutils [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Lock "266aeb27-7f54-4255-9018-0b6092629b80" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:49:24 np0005480824 nova_compute[260089]: 2025-10-11 03:49:24.158 2 DEBUG nova.compute.manager [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 10 23:49:24 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:49:24 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:49:24 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:49:24 np0005480824 nova_compute[260089]: 2025-10-11 03:49:24.230 2 DEBUG oslo_concurrency.lockutils [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:49:24 np0005480824 nova_compute[260089]: 2025-10-11 03:49:24.231 2 DEBUG oslo_concurrency.lockutils [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:49:24 np0005480824 nova_compute[260089]: 2025-10-11 03:49:24.241 2 DEBUG nova.virt.hardware [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 10 23:49:24 np0005480824 nova_compute[260089]: 2025-10-11 03:49:24.241 2 INFO nova.compute.claims [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 10 23:49:24 np0005480824 nova_compute[260089]: 2025-10-11 03:49:24.357 2 DEBUG oslo_concurrency.processutils [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:49:24 np0005480824 podman[273288]: 2025-10-11 03:49:24.429742669 +0000 UTC m=+0.091184081 container create a44720f10cafc767555de9d846e16641696a7a3e08da516c33968f62a2725875 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_ritchie, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Oct 10 23:49:24 np0005480824 podman[273288]: 2025-10-11 03:49:24.392121622 +0000 UTC m=+0.053563094 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:49:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e179 do_prune osdmap full prune enabled
Oct 10 23:49:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e180 e180: 3 total, 3 up, 3 in
Oct 10 23:49:24 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e180: 3 total, 3 up, 3 in
Oct 10 23:49:24 np0005480824 systemd[1]: Started libpod-conmon-a44720f10cafc767555de9d846e16641696a7a3e08da516c33968f62a2725875.scope.
Oct 10 23:49:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:49:24 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:49:24 np0005480824 podman[273288]: 2025-10-11 03:49:24.578015897 +0000 UTC m=+0.239457319 container init a44720f10cafc767555de9d846e16641696a7a3e08da516c33968f62a2725875 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_ritchie, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:49:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:49:24 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2557466184' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:49:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:49:24 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2557466184' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:49:24 np0005480824 podman[273288]: 2025-10-11 03:49:24.589506158 +0000 UTC m=+0.250947560 container start a44720f10cafc767555de9d846e16641696a7a3e08da516c33968f62a2725875 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_ritchie, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 10 23:49:24 np0005480824 podman[273288]: 2025-10-11 03:49:24.59470933 +0000 UTC m=+0.256150732 container attach a44720f10cafc767555de9d846e16641696a7a3e08da516c33968f62a2725875 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_ritchie, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 10 23:49:24 np0005480824 systemd[1]: libpod-a44720f10cafc767555de9d846e16641696a7a3e08da516c33968f62a2725875.scope: Deactivated successfully.
Oct 10 23:49:24 np0005480824 peaceful_ritchie[273307]: 167 167
Oct 10 23:49:24 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1069: 321 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 318 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 37 KiB/s wr, 90 op/s
Oct 10 23:49:24 np0005480824 conmon[273307]: conmon a44720f10cafc767555d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a44720f10cafc767555de9d846e16641696a7a3e08da516c33968f62a2725875.scope/container/memory.events
Oct 10 23:49:24 np0005480824 podman[273288]: 2025-10-11 03:49:24.6002535 +0000 UTC m=+0.261694932 container died a44720f10cafc767555de9d846e16641696a7a3e08da516c33968f62a2725875 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:49:24 np0005480824 systemd[1]: var-lib-containers-storage-overlay-df5e57c1d32753af68749bf67dfe8ab6eb26fa9daf0dcf34f72e6993e34ff74e-merged.mount: Deactivated successfully.
Oct 10 23:49:24 np0005480824 podman[273288]: 2025-10-11 03:49:24.674645826 +0000 UTC m=+0.336087238 container remove a44720f10cafc767555de9d846e16641696a7a3e08da516c33968f62a2725875 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_ritchie, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 10 23:49:24 np0005480824 systemd[1]: libpod-conmon-a44720f10cafc767555de9d846e16641696a7a3e08da516c33968f62a2725875.scope: Deactivated successfully.
Oct 10 23:49:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:49:24 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2506791674' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:49:24 np0005480824 nova_compute[260089]: 2025-10-11 03:49:24.888 2 DEBUG oslo_concurrency.processutils [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:49:24 np0005480824 nova_compute[260089]: 2025-10-11 03:49:24.899 2 DEBUG nova.compute.provider_tree [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 23:49:24 np0005480824 nova_compute[260089]: 2025-10-11 03:49:24.914 2 DEBUG nova.scheduler.client.report [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 23:49:24 np0005480824 nova_compute[260089]: 2025-10-11 03:49:24.936 2 DEBUG oslo_concurrency.lockutils [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:49:24 np0005480824 nova_compute[260089]: 2025-10-11 03:49:24.937 2 DEBUG nova.compute.manager [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 10 23:49:24 np0005480824 podman[273349]: 2025-10-11 03:49:24.952625852 +0000 UTC m=+0.080296094 container create 0adca2389ba2329f3b965a42c8aa3c2852f606b6a7279aa262685339d5ae26b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_mcclintock, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:49:24 np0005480824 nova_compute[260089]: 2025-10-11 03:49:24.988 2 DEBUG nova.compute.manager [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 10 23:49:24 np0005480824 nova_compute[260089]: 2025-10-11 03:49:24.989 2 DEBUG nova.network.neutron [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 10 23:49:25 np0005480824 nova_compute[260089]: 2025-10-11 03:49:25.004 2 INFO nova.virt.libvirt.driver [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 10 23:49:25 np0005480824 systemd[1]: Started libpod-conmon-0adca2389ba2329f3b965a42c8aa3c2852f606b6a7279aa262685339d5ae26b1.scope.
Oct 10 23:49:25 np0005480824 podman[273349]: 2025-10-11 03:49:24.918952208 +0000 UTC m=+0.046622520 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:49:25 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:49:25 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2880824147' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:49:25 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:49:25 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2880824147' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:49:25 np0005480824 nova_compute[260089]: 2025-10-11 03:49:25.032 2 DEBUG nova.compute.manager [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 10 23:49:25 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:49:25 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ad7466281b4992dd6b01e69db415ef0ed825d05e99953a8a96cd3f79da67509/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:49:25 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ad7466281b4992dd6b01e69db415ef0ed825d05e99953a8a96cd3f79da67509/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:49:25 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ad7466281b4992dd6b01e69db415ef0ed825d05e99953a8a96cd3f79da67509/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:49:25 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ad7466281b4992dd6b01e69db415ef0ed825d05e99953a8a96cd3f79da67509/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:49:25 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ad7466281b4992dd6b01e69db415ef0ed825d05e99953a8a96cd3f79da67509/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 23:49:25 np0005480824 podman[273349]: 2025-10-11 03:49:25.081505492 +0000 UTC m=+0.209175734 container init 0adca2389ba2329f3b965a42c8aa3c2852f606b6a7279aa262685339d5ae26b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_mcclintock, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 10 23:49:25 np0005480824 podman[273349]: 2025-10-11 03:49:25.09411916 +0000 UTC m=+0.221789382 container start 0adca2389ba2329f3b965a42c8aa3c2852f606b6a7279aa262685339d5ae26b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 10 23:49:25 np0005480824 podman[273349]: 2025-10-11 03:49:25.097753676 +0000 UTC m=+0.225423948 container attach 0adca2389ba2329f3b965a42c8aa3c2852f606b6a7279aa262685339d5ae26b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_mcclintock, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:49:25 np0005480824 nova_compute[260089]: 2025-10-11 03:49:25.153 2 DEBUG nova.compute.manager [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 10 23:49:25 np0005480824 nova_compute[260089]: 2025-10-11 03:49:25.156 2 DEBUG nova.virt.libvirt.driver [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 10 23:49:25 np0005480824 nova_compute[260089]: 2025-10-11 03:49:25.157 2 INFO nova.virt.libvirt.driver [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Creating image(s)#033[00m
Oct 10 23:49:25 np0005480824 nova_compute[260089]: 2025-10-11 03:49:25.197 2 DEBUG nova.storage.rbd_utils [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] rbd image 266aeb27-7f54-4255-9018-0b6092629b80_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:49:25 np0005480824 nova_compute[260089]: 2025-10-11 03:49:25.235 2 DEBUG nova.storage.rbd_utils [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] rbd image 266aeb27-7f54-4255-9018-0b6092629b80_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:49:25 np0005480824 nova_compute[260089]: 2025-10-11 03:49:25.276 2 DEBUG nova.storage.rbd_utils [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] rbd image 266aeb27-7f54-4255-9018-0b6092629b80_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:49:25 np0005480824 nova_compute[260089]: 2025-10-11 03:49:25.282 2 DEBUG oslo_concurrency.processutils [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cfffd1283a157d100c77a9cb8e3d536b83503a4e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:49:25 np0005480824 nova_compute[260089]: 2025-10-11 03:49:25.318 2 DEBUG nova.policy [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0dd21dcc2e2e4870bd3a6eb5146bc451', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '69ce475b5af645b7b89607f7ecc196d5', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 10 23:49:25 np0005480824 nova_compute[260089]: 2025-10-11 03:49:25.379 2 DEBUG oslo_concurrency.processutils [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cfffd1283a157d100c77a9cb8e3d536b83503a4e --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:49:25 np0005480824 nova_compute[260089]: 2025-10-11 03:49:25.381 2 DEBUG oslo_concurrency.lockutils [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Acquiring lock "cfffd1283a157d100c77a9cb8e3d536b83503a4e" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:49:25 np0005480824 nova_compute[260089]: 2025-10-11 03:49:25.382 2 DEBUG oslo_concurrency.lockutils [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Lock "cfffd1283a157d100c77a9cb8e3d536b83503a4e" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:49:25 np0005480824 nova_compute[260089]: 2025-10-11 03:49:25.382 2 DEBUG oslo_concurrency.lockutils [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Lock "cfffd1283a157d100c77a9cb8e3d536b83503a4e" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:49:25 np0005480824 nova_compute[260089]: 2025-10-11 03:49:25.405 2 DEBUG nova.storage.rbd_utils [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] rbd image 266aeb27-7f54-4255-9018-0b6092629b80_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:49:25 np0005480824 nova_compute[260089]: 2025-10-11 03:49:25.410 2 DEBUG oslo_concurrency.processutils [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/cfffd1283a157d100c77a9cb8e3d536b83503a4e 266aeb27-7f54-4255-9018-0b6092629b80_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:49:25 np0005480824 nova_compute[260089]: 2025-10-11 03:49:25.718 2 DEBUG oslo_concurrency.processutils [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/cfffd1283a157d100c77a9cb8e3d536b83503a4e 266aeb27-7f54-4255-9018-0b6092629b80_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.308s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:49:25 np0005480824 nova_compute[260089]: 2025-10-11 03:49:25.796 2 DEBUG nova.storage.rbd_utils [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] resizing rbd image 266aeb27-7f54-4255-9018-0b6092629b80_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 10 23:49:25 np0005480824 nova_compute[260089]: 2025-10-11 03:49:25.913 2 DEBUG nova.objects.instance [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Lazy-loading 'migration_context' on Instance uuid 266aeb27-7f54-4255-9018-0b6092629b80 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:49:25 np0005480824 nova_compute[260089]: 2025-10-11 03:49:25.945 2 DEBUG nova.virt.libvirt.driver [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 10 23:49:25 np0005480824 nova_compute[260089]: 2025-10-11 03:49:25.945 2 DEBUG nova.virt.libvirt.driver [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Ensure instance console log exists: /var/lib/nova/instances/266aeb27-7f54-4255-9018-0b6092629b80/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 10 23:49:25 np0005480824 nova_compute[260089]: 2025-10-11 03:49:25.946 2 DEBUG oslo_concurrency.lockutils [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:49:25 np0005480824 nova_compute[260089]: 2025-10-11 03:49:25.947 2 DEBUG oslo_concurrency.lockutils [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:49:25 np0005480824 nova_compute[260089]: 2025-10-11 03:49:25.947 2 DEBUG oslo_concurrency.lockutils [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:49:26 np0005480824 silly_mcclintock[273367]: --> passed data devices: 0 physical, 3 LVM
Oct 10 23:49:26 np0005480824 silly_mcclintock[273367]: --> relative data size: 1.0
Oct 10 23:49:26 np0005480824 silly_mcclintock[273367]: --> All data devices are unavailable
Oct 10 23:49:26 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:26.317 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:30:f4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'fe:89:7c:57:3f:71'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 23:49:26 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:26.319 162245 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 10 23:49:26 np0005480824 systemd[1]: libpod-0adca2389ba2329f3b965a42c8aa3c2852f606b6a7279aa262685339d5ae26b1.scope: Deactivated successfully.
Oct 10 23:49:26 np0005480824 podman[273349]: 2025-10-11 03:49:26.366520563 +0000 UTC m=+1.494190815 container died 0adca2389ba2329f3b965a42c8aa3c2852f606b6a7279aa262685339d5ae26b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_mcclintock, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:49:26 np0005480824 systemd[1]: libpod-0adca2389ba2329f3b965a42c8aa3c2852f606b6a7279aa262685339d5ae26b1.scope: Consumed 1.191s CPU time.
Oct 10 23:49:26 np0005480824 nova_compute[260089]: 2025-10-11 03:49:26.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:49:26 np0005480824 systemd[1]: var-lib-containers-storage-overlay-0ad7466281b4992dd6b01e69db415ef0ed825d05e99953a8a96cd3f79da67509-merged.mount: Deactivated successfully.
Oct 10 23:49:26 np0005480824 nova_compute[260089]: 2025-10-11 03:49:26.419 2 DEBUG nova.network.neutron [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Successfully created port: c6b37039-6a92-4786-8d2b-febe3f3e7716 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 10 23:49:26 np0005480824 podman[273349]: 2025-10-11 03:49:26.462939527 +0000 UTC m=+1.590609779 container remove 0adca2389ba2329f3b965a42c8aa3c2852f606b6a7279aa262685339d5ae26b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_mcclintock, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:49:26 np0005480824 systemd[1]: libpod-conmon-0adca2389ba2329f3b965a42c8aa3c2852f606b6a7279aa262685339d5ae26b1.scope: Deactivated successfully.
Oct 10 23:49:26 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e180 do_prune osdmap full prune enabled
Oct 10 23:49:26 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e181 e181: 3 total, 3 up, 3 in
Oct 10 23:49:26 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e181: 3 total, 3 up, 3 in
Oct 10 23:49:26 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1071: 321 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 318 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 4.3 KiB/s wr, 49 op/s
Oct 10 23:49:27 np0005480824 nova_compute[260089]: 2025-10-11 03:49:27.027 2 DEBUG nova.network.neutron [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Successfully updated port: c6b37039-6a92-4786-8d2b-febe3f3e7716 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 10 23:49:27 np0005480824 nova_compute[260089]: 2025-10-11 03:49:27.045 2 DEBUG oslo_concurrency.lockutils [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Acquiring lock "refresh_cache-266aeb27-7f54-4255-9018-0b6092629b80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:49:27 np0005480824 nova_compute[260089]: 2025-10-11 03:49:27.045 2 DEBUG oslo_concurrency.lockutils [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Acquired lock "refresh_cache-266aeb27-7f54-4255-9018-0b6092629b80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:49:27 np0005480824 nova_compute[260089]: 2025-10-11 03:49:27.046 2 DEBUG nova.network.neutron [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 10 23:49:27 np0005480824 nova_compute[260089]: 2025-10-11 03:49:27.117 2 DEBUG nova.compute.manager [req-27a2eb21-bb19-4ad5-8af1-b060cec4dba3 req-202b6341-179b-4891-a6f2-daa2a124410d 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Received event network-changed-c6b37039-6a92-4786-8d2b-febe3f3e7716 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:49:27 np0005480824 nova_compute[260089]: 2025-10-11 03:49:27.118 2 DEBUG nova.compute.manager [req-27a2eb21-bb19-4ad5-8af1-b060cec4dba3 req-202b6341-179b-4891-a6f2-daa2a124410d 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Refreshing instance network info cache due to event network-changed-c6b37039-6a92-4786-8d2b-febe3f3e7716. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 10 23:49:27 np0005480824 nova_compute[260089]: 2025-10-11 03:49:27.118 2 DEBUG oslo_concurrency.lockutils [req-27a2eb21-bb19-4ad5-8af1-b060cec4dba3 req-202b6341-179b-4891-a6f2-daa2a124410d 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "refresh_cache-266aeb27-7f54-4255-9018-0b6092629b80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:49:27 np0005480824 nova_compute[260089]: 2025-10-11 03:49:27.201 2 DEBUG nova.network.neutron [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 10 23:49:27 np0005480824 nova_compute[260089]: 2025-10-11 03:49:27.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:49:27 np0005480824 nova_compute[260089]: 2025-10-11 03:49:27.297 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:49:27 np0005480824 nova_compute[260089]: 2025-10-11 03:49:27.298 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:49:27 np0005480824 nova_compute[260089]: 2025-10-11 03:49:27.298 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct 10 23:49:27 np0005480824 nova_compute[260089]: 2025-10-11 03:49:27.313 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct 10 23:49:27 np0005480824 podman[273715]: 2025-10-11 03:49:27.478472761 +0000 UTC m=+0.065451506 container create 1426e9012e561ed7e52ad17d25c67d29c823091dbdd81e80938d0e3690dd2d7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_allen, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 10 23:49:27 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e181 do_prune osdmap full prune enabled
Oct 10 23:49:27 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e182 e182: 3 total, 3 up, 3 in
Oct 10 23:49:27 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e182: 3 total, 3 up, 3 in
Oct 10 23:49:27 np0005480824 systemd[1]: Started libpod-conmon-1426e9012e561ed7e52ad17d25c67d29c823091dbdd81e80938d0e3690dd2d7f.scope.
Oct 10 23:49:27 np0005480824 podman[273715]: 2025-10-11 03:49:27.444432708 +0000 UTC m=+0.031411513 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:49:27 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:49:27 np0005480824 podman[273715]: 2025-10-11 03:49:27.580361014 +0000 UTC m=+0.167339869 container init 1426e9012e561ed7e52ad17d25c67d29c823091dbdd81e80938d0e3690dd2d7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:49:27 np0005480824 podman[273715]: 2025-10-11 03:49:27.599770912 +0000 UTC m=+0.186749667 container start 1426e9012e561ed7e52ad17d25c67d29c823091dbdd81e80938d0e3690dd2d7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:49:27 np0005480824 systemd[1]: libpod-1426e9012e561ed7e52ad17d25c67d29c823091dbdd81e80938d0e3690dd2d7f.scope: Deactivated successfully.
Oct 10 23:49:27 np0005480824 infallible_allen[273732]: 167 167
Oct 10 23:49:27 np0005480824 conmon[273732]: conmon 1426e9012e561ed7e52a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1426e9012e561ed7e52ad17d25c67d29c823091dbdd81e80938d0e3690dd2d7f.scope/container/memory.events
Oct 10 23:49:27 np0005480824 podman[273715]: 2025-10-11 03:49:27.607544445 +0000 UTC m=+0.194523210 container attach 1426e9012e561ed7e52ad17d25c67d29c823091dbdd81e80938d0e3690dd2d7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 10 23:49:27 np0005480824 podman[273715]: 2025-10-11 03:49:27.608700602 +0000 UTC m=+0.195679327 container died 1426e9012e561ed7e52ad17d25c67d29c823091dbdd81e80938d0e3690dd2d7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_allen, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:49:27 np0005480824 systemd[1]: var-lib-containers-storage-overlay-f0f0f9632f291c84f18906fd328f991e946320b88e7769516acdcf3f3cff67d1-merged.mount: Deactivated successfully.
Oct 10 23:49:27 np0005480824 podman[273715]: 2025-10-11 03:49:27.652833714 +0000 UTC m=+0.239812439 container remove 1426e9012e561ed7e52ad17d25c67d29c823091dbdd81e80938d0e3690dd2d7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_allen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:49:27 np0005480824 systemd[1]: libpod-conmon-1426e9012e561ed7e52ad17d25c67d29c823091dbdd81e80938d0e3690dd2d7f.scope: Deactivated successfully.
Oct 10 23:49:27 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:49:27 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2087130971' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:49:27 np0005480824 podman[273731]: 2025-10-11 03:49:27.742760105 +0000 UTC m=+0.201713569 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 10 23:49:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:49:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:49:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:49:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:49:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:49:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:49:27 np0005480824 podman[273783]: 2025-10-11 03:49:27.900930366 +0000 UTC m=+0.071164880 container create 3c08f972ba81e50bb26ecd489faa944bdd6579b70a9daf906ba16e3991d9d3ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_cartwright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 10 23:49:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Optimize plan auto_2025-10-11_03:49:27
Oct 10 23:49:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 23:49:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] do_upmap
Oct 10 23:49:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] pools ['default.rgw.control', '.rgw.root', 'cephfs.cephfs.meta', 'vms', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.data', '.mgr', 'backups', 'volumes', 'images']
Oct 10 23:49:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] prepared 0/10 changes
Oct 10 23:49:27 np0005480824 nova_compute[260089]: 2025-10-11 03:49:27.930 2 DEBUG nova.network.neutron [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Updating instance_info_cache with network_info: [{"id": "c6b37039-6a92-4786-8d2b-febe3f3e7716", "address": "fa:16:3e:3f:63:b3", "network": {"id": "53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-198655629-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "69ce475b5af645b7b89607f7ecc196d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6b37039-6a", "ovs_interfaceid": "c6b37039-6a92-4786-8d2b-febe3f3e7716", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:49:27 np0005480824 nova_compute[260089]: 2025-10-11 03:49:27.949 2 DEBUG oslo_concurrency.lockutils [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Releasing lock "refresh_cache-266aeb27-7f54-4255-9018-0b6092629b80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:49:27 np0005480824 nova_compute[260089]: 2025-10-11 03:49:27.950 2 DEBUG nova.compute.manager [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Instance network_info: |[{"id": "c6b37039-6a92-4786-8d2b-febe3f3e7716", "address": "fa:16:3e:3f:63:b3", "network": {"id": "53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-198655629-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "69ce475b5af645b7b89607f7ecc196d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6b37039-6a", "ovs_interfaceid": "c6b37039-6a92-4786-8d2b-febe3f3e7716", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 10 23:49:27 np0005480824 nova_compute[260089]: 2025-10-11 03:49:27.951 2 DEBUG oslo_concurrency.lockutils [req-27a2eb21-bb19-4ad5-8af1-b060cec4dba3 req-202b6341-179b-4891-a6f2-daa2a124410d 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquired lock "refresh_cache-266aeb27-7f54-4255-9018-0b6092629b80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:49:27 np0005480824 nova_compute[260089]: 2025-10-11 03:49:27.952 2 DEBUG nova.network.neutron [req-27a2eb21-bb19-4ad5-8af1-b060cec4dba3 req-202b6341-179b-4891-a6f2-daa2a124410d 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Refreshing network info cache for port c6b37039-6a92-4786-8d2b-febe3f3e7716 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 10 23:49:27 np0005480824 nova_compute[260089]: 2025-10-11 03:49:27.957 2 DEBUG nova.virt.libvirt.driver [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Start _get_guest_xml network_info=[{"id": "c6b37039-6a92-4786-8d2b-febe3f3e7716", "address": "fa:16:3e:3f:63:b3", "network": {"id": "53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-198655629-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "69ce475b5af645b7b89607f7ecc196d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6b37039-6a", "ovs_interfaceid": "c6b37039-6a92-4786-8d2b-febe3f3e7716", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T03:44:59Z,direct_url=<?>,disk_format='qcow2',id=7caca022-7dcc-40a9-8bd8-eb7d91b29390,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a9b71164a3274fcfb966194e51cb4849',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T03:45:02Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'device_type': 'disk', 'image_id': '7caca022-7dcc-40a9-8bd8-eb7d91b29390'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 10 23:49:27 np0005480824 podman[273783]: 2025-10-11 03:49:27.873175351 +0000 UTC m=+0.043409935 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:49:27 np0005480824 nova_compute[260089]: 2025-10-11 03:49:27.966 2 WARNING nova.virt.libvirt.driver [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 10 23:49:27 np0005480824 systemd[1]: Started libpod-conmon-3c08f972ba81e50bb26ecd489faa944bdd6579b70a9daf906ba16e3991d9d3ac.scope.
Oct 10 23:49:27 np0005480824 nova_compute[260089]: 2025-10-11 03:49:27.980 2 DEBUG nova.virt.libvirt.host [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 10 23:49:27 np0005480824 nova_compute[260089]: 2025-10-11 03:49:27.982 2 DEBUG nova.virt.libvirt.host [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 10 23:49:27 np0005480824 nova_compute[260089]: 2025-10-11 03:49:27.990 2 DEBUG nova.virt.libvirt.host [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 10 23:49:27 np0005480824 nova_compute[260089]: 2025-10-11 03:49:27.990 2 DEBUG nova.virt.libvirt.host [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 10 23:49:27 np0005480824 nova_compute[260089]: 2025-10-11 03:49:27.991 2 DEBUG nova.virt.libvirt.driver [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 10 23:49:27 np0005480824 nova_compute[260089]: 2025-10-11 03:49:27.991 2 DEBUG nova.virt.hardware [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T03:44:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6707ecae-2ae2-4c2d-86dc-409bac38f6a5',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T03:44:59Z,direct_url=<?>,disk_format='qcow2',id=7caca022-7dcc-40a9-8bd8-eb7d91b29390,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a9b71164a3274fcfb966194e51cb4849',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T03:45:02Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 10 23:49:27 np0005480824 nova_compute[260089]: 2025-10-11 03:49:27.992 2 DEBUG nova.virt.hardware [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 10 23:49:27 np0005480824 nova_compute[260089]: 2025-10-11 03:49:27.992 2 DEBUG nova.virt.hardware [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 10 23:49:27 np0005480824 nova_compute[260089]: 2025-10-11 03:49:27.992 2 DEBUG nova.virt.hardware [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 10 23:49:27 np0005480824 nova_compute[260089]: 2025-10-11 03:49:27.992 2 DEBUG nova.virt.hardware [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 10 23:49:27 np0005480824 nova_compute[260089]: 2025-10-11 03:49:27.993 2 DEBUG nova.virt.hardware [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 10 23:49:27 np0005480824 nova_compute[260089]: 2025-10-11 03:49:27.993 2 DEBUG nova.virt.hardware [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 10 23:49:27 np0005480824 nova_compute[260089]: 2025-10-11 03:49:27.993 2 DEBUG nova.virt.hardware [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 10 23:49:27 np0005480824 nova_compute[260089]: 2025-10-11 03:49:27.994 2 DEBUG nova.virt.hardware [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 10 23:49:27 np0005480824 nova_compute[260089]: 2025-10-11 03:49:27.994 2 DEBUG nova.virt.hardware [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 10 23:49:27 np0005480824 nova_compute[260089]: 2025-10-11 03:49:27.994 2 DEBUG nova.virt.hardware [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 10 23:49:28 np0005480824 nova_compute[260089]: 2025-10-11 03:49:27.999 2 DEBUG oslo_concurrency.processutils [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:49:28 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:49:28 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/784575cf6e12a11b99955bb67eb8f5eec73186fa3c46202b5e9885cae98149cc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:49:28 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/784575cf6e12a11b99955bb67eb8f5eec73186fa3c46202b5e9885cae98149cc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:49:28 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/784575cf6e12a11b99955bb67eb8f5eec73186fa3c46202b5e9885cae98149cc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:49:28 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/784575cf6e12a11b99955bb67eb8f5eec73186fa3c46202b5e9885cae98149cc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:49:28 np0005480824 podman[273783]: 2025-10-11 03:49:28.032421767 +0000 UTC m=+0.202656391 container init 3c08f972ba81e50bb26ecd489faa944bdd6579b70a9daf906ba16e3991d9d3ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_cartwright, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:49:28 np0005480824 podman[273783]: 2025-10-11 03:49:28.049122831 +0000 UTC m=+0.219357385 container start 3c08f972ba81e50bb26ecd489faa944bdd6579b70a9daf906ba16e3991d9d3ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:49:28 np0005480824 podman[273783]: 2025-10-11 03:49:28.054509207 +0000 UTC m=+0.224743771 container attach 3c08f972ba81e50bb26ecd489faa944bdd6579b70a9daf906ba16e3991d9d3ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_cartwright, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:49:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 23:49:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 23:49:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:49:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:49:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:49:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:49:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:49:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:49:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:49:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:49:28 np0005480824 nova_compute[260089]: 2025-10-11 03:49:28.296 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:49:28 np0005480824 nova_compute[260089]: 2025-10-11 03:49:28.297 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:49:28 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:28.321 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=14b06507-d00b-4e27-a47d-46a5c2644635, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:49:28 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:49:28 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3255992894' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:49:28 np0005480824 nova_compute[260089]: 2025-10-11 03:49:28.457 2 DEBUG oslo_concurrency.processutils [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:49:28 np0005480824 nova_compute[260089]: 2025-10-11 03:49:28.485 2 DEBUG nova.storage.rbd_utils [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] rbd image 266aeb27-7f54-4255-9018-0b6092629b80_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:49:28 np0005480824 nova_compute[260089]: 2025-10-11 03:49:28.490 2 DEBUG oslo_concurrency.processutils [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:49:28 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e182 do_prune osdmap full prune enabled
Oct 10 23:49:28 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e183 e183: 3 total, 3 up, 3 in
Oct 10 23:49:28 np0005480824 nova_compute[260089]: 2025-10-11 03:49:28.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:49:28 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e183: 3 total, 3 up, 3 in
Oct 10 23:49:28 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1074: 321 pgs: 321 active+clean; 213 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 213 KiB/s rd, 5.2 MiB/s wr, 286 op/s
Oct 10 23:49:28 np0005480824 ceph-mgr[74617]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3841581780
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]: {
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:    "0": [
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:        {
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:            "devices": [
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:                "/dev/loop3"
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:            ],
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:            "lv_name": "ceph_lv0",
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:            "lv_size": "21470642176",
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0d82ce-20ea-470d-959e-f67202028a60,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:            "lv_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:            "name": "ceph_lv0",
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:            "tags": {
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:                "ceph.block_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:                "ceph.cluster_name": "ceph",
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:                "ceph.crush_device_class": "",
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:                "ceph.encrypted": "0",
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:                "ceph.osd_fsid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:                "ceph.osd_id": "0",
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:                "ceph.type": "block",
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:                "ceph.vdo": "0"
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:            },
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:            "type": "block",
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:            "vg_name": "ceph_vg0"
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:        }
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:    ],
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:    "1": [
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:        {
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:            "devices": [
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:                "/dev/loop4"
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:            ],
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:            "lv_name": "ceph_lv1",
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:            "lv_size": "21470642176",
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6875119e-c210-4ad1-aca9-6a8084a5ecc8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:            "lv_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:            "name": "ceph_lv1",
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:            "tags": {
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:                "ceph.block_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:                "ceph.cluster_name": "ceph",
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:                "ceph.crush_device_class": "",
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:                "ceph.encrypted": "0",
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:                "ceph.osd_fsid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:                "ceph.osd_id": "1",
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:                "ceph.type": "block",
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:                "ceph.vdo": "0"
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:            },
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:            "type": "block",
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:            "vg_name": "ceph_vg1"
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:        }
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:    ],
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:    "2": [
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:        {
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:            "devices": [
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:                "/dev/loop5"
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:            ],
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:            "lv_name": "ceph_lv2",
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:            "lv_size": "21470642176",
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e86945e8-6909-4584-9098-cee0dfe9add4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:            "lv_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:            "name": "ceph_lv2",
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:            "tags": {
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:                "ceph.block_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:                "ceph.cluster_name": "ceph",
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:                "ceph.crush_device_class": "",
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:                "ceph.encrypted": "0",
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:                "ceph.osd_fsid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:                "ceph.osd_id": "2",
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:                "ceph.type": "block",
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:                "ceph.vdo": "0"
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:            },
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:            "type": "block",
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:            "vg_name": "ceph_vg2"
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:        }
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]:    ]
Oct 10 23:49:28 np0005480824 sweet_cartwright[273799]: }
Oct 10 23:49:28 np0005480824 systemd[1]: libpod-3c08f972ba81e50bb26ecd489faa944bdd6579b70a9daf906ba16e3991d9d3ac.scope: Deactivated successfully.
Oct 10 23:49:28 np0005480824 podman[273868]: 2025-10-11 03:49:28.932998969 +0000 UTC m=+0.032718652 container died 3c08f972ba81e50bb26ecd489faa944bdd6579b70a9daf906ba16e3991d9d3ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_cartwright, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 10 23:49:28 np0005480824 systemd[1]: var-lib-containers-storage-overlay-784575cf6e12a11b99955bb67eb8f5eec73186fa3c46202b5e9885cae98149cc-merged.mount: Deactivated successfully.
Oct 10 23:49:28 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:49:28 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/876232474' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:49:28 np0005480824 nova_compute[260089]: 2025-10-11 03:49:28.981 2 DEBUG nova.network.neutron [req-27a2eb21-bb19-4ad5-8af1-b060cec4dba3 req-202b6341-179b-4891-a6f2-daa2a124410d 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Updated VIF entry in instance network info cache for port c6b37039-6a92-4786-8d2b-febe3f3e7716. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 10 23:49:28 np0005480824 nova_compute[260089]: 2025-10-11 03:49:28.985 2 DEBUG nova.network.neutron [req-27a2eb21-bb19-4ad5-8af1-b060cec4dba3 req-202b6341-179b-4891-a6f2-daa2a124410d 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Updating instance_info_cache with network_info: [{"id": "c6b37039-6a92-4786-8d2b-febe3f3e7716", "address": "fa:16:3e:3f:63:b3", "network": {"id": "53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-198655629-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "69ce475b5af645b7b89607f7ecc196d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6b37039-6a", "ovs_interfaceid": "c6b37039-6a92-4786-8d2b-febe3f3e7716", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:49:28 np0005480824 nova_compute[260089]: 2025-10-11 03:49:28.993 2 DEBUG oslo_concurrency.processutils [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:49:28 np0005480824 nova_compute[260089]: 2025-10-11 03:49:28.995 2 DEBUG nova.virt.libvirt.vif [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T03:49:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-280413383',display_name='tempest-VolumesBackupsTest-instance-280413383',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-280413383',id=6,image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBhgy0EIWp65igZG9TVEs9YGw1G6ZyMcaN2BBBudrmzwVtnKHdxyFsbmVQjmSOPEVBwShmZWx6lQroFGYQCanPMH+jPWV8YBMG0D0qW4UPxkpvOP9Msp1Asd/gfDFNTmzw==',key_name='tempest-keypair-1381245901',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='69ce475b5af645b7b89607f7ecc196d5',ramdisk_id='',reservation_id='r-or7yx0nk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-1570005285',owner_user_name='tempest-VolumesBackupsTest-1570005285-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T03:49:25Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='0dd21dcc2e2e4870bd3a6eb5146bc451',uuid=266aeb27-7f54-4255-9018-0b6092629b80,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c6b37039-6a92-4786-8d2b-febe3f3e7716", "address": "fa:16:3e:3f:63:b3", "network": {"id": "53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-198655629-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "69ce475b5af645b7b89607f7ecc196d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6b37039-6a", "ovs_interfaceid": "c6b37039-6a92-4786-8d2b-febe3f3e7716", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 10 23:49:28 np0005480824 nova_compute[260089]: 2025-10-11 03:49:28.996 2 DEBUG nova.network.os_vif_util [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Converting VIF {"id": "c6b37039-6a92-4786-8d2b-febe3f3e7716", "address": "fa:16:3e:3f:63:b3", "network": {"id": "53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-198655629-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "69ce475b5af645b7b89607f7ecc196d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6b37039-6a", "ovs_interfaceid": "c6b37039-6a92-4786-8d2b-febe3f3e7716", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 23:49:28 np0005480824 nova_compute[260089]: 2025-10-11 03:49:28.998 2 DEBUG nova.network.os_vif_util [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3f:63:b3,bridge_name='br-int',has_traffic_filtering=True,id=c6b37039-6a92-4786-8d2b-febe3f3e7716,network=Network(53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6b37039-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 23:49:29 np0005480824 nova_compute[260089]: 2025-10-11 03:49:29.000 2 DEBUG nova.objects.instance [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 266aeb27-7f54-4255-9018-0b6092629b80 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:49:29 np0005480824 nova_compute[260089]: 2025-10-11 03:49:29.004 2 DEBUG oslo_concurrency.lockutils [req-27a2eb21-bb19-4ad5-8af1-b060cec4dba3 req-202b6341-179b-4891-a6f2-daa2a124410d 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Releasing lock "refresh_cache-266aeb27-7f54-4255-9018-0b6092629b80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:49:29 np0005480824 nova_compute[260089]: 2025-10-11 03:49:29.014 2 DEBUG nova.virt.libvirt.driver [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] End _get_guest_xml xml=<domain type="kvm">
Oct 10 23:49:29 np0005480824 nova_compute[260089]:  <uuid>266aeb27-7f54-4255-9018-0b6092629b80</uuid>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:  <name>instance-00000006</name>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:  <memory>131072</memory>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:  <vcpu>1</vcpu>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:  <metadata>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 10 23:49:29 np0005480824 nova_compute[260089]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:      <nova:name>tempest-VolumesBackupsTest-instance-280413383</nova:name>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:      <nova:creationTime>2025-10-11 03:49:27</nova:creationTime>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:      <nova:flavor name="m1.nano">
Oct 10 23:49:29 np0005480824 nova_compute[260089]:        <nova:memory>128</nova:memory>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:        <nova:disk>1</nova:disk>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:        <nova:swap>0</nova:swap>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:        <nova:ephemeral>0</nova:ephemeral>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:        <nova:vcpus>1</nova:vcpus>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:      </nova:flavor>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:      <nova:owner>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:        <nova:user uuid="0dd21dcc2e2e4870bd3a6eb5146bc451">tempest-VolumesBackupsTest-1570005285-project-member</nova:user>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:        <nova:project uuid="69ce475b5af645b7b89607f7ecc196d5">tempest-VolumesBackupsTest-1570005285</nova:project>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:      </nova:owner>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:      <nova:root type="image" uuid="7caca022-7dcc-40a9-8bd8-eb7d91b29390"/>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:      <nova:ports>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:        <nova:port uuid="c6b37039-6a92-4786-8d2b-febe3f3e7716">
Oct 10 23:49:29 np0005480824 nova_compute[260089]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:        </nova:port>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:      </nova:ports>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:    </nova:instance>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:  </metadata>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:  <sysinfo type="smbios">
Oct 10 23:49:29 np0005480824 nova_compute[260089]:    <system>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:      <entry name="manufacturer">RDO</entry>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:      <entry name="product">OpenStack Compute</entry>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:      <entry name="serial">266aeb27-7f54-4255-9018-0b6092629b80</entry>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:      <entry name="uuid">266aeb27-7f54-4255-9018-0b6092629b80</entry>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:      <entry name="family">Virtual Machine</entry>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:    </system>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:  </sysinfo>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:  <os>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:    <boot dev="hd"/>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:    <smbios mode="sysinfo"/>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:  </os>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:  <features>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:    <acpi/>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:    <apic/>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:    <vmcoreinfo/>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:  </features>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:  <clock offset="utc">
Oct 10 23:49:29 np0005480824 nova_compute[260089]:    <timer name="pit" tickpolicy="delay"/>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:    <timer name="hpet" present="no"/>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:  </clock>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:  <cpu mode="host-model" match="exact">
Oct 10 23:49:29 np0005480824 nova_compute[260089]:    <topology sockets="1" cores="1" threads="1"/>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:  </cpu>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:  <devices>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:    <disk type="network" device="disk">
Oct 10 23:49:29 np0005480824 nova_compute[260089]:      <driver type="raw" cache="none"/>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:      <source protocol="rbd" name="vms/266aeb27-7f54-4255-9018-0b6092629b80_disk">
Oct 10 23:49:29 np0005480824 nova_compute[260089]:        <host name="192.168.122.100" port="6789"/>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:      </source>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:      <auth username="openstack">
Oct 10 23:49:29 np0005480824 nova_compute[260089]:        <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:      </auth>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:      <target dev="vda" bus="virtio"/>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:    </disk>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:    <disk type="network" device="cdrom">
Oct 10 23:49:29 np0005480824 nova_compute[260089]:      <driver type="raw" cache="none"/>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:      <source protocol="rbd" name="vms/266aeb27-7f54-4255-9018-0b6092629b80_disk.config">
Oct 10 23:49:29 np0005480824 nova_compute[260089]:        <host name="192.168.122.100" port="6789"/>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:      </source>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:      <auth username="openstack">
Oct 10 23:49:29 np0005480824 nova_compute[260089]:        <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:      </auth>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:      <target dev="sda" bus="sata"/>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:    </disk>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:    <interface type="ethernet">
Oct 10 23:49:29 np0005480824 nova_compute[260089]:      <mac address="fa:16:3e:3f:63:b3"/>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:      <model type="virtio"/>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:      <driver name="vhost" rx_queue_size="512"/>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:      <mtu size="1442"/>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:      <target dev="tapc6b37039-6a"/>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:    </interface>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:    <serial type="pty">
Oct 10 23:49:29 np0005480824 nova_compute[260089]:      <log file="/var/lib/nova/instances/266aeb27-7f54-4255-9018-0b6092629b80/console.log" append="off"/>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:    </serial>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:    <video>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:      <model type="virtio"/>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:    </video>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:    <input type="tablet" bus="usb"/>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:    <rng model="virtio">
Oct 10 23:49:29 np0005480824 nova_compute[260089]:      <backend model="random">/dev/urandom</backend>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:    </rng>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root"/>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:    <controller type="usb" index="0"/>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:    <memballoon model="virtio">
Oct 10 23:49:29 np0005480824 nova_compute[260089]:      <stats period="10"/>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:    </memballoon>
Oct 10 23:49:29 np0005480824 nova_compute[260089]:  </devices>
Oct 10 23:49:29 np0005480824 nova_compute[260089]: </domain>
Oct 10 23:49:29 np0005480824 nova_compute[260089]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 10 23:49:29 np0005480824 podman[273868]: 2025-10-11 03:49:29.016070899 +0000 UTC m=+0.115790512 container remove 3c08f972ba81e50bb26ecd489faa944bdd6579b70a9daf906ba16e3991d9d3ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_cartwright, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:49:29 np0005480824 nova_compute[260089]: 2025-10-11 03:49:29.016 2 DEBUG nova.compute.manager [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Preparing to wait for external event network-vif-plugged-c6b37039-6a92-4786-8d2b-febe3f3e7716 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 10 23:49:29 np0005480824 nova_compute[260089]: 2025-10-11 03:49:29.017 2 DEBUG oslo_concurrency.lockutils [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Acquiring lock "266aeb27-7f54-4255-9018-0b6092629b80-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:49:29 np0005480824 nova_compute[260089]: 2025-10-11 03:49:29.017 2 DEBUG oslo_concurrency.lockutils [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Lock "266aeb27-7f54-4255-9018-0b6092629b80-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:49:29 np0005480824 nova_compute[260089]: 2025-10-11 03:49:29.018 2 DEBUG oslo_concurrency.lockutils [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Lock "266aeb27-7f54-4255-9018-0b6092629b80-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:49:29 np0005480824 nova_compute[260089]: 2025-10-11 03:49:29.019 2 DEBUG nova.virt.libvirt.vif [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T03:49:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-280413383',display_name='tempest-VolumesBackupsTest-instance-280413383',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-280413383',id=6,image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBhgy0EIWp65igZG9TVEs9YGw1G6ZyMcaN2BBBudrmzwVtnKHdxyFsbmVQjmSOPEVBwShmZWx6lQroFGYQCanPMH+jPWV8YBMG0D0qW4UPxkpvOP9Msp1Asd/gfDFNTmzw==',key_name='tempest-keypair-1381245901',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='69ce475b5af645b7b89607f7ecc196d5',ramdisk_id='',reservation_id='r-or7yx0nk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-1570005285',owner_user_name='tempest-VolumesBackupsTest-1570005285-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T03:49:25Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='0dd21dcc2e2e4870bd3a6eb5146bc451',uuid=266aeb27-7f54-4255-9018-0b6092629b80,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c6b37039-6a92-4786-8d2b-febe3f3e7716", "address": "fa:16:3e:3f:63:b3", "network": {"id": "53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-198655629-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "69ce475b5af645b7b89607f7ecc196d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6b37039-6a", "ovs_interfaceid": "c6b37039-6a92-4786-8d2b-febe3f3e7716", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 10 23:49:29 np0005480824 nova_compute[260089]: 2025-10-11 03:49:29.020 2 DEBUG nova.network.os_vif_util [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Converting VIF {"id": "c6b37039-6a92-4786-8d2b-febe3f3e7716", "address": "fa:16:3e:3f:63:b3", "network": {"id": "53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-198655629-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "69ce475b5af645b7b89607f7ecc196d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6b37039-6a", "ovs_interfaceid": "c6b37039-6a92-4786-8d2b-febe3f3e7716", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 23:49:29 np0005480824 nova_compute[260089]: 2025-10-11 03:49:29.021 2 DEBUG nova.network.os_vif_util [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3f:63:b3,bridge_name='br-int',has_traffic_filtering=True,id=c6b37039-6a92-4786-8d2b-febe3f3e7716,network=Network(53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6b37039-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 23:49:29 np0005480824 nova_compute[260089]: 2025-10-11 03:49:29.022 2 DEBUG os_vif [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:63:b3,bridge_name='br-int',has_traffic_filtering=True,id=c6b37039-6a92-4786-8d2b-febe3f3e7716,network=Network(53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6b37039-6a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 10 23:49:29 np0005480824 nova_compute[260089]: 2025-10-11 03:49:29.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:49:29 np0005480824 nova_compute[260089]: 2025-10-11 03:49:29.024 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:49:29 np0005480824 nova_compute[260089]: 2025-10-11 03:49:29.024 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 10 23:49:29 np0005480824 systemd[1]: libpod-conmon-3c08f972ba81e50bb26ecd489faa944bdd6579b70a9daf906ba16e3991d9d3ac.scope: Deactivated successfully.
Oct 10 23:49:29 np0005480824 nova_compute[260089]: 2025-10-11 03:49:29.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:49:29 np0005480824 nova_compute[260089]: 2025-10-11 03:49:29.031 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc6b37039-6a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:49:29 np0005480824 nova_compute[260089]: 2025-10-11 03:49:29.032 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc6b37039-6a, col_values=(('external_ids', {'iface-id': 'c6b37039-6a92-4786-8d2b-febe3f3e7716', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3f:63:b3', 'vm-uuid': '266aeb27-7f54-4255-9018-0b6092629b80'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:49:29 np0005480824 NetworkManager[44969]: <info>  [1760154569.0360] manager: (tapc6b37039-6a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/49)
Oct 10 23:49:29 np0005480824 nova_compute[260089]: 2025-10-11 03:49:29.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 10 23:49:29 np0005480824 nova_compute[260089]: 2025-10-11 03:49:29.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:49:29 np0005480824 nova_compute[260089]: 2025-10-11 03:49:29.046 2 INFO os_vif [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:63:b3,bridge_name='br-int',has_traffic_filtering=True,id=c6b37039-6a92-4786-8d2b-febe3f3e7716,network=Network(53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6b37039-6a')#033[00m
Oct 10 23:49:29 np0005480824 nova_compute[260089]: 2025-10-11 03:49:29.117 2 DEBUG nova.virt.libvirt.driver [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 23:49:29 np0005480824 nova_compute[260089]: 2025-10-11 03:49:29.119 2 DEBUG nova.virt.libvirt.driver [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 23:49:29 np0005480824 nova_compute[260089]: 2025-10-11 03:49:29.119 2 DEBUG nova.virt.libvirt.driver [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] No VIF found with MAC fa:16:3e:3f:63:b3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 10 23:49:29 np0005480824 nova_compute[260089]: 2025-10-11 03:49:29.120 2 INFO nova.virt.libvirt.driver [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Using config drive#033[00m
Oct 10 23:49:29 np0005480824 nova_compute[260089]: 2025-10-11 03:49:29.154 2 DEBUG nova.storage.rbd_utils [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] rbd image 266aeb27-7f54-4255-9018-0b6092629b80_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:49:29 np0005480824 nova_compute[260089]: 2025-10-11 03:49:29.308 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:49:29 np0005480824 nova_compute[260089]: 2025-10-11 03:49:29.309 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:49:29 np0005480824 nova_compute[260089]: 2025-10-11 03:49:29.309 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 10 23:49:29 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:49:29 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e183 do_prune osdmap full prune enabled
Oct 10 23:49:29 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e184 e184: 3 total, 3 up, 3 in
Oct 10 23:49:29 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e184: 3 total, 3 up, 3 in
Oct 10 23:49:29 np0005480824 nova_compute[260089]: 2025-10-11 03:49:29.666 2 INFO nova.virt.libvirt.driver [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Creating config drive at /var/lib/nova/instances/266aeb27-7f54-4255-9018-0b6092629b80/disk.config#033[00m
Oct 10 23:49:29 np0005480824 nova_compute[260089]: 2025-10-11 03:49:29.682 2 DEBUG oslo_concurrency.processutils [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/266aeb27-7f54-4255-9018-0b6092629b80/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5ltulnju execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:49:29 np0005480824 nova_compute[260089]: 2025-10-11 03:49:29.825 2 DEBUG oslo_concurrency.processutils [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/266aeb27-7f54-4255-9018-0b6092629b80/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5ltulnju" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:49:29 np0005480824 nova_compute[260089]: 2025-10-11 03:49:29.849 2 DEBUG nova.storage.rbd_utils [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] rbd image 266aeb27-7f54-4255-9018-0b6092629b80_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:49:29 np0005480824 nova_compute[260089]: 2025-10-11 03:49:29.853 2 DEBUG oslo_concurrency.processutils [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/266aeb27-7f54-4255-9018-0b6092629b80/disk.config 266aeb27-7f54-4255-9018-0b6092629b80_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:49:29 np0005480824 podman[274050]: 2025-10-11 03:49:29.856540874 +0000 UTC m=+0.070630468 container create dafa04d5f7e4eece02b5465be0ce70db1a6edffdaf4fb363e967b5d4a52aa483 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 10 23:49:29 np0005480824 podman[274050]: 2025-10-11 03:49:29.817530284 +0000 UTC m=+0.031619968 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:49:29 np0005480824 systemd[1]: Started libpod-conmon-dafa04d5f7e4eece02b5465be0ce70db1a6edffdaf4fb363e967b5d4a52aa483.scope.
Oct 10 23:49:29 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:49:29 np0005480824 podman[274050]: 2025-10-11 03:49:29.970302097 +0000 UTC m=+0.184391771 container init dafa04d5f7e4eece02b5465be0ce70db1a6edffdaf4fb363e967b5d4a52aa483 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_albattani, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 10 23:49:29 np0005480824 podman[274050]: 2025-10-11 03:49:29.984174174 +0000 UTC m=+0.198263808 container start dafa04d5f7e4eece02b5465be0ce70db1a6edffdaf4fb363e967b5d4a52aa483 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef)
Oct 10 23:49:29 np0005480824 podman[274050]: 2025-10-11 03:49:29.987863931 +0000 UTC m=+0.201953625 container attach dafa04d5f7e4eece02b5465be0ce70db1a6edffdaf4fb363e967b5d4a52aa483 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_albattani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 10 23:49:29 np0005480824 systemd[1]: libpod-dafa04d5f7e4eece02b5465be0ce70db1a6edffdaf4fb363e967b5d4a52aa483.scope: Deactivated successfully.
Oct 10 23:49:29 np0005480824 busy_albattani[274085]: 167 167
Oct 10 23:49:30 np0005480824 nova_compute[260089]: 2025-10-11 03:49:30.052 2 DEBUG oslo_concurrency.processutils [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/266aeb27-7f54-4255-9018-0b6092629b80/disk.config 266aeb27-7f54-4255-9018-0b6092629b80_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.199s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:49:30 np0005480824 nova_compute[260089]: 2025-10-11 03:49:30.054 2 INFO nova.virt.libvirt.driver [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Deleting local config drive /var/lib/nova/instances/266aeb27-7f54-4255-9018-0b6092629b80/disk.config because it was imported into RBD.#033[00m
Oct 10 23:49:30 np0005480824 podman[274106]: 2025-10-11 03:49:30.065107282 +0000 UTC m=+0.052293864 container died dafa04d5f7e4eece02b5465be0ce70db1a6edffdaf4fb363e967b5d4a52aa483 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_albattani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:49:30 np0005480824 systemd[1]: var-lib-containers-storage-overlay-56d671687aedf1b48e587a09225a3a4f52f85fd7f92ebd02dfe503e60315ff69-merged.mount: Deactivated successfully.
Oct 10 23:49:30 np0005480824 podman[274106]: 2025-10-11 03:49:30.110871692 +0000 UTC m=+0.098058214 container remove dafa04d5f7e4eece02b5465be0ce70db1a6edffdaf4fb363e967b5d4a52aa483 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_albattani, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 10 23:49:30 np0005480824 systemd[1]: libpod-conmon-dafa04d5f7e4eece02b5465be0ce70db1a6edffdaf4fb363e967b5d4a52aa483.scope: Deactivated successfully.
Oct 10 23:49:30 np0005480824 kernel: tapc6b37039-6a: entered promiscuous mode
Oct 10 23:49:30 np0005480824 NetworkManager[44969]: <info>  [1760154570.1379] manager: (tapc6b37039-6a): new Tun device (/org/freedesktop/NetworkManager/Devices/50)
Oct 10 23:49:30 np0005480824 ovn_controller[152667]: 2025-10-11T03:49:30Z|00069|binding|INFO|Claiming lport c6b37039-6a92-4786-8d2b-febe3f3e7716 for this chassis.
Oct 10 23:49:30 np0005480824 ovn_controller[152667]: 2025-10-11T03:49:30Z|00070|binding|INFO|c6b37039-6a92-4786-8d2b-febe3f3e7716: Claiming fa:16:3e:3f:63:b3 10.100.0.11
Oct 10 23:49:30 np0005480824 nova_compute[260089]: 2025-10-11 03:49:30.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:49:30 np0005480824 ovn_controller[152667]: 2025-10-11T03:49:30Z|00071|binding|INFO|Setting lport c6b37039-6a92-4786-8d2b-febe3f3e7716 ovn-installed in OVS
Oct 10 23:49:30 np0005480824 nova_compute[260089]: 2025-10-11 03:49:30.166 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:49:30 np0005480824 nova_compute[260089]: 2025-10-11 03:49:30.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:49:30 np0005480824 systemd-machined[215071]: New machine qemu-6-instance-00000006.
Oct 10 23:49:30 np0005480824 systemd[1]: Started Virtual Machine qemu-6-instance-00000006.
Oct 10 23:49:30 np0005480824 systemd-udevd[274138]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 23:49:30 np0005480824 ovn_controller[152667]: 2025-10-11T03:49:30Z|00072|binding|INFO|Setting lport c6b37039-6a92-4786-8d2b-febe3f3e7716 up in Southbound
Oct 10 23:49:30 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:30.226 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:63:b3 10.100.0.11'], port_security=['fa:16:3e:3f:63:b3 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '266aeb27-7f54-4255-9018-0b6092629b80', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '69ce475b5af645b7b89607f7ecc196d5', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f6b945b3-d5d1-471a-9062-b88150248abb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f8dd8adb-2052-443b-8fa5-01e320e55d02, chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], logical_port=c6b37039-6a92-4786-8d2b-febe3f3e7716) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 23:49:30 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:30.228 162245 INFO neutron.agent.ovn.metadata.agent [-] Port c6b37039-6a92-4786-8d2b-febe3f3e7716 in datapath 53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0 bound to our chassis#033[00m
Oct 10 23:49:30 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:30.229 162245 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0#033[00m
Oct 10 23:49:30 np0005480824 NetworkManager[44969]: <info>  [1760154570.2355] device (tapc6b37039-6a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 10 23:49:30 np0005480824 NetworkManager[44969]: <info>  [1760154570.2369] device (tapc6b37039-6a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 10 23:49:30 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:30.244 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[707c9987-65aa-454a-99a2-ca92bd67831f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:49:30 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:30.246 162245 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap53e5ffdf-11 in ovnmeta-53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 10 23:49:30 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:30.249 267859 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap53e5ffdf-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 10 23:49:30 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:30.249 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[4668298c-4076-4e69-90eb-ec0a641d2452]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:49:30 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:30.251 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[d7b37df4-9373-47b4-98b9-f137f4b315e6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:49:30 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:30.268 162666 DEBUG oslo.privsep.daemon [-] privsep: reply[4016a7e0-0062-43f3-9ca9-c43f665c24d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:49:30 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:30.287 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[6599f28f-ee0e-4eaa-835e-efca8434fc6d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:49:30 np0005480824 nova_compute[260089]: 2025-10-11 03:49:30.297 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:49:30 np0005480824 nova_compute[260089]: 2025-10-11 03:49:30.298 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 10 23:49:30 np0005480824 nova_compute[260089]: 2025-10-11 03:49:30.298 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 10 23:49:30 np0005480824 nova_compute[260089]: 2025-10-11 03:49:30.315 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct 10 23:49:30 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:30.332 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[98003867-99df-4334-94ad-54f29b615c38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:49:30 np0005480824 NetworkManager[44969]: <info>  [1760154570.3417] manager: (tap53e5ffdf-10): new Veth device (/org/freedesktop/NetworkManager/Devices/51)
Oct 10 23:49:30 np0005480824 systemd-udevd[274147]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 23:49:30 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:30.340 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[b5d42919-9ba2-42ac-9fae-2ce6225c892f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:49:30 np0005480824 podman[274154]: 2025-10-11 03:49:30.347896413 +0000 UTC m=+0.064152794 container create e32d972c861a93e5eee60870a994f0503851a8474fb6ff893b3cb7c58d509578 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_hertz, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 10 23:49:30 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:30.397 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[d66b2b6a-e219-4bb6-800b-b8b527d39778]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:49:30 np0005480824 systemd[1]: Started libpod-conmon-e32d972c861a93e5eee60870a994f0503851a8474fb6ff893b3cb7c58d509578.scope.
Oct 10 23:49:30 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:30.402 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[2692a973-87fa-4d46-b50a-85945e682248]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:49:30 np0005480824 podman[274154]: 2025-10-11 03:49:30.323182001 +0000 UTC m=+0.039438422 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:49:30 np0005480824 NetworkManager[44969]: <info>  [1760154570.4396] device (tap53e5ffdf-10): carrier: link connected
Oct 10 23:49:30 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:49:30 np0005480824 nova_compute[260089]: 2025-10-11 03:49:30.441 2 DEBUG nova.compute.manager [req-c01dfaaa-560f-4f3d-b109-c3e6d0392fdc req-a62562e0-88c8-4d5b-a185-f5640ca617b5 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Received event network-vif-plugged-c6b37039-6a92-4786-8d2b-febe3f3e7716 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:49:30 np0005480824 nova_compute[260089]: 2025-10-11 03:49:30.442 2 DEBUG oslo_concurrency.lockutils [req-c01dfaaa-560f-4f3d-b109-c3e6d0392fdc req-a62562e0-88c8-4d5b-a185-f5640ca617b5 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "266aeb27-7f54-4255-9018-0b6092629b80-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:49:30 np0005480824 nova_compute[260089]: 2025-10-11 03:49:30.443 2 DEBUG oslo_concurrency.lockutils [req-c01dfaaa-560f-4f3d-b109-c3e6d0392fdc req-a62562e0-88c8-4d5b-a185-f5640ca617b5 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "266aeb27-7f54-4255-9018-0b6092629b80-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:49:30 np0005480824 nova_compute[260089]: 2025-10-11 03:49:30.443 2 DEBUG oslo_concurrency.lockutils [req-c01dfaaa-560f-4f3d-b109-c3e6d0392fdc req-a62562e0-88c8-4d5b-a185-f5640ca617b5 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "266aeb27-7f54-4255-9018-0b6092629b80-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:49:30 np0005480824 nova_compute[260089]: 2025-10-11 03:49:30.443 2 DEBUG nova.compute.manager [req-c01dfaaa-560f-4f3d-b109-c3e6d0392fdc req-a62562e0-88c8-4d5b-a185-f5640ca617b5 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Processing event network-vif-plugged-c6b37039-6a92-4786-8d2b-febe3f3e7716 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 10 23:49:30 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a060d81342b8267f6a600aed159f5913bb19221ff9ee02cf1185e7c9e25963c3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:49:30 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a060d81342b8267f6a600aed159f5913bb19221ff9ee02cf1185e7c9e25963c3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:49:30 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a060d81342b8267f6a600aed159f5913bb19221ff9ee02cf1185e7c9e25963c3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:49:30 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a060d81342b8267f6a600aed159f5913bb19221ff9ee02cf1185e7c9e25963c3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:49:30 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:30.452 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[eac3f15e-fc3b-40ee-8e4e-abad72e21fbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:49:30 np0005480824 podman[274154]: 2025-10-11 03:49:30.463380127 +0000 UTC m=+0.179636568 container init e32d972c861a93e5eee60870a994f0503851a8474fb6ff893b3cb7c58d509578 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 10 23:49:30 np0005480824 podman[274154]: 2025-10-11 03:49:30.472463991 +0000 UTC m=+0.188720382 container start e32d972c861a93e5eee60870a994f0503851a8474fb6ff893b3cb7c58d509578 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 10 23:49:30 np0005480824 podman[274154]: 2025-10-11 03:49:30.476341643 +0000 UTC m=+0.192598034 container attach e32d972c861a93e5eee60870a994f0503851a8474fb6ff893b3cb7c58d509578 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_hertz, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True)
Oct 10 23:49:30 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:30.482 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[57949dc3-a894-4965-917e-8cc06735c988]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap53e5ffdf-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:e0:43'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 403810, 'reachable_time': 34512, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274195, 'error': None, 'target': 'ovnmeta-53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:49:30 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:30.516 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[9e4feca2-4008-4f2c-83f5-13fbc32c566d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe01:e043'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 403810, 'tstamp': 403810}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 274198, 'error': None, 'target': 'ovnmeta-53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:49:30 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:30.543 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[c7566175-9a4a-4b64-abff-aaf170ef0966]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap53e5ffdf-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:e0:43'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 403810, 'reachable_time': 34512, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 274199, 'error': None, 'target': 'ovnmeta-53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:49:30 np0005480824 nova_compute[260089]: 2025-10-11 03:49:30.556 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "refresh_cache-c44627c6-7bd8-4e1a-b32f-a79f70a179c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:49:30 np0005480824 nova_compute[260089]: 2025-10-11 03:49:30.557 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquired lock "refresh_cache-c44627c6-7bd8-4e1a-b32f-a79f70a179c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:49:30 np0005480824 nova_compute[260089]: 2025-10-11 03:49:30.557 2 DEBUG nova.network.neutron [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 10 23:49:30 np0005480824 nova_compute[260089]: 2025-10-11 03:49:30.557 2 DEBUG nova.objects.instance [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c44627c6-7bd8-4e1a-b32f-a79f70a179c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:49:30 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:30.596 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[d3846996-760f-4795-a309-0a4e020c51e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:49:30 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1076: 321 pgs: 321 active+clean; 213 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 214 KiB/s rd, 5.2 MiB/s wr, 287 op/s
Oct 10 23:49:30 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:30.703 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[e6820985-d4f9-44a4-9ff3-3f06159994bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:49:30 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:30.705 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap53e5ffdf-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:49:30 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:30.706 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 10 23:49:30 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:30.707 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap53e5ffdf-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:49:30 np0005480824 nova_compute[260089]: 2025-10-11 03:49:30.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:49:30 np0005480824 kernel: tap53e5ffdf-10: entered promiscuous mode
Oct 10 23:49:30 np0005480824 NetworkManager[44969]: <info>  [1760154570.7106] manager: (tap53e5ffdf-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Oct 10 23:49:30 np0005480824 nova_compute[260089]: 2025-10-11 03:49:30.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:49:30 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:30.717 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap53e5ffdf-10, col_values=(('external_ids', {'iface-id': 'e3d8cf16-8a21-4a19-8fd9-2779fca0c5ae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:49:30 np0005480824 nova_compute[260089]: 2025-10-11 03:49:30.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:49:30 np0005480824 ovn_controller[152667]: 2025-10-11T03:49:30Z|00073|binding|INFO|Releasing lport e3d8cf16-8a21-4a19-8fd9-2779fca0c5ae from this chassis (sb_readonly=0)
Oct 10 23:49:30 np0005480824 nova_compute[260089]: 2025-10-11 03:49:30.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:49:30 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:30.725 162245 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 10 23:49:30 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:30.727 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[9e7da6f2-479e-40f0-b8ed-4965636d603e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:49:30 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:30.729 162245 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 10 23:49:30 np0005480824 ovn_metadata_agent[162240]: global
Oct 10 23:49:30 np0005480824 ovn_metadata_agent[162240]:    log         /dev/log local0 debug
Oct 10 23:49:30 np0005480824 ovn_metadata_agent[162240]:    log-tag     haproxy-metadata-proxy-53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0
Oct 10 23:49:30 np0005480824 ovn_metadata_agent[162240]:    user        root
Oct 10 23:49:30 np0005480824 ovn_metadata_agent[162240]:    group       root
Oct 10 23:49:30 np0005480824 ovn_metadata_agent[162240]:    maxconn     1024
Oct 10 23:49:30 np0005480824 ovn_metadata_agent[162240]:    pidfile     /var/lib/neutron/external/pids/53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0.pid.haproxy
Oct 10 23:49:30 np0005480824 ovn_metadata_agent[162240]:    daemon
Oct 10 23:49:30 np0005480824 ovn_metadata_agent[162240]: 
Oct 10 23:49:30 np0005480824 ovn_metadata_agent[162240]: defaults
Oct 10 23:49:30 np0005480824 ovn_metadata_agent[162240]:    log global
Oct 10 23:49:30 np0005480824 ovn_metadata_agent[162240]:    mode http
Oct 10 23:49:30 np0005480824 ovn_metadata_agent[162240]:    option httplog
Oct 10 23:49:30 np0005480824 ovn_metadata_agent[162240]:    option dontlognull
Oct 10 23:49:30 np0005480824 ovn_metadata_agent[162240]:    option http-server-close
Oct 10 23:49:30 np0005480824 ovn_metadata_agent[162240]:    option forwardfor
Oct 10 23:49:30 np0005480824 ovn_metadata_agent[162240]:    retries                 3
Oct 10 23:49:30 np0005480824 ovn_metadata_agent[162240]:    timeout http-request    30s
Oct 10 23:49:30 np0005480824 ovn_metadata_agent[162240]:    timeout connect         30s
Oct 10 23:49:30 np0005480824 ovn_metadata_agent[162240]:    timeout client          32s
Oct 10 23:49:30 np0005480824 ovn_metadata_agent[162240]:    timeout server          32s
Oct 10 23:49:30 np0005480824 ovn_metadata_agent[162240]:    timeout http-keep-alive 30s
Oct 10 23:49:30 np0005480824 ovn_metadata_agent[162240]: 
Oct 10 23:49:30 np0005480824 ovn_metadata_agent[162240]: 
Oct 10 23:49:30 np0005480824 ovn_metadata_agent[162240]: listen listener
Oct 10 23:49:30 np0005480824 ovn_metadata_agent[162240]:    bind 169.254.169.254:80
Oct 10 23:49:30 np0005480824 ovn_metadata_agent[162240]:    server metadata /var/lib/neutron/metadata_proxy
Oct 10 23:49:30 np0005480824 ovn_metadata_agent[162240]:    http-request add-header X-OVN-Network-ID 53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0
Oct 10 23:49:30 np0005480824 ovn_metadata_agent[162240]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 10 23:49:30 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:30.730 162245 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0', 'env', 'PROCESS_TAG=haproxy-53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 10 23:49:30 np0005480824 nova_compute[260089]: 2025-10-11 03:49:30.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:49:31 np0005480824 podman[274267]: 2025-10-11 03:49:31.113241526 +0000 UTC m=+0.047737337 container create babbf34917722bf9c57fdaedd908f1a9c22b9251eb32c834e64fe492b310f708 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 10 23:49:31 np0005480824 systemd[1]: Started libpod-conmon-babbf34917722bf9c57fdaedd908f1a9c22b9251eb32c834e64fe492b310f708.scope.
Oct 10 23:49:31 np0005480824 podman[274267]: 2025-10-11 03:49:31.087653562 +0000 UTC m=+0.022149393 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 10 23:49:31 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:49:31 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c404156d618cba5d99ae7e5a5e52f23269730a7c1567e786ddf7484fbab665d6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 10 23:49:31 np0005480824 podman[274267]: 2025-10-11 03:49:31.235402387 +0000 UTC m=+0.169898218 container init babbf34917722bf9c57fdaedd908f1a9c22b9251eb32c834e64fe492b310f708 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 10 23:49:31 np0005480824 podman[274267]: 2025-10-11 03:49:31.242459323 +0000 UTC m=+0.176955134 container start babbf34917722bf9c57fdaedd908f1a9c22b9251eb32c834e64fe492b310f708 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:49:31 np0005480824 neutron-haproxy-ovnmeta-53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0[274289]: [NOTICE]   (274297) : New worker (274301) forked
Oct 10 23:49:31 np0005480824 neutron-haproxy-ovnmeta-53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0[274289]: [NOTICE]   (274297) : Loading success.
Oct 10 23:49:31 np0005480824 mystifying_hertz[274190]: {
Oct 10 23:49:31 np0005480824 mystifying_hertz[274190]:    "1d0d82ce-20ea-470d-959e-f67202028a60": {
Oct 10 23:49:31 np0005480824 mystifying_hertz[274190]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:49:31 np0005480824 mystifying_hertz[274190]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 10 23:49:31 np0005480824 mystifying_hertz[274190]:        "osd_id": 0,
Oct 10 23:49:31 np0005480824 mystifying_hertz[274190]:        "osd_uuid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:49:31 np0005480824 mystifying_hertz[274190]:        "type": "bluestore"
Oct 10 23:49:31 np0005480824 mystifying_hertz[274190]:    },
Oct 10 23:49:31 np0005480824 mystifying_hertz[274190]:    "6875119e-c210-4ad1-aca9-6a8084a5ecc8": {
Oct 10 23:49:31 np0005480824 mystifying_hertz[274190]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:49:31 np0005480824 mystifying_hertz[274190]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 10 23:49:31 np0005480824 mystifying_hertz[274190]:        "osd_id": 1,
Oct 10 23:49:31 np0005480824 mystifying_hertz[274190]:        "osd_uuid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:49:31 np0005480824 mystifying_hertz[274190]:        "type": "bluestore"
Oct 10 23:49:31 np0005480824 mystifying_hertz[274190]:    },
Oct 10 23:49:31 np0005480824 mystifying_hertz[274190]:    "e86945e8-6909-4584-9098-cee0dfe9add4": {
Oct 10 23:49:31 np0005480824 mystifying_hertz[274190]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:49:31 np0005480824 mystifying_hertz[274190]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 10 23:49:31 np0005480824 mystifying_hertz[274190]:        "osd_id": 2,
Oct 10 23:49:31 np0005480824 mystifying_hertz[274190]:        "osd_uuid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:49:31 np0005480824 mystifying_hertz[274190]:        "type": "bluestore"
Oct 10 23:49:31 np0005480824 mystifying_hertz[274190]:    }
Oct 10 23:49:31 np0005480824 mystifying_hertz[274190]: }
Oct 10 23:49:31 np0005480824 systemd[1]: libpod-e32d972c861a93e5eee60870a994f0503851a8474fb6ff893b3cb7c58d509578.scope: Deactivated successfully.
Oct 10 23:49:31 np0005480824 systemd[1]: libpod-e32d972c861a93e5eee60870a994f0503851a8474fb6ff893b3cb7c58d509578.scope: Consumed 1.058s CPU time.
Oct 10 23:49:31 np0005480824 podman[274154]: 2025-10-11 03:49:31.546580338 +0000 UTC m=+1.262836759 container died e32d972c861a93e5eee60870a994f0503851a8474fb6ff893b3cb7c58d509578 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:49:31 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e184 do_prune osdmap full prune enabled
Oct 10 23:49:31 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e185 e185: 3 total, 3 up, 3 in
Oct 10 23:49:31 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e185: 3 total, 3 up, 3 in
Oct 10 23:49:31 np0005480824 systemd[1]: var-lib-containers-storage-overlay-a060d81342b8267f6a600aed159f5913bb19221ff9ee02cf1185e7c9e25963c3-merged.mount: Deactivated successfully.
Oct 10 23:49:31 np0005480824 podman[274154]: 2025-10-11 03:49:31.632303159 +0000 UTC m=+1.348559540 container remove e32d972c861a93e5eee60870a994f0503851a8474fb6ff893b3cb7c58d509578 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_hertz, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:49:31 np0005480824 systemd[1]: libpod-conmon-e32d972c861a93e5eee60870a994f0503851a8474fb6ff893b3cb7c58d509578.scope: Deactivated successfully.
Oct 10 23:49:31 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 10 23:49:31 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:49:31 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 10 23:49:31 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:49:31 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 9be0f574-5c3d-4638-995d-e0ecb6f3bd36 does not exist
Oct 10 23:49:31 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev c96f52b9-1530-474d-8cff-0528b5311c36 does not exist
Oct 10 23:49:31 np0005480824 nova_compute[260089]: 2025-10-11 03:49:31.720 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760154571.7199628, 266aeb27-7f54-4255-9018-0b6092629b80 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:49:31 np0005480824 nova_compute[260089]: 2025-10-11 03:49:31.721 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] VM Started (Lifecycle Event)#033[00m
Oct 10 23:49:31 np0005480824 nova_compute[260089]: 2025-10-11 03:49:31.724 2 DEBUG nova.compute.manager [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 10 23:49:31 np0005480824 nova_compute[260089]: 2025-10-11 03:49:31.730 2 DEBUG nova.virt.libvirt.driver [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 10 23:49:31 np0005480824 nova_compute[260089]: 2025-10-11 03:49:31.735 2 INFO nova.virt.libvirt.driver [-] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Instance spawned successfully.#033[00m
Oct 10 23:49:31 np0005480824 nova_compute[260089]: 2025-10-11 03:49:31.736 2 DEBUG nova.virt.libvirt.driver [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 10 23:49:31 np0005480824 nova_compute[260089]: 2025-10-11 03:49:31.739 2 DEBUG nova.network.neutron [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Updating instance_info_cache with network_info: [{"id": "55e4f905-eb1b-4b14-ab56-c88a38fe3b3d", "address": "fa:16:3e:ba:24:f3", "network": {"id": "ea784d9f-5fea-4b2f-8a0a-4232f32d0fff", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1590358830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "633027d5948949cdb842dbb20e321e57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55e4f905-eb", "ovs_interfaceid": "55e4f905-eb1b-4b14-ab56-c88a38fe3b3d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:49:31 np0005480824 nova_compute[260089]: 2025-10-11 03:49:31.745 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:49:31 np0005480824 nova_compute[260089]: 2025-10-11 03:49:31.750 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 10 23:49:31 np0005480824 nova_compute[260089]: 2025-10-11 03:49:31.762 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Releasing lock "refresh_cache-c44627c6-7bd8-4e1a-b32f-a79f70a179c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:49:31 np0005480824 nova_compute[260089]: 2025-10-11 03:49:31.763 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 10 23:49:31 np0005480824 nova_compute[260089]: 2025-10-11 03:49:31.766 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:49:31 np0005480824 nova_compute[260089]: 2025-10-11 03:49:31.767 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:49:31 np0005480824 nova_compute[260089]: 2025-10-11 03:49:31.773 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:49:31 np0005480824 nova_compute[260089]: 2025-10-11 03:49:31.773 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 10 23:49:31 np0005480824 nova_compute[260089]: 2025-10-11 03:49:31.774 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760154571.7236156, 266aeb27-7f54-4255-9018-0b6092629b80 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:49:31 np0005480824 nova_compute[260089]: 2025-10-11 03:49:31.774 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] VM Paused (Lifecycle Event)#033[00m
Oct 10 23:49:31 np0005480824 nova_compute[260089]: 2025-10-11 03:49:31.778 2 DEBUG nova.virt.libvirt.driver [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:49:31 np0005480824 nova_compute[260089]: 2025-10-11 03:49:31.779 2 DEBUG nova.virt.libvirt.driver [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:49:31 np0005480824 nova_compute[260089]: 2025-10-11 03:49:31.779 2 DEBUG nova.virt.libvirt.driver [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:49:31 np0005480824 nova_compute[260089]: 2025-10-11 03:49:31.779 2 DEBUG nova.virt.libvirt.driver [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:49:31 np0005480824 nova_compute[260089]: 2025-10-11 03:49:31.780 2 DEBUG nova.virt.libvirt.driver [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:49:31 np0005480824 nova_compute[260089]: 2025-10-11 03:49:31.780 2 DEBUG nova.virt.libvirt.driver [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:49:31 np0005480824 nova_compute[260089]: 2025-10-11 03:49:31.839 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:49:31 np0005480824 nova_compute[260089]: 2025-10-11 03:49:31.845 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760154571.7277462, 266aeb27-7f54-4255-9018-0b6092629b80 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:49:31 np0005480824 nova_compute[260089]: 2025-10-11 03:49:31.846 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] VM Resumed (Lifecycle Event)#033[00m
Oct 10 23:49:31 np0005480824 nova_compute[260089]: 2025-10-11 03:49:31.869 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:49:31 np0005480824 nova_compute[260089]: 2025-10-11 03:49:31.870 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:49:31 np0005480824 nova_compute[260089]: 2025-10-11 03:49:31.871 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:49:31 np0005480824 nova_compute[260089]: 2025-10-11 03:49:31.871 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 10 23:49:31 np0005480824 nova_compute[260089]: 2025-10-11 03:49:31.871 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:49:31 np0005480824 nova_compute[260089]: 2025-10-11 03:49:31.905 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:49:31 np0005480824 nova_compute[260089]: 2025-10-11 03:49:31.911 2 INFO nova.compute.manager [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Took 6.76 seconds to spawn the instance on the hypervisor.#033[00m
Oct 10 23:49:31 np0005480824 nova_compute[260089]: 2025-10-11 03:49:31.912 2 DEBUG nova.compute.manager [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:49:31 np0005480824 nova_compute[260089]: 2025-10-11 03:49:31.916 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 10 23:49:31 np0005480824 nova_compute[260089]: 2025-10-11 03:49:31.951 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 10 23:49:31 np0005480824 nova_compute[260089]: 2025-10-11 03:49:31.984 2 INFO nova.compute.manager [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Took 7.78 seconds to build instance.#033[00m
Oct 10 23:49:32 np0005480824 nova_compute[260089]: 2025-10-11 03:49:32.009 2 DEBUG oslo_concurrency.lockutils [None req-f11e5ede-a2d7-4ec5-b077-b87da229cfc8 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Lock "266aeb27-7f54-4255-9018-0b6092629b80" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.868s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:49:32 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:49:32 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/626599395' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:49:32 np0005480824 nova_compute[260089]: 2025-10-11 03:49:32.368 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:49:32 np0005480824 nova_compute[260089]: 2025-10-11 03:49:32.442 2 DEBUG nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 10 23:49:32 np0005480824 nova_compute[260089]: 2025-10-11 03:49:32.442 2 DEBUG nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 10 23:49:32 np0005480824 nova_compute[260089]: 2025-10-11 03:49:32.443 2 DEBUG nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 10 23:49:32 np0005480824 nova_compute[260089]: 2025-10-11 03:49:32.446 2 DEBUG nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 10 23:49:32 np0005480824 nova_compute[260089]: 2025-10-11 03:49:32.447 2 DEBUG nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 10 23:49:32 np0005480824 nova_compute[260089]: 2025-10-11 03:49:32.505 2 DEBUG nova.compute.manager [req-5b3adef8-381f-4d13-92fe-f9bcd64f4c9b req-1198c2f4-c098-4f6b-89ca-33ec69861ae2 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Received event network-vif-plugged-c6b37039-6a92-4786-8d2b-febe3f3e7716 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:49:32 np0005480824 nova_compute[260089]: 2025-10-11 03:49:32.506 2 DEBUG oslo_concurrency.lockutils [req-5b3adef8-381f-4d13-92fe-f9bcd64f4c9b req-1198c2f4-c098-4f6b-89ca-33ec69861ae2 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "266aeb27-7f54-4255-9018-0b6092629b80-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:49:32 np0005480824 nova_compute[260089]: 2025-10-11 03:49:32.506 2 DEBUG oslo_concurrency.lockutils [req-5b3adef8-381f-4d13-92fe-f9bcd64f4c9b req-1198c2f4-c098-4f6b-89ca-33ec69861ae2 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "266aeb27-7f54-4255-9018-0b6092629b80-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:49:32 np0005480824 nova_compute[260089]: 2025-10-11 03:49:32.506 2 DEBUG oslo_concurrency.lockutils [req-5b3adef8-381f-4d13-92fe-f9bcd64f4c9b req-1198c2f4-c098-4f6b-89ca-33ec69861ae2 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "266aeb27-7f54-4255-9018-0b6092629b80-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:49:32 np0005480824 nova_compute[260089]: 2025-10-11 03:49:32.506 2 DEBUG nova.compute.manager [req-5b3adef8-381f-4d13-92fe-f9bcd64f4c9b req-1198c2f4-c098-4f6b-89ca-33ec69861ae2 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] No waiting events found dispatching network-vif-plugged-c6b37039-6a92-4786-8d2b-febe3f3e7716 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 10 23:49:32 np0005480824 nova_compute[260089]: 2025-10-11 03:49:32.507 2 WARNING nova.compute.manager [req-5b3adef8-381f-4d13-92fe-f9bcd64f4c9b req-1198c2f4-c098-4f6b-89ca-33ec69861ae2 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Received unexpected event network-vif-plugged-c6b37039-6a92-4786-8d2b-febe3f3e7716 for instance with vm_state active and task_state None.#033[00m
Oct 10 23:49:32 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:49:32 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:49:32 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1078: 321 pgs: 2 active+clean+snaptrim, 2 active+clean+snaptrim_wait, 317 active+clean; 213 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 357 KiB/s rd, 647 KiB/s wr, 170 op/s
Oct 10 23:49:32 np0005480824 nova_compute[260089]: 2025-10-11 03:49:32.646 2 WARNING nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 10 23:49:32 np0005480824 nova_compute[260089]: 2025-10-11 03:49:32.647 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4369MB free_disk=59.92194747924805GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 10 23:49:32 np0005480824 nova_compute[260089]: 2025-10-11 03:49:32.647 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:49:32 np0005480824 nova_compute[260089]: 2025-10-11 03:49:32.647 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:49:32 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e185 do_prune osdmap full prune enabled
Oct 10 23:49:32 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e186 e186: 3 total, 3 up, 3 in
Oct 10 23:49:32 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e186: 3 total, 3 up, 3 in
Oct 10 23:49:32 np0005480824 nova_compute[260089]: 2025-10-11 03:49:32.890 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Instance c44627c6-7bd8-4e1a-b32f-a79f70a179c7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 10 23:49:32 np0005480824 nova_compute[260089]: 2025-10-11 03:49:32.890 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Instance 266aeb27-7f54-4255-9018-0b6092629b80 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 10 23:49:32 np0005480824 nova_compute[260089]: 2025-10-11 03:49:32.891 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 10 23:49:32 np0005480824 nova_compute[260089]: 2025-10-11 03:49:32.891 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 10 23:49:33 np0005480824 nova_compute[260089]: 2025-10-11 03:49:33.044 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:49:33 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:49:33 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/286658240' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:49:33 np0005480824 nova_compute[260089]: 2025-10-11 03:49:33.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:49:33 np0005480824 nova_compute[260089]: 2025-10-11 03:49:33.515 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:49:33 np0005480824 nova_compute[260089]: 2025-10-11 03:49:33.520 2 DEBUG nova.compute.provider_tree [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 23:49:33 np0005480824 nova_compute[260089]: 2025-10-11 03:49:33.536 2 DEBUG nova.scheduler.client.report [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 23:49:33 np0005480824 nova_compute[260089]: 2025-10-11 03:49:33.561 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 10 23:49:33 np0005480824 nova_compute[260089]: 2025-10-11 03:49:33.562 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.914s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:49:33 np0005480824 nova_compute[260089]: 2025-10-11 03:49:33.562 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:49:33 np0005480824 nova_compute[260089]: 2025-10-11 03:49:33.563 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct 10 23:49:33 np0005480824 nova_compute[260089]: 2025-10-11 03:49:33.704 2 DEBUG oslo_concurrency.lockutils [None req-9c1a1b50-2a0d-4cb8-8ad5-54089e027cc7 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Acquiring lock "c44627c6-7bd8-4e1a-b32f-a79f70a179c7" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:49:33 np0005480824 nova_compute[260089]: 2025-10-11 03:49:33.705 2 DEBUG oslo_concurrency.lockutils [None req-9c1a1b50-2a0d-4cb8-8ad5-54089e027cc7 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Lock "c44627c6-7bd8-4e1a-b32f-a79f70a179c7" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:49:33 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e186 do_prune osdmap full prune enabled
Oct 10 23:49:33 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e187 e187: 3 total, 3 up, 3 in
Oct 10 23:49:33 np0005480824 nova_compute[260089]: 2025-10-11 03:49:33.718 2 INFO nova.compute.manager [None req-9c1a1b50-2a0d-4cb8-8ad5-54089e027cc7 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Detaching volume baf2a9e9-b0d1-4c7c-8981-330d1e617a3e#033[00m
Oct 10 23:49:33 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e187: 3 total, 3 up, 3 in
Oct 10 23:49:33 np0005480824 nova_compute[260089]: 2025-10-11 03:49:33.838 2 DEBUG oslo_concurrency.lockutils [None req-7b4396c5-50ca-489c-a273-5f358c94b378 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Acquiring lock "c44627c6-7bd8-4e1a-b32f-a79f70a179c7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:49:33 np0005480824 nova_compute[260089]: 2025-10-11 03:49:33.867 2 INFO nova.virt.block_device [None req-9c1a1b50-2a0d-4cb8-8ad5-54089e027cc7 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Attempting to driver detach volume baf2a9e9-b0d1-4c7c-8981-330d1e617a3e from mountpoint /dev/vdb#033[00m
Oct 10 23:49:33 np0005480824 nova_compute[260089]: 2025-10-11 03:49:33.877 2 DEBUG nova.virt.libvirt.driver [None req-9c1a1b50-2a0d-4cb8-8ad5-54089e027cc7 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Attempting to detach device vdb from instance c44627c6-7bd8-4e1a-b32f-a79f70a179c7 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct 10 23:49:33 np0005480824 nova_compute[260089]: 2025-10-11 03:49:33.878 2 DEBUG nova.virt.libvirt.guest [None req-9c1a1b50-2a0d-4cb8-8ad5-54089e027cc7 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] detach device xml: <disk type="network" device="disk">
Oct 10 23:49:33 np0005480824 nova_compute[260089]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 10 23:49:33 np0005480824 nova_compute[260089]:  <source protocol="rbd" name="volumes/volume-baf2a9e9-b0d1-4c7c-8981-330d1e617a3e">
Oct 10 23:49:33 np0005480824 nova_compute[260089]:    <host name="192.168.122.100" port="6789"/>
Oct 10 23:49:33 np0005480824 nova_compute[260089]:  </source>
Oct 10 23:49:33 np0005480824 nova_compute[260089]:  <target dev="vdb" bus="virtio"/>
Oct 10 23:49:33 np0005480824 nova_compute[260089]:  <serial>baf2a9e9-b0d1-4c7c-8981-330d1e617a3e</serial>
Oct 10 23:49:33 np0005480824 nova_compute[260089]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 10 23:49:33 np0005480824 nova_compute[260089]: </disk>
Oct 10 23:49:33 np0005480824 nova_compute[260089]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct 10 23:49:33 np0005480824 nova_compute[260089]: 2025-10-11 03:49:33.886 2 INFO nova.virt.libvirt.driver [None req-9c1a1b50-2a0d-4cb8-8ad5-54089e027cc7 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Successfully detached device vdb from instance c44627c6-7bd8-4e1a-b32f-a79f70a179c7 from the persistent domain config.#033[00m
Oct 10 23:49:33 np0005480824 nova_compute[260089]: 2025-10-11 03:49:33.886 2 DEBUG nova.virt.libvirt.driver [None req-9c1a1b50-2a0d-4cb8-8ad5-54089e027cc7 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance c44627c6-7bd8-4e1a-b32f-a79f70a179c7 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Oct 10 23:49:33 np0005480824 nova_compute[260089]: 2025-10-11 03:49:33.887 2 DEBUG nova.virt.libvirt.guest [None req-9c1a1b50-2a0d-4cb8-8ad5-54089e027cc7 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] detach device xml: <disk type="network" device="disk">
Oct 10 23:49:33 np0005480824 nova_compute[260089]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 10 23:49:33 np0005480824 nova_compute[260089]:  <source protocol="rbd" name="volumes/volume-baf2a9e9-b0d1-4c7c-8981-330d1e617a3e">
Oct 10 23:49:33 np0005480824 nova_compute[260089]:    <host name="192.168.122.100" port="6789"/>
Oct 10 23:49:33 np0005480824 nova_compute[260089]:  </source>
Oct 10 23:49:33 np0005480824 nova_compute[260089]:  <target dev="vdb" bus="virtio"/>
Oct 10 23:49:33 np0005480824 nova_compute[260089]:  <serial>baf2a9e9-b0d1-4c7c-8981-330d1e617a3e</serial>
Oct 10 23:49:33 np0005480824 nova_compute[260089]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 10 23:49:33 np0005480824 nova_compute[260089]: </disk>
Oct 10 23:49:33 np0005480824 nova_compute[260089]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct 10 23:49:34 np0005480824 nova_compute[260089]: 2025-10-11 03:49:33.999 2 DEBUG nova.virt.libvirt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Received event <DeviceRemovedEvent: 1760154573.9990876, c44627c6-7bd8-4e1a-b32f-a79f70a179c7 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Oct 10 23:49:34 np0005480824 nova_compute[260089]: 2025-10-11 03:49:34.001 2 DEBUG nova.virt.libvirt.driver [None req-9c1a1b50-2a0d-4cb8-8ad5-54089e027cc7 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance c44627c6-7bd8-4e1a-b32f-a79f70a179c7 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Oct 10 23:49:34 np0005480824 nova_compute[260089]: 2025-10-11 03:49:34.003 2 INFO nova.virt.libvirt.driver [None req-9c1a1b50-2a0d-4cb8-8ad5-54089e027cc7 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Successfully detached device vdb from instance c44627c6-7bd8-4e1a-b32f-a79f70a179c7 from the live domain config.#033[00m
Oct 10 23:49:34 np0005480824 nova_compute[260089]: 2025-10-11 03:49:34.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:49:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:49:34 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/773210546' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:49:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:49:34 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/773210546' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:49:34 np0005480824 nova_compute[260089]: 2025-10-11 03:49:34.222 2 DEBUG nova.objects.instance [None req-9c1a1b50-2a0d-4cb8-8ad5-54089e027cc7 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Lazy-loading 'flavor' on Instance uuid c44627c6-7bd8-4e1a-b32f-a79f70a179c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:49:34 np0005480824 nova_compute[260089]: 2025-10-11 03:49:34.264 2 DEBUG oslo_concurrency.lockutils [None req-9c1a1b50-2a0d-4cb8-8ad5-54089e027cc7 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Lock "c44627c6-7bd8-4e1a-b32f-a79f70a179c7" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.559s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:49:34 np0005480824 nova_compute[260089]: 2025-10-11 03:49:34.265 2 DEBUG oslo_concurrency.lockutils [None req-7b4396c5-50ca-489c-a273-5f358c94b378 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Lock "c44627c6-7bd8-4e1a-b32f-a79f70a179c7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.426s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:49:34 np0005480824 nova_compute[260089]: 2025-10-11 03:49:34.265 2 DEBUG oslo_concurrency.lockutils [None req-7b4396c5-50ca-489c-a273-5f358c94b378 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Acquiring lock "c44627c6-7bd8-4e1a-b32f-a79f70a179c7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:49:34 np0005480824 nova_compute[260089]: 2025-10-11 03:49:34.265 2 DEBUG oslo_concurrency.lockutils [None req-7b4396c5-50ca-489c-a273-5f358c94b378 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Lock "c44627c6-7bd8-4e1a-b32f-a79f70a179c7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:49:34 np0005480824 nova_compute[260089]: 2025-10-11 03:49:34.266 2 DEBUG oslo_concurrency.lockutils [None req-7b4396c5-50ca-489c-a273-5f358c94b378 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Lock "c44627c6-7bd8-4e1a-b32f-a79f70a179c7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:49:34 np0005480824 nova_compute[260089]: 2025-10-11 03:49:34.267 2 INFO nova.compute.manager [None req-7b4396c5-50ca-489c-a273-5f358c94b378 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Terminating instance#033[00m
Oct 10 23:49:34 np0005480824 nova_compute[260089]: 2025-10-11 03:49:34.268 2 DEBUG nova.compute.manager [None req-7b4396c5-50ca-489c-a273-5f358c94b378 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 10 23:49:34 np0005480824 kernel: tap55e4f905-eb (unregistering): left promiscuous mode
Oct 10 23:49:34 np0005480824 NetworkManager[44969]: <info>  [1760154574.3328] device (tap55e4f905-eb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 10 23:49:34 np0005480824 ovn_controller[152667]: 2025-10-11T03:49:34Z|00074|binding|INFO|Releasing lport 55e4f905-eb1b-4b14-ab56-c88a38fe3b3d from this chassis (sb_readonly=0)
Oct 10 23:49:34 np0005480824 ovn_controller[152667]: 2025-10-11T03:49:34Z|00075|binding|INFO|Setting lport 55e4f905-eb1b-4b14-ab56-c88a38fe3b3d down in Southbound
Oct 10 23:49:34 np0005480824 ovn_controller[152667]: 2025-10-11T03:49:34Z|00076|binding|INFO|Removing iface tap55e4f905-eb ovn-installed in OVS
Oct 10 23:49:34 np0005480824 nova_compute[260089]: 2025-10-11 03:49:34.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:49:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:34.358 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:24:f3 10.100.0.14'], port_security=['fa:16:3e:ba:24:f3 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'c44627c6-7bd8-4e1a-b32f-a79f70a179c7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ea784d9f-5fea-4b2f-8a0a-4232f32d0fff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '633027d5948949cdb842dbb20e321e57', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5471dc17-cb49-4ef7-8622-745d4a93a7ff', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.178'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8373a8b6-48b7-4c53-8c59-c606fca3db1d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], logical_port=55e4f905-eb1b-4b14-ab56-c88a38fe3b3d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 23:49:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:34.359 162245 INFO neutron.agent.ovn.metadata.agent [-] Port 55e4f905-eb1b-4b14-ab56-c88a38fe3b3d in datapath ea784d9f-5fea-4b2f-8a0a-4232f32d0fff unbound from our chassis#033[00m
Oct 10 23:49:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:34.360 162245 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ea784d9f-5fea-4b2f-8a0a-4232f32d0fff, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 10 23:49:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:34.362 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[e7e40695-b8f7-4ead-92b8-25f6a9dfd720]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:49:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:34.364 162245 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ea784d9f-5fea-4b2f-8a0a-4232f32d0fff namespace which is not needed anymore#033[00m
Oct 10 23:49:34 np0005480824 nova_compute[260089]: 2025-10-11 03:49:34.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:49:34 np0005480824 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Deactivated successfully.
Oct 10 23:49:34 np0005480824 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Consumed 15.449s CPU time.
Oct 10 23:49:34 np0005480824 systemd-machined[215071]: Machine qemu-5-instance-00000005 terminated.
Oct 10 23:49:34 np0005480824 podman[274441]: 2025-10-11 03:49:34.440392975 +0000 UTC m=+0.071857296 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Oct 10 23:49:34 np0005480824 neutron-haproxy-ovnmeta-ea784d9f-5fea-4b2f-8a0a-4232f32d0fff[272870]: [NOTICE]   (272874) : haproxy version is 2.8.14-c23fe91
Oct 10 23:49:34 np0005480824 neutron-haproxy-ovnmeta-ea784d9f-5fea-4b2f-8a0a-4232f32d0fff[272870]: [NOTICE]   (272874) : path to executable is /usr/sbin/haproxy
Oct 10 23:49:34 np0005480824 neutron-haproxy-ovnmeta-ea784d9f-5fea-4b2f-8a0a-4232f32d0fff[272870]: [WARNING]  (272874) : Exiting Master process...
Oct 10 23:49:34 np0005480824 neutron-haproxy-ovnmeta-ea784d9f-5fea-4b2f-8a0a-4232f32d0fff[272870]: [WARNING]  (272874) : Exiting Master process...
Oct 10 23:49:34 np0005480824 neutron-haproxy-ovnmeta-ea784d9f-5fea-4b2f-8a0a-4232f32d0fff[272870]: [ALERT]    (272874) : Current worker (272876) exited with code 143 (Terminated)
Oct 10 23:49:34 np0005480824 neutron-haproxy-ovnmeta-ea784d9f-5fea-4b2f-8a0a-4232f32d0fff[272870]: [WARNING]  (272874) : All workers exited. Exiting... (0)
Oct 10 23:49:34 np0005480824 nova_compute[260089]: 2025-10-11 03:49:34.510 2 INFO nova.virt.libvirt.driver [-] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Instance destroyed successfully.#033[00m
Oct 10 23:49:34 np0005480824 nova_compute[260089]: 2025-10-11 03:49:34.511 2 DEBUG nova.objects.instance [None req-7b4396c5-50ca-489c-a273-5f358c94b378 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Lazy-loading 'resources' on Instance uuid c44627c6-7bd8-4e1a-b32f-a79f70a179c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:49:34 np0005480824 systemd[1]: libpod-1b52ff80222ba0400e3b1c1b6eed85363af90dd1dae8e9e1b97acce16fb2444d.scope: Deactivated successfully.
Oct 10 23:49:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:49:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e187 do_prune osdmap full prune enabled
Oct 10 23:49:34 np0005480824 podman[274481]: 2025-10-11 03:49:34.516521621 +0000 UTC m=+0.047752877 container died 1b52ff80222ba0400e3b1c1b6eed85363af90dd1dae8e9e1b97acce16fb2444d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ea784d9f-5fea-4b2f-8a0a-4232f32d0fff, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 10 23:49:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e188 e188: 3 total, 3 up, 3 in
Oct 10 23:49:34 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e188: 3 total, 3 up, 3 in
Oct 10 23:49:34 np0005480824 nova_compute[260089]: 2025-10-11 03:49:34.526 2 DEBUG nova.virt.libvirt.vif [None req-7b4396c5-50ca-489c-a273-5f358c94b378 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T03:48:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-1672646305',display_name='tempest-VolumesSnapshotTestJSON-instance-1672646305',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-1672646305',id=5,image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGjSEbonl6tZjtw1C/AABmPqvkeq5PpV9hTO7gHpXSefMJGvfTYI4QKUmM4JngFk81DlCC3Tw4aEvHRSSap1ox2HtHhGxo+WU8LAjNe6fep/hQC/OtxUQtC8mrtkIIwvag==',key_name='tempest-keypair-2023164038',keypairs=<?>,launch_index=0,launched_at=2025-10-11T03:49:00Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='633027d5948949cdb842dbb20e321e57',ramdisk_id='',reservation_id='r-gfwbfpiz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesSnapshotTestJSON-62208921',owner_user_name='tempest-VolumesSnapshotTestJSON-62208921-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T03:49:00Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d2bb1c00b7ba4686bb710314548ea5af',uuid=c44627c6-7bd8-4e1a-b32f-a79f70a179c7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "55e4f905-eb1b-4b14-ab56-c88a38fe3b3d", "address": "fa:16:3e:ba:24:f3", "network": {"id": "ea784d9f-5fea-4b2f-8a0a-4232f32d0fff", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1590358830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "633027d5948949cdb842dbb20e321e57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55e4f905-eb", "ovs_interfaceid": "55e4f905-eb1b-4b14-ab56-c88a38fe3b3d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 10 23:49:34 np0005480824 nova_compute[260089]: 2025-10-11 03:49:34.529 2 DEBUG nova.network.os_vif_util [None req-7b4396c5-50ca-489c-a273-5f358c94b378 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Converting VIF {"id": "55e4f905-eb1b-4b14-ab56-c88a38fe3b3d", "address": "fa:16:3e:ba:24:f3", "network": {"id": "ea784d9f-5fea-4b2f-8a0a-4232f32d0fff", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1590358830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "633027d5948949cdb842dbb20e321e57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55e4f905-eb", "ovs_interfaceid": "55e4f905-eb1b-4b14-ab56-c88a38fe3b3d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 23:49:34 np0005480824 nova_compute[260089]: 2025-10-11 03:49:34.530 2 DEBUG nova.network.os_vif_util [None req-7b4396c5-50ca-489c-a273-5f358c94b378 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ba:24:f3,bridge_name='br-int',has_traffic_filtering=True,id=55e4f905-eb1b-4b14-ab56-c88a38fe3b3d,network=Network(ea784d9f-5fea-4b2f-8a0a-4232f32d0fff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55e4f905-eb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 23:49:34 np0005480824 nova_compute[260089]: 2025-10-11 03:49:34.530 2 DEBUG os_vif [None req-7b4396c5-50ca-489c-a273-5f358c94b378 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ba:24:f3,bridge_name='br-int',has_traffic_filtering=True,id=55e4f905-eb1b-4b14-ab56-c88a38fe3b3d,network=Network(ea784d9f-5fea-4b2f-8a0a-4232f32d0fff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55e4f905-eb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 10 23:49:34 np0005480824 nova_compute[260089]: 2025-10-11 03:49:34.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:49:34 np0005480824 nova_compute[260089]: 2025-10-11 03:49:34.533 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap55e4f905-eb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:49:34 np0005480824 nova_compute[260089]: 2025-10-11 03:49:34.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:49:34 np0005480824 nova_compute[260089]: 2025-10-11 03:49:34.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 10 23:49:34 np0005480824 nova_compute[260089]: 2025-10-11 03:49:34.540 2 INFO os_vif [None req-7b4396c5-50ca-489c-a273-5f358c94b378 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ba:24:f3,bridge_name='br-int',has_traffic_filtering=True,id=55e4f905-eb1b-4b14-ab56-c88a38fe3b3d,network=Network(ea784d9f-5fea-4b2f-8a0a-4232f32d0fff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55e4f905-eb')#033[00m
Oct 10 23:49:34 np0005480824 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1b52ff80222ba0400e3b1c1b6eed85363af90dd1dae8e9e1b97acce16fb2444d-userdata-shm.mount: Deactivated successfully.
Oct 10 23:49:34 np0005480824 systemd[1]: var-lib-containers-storage-overlay-8d9ebb4bb005ca34de47d84318748148eda7b0ad425ee23099cc8c280fb9c89b-merged.mount: Deactivated successfully.
Oct 10 23:49:34 np0005480824 podman[274481]: 2025-10-11 03:49:34.5847544 +0000 UTC m=+0.115985666 container cleanup 1b52ff80222ba0400e3b1c1b6eed85363af90dd1dae8e9e1b97acce16fb2444d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ea784d9f-5fea-4b2f-8a0a-4232f32d0fff, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 10 23:49:34 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1082: 321 pgs: 2 active+clean+snaptrim, 2 active+clean+snaptrim_wait, 317 active+clean; 213 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 617 KiB/s rd, 57 KiB/s wr, 248 op/s
Oct 10 23:49:34 np0005480824 systemd[1]: libpod-conmon-1b52ff80222ba0400e3b1c1b6eed85363af90dd1dae8e9e1b97acce16fb2444d.scope: Deactivated successfully.
Oct 10 23:49:34 np0005480824 podman[274539]: 2025-10-11 03:49:34.683673983 +0000 UTC m=+0.061594124 container remove 1b52ff80222ba0400e3b1c1b6eed85363af90dd1dae8e9e1b97acce16fb2444d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ea784d9f-5fea-4b2f-8a0a-4232f32d0fff, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 10 23:49:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:34.695 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[8f691285-11af-4fe2-9cdb-4e86776aca17]: (4, ('Sat Oct 11 03:49:34 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ea784d9f-5fea-4b2f-8a0a-4232f32d0fff (1b52ff80222ba0400e3b1c1b6eed85363af90dd1dae8e9e1b97acce16fb2444d)\n1b52ff80222ba0400e3b1c1b6eed85363af90dd1dae8e9e1b97acce16fb2444d\nSat Oct 11 03:49:34 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ea784d9f-5fea-4b2f-8a0a-4232f32d0fff (1b52ff80222ba0400e3b1c1b6eed85363af90dd1dae8e9e1b97acce16fb2444d)\n1b52ff80222ba0400e3b1c1b6eed85363af90dd1dae8e9e1b97acce16fb2444d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:49:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:34.698 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[1301c6a0-14ab-44e7-ad0f-9645a347fbfa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:49:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:34.699 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapea784d9f-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:49:34 np0005480824 nova_compute[260089]: 2025-10-11 03:49:34.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:49:34 np0005480824 kernel: tapea784d9f-50: left promiscuous mode
Oct 10 23:49:34 np0005480824 nova_compute[260089]: 2025-10-11 03:49:34.731 2 DEBUG nova.compute.manager [req-56162cd2-f06c-49f0-b5cd-0d0a98535516 req-13fafcfd-8d7b-4b2d-95fb-968157a7d404 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Received event network-vif-unplugged-55e4f905-eb1b-4b14-ab56-c88a38fe3b3d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:49:34 np0005480824 nova_compute[260089]: 2025-10-11 03:49:34.731 2 DEBUG oslo_concurrency.lockutils [req-56162cd2-f06c-49f0-b5cd-0d0a98535516 req-13fafcfd-8d7b-4b2d-95fb-968157a7d404 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "c44627c6-7bd8-4e1a-b32f-a79f70a179c7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:49:34 np0005480824 nova_compute[260089]: 2025-10-11 03:49:34.732 2 DEBUG oslo_concurrency.lockutils [req-56162cd2-f06c-49f0-b5cd-0d0a98535516 req-13fafcfd-8d7b-4b2d-95fb-968157a7d404 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "c44627c6-7bd8-4e1a-b32f-a79f70a179c7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:49:34 np0005480824 nova_compute[260089]: 2025-10-11 03:49:34.732 2 DEBUG oslo_concurrency.lockutils [req-56162cd2-f06c-49f0-b5cd-0d0a98535516 req-13fafcfd-8d7b-4b2d-95fb-968157a7d404 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "c44627c6-7bd8-4e1a-b32f-a79f70a179c7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:49:34 np0005480824 nova_compute[260089]: 2025-10-11 03:49:34.732 2 DEBUG nova.compute.manager [req-56162cd2-f06c-49f0-b5cd-0d0a98535516 req-13fafcfd-8d7b-4b2d-95fb-968157a7d404 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] No waiting events found dispatching network-vif-unplugged-55e4f905-eb1b-4b14-ab56-c88a38fe3b3d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 10 23:49:34 np0005480824 nova_compute[260089]: 2025-10-11 03:49:34.733 2 DEBUG nova.compute.manager [req-56162cd2-f06c-49f0-b5cd-0d0a98535516 req-13fafcfd-8d7b-4b2d-95fb-968157a7d404 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Received event network-vif-unplugged-55e4f905-eb1b-4b14-ab56-c88a38fe3b3d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 10 23:49:34 np0005480824 nova_compute[260089]: 2025-10-11 03:49:34.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:49:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:34.744 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[53763ecc-9992-4a13-8aed-7a6f2a02bb03]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:49:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:34.772 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[d506c17d-6030-4928-9a81-16f8ac94f474]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:49:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:34.774 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[8f0f0418-93fc-4f37-9937-4267e46705df]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:49:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:34.795 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[2886b68a-8eca-4b10-8df6-e895be4fac6d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 400611, 'reachable_time': 40613, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274554, 'error': None, 'target': 'ovnmeta-ea784d9f-5fea-4b2f-8a0a-4232f32d0fff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:49:34 np0005480824 systemd[1]: run-netns-ovnmeta\x2dea784d9f\x2d5fea\x2d4b2f\x2d8a0a\x2d4232f32d0fff.mount: Deactivated successfully.
Oct 10 23:49:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:34.801 162666 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ea784d9f-5fea-4b2f-8a0a-4232f32d0fff deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 10 23:49:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:34.801 162666 DEBUG oslo.privsep.daemon [-] privsep: reply[7e71fc7f-d538-4a70-92f0-82e5702ae9cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:49:34 np0005480824 nova_compute[260089]: 2025-10-11 03:49:34.992 2 INFO nova.virt.libvirt.driver [None req-7b4396c5-50ca-489c-a273-5f358c94b378 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Deleting instance files /var/lib/nova/instances/c44627c6-7bd8-4e1a-b32f-a79f70a179c7_del#033[00m
Oct 10 23:49:34 np0005480824 nova_compute[260089]: 2025-10-11 03:49:34.994 2 INFO nova.virt.libvirt.driver [None req-7b4396c5-50ca-489c-a273-5f358c94b378 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Deletion of /var/lib/nova/instances/c44627c6-7bd8-4e1a-b32f-a79f70a179c7_del complete#033[00m
Oct 10 23:49:35 np0005480824 nova_compute[260089]: 2025-10-11 03:49:35.005 2 DEBUG nova.compute.manager [req-08aa82ea-f5f3-42c4-92d4-e20e08ab734c req-8ca85815-7412-441c-9bc9-44d650e70b70 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Received event network-changed-c6b37039-6a92-4786-8d2b-febe3f3e7716 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:49:35 np0005480824 nova_compute[260089]: 2025-10-11 03:49:35.006 2 DEBUG nova.compute.manager [req-08aa82ea-f5f3-42c4-92d4-e20e08ab734c req-8ca85815-7412-441c-9bc9-44d650e70b70 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Refreshing instance network info cache due to event network-changed-c6b37039-6a92-4786-8d2b-febe3f3e7716. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 10 23:49:35 np0005480824 nova_compute[260089]: 2025-10-11 03:49:35.007 2 DEBUG oslo_concurrency.lockutils [req-08aa82ea-f5f3-42c4-92d4-e20e08ab734c req-8ca85815-7412-441c-9bc9-44d650e70b70 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "refresh_cache-266aeb27-7f54-4255-9018-0b6092629b80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:49:35 np0005480824 nova_compute[260089]: 2025-10-11 03:49:35.007 2 DEBUG oslo_concurrency.lockutils [req-08aa82ea-f5f3-42c4-92d4-e20e08ab734c req-8ca85815-7412-441c-9bc9-44d650e70b70 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquired lock "refresh_cache-266aeb27-7f54-4255-9018-0b6092629b80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:49:35 np0005480824 nova_compute[260089]: 2025-10-11 03:49:35.008 2 DEBUG nova.network.neutron [req-08aa82ea-f5f3-42c4-92d4-e20e08ab734c req-8ca85815-7412-441c-9bc9-44d650e70b70 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Refreshing network info cache for port c6b37039-6a92-4786-8d2b-febe3f3e7716 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 10 23:49:35 np0005480824 nova_compute[260089]: 2025-10-11 03:49:35.057 2 INFO nova.compute.manager [None req-7b4396c5-50ca-489c-a273-5f358c94b378 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Took 0.79 seconds to destroy the instance on the hypervisor.#033[00m
Oct 10 23:49:35 np0005480824 nova_compute[260089]: 2025-10-11 03:49:35.057 2 DEBUG oslo.service.loopingcall [None req-7b4396c5-50ca-489c-a273-5f358c94b378 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 10 23:49:35 np0005480824 nova_compute[260089]: 2025-10-11 03:49:35.059 2 DEBUG nova.compute.manager [-] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 10 23:49:35 np0005480824 nova_compute[260089]: 2025-10-11 03:49:35.059 2 DEBUG nova.network.neutron [-] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 10 23:49:35 np0005480824 nova_compute[260089]: 2025-10-11 03:49:35.568 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:49:35 np0005480824 nova_compute[260089]: 2025-10-11 03:49:35.606 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:49:36 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1083: 321 pgs: 2 active+clean+snaptrim, 2 active+clean+snaptrim_wait, 317 active+clean; 213 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 491 KiB/s rd, 45 KiB/s wr, 197 op/s
Oct 10 23:49:36 np0005480824 nova_compute[260089]: 2025-10-11 03:49:36.808 2 DEBUG nova.compute.manager [req-6424c4dd-15b0-4255-a01f-43d865423992 req-4729fa7f-1b36-4f31-bf0e-3c6caaaef82d 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Received event network-vif-plugged-55e4f905-eb1b-4b14-ab56-c88a38fe3b3d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:49:36 np0005480824 nova_compute[260089]: 2025-10-11 03:49:36.808 2 DEBUG oslo_concurrency.lockutils [req-6424c4dd-15b0-4255-a01f-43d865423992 req-4729fa7f-1b36-4f31-bf0e-3c6caaaef82d 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "c44627c6-7bd8-4e1a-b32f-a79f70a179c7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:49:36 np0005480824 nova_compute[260089]: 2025-10-11 03:49:36.809 2 DEBUG oslo_concurrency.lockutils [req-6424c4dd-15b0-4255-a01f-43d865423992 req-4729fa7f-1b36-4f31-bf0e-3c6caaaef82d 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "c44627c6-7bd8-4e1a-b32f-a79f70a179c7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:49:36 np0005480824 nova_compute[260089]: 2025-10-11 03:49:36.810 2 DEBUG oslo_concurrency.lockutils [req-6424c4dd-15b0-4255-a01f-43d865423992 req-4729fa7f-1b36-4f31-bf0e-3c6caaaef82d 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "c44627c6-7bd8-4e1a-b32f-a79f70a179c7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:49:36 np0005480824 nova_compute[260089]: 2025-10-11 03:49:36.810 2 DEBUG nova.compute.manager [req-6424c4dd-15b0-4255-a01f-43d865423992 req-4729fa7f-1b36-4f31-bf0e-3c6caaaef82d 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] No waiting events found dispatching network-vif-plugged-55e4f905-eb1b-4b14-ab56-c88a38fe3b3d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 10 23:49:36 np0005480824 nova_compute[260089]: 2025-10-11 03:49:36.810 2 WARNING nova.compute.manager [req-6424c4dd-15b0-4255-a01f-43d865423992 req-4729fa7f-1b36-4f31-bf0e-3c6caaaef82d 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Received unexpected event network-vif-plugged-55e4f905-eb1b-4b14-ab56-c88a38fe3b3d for instance with vm_state active and task_state deleting.#033[00m
Oct 10 23:49:37 np0005480824 nova_compute[260089]: 2025-10-11 03:49:37.750 2 DEBUG nova.network.neutron [-] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:49:37 np0005480824 nova_compute[260089]: 2025-10-11 03:49:37.780 2 INFO nova.compute.manager [-] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Took 2.72 seconds to deallocate network for instance.#033[00m
Oct 10 23:49:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 23:49:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:49:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 23:49:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:49:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011087675815588981 of space, bias 1.0, pg target 0.33263027446766946 quantized to 32 (current 32)
Oct 10 23:49:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:49:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00034688731116103405 of space, bias 1.0, pg target 0.10406619334831022 quantized to 32 (current 32)
Oct 10 23:49:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:49:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 23:49:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:49:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 10 23:49:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:49:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 23:49:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:49:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:49:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:49:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 10 23:49:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:49:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 23:49:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:49:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:49:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:49:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 10 23:49:37 np0005480824 nova_compute[260089]: 2025-10-11 03:49:37.989 2 DEBUG nova.network.neutron [req-08aa82ea-f5f3-42c4-92d4-e20e08ab734c req-8ca85815-7412-441c-9bc9-44d650e70b70 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Updated VIF entry in instance network info cache for port c6b37039-6a92-4786-8d2b-febe3f3e7716. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 10 23:49:37 np0005480824 nova_compute[260089]: 2025-10-11 03:49:37.990 2 DEBUG nova.network.neutron [req-08aa82ea-f5f3-42c4-92d4-e20e08ab734c req-8ca85815-7412-441c-9bc9-44d650e70b70 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Updating instance_info_cache with network_info: [{"id": "c6b37039-6a92-4786-8d2b-febe3f3e7716", "address": "fa:16:3e:3f:63:b3", "network": {"id": "53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-198655629-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "69ce475b5af645b7b89607f7ecc196d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6b37039-6a", "ovs_interfaceid": "c6b37039-6a92-4786-8d2b-febe3f3e7716", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:49:38 np0005480824 nova_compute[260089]: 2025-10-11 03:49:38.019 2 WARNING nova.volume.cinder [None req-7b4396c5-50ca-489c-a273-5f358c94b378 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Attachment c9d88990-51fd-4de1-aca3-7a985c24aad7 does not exist. Ignoring.: cinderclient.exceptions.NotFound: Volume attachment could not be found with filter: attachment_id = c9d88990-51fd-4de1-aca3-7a985c24aad7. (HTTP 404) (Request-ID: req-3dfce1fd-2e81-42c7-b28a-1e86797e678d)#033[00m
Oct 10 23:49:38 np0005480824 nova_compute[260089]: 2025-10-11 03:49:38.020 2 INFO nova.compute.manager [None req-7b4396c5-50ca-489c-a273-5f358c94b378 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Took 0.24 seconds to detach 1 volumes for instance.#033[00m
Oct 10 23:49:38 np0005480824 nova_compute[260089]: 2025-10-11 03:49:38.028 2 DEBUG oslo_concurrency.lockutils [req-08aa82ea-f5f3-42c4-92d4-e20e08ab734c req-8ca85815-7412-441c-9bc9-44d650e70b70 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Releasing lock "refresh_cache-266aeb27-7f54-4255-9018-0b6092629b80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:49:38 np0005480824 nova_compute[260089]: 2025-10-11 03:49:38.075 2 DEBUG oslo_concurrency.lockutils [None req-7b4396c5-50ca-489c-a273-5f358c94b378 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:49:38 np0005480824 nova_compute[260089]: 2025-10-11 03:49:38.076 2 DEBUG oslo_concurrency.lockutils [None req-7b4396c5-50ca-489c-a273-5f358c94b378 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:49:38 np0005480824 nova_compute[260089]: 2025-10-11 03:49:38.139 2 DEBUG oslo_concurrency.processutils [None req-7b4396c5-50ca-489c-a273-5f358c94b378 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:49:38 np0005480824 nova_compute[260089]: 2025-10-11 03:49:38.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:49:38 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1084: 321 pgs: 321 active+clean; 134 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 6.3 KiB/s wr, 270 op/s
Oct 10 23:49:38 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:49:38 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2493579301' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:49:38 np0005480824 nova_compute[260089]: 2025-10-11 03:49:38.638 2 DEBUG oslo_concurrency.processutils [None req-7b4396c5-50ca-489c-a273-5f358c94b378 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:49:38 np0005480824 nova_compute[260089]: 2025-10-11 03:49:38.645 2 DEBUG nova.compute.provider_tree [None req-7b4396c5-50ca-489c-a273-5f358c94b378 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 23:49:38 np0005480824 nova_compute[260089]: 2025-10-11 03:49:38.660 2 DEBUG nova.scheduler.client.report [None req-7b4396c5-50ca-489c-a273-5f358c94b378 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 23:49:38 np0005480824 nova_compute[260089]: 2025-10-11 03:49:38.675 2 DEBUG oslo_concurrency.lockutils [None req-7b4396c5-50ca-489c-a273-5f358c94b378 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.599s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:49:38 np0005480824 nova_compute[260089]: 2025-10-11 03:49:38.698 2 INFO nova.scheduler.client.report [None req-7b4396c5-50ca-489c-a273-5f358c94b378 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Deleted allocations for instance c44627c6-7bd8-4e1a-b32f-a79f70a179c7#033[00m
Oct 10 23:49:38 np0005480824 nova_compute[260089]: 2025-10-11 03:49:38.763 2 DEBUG oslo_concurrency.lockutils [None req-7b4396c5-50ca-489c-a273-5f358c94b378 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Lock "c44627c6-7bd8-4e1a-b32f-a79f70a179c7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.498s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:49:38 np0005480824 nova_compute[260089]: 2025-10-11 03:49:38.966 2 DEBUG nova.compute.manager [req-63e69fed-e93f-472e-b5ad-06baacfa5527 req-0415daf6-4e9f-47c9-9231-8e1f78239e8f 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Received event network-vif-deleted-55e4f905-eb1b-4b14-ab56-c88a38fe3b3d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:49:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:49:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e188 do_prune osdmap full prune enabled
Oct 10 23:49:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e189 e189: 3 total, 3 up, 3 in
Oct 10 23:49:39 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e189: 3 total, 3 up, 3 in
Oct 10 23:49:39 np0005480824 nova_compute[260089]: 2025-10-11 03:49:39.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:49:40 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1086: 321 pgs: 321 active+clean; 134 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 3.2 KiB/s wr, 184 op/s
Oct 10 23:49:40 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:49:40 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2051570783' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:49:40 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:49:40 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2051570783' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:49:42 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1087: 321 pgs: 321 active+clean; 106 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.1 KiB/s wr, 176 op/s
Oct 10 23:49:42 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e189 do_prune osdmap full prune enabled
Oct 10 23:49:42 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e190 e190: 3 total, 3 up, 3 in
Oct 10 23:49:42 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e190: 3 total, 3 up, 3 in
Oct 10 23:49:43 np0005480824 nova_compute[260089]: 2025-10-11 03:49:43.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:49:43 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e190 do_prune osdmap full prune enabled
Oct 10 23:49:43 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e191 e191: 3 total, 3 up, 3 in
Oct 10 23:49:43 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e191: 3 total, 3 up, 3 in
Oct 10 23:49:44 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:49:44 np0005480824 nova_compute[260089]: 2025-10-11 03:49:44.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:49:44 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1090: 321 pgs: 321 active+clean; 88 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 1.2 KiB/s wr, 37 op/s
Oct 10 23:49:44 np0005480824 ovn_controller[152667]: 2025-10-11T03:49:44Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3f:63:b3 10.100.0.11
Oct 10 23:49:44 np0005480824 ovn_controller[152667]: 2025-10-11T03:49:44Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3f:63:b3 10.100.0.11
Oct 10 23:49:45 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e191 do_prune osdmap full prune enabled
Oct 10 23:49:45 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e192 e192: 3 total, 3 up, 3 in
Oct 10 23:49:45 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e192: 3 total, 3 up, 3 in
Oct 10 23:49:46 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1092: 321 pgs: 321 active+clean; 88 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 1.2 KiB/s wr, 37 op/s
Oct 10 23:49:48 np0005480824 nova_compute[260089]: 2025-10-11 03:49:48.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:49:48 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1093: 321 pgs: 321 active+clean; 121 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 4.3 MiB/s wr, 194 op/s
Oct 10 23:49:49 np0005480824 nova_compute[260089]: 2025-10-11 03:49:49.507 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760154574.5055377, c44627c6-7bd8-4e1a-b32f-a79f70a179c7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:49:49 np0005480824 nova_compute[260089]: 2025-10-11 03:49:49.507 2 INFO nova.compute.manager [-] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] VM Stopped (Lifecycle Event)#033[00m
Oct 10 23:49:49 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:49:49 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e192 do_prune osdmap full prune enabled
Oct 10 23:49:49 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e193 e193: 3 total, 3 up, 3 in
Oct 10 23:49:49 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e193: 3 total, 3 up, 3 in
Oct 10 23:49:49 np0005480824 nova_compute[260089]: 2025-10-11 03:49:49.532 2 DEBUG nova.compute.manager [None req-c30a97fc-5cc9-446b-9cf8-8e477ba3ccb7 - - - - - -] [instance: c44627c6-7bd8-4e1a-b32f-a79f70a179c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:49:49 np0005480824 nova_compute[260089]: 2025-10-11 03:49:49.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:49:50 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1095: 321 pgs: 321 active+clean; 121 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 3.8 MiB/s wr, 160 op/s
Oct 10 23:49:51 np0005480824 podman[274580]: 2025-10-11 03:49:51.054786898 +0000 UTC m=+0.090439985 container health_status f2b19cad22d0fbdd185b264c3c8b6443d09a02319dc9fb0585dc69a0f24a758d (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, container_name=iscsid, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid)
Oct 10 23:49:51 np0005480824 podman[274579]: 2025-10-11 03:49:51.091156395 +0000 UTC m=+0.132590338 container health_status 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 10 23:49:51 np0005480824 nova_compute[260089]: 2025-10-11 03:49:51.122 2 DEBUG oslo_concurrency.lockutils [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Acquiring lock "ade49b15-ded3-459c-b92d-e98380bca4a4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:49:51 np0005480824 nova_compute[260089]: 2025-10-11 03:49:51.123 2 DEBUG oslo_concurrency.lockutils [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Lock "ade49b15-ded3-459c-b92d-e98380bca4a4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:49:51 np0005480824 nova_compute[260089]: 2025-10-11 03:49:51.146 2 DEBUG nova.compute.manager [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 10 23:49:51 np0005480824 nova_compute[260089]: 2025-10-11 03:49:51.238 2 DEBUG oslo_concurrency.lockutils [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:49:51 np0005480824 nova_compute[260089]: 2025-10-11 03:49:51.239 2 DEBUG oslo_concurrency.lockutils [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:49:51 np0005480824 nova_compute[260089]: 2025-10-11 03:49:51.248 2 DEBUG nova.virt.hardware [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 10 23:49:51 np0005480824 nova_compute[260089]: 2025-10-11 03:49:51.248 2 INFO nova.compute.claims [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 10 23:49:51 np0005480824 nova_compute[260089]: 2025-10-11 03:49:51.379 2 DEBUG oslo_concurrency.processutils [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:49:51 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:49:51 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1367143107' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:49:51 np0005480824 nova_compute[260089]: 2025-10-11 03:49:51.808 2 DEBUG oslo_concurrency.processutils [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:49:51 np0005480824 nova_compute[260089]: 2025-10-11 03:49:51.814 2 DEBUG nova.compute.provider_tree [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 23:49:51 np0005480824 nova_compute[260089]: 2025-10-11 03:49:51.841 2 DEBUG nova.scheduler.client.report [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 23:49:51 np0005480824 nova_compute[260089]: 2025-10-11 03:49:51.873 2 DEBUG oslo_concurrency.lockutils [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.634s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:49:51 np0005480824 nova_compute[260089]: 2025-10-11 03:49:51.873 2 DEBUG nova.compute.manager [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 10 23:49:51 np0005480824 nova_compute[260089]: 2025-10-11 03:49:51.915 2 DEBUG nova.compute.manager [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 10 23:49:51 np0005480824 nova_compute[260089]: 2025-10-11 03:49:51.915 2 DEBUG nova.network.neutron [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 10 23:49:51 np0005480824 nova_compute[260089]: 2025-10-11 03:49:51.932 2 INFO nova.virt.libvirt.driver [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 10 23:49:51 np0005480824 nova_compute[260089]: 2025-10-11 03:49:51.952 2 DEBUG nova.compute.manager [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 10 23:49:52 np0005480824 nova_compute[260089]: 2025-10-11 03:49:52.050 2 DEBUG nova.compute.manager [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 10 23:49:52 np0005480824 nova_compute[260089]: 2025-10-11 03:49:52.052 2 DEBUG nova.virt.libvirt.driver [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 10 23:49:52 np0005480824 nova_compute[260089]: 2025-10-11 03:49:52.053 2 INFO nova.virt.libvirt.driver [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Creating image(s)#033[00m
Oct 10 23:49:52 np0005480824 nova_compute[260089]: 2025-10-11 03:49:52.088 2 DEBUG nova.storage.rbd_utils [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] rbd image ade49b15-ded3-459c-b92d-e98380bca4a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:49:52 np0005480824 nova_compute[260089]: 2025-10-11 03:49:52.117 2 DEBUG nova.storage.rbd_utils [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] rbd image ade49b15-ded3-459c-b92d-e98380bca4a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:49:52 np0005480824 nova_compute[260089]: 2025-10-11 03:49:52.143 2 DEBUG nova.storage.rbd_utils [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] rbd image ade49b15-ded3-459c-b92d-e98380bca4a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:49:52 np0005480824 nova_compute[260089]: 2025-10-11 03:49:52.147 2 DEBUG oslo_concurrency.processutils [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cfffd1283a157d100c77a9cb8e3d536b83503a4e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:49:52 np0005480824 nova_compute[260089]: 2025-10-11 03:49:52.224 2 DEBUG oslo_concurrency.processutils [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cfffd1283a157d100c77a9cb8e3d536b83503a4e --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:49:52 np0005480824 nova_compute[260089]: 2025-10-11 03:49:52.226 2 DEBUG oslo_concurrency.lockutils [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Acquiring lock "cfffd1283a157d100c77a9cb8e3d536b83503a4e" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:49:52 np0005480824 nova_compute[260089]: 2025-10-11 03:49:52.227 2 DEBUG oslo_concurrency.lockutils [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Lock "cfffd1283a157d100c77a9cb8e3d536b83503a4e" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:49:52 np0005480824 nova_compute[260089]: 2025-10-11 03:49:52.227 2 DEBUG oslo_concurrency.lockutils [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Lock "cfffd1283a157d100c77a9cb8e3d536b83503a4e" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:49:52 np0005480824 nova_compute[260089]: 2025-10-11 03:49:52.252 2 DEBUG nova.storage.rbd_utils [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] rbd image ade49b15-ded3-459c-b92d-e98380bca4a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:49:52 np0005480824 nova_compute[260089]: 2025-10-11 03:49:52.257 2 DEBUG oslo_concurrency.processutils [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/cfffd1283a157d100c77a9cb8e3d536b83503a4e ade49b15-ded3-459c-b92d-e98380bca4a4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:49:52 np0005480824 nova_compute[260089]: 2025-10-11 03:49:52.298 2 DEBUG nova.policy [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd2bb1c00b7ba4686bb710314548ea5af', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '633027d5948949cdb842dbb20e321e57', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 10 23:49:52 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:49:52 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3656142843' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:49:52 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:49:52 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3656142843' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:49:52 np0005480824 nova_compute[260089]: 2025-10-11 03:49:52.541 2 DEBUG oslo_concurrency.processutils [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/cfffd1283a157d100c77a9cb8e3d536b83503a4e ade49b15-ded3-459c-b92d-e98380bca4a4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.284s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:49:52 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1096: 321 pgs: 321 active+clean; 141 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 5.2 MiB/s wr, 180 op/s
Oct 10 23:49:52 np0005480824 nova_compute[260089]: 2025-10-11 03:49:52.635 2 DEBUG nova.storage.rbd_utils [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] resizing rbd image ade49b15-ded3-459c-b92d-e98380bca4a4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 10 23:49:52 np0005480824 nova_compute[260089]: 2025-10-11 03:49:52.767 2 DEBUG nova.objects.instance [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Lazy-loading 'migration_context' on Instance uuid ade49b15-ded3-459c-b92d-e98380bca4a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:49:52 np0005480824 nova_compute[260089]: 2025-10-11 03:49:52.785 2 DEBUG nova.virt.libvirt.driver [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 10 23:49:52 np0005480824 nova_compute[260089]: 2025-10-11 03:49:52.785 2 DEBUG nova.virt.libvirt.driver [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Ensure instance console log exists: /var/lib/nova/instances/ade49b15-ded3-459c-b92d-e98380bca4a4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 10 23:49:52 np0005480824 nova_compute[260089]: 2025-10-11 03:49:52.786 2 DEBUG oslo_concurrency.lockutils [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:49:52 np0005480824 nova_compute[260089]: 2025-10-11 03:49:52.787 2 DEBUG oslo_concurrency.lockutils [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:49:52 np0005480824 nova_compute[260089]: 2025-10-11 03:49:52.788 2 DEBUG oslo_concurrency.lockutils [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:49:53 np0005480824 nova_compute[260089]: 2025-10-11 03:49:53.108 2 DEBUG nova.network.neutron [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Successfully created port: a8669959-69cb-4e7c-b708-25e90497b585 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 10 23:49:53 np0005480824 nova_compute[260089]: 2025-10-11 03:49:53.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:49:53 np0005480824 nova_compute[260089]: 2025-10-11 03:49:53.914 2 DEBUG nova.network.neutron [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Successfully updated port: a8669959-69cb-4e7c-b708-25e90497b585 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 10 23:49:53 np0005480824 nova_compute[260089]: 2025-10-11 03:49:53.956 2 DEBUG oslo_concurrency.lockutils [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Acquiring lock "refresh_cache-ade49b15-ded3-459c-b92d-e98380bca4a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:49:53 np0005480824 nova_compute[260089]: 2025-10-11 03:49:53.956 2 DEBUG oslo_concurrency.lockutils [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Acquired lock "refresh_cache-ade49b15-ded3-459c-b92d-e98380bca4a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:49:53 np0005480824 nova_compute[260089]: 2025-10-11 03:49:53.957 2 DEBUG nova.network.neutron [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 10 23:49:54 np0005480824 nova_compute[260089]: 2025-10-11 03:49:54.047 2 DEBUG nova.compute.manager [req-c075fdd8-351f-41ff-99b5-f0a884bb789a req-e95a9374-bf44-4d3e-a737-ab66f1a79db4 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Received event network-changed-a8669959-69cb-4e7c-b708-25e90497b585 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:49:54 np0005480824 nova_compute[260089]: 2025-10-11 03:49:54.048 2 DEBUG nova.compute.manager [req-c075fdd8-351f-41ff-99b5-f0a884bb789a req-e95a9374-bf44-4d3e-a737-ab66f1a79db4 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Refreshing instance network info cache due to event network-changed-a8669959-69cb-4e7c-b708-25e90497b585. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 10 23:49:54 np0005480824 nova_compute[260089]: 2025-10-11 03:49:54.048 2 DEBUG oslo_concurrency.lockutils [req-c075fdd8-351f-41ff-99b5-f0a884bb789a req-e95a9374-bf44-4d3e-a737-ab66f1a79db4 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "refresh_cache-ade49b15-ded3-459c-b92d-e98380bca4a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:49:54 np0005480824 nova_compute[260089]: 2025-10-11 03:49:54.303 2 DEBUG nova.network.neutron [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 10 23:49:54 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:49:54 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1097: 321 pgs: 321 active+clean; 156 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 6.7 MiB/s wr, 212 op/s
Oct 10 23:49:54 np0005480824 nova_compute[260089]: 2025-10-11 03:49:54.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:49:55 np0005480824 nova_compute[260089]: 2025-10-11 03:49:55.322 2 DEBUG nova.network.neutron [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Updating instance_info_cache with network_info: [{"id": "a8669959-69cb-4e7c-b708-25e90497b585", "address": "fa:16:3e:23:e8:5f", "network": {"id": "ea784d9f-5fea-4b2f-8a0a-4232f32d0fff", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1590358830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "633027d5948949cdb842dbb20e321e57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8669959-69", "ovs_interfaceid": "a8669959-69cb-4e7c-b708-25e90497b585", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:49:55 np0005480824 nova_compute[260089]: 2025-10-11 03:49:55.342 2 DEBUG oslo_concurrency.lockutils [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Releasing lock "refresh_cache-ade49b15-ded3-459c-b92d-e98380bca4a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:49:55 np0005480824 nova_compute[260089]: 2025-10-11 03:49:55.343 2 DEBUG nova.compute.manager [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Instance network_info: |[{"id": "a8669959-69cb-4e7c-b708-25e90497b585", "address": "fa:16:3e:23:e8:5f", "network": {"id": "ea784d9f-5fea-4b2f-8a0a-4232f32d0fff", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1590358830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "633027d5948949cdb842dbb20e321e57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8669959-69", "ovs_interfaceid": "a8669959-69cb-4e7c-b708-25e90497b585", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 10 23:49:55 np0005480824 nova_compute[260089]: 2025-10-11 03:49:55.343 2 DEBUG oslo_concurrency.lockutils [req-c075fdd8-351f-41ff-99b5-f0a884bb789a req-e95a9374-bf44-4d3e-a737-ab66f1a79db4 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquired lock "refresh_cache-ade49b15-ded3-459c-b92d-e98380bca4a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:49:55 np0005480824 nova_compute[260089]: 2025-10-11 03:49:55.343 2 DEBUG nova.network.neutron [req-c075fdd8-351f-41ff-99b5-f0a884bb789a req-e95a9374-bf44-4d3e-a737-ab66f1a79db4 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Refreshing network info cache for port a8669959-69cb-4e7c-b708-25e90497b585 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 10 23:49:55 np0005480824 nova_compute[260089]: 2025-10-11 03:49:55.346 2 DEBUG nova.virt.libvirt.driver [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Start _get_guest_xml network_info=[{"id": "a8669959-69cb-4e7c-b708-25e90497b585", "address": "fa:16:3e:23:e8:5f", "network": {"id": "ea784d9f-5fea-4b2f-8a0a-4232f32d0fff", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1590358830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "633027d5948949cdb842dbb20e321e57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8669959-69", "ovs_interfaceid": "a8669959-69cb-4e7c-b708-25e90497b585", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T03:44:59Z,direct_url=<?>,disk_format='qcow2',id=7caca022-7dcc-40a9-8bd8-eb7d91b29390,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a9b71164a3274fcfb966194e51cb4849',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T03:45:02Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'device_type': 'disk', 'image_id': '7caca022-7dcc-40a9-8bd8-eb7d91b29390'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 10 23:49:55 np0005480824 nova_compute[260089]: 2025-10-11 03:49:55.351 2 WARNING nova.virt.libvirt.driver [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 10 23:49:55 np0005480824 nova_compute[260089]: 2025-10-11 03:49:55.357 2 DEBUG nova.virt.libvirt.host [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 10 23:49:55 np0005480824 nova_compute[260089]: 2025-10-11 03:49:55.357 2 DEBUG nova.virt.libvirt.host [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 10 23:49:55 np0005480824 nova_compute[260089]: 2025-10-11 03:49:55.364 2 DEBUG nova.virt.libvirt.host [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 10 23:49:55 np0005480824 nova_compute[260089]: 2025-10-11 03:49:55.364 2 DEBUG nova.virt.libvirt.host [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 10 23:49:55 np0005480824 nova_compute[260089]: 2025-10-11 03:49:55.365 2 DEBUG nova.virt.libvirt.driver [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 10 23:49:55 np0005480824 nova_compute[260089]: 2025-10-11 03:49:55.365 2 DEBUG nova.virt.hardware [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T03:44:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6707ecae-2ae2-4c2d-86dc-409bac38f6a5',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T03:44:59Z,direct_url=<?>,disk_format='qcow2',id=7caca022-7dcc-40a9-8bd8-eb7d91b29390,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a9b71164a3274fcfb966194e51cb4849',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T03:45:02Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 10 23:49:55 np0005480824 nova_compute[260089]: 2025-10-11 03:49:55.365 2 DEBUG nova.virt.hardware [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 10 23:49:55 np0005480824 nova_compute[260089]: 2025-10-11 03:49:55.365 2 DEBUG nova.virt.hardware [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 10 23:49:55 np0005480824 nova_compute[260089]: 2025-10-11 03:49:55.366 2 DEBUG nova.virt.hardware [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 10 23:49:55 np0005480824 nova_compute[260089]: 2025-10-11 03:49:55.366 2 DEBUG nova.virt.hardware [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 10 23:49:55 np0005480824 nova_compute[260089]: 2025-10-11 03:49:55.366 2 DEBUG nova.virt.hardware [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 10 23:49:55 np0005480824 nova_compute[260089]: 2025-10-11 03:49:55.366 2 DEBUG nova.virt.hardware [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 10 23:49:55 np0005480824 nova_compute[260089]: 2025-10-11 03:49:55.367 2 DEBUG nova.virt.hardware [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 10 23:49:55 np0005480824 nova_compute[260089]: 2025-10-11 03:49:55.367 2 DEBUG nova.virt.hardware [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 10 23:49:55 np0005480824 nova_compute[260089]: 2025-10-11 03:49:55.367 2 DEBUG nova.virt.hardware [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 10 23:49:55 np0005480824 nova_compute[260089]: 2025-10-11 03:49:55.367 2 DEBUG nova.virt.hardware [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 10 23:49:55 np0005480824 nova_compute[260089]: 2025-10-11 03:49:55.370 2 DEBUG oslo_concurrency.processutils [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:49:55 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:49:55 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3328927524' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:49:55 np0005480824 nova_compute[260089]: 2025-10-11 03:49:55.811 2 DEBUG oslo_concurrency.processutils [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:49:55 np0005480824 nova_compute[260089]: 2025-10-11 03:49:55.843 2 DEBUG nova.storage.rbd_utils [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] rbd image ade49b15-ded3-459c-b92d-e98380bca4a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:49:55 np0005480824 nova_compute[260089]: 2025-10-11 03:49:55.847 2 DEBUG oslo_concurrency.processutils [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:49:56 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:49:56 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/223873533' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:49:56 np0005480824 nova_compute[260089]: 2025-10-11 03:49:56.331 2 DEBUG oslo_concurrency.processutils [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:49:56 np0005480824 nova_compute[260089]: 2025-10-11 03:49:56.335 2 DEBUG nova.virt.libvirt.vif [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T03:49:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-1978752910',display_name='tempest-VolumesSnapshotTestJSON-instance-1978752910',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-1978752910',id=7,image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMU8TCWLsVgu9X2ftp+Ng4IlphsYIWSdZw5JSMp7bjp02XLW3tAVs9W9/OXkfeMr9/+RjE/RYUYyzgUoj2YF/yumt6KiJd52M+1yL9i3IcErJEAiSBWGAJXyrEDA+yRBvw==',key_name='tempest-keypair-1420679357',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='633027d5948949cdb842dbb20e321e57',ramdisk_id='',reservation_id='r-d06lmbi6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesSnapshotTestJSON-62208921',owner_user_name='tempest-VolumesSnapshotTestJSON-62208921-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T03:49:51Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d2bb1c00b7ba4686bb710314548ea5af',uuid=ade49b15-ded3-459c-b92d-e98380bca4a4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a8669959-69cb-4e7c-b708-25e90497b585", "address": "fa:16:3e:23:e8:5f", "network": {"id": "ea784d9f-5fea-4b2f-8a0a-4232f32d0fff", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1590358830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "633027d5948949cdb842dbb20e321e57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8669959-69", "ovs_interfaceid": "a8669959-69cb-4e7c-b708-25e90497b585", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 10 23:49:56 np0005480824 nova_compute[260089]: 2025-10-11 03:49:56.336 2 DEBUG nova.network.os_vif_util [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Converting VIF {"id": "a8669959-69cb-4e7c-b708-25e90497b585", "address": "fa:16:3e:23:e8:5f", "network": {"id": "ea784d9f-5fea-4b2f-8a0a-4232f32d0fff", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1590358830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "633027d5948949cdb842dbb20e321e57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8669959-69", "ovs_interfaceid": "a8669959-69cb-4e7c-b708-25e90497b585", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 23:49:56 np0005480824 nova_compute[260089]: 2025-10-11 03:49:56.337 2 DEBUG nova.network.os_vif_util [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:23:e8:5f,bridge_name='br-int',has_traffic_filtering=True,id=a8669959-69cb-4e7c-b708-25e90497b585,network=Network(ea784d9f-5fea-4b2f-8a0a-4232f32d0fff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8669959-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 23:49:56 np0005480824 nova_compute[260089]: 2025-10-11 03:49:56.339 2 DEBUG nova.objects.instance [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Lazy-loading 'pci_devices' on Instance uuid ade49b15-ded3-459c-b92d-e98380bca4a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:49:56 np0005480824 nova_compute[260089]: 2025-10-11 03:49:56.365 2 DEBUG nova.virt.libvirt.driver [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] End _get_guest_xml xml=<domain type="kvm">
Oct 10 23:49:56 np0005480824 nova_compute[260089]:  <uuid>ade49b15-ded3-459c-b92d-e98380bca4a4</uuid>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:  <name>instance-00000007</name>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:  <memory>131072</memory>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:  <vcpu>1</vcpu>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:  <metadata>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 10 23:49:56 np0005480824 nova_compute[260089]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:      <nova:name>tempest-VolumesSnapshotTestJSON-instance-1978752910</nova:name>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:      <nova:creationTime>2025-10-11 03:49:55</nova:creationTime>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:      <nova:flavor name="m1.nano">
Oct 10 23:49:56 np0005480824 nova_compute[260089]:        <nova:memory>128</nova:memory>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:        <nova:disk>1</nova:disk>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:        <nova:swap>0</nova:swap>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:        <nova:ephemeral>0</nova:ephemeral>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:        <nova:vcpus>1</nova:vcpus>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:      </nova:flavor>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:      <nova:owner>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:        <nova:user uuid="d2bb1c00b7ba4686bb710314548ea5af">tempest-VolumesSnapshotTestJSON-62208921-project-member</nova:user>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:        <nova:project uuid="633027d5948949cdb842dbb20e321e57">tempest-VolumesSnapshotTestJSON-62208921</nova:project>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:      </nova:owner>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:      <nova:root type="image" uuid="7caca022-7dcc-40a9-8bd8-eb7d91b29390"/>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:      <nova:ports>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:        <nova:port uuid="a8669959-69cb-4e7c-b708-25e90497b585">
Oct 10 23:49:56 np0005480824 nova_compute[260089]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:        </nova:port>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:      </nova:ports>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:    </nova:instance>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:  </metadata>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:  <sysinfo type="smbios">
Oct 10 23:49:56 np0005480824 nova_compute[260089]:    <system>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:      <entry name="manufacturer">RDO</entry>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:      <entry name="product">OpenStack Compute</entry>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:      <entry name="serial">ade49b15-ded3-459c-b92d-e98380bca4a4</entry>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:      <entry name="uuid">ade49b15-ded3-459c-b92d-e98380bca4a4</entry>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:      <entry name="family">Virtual Machine</entry>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:    </system>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:  </sysinfo>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:  <os>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:    <boot dev="hd"/>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:    <smbios mode="sysinfo"/>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:  </os>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:  <features>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:    <acpi/>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:    <apic/>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:    <vmcoreinfo/>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:  </features>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:  <clock offset="utc">
Oct 10 23:49:56 np0005480824 nova_compute[260089]:    <timer name="pit" tickpolicy="delay"/>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:    <timer name="hpet" present="no"/>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:  </clock>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:  <cpu mode="host-model" match="exact">
Oct 10 23:49:56 np0005480824 nova_compute[260089]:    <topology sockets="1" cores="1" threads="1"/>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:  </cpu>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:  <devices>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:    <disk type="network" device="disk">
Oct 10 23:49:56 np0005480824 nova_compute[260089]:      <driver type="raw" cache="none"/>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:      <source protocol="rbd" name="vms/ade49b15-ded3-459c-b92d-e98380bca4a4_disk">
Oct 10 23:49:56 np0005480824 nova_compute[260089]:        <host name="192.168.122.100" port="6789"/>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:      </source>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:      <auth username="openstack">
Oct 10 23:49:56 np0005480824 nova_compute[260089]:        <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:      </auth>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:      <target dev="vda" bus="virtio"/>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:    </disk>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:    <disk type="network" device="cdrom">
Oct 10 23:49:56 np0005480824 nova_compute[260089]:      <driver type="raw" cache="none"/>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:      <source protocol="rbd" name="vms/ade49b15-ded3-459c-b92d-e98380bca4a4_disk.config">
Oct 10 23:49:56 np0005480824 nova_compute[260089]:        <host name="192.168.122.100" port="6789"/>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:      </source>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:      <auth username="openstack">
Oct 10 23:49:56 np0005480824 nova_compute[260089]:        <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:      </auth>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:      <target dev="sda" bus="sata"/>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:    </disk>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:    <interface type="ethernet">
Oct 10 23:49:56 np0005480824 nova_compute[260089]:      <mac address="fa:16:3e:23:e8:5f"/>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:      <model type="virtio"/>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:      <driver name="vhost" rx_queue_size="512"/>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:      <mtu size="1442"/>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:      <target dev="tapa8669959-69"/>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:    </interface>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:    <serial type="pty">
Oct 10 23:49:56 np0005480824 nova_compute[260089]:      <log file="/var/lib/nova/instances/ade49b15-ded3-459c-b92d-e98380bca4a4/console.log" append="off"/>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:    </serial>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:    <video>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:      <model type="virtio"/>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:    </video>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:    <input type="tablet" bus="usb"/>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:    <rng model="virtio">
Oct 10 23:49:56 np0005480824 nova_compute[260089]:      <backend model="random">/dev/urandom</backend>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:    </rng>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root"/>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:    <controller type="usb" index="0"/>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:    <memballoon model="virtio">
Oct 10 23:49:56 np0005480824 nova_compute[260089]:      <stats period="10"/>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:    </memballoon>
Oct 10 23:49:56 np0005480824 nova_compute[260089]:  </devices>
Oct 10 23:49:56 np0005480824 nova_compute[260089]: </domain>
Oct 10 23:49:56 np0005480824 nova_compute[260089]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 10 23:49:56 np0005480824 nova_compute[260089]: 2025-10-11 03:49:56.367 2 DEBUG nova.compute.manager [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Preparing to wait for external event network-vif-plugged-a8669959-69cb-4e7c-b708-25e90497b585 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 10 23:49:56 np0005480824 nova_compute[260089]: 2025-10-11 03:49:56.368 2 DEBUG oslo_concurrency.lockutils [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Acquiring lock "ade49b15-ded3-459c-b92d-e98380bca4a4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:49:56 np0005480824 nova_compute[260089]: 2025-10-11 03:49:56.368 2 DEBUG oslo_concurrency.lockutils [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Lock "ade49b15-ded3-459c-b92d-e98380bca4a4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:49:56 np0005480824 nova_compute[260089]: 2025-10-11 03:49:56.368 2 DEBUG oslo_concurrency.lockutils [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Lock "ade49b15-ded3-459c-b92d-e98380bca4a4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:49:56 np0005480824 nova_compute[260089]: 2025-10-11 03:49:56.370 2 DEBUG nova.virt.libvirt.vif [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T03:49:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-1978752910',display_name='tempest-VolumesSnapshotTestJSON-instance-1978752910',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-1978752910',id=7,image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMU8TCWLsVgu9X2ftp+Ng4IlphsYIWSdZw5JSMp7bjp02XLW3tAVs9W9/OXkfeMr9/+RjE/RYUYyzgUoj2YF/yumt6KiJd52M+1yL9i3IcErJEAiSBWGAJXyrEDA+yRBvw==',key_name='tempest-keypair-1420679357',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='633027d5948949cdb842dbb20e321e57',ramdisk_id='',reservation_id='r-d06lmbi6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesSnapshotTestJSON-62208921',owner_user_name='tempest-VolumesSnapshotTestJSON-62208921-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T03:49:51Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d2bb1c00b7ba4686bb710314548ea5af',uuid=ade49b15-ded3-459c-b92d-e98380bca4a4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a8669959-69cb-4e7c-b708-25e90497b585", "address": "fa:16:3e:23:e8:5f", "network": {"id": "ea784d9f-5fea-4b2f-8a0a-4232f32d0fff", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1590358830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "633027d5948949cdb842dbb20e321e57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8669959-69", "ovs_interfaceid": "a8669959-69cb-4e7c-b708-25e90497b585", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 10 23:49:56 np0005480824 nova_compute[260089]: 2025-10-11 03:49:56.370 2 DEBUG nova.network.os_vif_util [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Converting VIF {"id": "a8669959-69cb-4e7c-b708-25e90497b585", "address": "fa:16:3e:23:e8:5f", "network": {"id": "ea784d9f-5fea-4b2f-8a0a-4232f32d0fff", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1590358830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "633027d5948949cdb842dbb20e321e57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8669959-69", "ovs_interfaceid": "a8669959-69cb-4e7c-b708-25e90497b585", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 23:49:56 np0005480824 nova_compute[260089]: 2025-10-11 03:49:56.372 2 DEBUG nova.network.os_vif_util [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:23:e8:5f,bridge_name='br-int',has_traffic_filtering=True,id=a8669959-69cb-4e7c-b708-25e90497b585,network=Network(ea784d9f-5fea-4b2f-8a0a-4232f32d0fff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8669959-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 23:49:56 np0005480824 nova_compute[260089]: 2025-10-11 03:49:56.372 2 DEBUG os_vif [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:e8:5f,bridge_name='br-int',has_traffic_filtering=True,id=a8669959-69cb-4e7c-b708-25e90497b585,network=Network(ea784d9f-5fea-4b2f-8a0a-4232f32d0fff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8669959-69') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 10 23:49:56 np0005480824 nova_compute[260089]: 2025-10-11 03:49:56.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:49:56 np0005480824 nova_compute[260089]: 2025-10-11 03:49:56.374 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:49:56 np0005480824 nova_compute[260089]: 2025-10-11 03:49:56.375 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 10 23:49:56 np0005480824 nova_compute[260089]: 2025-10-11 03:49:56.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:49:56 np0005480824 nova_compute[260089]: 2025-10-11 03:49:56.380 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa8669959-69, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:49:56 np0005480824 nova_compute[260089]: 2025-10-11 03:49:56.384 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa8669959-69, col_values=(('external_ids', {'iface-id': 'a8669959-69cb-4e7c-b708-25e90497b585', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:23:e8:5f', 'vm-uuid': 'ade49b15-ded3-459c-b92d-e98380bca4a4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:49:56 np0005480824 nova_compute[260089]: 2025-10-11 03:49:56.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:49:56 np0005480824 NetworkManager[44969]: <info>  [1760154596.3886] manager: (tapa8669959-69): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/53)
Oct 10 23:49:56 np0005480824 nova_compute[260089]: 2025-10-11 03:49:56.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 10 23:49:56 np0005480824 nova_compute[260089]: 2025-10-11 03:49:56.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:49:56 np0005480824 nova_compute[260089]: 2025-10-11 03:49:56.398 2 INFO os_vif [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:e8:5f,bridge_name='br-int',has_traffic_filtering=True,id=a8669959-69cb-4e7c-b708-25e90497b585,network=Network(ea784d9f-5fea-4b2f-8a0a-4232f32d0fff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8669959-69')#033[00m
Oct 10 23:49:56 np0005480824 nova_compute[260089]: 2025-10-11 03:49:56.474 2 DEBUG nova.virt.libvirt.driver [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 23:49:56 np0005480824 nova_compute[260089]: 2025-10-11 03:49:56.475 2 DEBUG nova.virt.libvirt.driver [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 23:49:56 np0005480824 nova_compute[260089]: 2025-10-11 03:49:56.476 2 DEBUG nova.virt.libvirt.driver [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] No VIF found with MAC fa:16:3e:23:e8:5f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 10 23:49:56 np0005480824 nova_compute[260089]: 2025-10-11 03:49:56.477 2 INFO nova.virt.libvirt.driver [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Using config drive#033[00m
Oct 10 23:49:56 np0005480824 nova_compute[260089]: 2025-10-11 03:49:56.507 2 DEBUG nova.storage.rbd_utils [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] rbd image ade49b15-ded3-459c-b92d-e98380bca4a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:49:56 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1098: 321 pgs: 321 active+clean; 156 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 5.9 MiB/s wr, 186 op/s
Oct 10 23:49:56 np0005480824 nova_compute[260089]: 2025-10-11 03:49:56.805 2 DEBUG nova.network.neutron [req-c075fdd8-351f-41ff-99b5-f0a884bb789a req-e95a9374-bf44-4d3e-a737-ab66f1a79db4 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Updated VIF entry in instance network info cache for port a8669959-69cb-4e7c-b708-25e90497b585. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 10 23:49:56 np0005480824 nova_compute[260089]: 2025-10-11 03:49:56.805 2 DEBUG nova.network.neutron [req-c075fdd8-351f-41ff-99b5-f0a884bb789a req-e95a9374-bf44-4d3e-a737-ab66f1a79db4 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Updating instance_info_cache with network_info: [{"id": "a8669959-69cb-4e7c-b708-25e90497b585", "address": "fa:16:3e:23:e8:5f", "network": {"id": "ea784d9f-5fea-4b2f-8a0a-4232f32d0fff", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1590358830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "633027d5948949cdb842dbb20e321e57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8669959-69", "ovs_interfaceid": "a8669959-69cb-4e7c-b708-25e90497b585", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:49:56 np0005480824 nova_compute[260089]: 2025-10-11 03:49:56.814 2 INFO nova.virt.libvirt.driver [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Creating config drive at /var/lib/nova/instances/ade49b15-ded3-459c-b92d-e98380bca4a4/disk.config#033[00m
Oct 10 23:49:56 np0005480824 nova_compute[260089]: 2025-10-11 03:49:56.826 2 DEBUG oslo_concurrency.processutils [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ade49b15-ded3-459c-b92d-e98380bca4a4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu6mm01jh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:49:56 np0005480824 nova_compute[260089]: 2025-10-11 03:49:56.862 2 DEBUG oslo_concurrency.lockutils [req-c075fdd8-351f-41ff-99b5-f0a884bb789a req-e95a9374-bf44-4d3e-a737-ab66f1a79db4 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Releasing lock "refresh_cache-ade49b15-ded3-459c-b92d-e98380bca4a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:49:56 np0005480824 nova_compute[260089]: 2025-10-11 03:49:56.977 2 DEBUG oslo_concurrency.processutils [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ade49b15-ded3-459c-b92d-e98380bca4a4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu6mm01jh" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:49:57 np0005480824 nova_compute[260089]: 2025-10-11 03:49:57.020 2 DEBUG nova.storage.rbd_utils [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] rbd image ade49b15-ded3-459c-b92d-e98380bca4a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:49:57 np0005480824 nova_compute[260089]: 2025-10-11 03:49:57.026 2 DEBUG oslo_concurrency.processutils [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ade49b15-ded3-459c-b92d-e98380bca4a4/disk.config ade49b15-ded3-459c-b92d-e98380bca4a4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:49:57 np0005480824 nova_compute[260089]: 2025-10-11 03:49:57.226 2 DEBUG oslo_concurrency.processutils [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ade49b15-ded3-459c-b92d-e98380bca4a4/disk.config ade49b15-ded3-459c-b92d-e98380bca4a4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.200s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:49:57 np0005480824 nova_compute[260089]: 2025-10-11 03:49:57.228 2 INFO nova.virt.libvirt.driver [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Deleting local config drive /var/lib/nova/instances/ade49b15-ded3-459c-b92d-e98380bca4a4/disk.config because it was imported into RBD.#033[00m
Oct 10 23:49:57 np0005480824 kernel: tapa8669959-69: entered promiscuous mode
Oct 10 23:49:57 np0005480824 NetworkManager[44969]: <info>  [1760154597.3052] manager: (tapa8669959-69): new Tun device (/org/freedesktop/NetworkManager/Devices/54)
Oct 10 23:49:57 np0005480824 ovn_controller[152667]: 2025-10-11T03:49:57Z|00077|binding|INFO|Claiming lport a8669959-69cb-4e7c-b708-25e90497b585 for this chassis.
Oct 10 23:49:57 np0005480824 ovn_controller[152667]: 2025-10-11T03:49:57Z|00078|binding|INFO|a8669959-69cb-4e7c-b708-25e90497b585: Claiming fa:16:3e:23:e8:5f 10.100.0.14
Oct 10 23:49:57 np0005480824 nova_compute[260089]: 2025-10-11 03:49:57.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:49:57 np0005480824 ovn_controller[152667]: 2025-10-11T03:49:57Z|00079|binding|INFO|Setting lport a8669959-69cb-4e7c-b708-25e90497b585 ovn-installed in OVS
Oct 10 23:49:57 np0005480824 nova_compute[260089]: 2025-10-11 03:49:57.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:49:57 np0005480824 nova_compute[260089]: 2025-10-11 03:49:57.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:49:57 np0005480824 ovn_controller[152667]: 2025-10-11T03:49:57Z|00080|binding|INFO|Setting lport a8669959-69cb-4e7c-b708-25e90497b585 up in Southbound
Oct 10 23:49:57 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:57.355 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:23:e8:5f 10.100.0.14'], port_security=['fa:16:3e:23:e8:5f 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'ade49b15-ded3-459c-b92d-e98380bca4a4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ea784d9f-5fea-4b2f-8a0a-4232f32d0fff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '633027d5948949cdb842dbb20e321e57', 'neutron:revision_number': '2', 'neutron:security_group_ids': '876c3a36-fc02-41b6-9ce4-5e10b6cd49ce', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8373a8b6-48b7-4c53-8c59-c606fca3db1d, chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], logical_port=a8669959-69cb-4e7c-b708-25e90497b585) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 23:49:57 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:57.357 162245 INFO neutron.agent.ovn.metadata.agent [-] Port a8669959-69cb-4e7c-b708-25e90497b585 in datapath ea784d9f-5fea-4b2f-8a0a-4232f32d0fff bound to our chassis#033[00m
Oct 10 23:49:57 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:57.359 162245 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ea784d9f-5fea-4b2f-8a0a-4232f32d0fff#033[00m
Oct 10 23:49:57 np0005480824 systemd-udevd[274941]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 23:49:57 np0005480824 systemd-machined[215071]: New machine qemu-7-instance-00000007.
Oct 10 23:49:57 np0005480824 NetworkManager[44969]: <info>  [1760154597.3840] device (tapa8669959-69): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 10 23:49:57 np0005480824 NetworkManager[44969]: <info>  [1760154597.3851] device (tapa8669959-69): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 10 23:49:57 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:57.383 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[be07eeb3-8022-4652-9297-0c1e2956876e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:49:57 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:57.385 162245 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapea784d9f-51 in ovnmeta-ea784d9f-5fea-4b2f-8a0a-4232f32d0fff namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 10 23:49:57 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:57.388 267859 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapea784d9f-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 10 23:49:57 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:57.389 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[9d4be335-b5c8-4cc5-bab6-84b3c22e7753]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:49:57 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:57.390 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[4c0d1bb6-f0d5-4d34-9007-9c5628482d41]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:49:57 np0005480824 systemd[1]: Started Virtual Machine qemu-7-instance-00000007.
Oct 10 23:49:57 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:57.419 162666 DEBUG oslo.privsep.daemon [-] privsep: reply[9380d0cf-3c38-418c-a2d5-a08d3d15a24c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:49:57 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:57.444 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[b66136fa-f550-4eb7-b281-b3bbe6148203]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:49:57 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:57.486 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[0d20f901-e4bb-4bb8-8ef9-298d37afc7b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:49:57 np0005480824 NetworkManager[44969]: <info>  [1760154597.4941] manager: (tapea784d9f-50): new Veth device (/org/freedesktop/NetworkManager/Devices/55)
Oct 10 23:49:57 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:57.495 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[4d218b90-dc90-4a25-adbe-6b7b83291400]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:49:57 np0005480824 systemd-udevd[274945]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 23:49:57 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:57.545 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[51bc948a-e386-4d78-b50e-82bc415e2143]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:49:57 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:57.549 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[59a59481-fbb7-4570-aa44-24c3388da079]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:49:57 np0005480824 NetworkManager[44969]: <info>  [1760154597.5831] device (tapea784d9f-50): carrier: link connected
Oct 10 23:49:57 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:57.590 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[0a698eb6-3232-4e6d-8c3f-ba7989719d0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:49:57 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:57.616 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[7d8f37a7-76ea-48e5-8211-96f936e635f6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapea784d9f-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e3:b1:6f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 406524, 'reachable_time': 20494, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274975, 'error': None, 'target': 'ovnmeta-ea784d9f-5fea-4b2f-8a0a-4232f32d0fff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:49:57 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:57.638 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[fa6467ac-717a-4457-b2d6-5989cbc18a3f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee3:b16f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 406524, 'tstamp': 406524}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 274976, 'error': None, 'target': 'ovnmeta-ea784d9f-5fea-4b2f-8a0a-4232f32d0fff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:49:57 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:57.664 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[4f23a926-b5bf-40ab-8dc9-ca3486ddda06]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapea784d9f-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e3:b1:6f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 406524, 'reachable_time': 20494, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 274977, 'error': None, 'target': 'ovnmeta-ea784d9f-5fea-4b2f-8a0a-4232f32d0fff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:49:57 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:57.712 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[5fdde58d-0210-4273-8b6f-5fb0d6ce9f2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:49:57 np0005480824 nova_compute[260089]: 2025-10-11 03:49:57.728 2 DEBUG nova.compute.manager [req-8986379b-96f6-477c-a08a-d7687705ea7e req-c9a69666-3c15-4f9f-b065-e21cc0a4ab80 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Received event network-vif-plugged-a8669959-69cb-4e7c-b708-25e90497b585 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:49:57 np0005480824 nova_compute[260089]: 2025-10-11 03:49:57.728 2 DEBUG oslo_concurrency.lockutils [req-8986379b-96f6-477c-a08a-d7687705ea7e req-c9a69666-3c15-4f9f-b065-e21cc0a4ab80 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "ade49b15-ded3-459c-b92d-e98380bca4a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:49:57 np0005480824 nova_compute[260089]: 2025-10-11 03:49:57.729 2 DEBUG oslo_concurrency.lockutils [req-8986379b-96f6-477c-a08a-d7687705ea7e req-c9a69666-3c15-4f9f-b065-e21cc0a4ab80 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "ade49b15-ded3-459c-b92d-e98380bca4a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:49:57 np0005480824 nova_compute[260089]: 2025-10-11 03:49:57.729 2 DEBUG oslo_concurrency.lockutils [req-8986379b-96f6-477c-a08a-d7687705ea7e req-c9a69666-3c15-4f9f-b065-e21cc0a4ab80 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "ade49b15-ded3-459c-b92d-e98380bca4a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:49:57 np0005480824 nova_compute[260089]: 2025-10-11 03:49:57.730 2 DEBUG nova.compute.manager [req-8986379b-96f6-477c-a08a-d7687705ea7e req-c9a69666-3c15-4f9f-b065-e21cc0a4ab80 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Processing event network-vif-plugged-a8669959-69cb-4e7c-b708-25e90497b585 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 10 23:49:57 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:57.812 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[30809a1b-f961-465b-af9b-90d6b66a0baa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:49:57 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:57.814 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapea784d9f-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:49:57 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:57.815 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 10 23:49:57 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:57.815 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapea784d9f-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:49:57 np0005480824 kernel: tapea784d9f-50: entered promiscuous mode
Oct 10 23:49:57 np0005480824 NetworkManager[44969]: <info>  [1760154597.8195] manager: (tapea784d9f-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/56)
Oct 10 23:49:57 np0005480824 nova_compute[260089]: 2025-10-11 03:49:57.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:49:57 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:57.825 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapea784d9f-50, col_values=(('external_ids', {'iface-id': 'd4ae273e-67ce-457d-b09c-bdc58cb85b9a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:49:57 np0005480824 ovn_controller[152667]: 2025-10-11T03:49:57Z|00081|binding|INFO|Releasing lport d4ae273e-67ce-457d-b09c-bdc58cb85b9a from this chassis (sb_readonly=0)
Oct 10 23:49:57 np0005480824 nova_compute[260089]: 2025-10-11 03:49:57.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:49:57 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:57.830 162245 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ea784d9f-5fea-4b2f-8a0a-4232f32d0fff.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ea784d9f-5fea-4b2f-8a0a-4232f32d0fff.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 10 23:49:57 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:57.831 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[4118c904-d5b9-4a13-86a3-5a8614e76078]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:49:57 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:57.833 162245 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 10 23:49:57 np0005480824 ovn_metadata_agent[162240]: global
Oct 10 23:49:57 np0005480824 ovn_metadata_agent[162240]:    log         /dev/log local0 debug
Oct 10 23:49:57 np0005480824 ovn_metadata_agent[162240]:    log-tag     haproxy-metadata-proxy-ea784d9f-5fea-4b2f-8a0a-4232f32d0fff
Oct 10 23:49:57 np0005480824 ovn_metadata_agent[162240]:    user        root
Oct 10 23:49:57 np0005480824 ovn_metadata_agent[162240]:    group       root
Oct 10 23:49:57 np0005480824 ovn_metadata_agent[162240]:    maxconn     1024
Oct 10 23:49:57 np0005480824 ovn_metadata_agent[162240]:    pidfile     /var/lib/neutron/external/pids/ea784d9f-5fea-4b2f-8a0a-4232f32d0fff.pid.haproxy
Oct 10 23:49:57 np0005480824 ovn_metadata_agent[162240]:    daemon
Oct 10 23:49:57 np0005480824 ovn_metadata_agent[162240]: 
Oct 10 23:49:57 np0005480824 ovn_metadata_agent[162240]: defaults
Oct 10 23:49:57 np0005480824 ovn_metadata_agent[162240]:    log global
Oct 10 23:49:57 np0005480824 ovn_metadata_agent[162240]:    mode http
Oct 10 23:49:57 np0005480824 ovn_metadata_agent[162240]:    option httplog
Oct 10 23:49:57 np0005480824 ovn_metadata_agent[162240]:    option dontlognull
Oct 10 23:49:57 np0005480824 ovn_metadata_agent[162240]:    option http-server-close
Oct 10 23:49:57 np0005480824 ovn_metadata_agent[162240]:    option forwardfor
Oct 10 23:49:57 np0005480824 ovn_metadata_agent[162240]:    retries                 3
Oct 10 23:49:57 np0005480824 ovn_metadata_agent[162240]:    timeout http-request    30s
Oct 10 23:49:57 np0005480824 ovn_metadata_agent[162240]:    timeout connect         30s
Oct 10 23:49:57 np0005480824 ovn_metadata_agent[162240]:    timeout client          32s
Oct 10 23:49:57 np0005480824 ovn_metadata_agent[162240]:    timeout server          32s
Oct 10 23:49:57 np0005480824 ovn_metadata_agent[162240]:    timeout http-keep-alive 30s
Oct 10 23:49:57 np0005480824 ovn_metadata_agent[162240]: 
Oct 10 23:49:57 np0005480824 ovn_metadata_agent[162240]: 
Oct 10 23:49:57 np0005480824 ovn_metadata_agent[162240]: listen listener
Oct 10 23:49:57 np0005480824 ovn_metadata_agent[162240]:    bind 169.254.169.254:80
Oct 10 23:49:57 np0005480824 ovn_metadata_agent[162240]:    server metadata /var/lib/neutron/metadata_proxy
Oct 10 23:49:57 np0005480824 ovn_metadata_agent[162240]:    http-request add-header X-OVN-Network-ID ea784d9f-5fea-4b2f-8a0a-4232f32d0fff
Oct 10 23:49:57 np0005480824 ovn_metadata_agent[162240]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 10 23:49:57 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:49:57.834 162245 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ea784d9f-5fea-4b2f-8a0a-4232f32d0fff', 'env', 'PROCESS_TAG=haproxy-ea784d9f-5fea-4b2f-8a0a-4232f32d0fff', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ea784d9f-5fea-4b2f-8a0a-4232f32d0fff.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 10 23:49:57 np0005480824 nova_compute[260089]: 2025-10-11 03:49:57.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:49:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:49:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:49:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:49:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:49:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:49:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:49:58 np0005480824 podman[274987]: 2025-10-11 03:49:58.048743289 +0000 UTC m=+0.102602020 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:49:58 np0005480824 podman[275077]: 2025-10-11 03:49:58.27198027 +0000 UTC m=+0.055789031 container create 6f91355e0af036f70431a16a047497a73a5d7f41ecb4fde2f7d533eb5221298b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ea784d9f-5fea-4b2f-8a0a-4232f32d0fff, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 10 23:49:58 np0005480824 systemd[1]: Started libpod-conmon-6f91355e0af036f70431a16a047497a73a5d7f41ecb4fde2f7d533eb5221298b.scope.
Oct 10 23:49:58 np0005480824 podman[275077]: 2025-10-11 03:49:58.243786828 +0000 UTC m=+0.027595609 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 10 23:49:58 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:49:58 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d6e727cbdc95d1bd4701ae22a951ead4506e73a01a132a47fc1b6b53095d069/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 10 23:49:58 np0005480824 podman[275077]: 2025-10-11 03:49:58.411148418 +0000 UTC m=+0.194957209 container init 6f91355e0af036f70431a16a047497a73a5d7f41ecb4fde2f7d533eb5221298b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ea784d9f-5fea-4b2f-8a0a-4232f32d0fff, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 10 23:49:58 np0005480824 podman[275077]: 2025-10-11 03:49:58.421856788 +0000 UTC m=+0.205665549 container start 6f91355e0af036f70431a16a047497a73a5d7f41ecb4fde2f7d533eb5221298b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ea784d9f-5fea-4b2f-8a0a-4232f32d0fff, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 10 23:49:58 np0005480824 neutron-haproxy-ovnmeta-ea784d9f-5fea-4b2f-8a0a-4232f32d0fff[275092]: [NOTICE]   (275096) : New worker (275098) forked
Oct 10 23:49:58 np0005480824 neutron-haproxy-ovnmeta-ea784d9f-5fea-4b2f-8a0a-4232f32d0fff[275092]: [NOTICE]   (275096) : Loading success.
Oct 10 23:49:58 np0005480824 nova_compute[260089]: 2025-10-11 03:49:58.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:49:58 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1099: 321 pgs: 321 active+clean; 167 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 4.3 MiB/s wr, 99 op/s
Oct 10 23:49:58 np0005480824 nova_compute[260089]: 2025-10-11 03:49:58.665 2 DEBUG nova.compute.manager [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 10 23:49:58 np0005480824 nova_compute[260089]: 2025-10-11 03:49:58.667 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760154598.6648395, ade49b15-ded3-459c-b92d-e98380bca4a4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:49:58 np0005480824 nova_compute[260089]: 2025-10-11 03:49:58.667 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] VM Started (Lifecycle Event)#033[00m
Oct 10 23:49:58 np0005480824 nova_compute[260089]: 2025-10-11 03:49:58.671 2 DEBUG nova.virt.libvirt.driver [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 10 23:49:58 np0005480824 nova_compute[260089]: 2025-10-11 03:49:58.678 2 INFO nova.virt.libvirt.driver [-] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Instance spawned successfully.#033[00m
Oct 10 23:49:58 np0005480824 nova_compute[260089]: 2025-10-11 03:49:58.678 2 DEBUG nova.virt.libvirt.driver [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 10 23:49:58 np0005480824 nova_compute[260089]: 2025-10-11 03:49:58.706 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:49:58 np0005480824 nova_compute[260089]: 2025-10-11 03:49:58.714 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 10 23:49:58 np0005480824 nova_compute[260089]: 2025-10-11 03:49:58.716 2 DEBUG nova.virt.libvirt.driver [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:49:58 np0005480824 nova_compute[260089]: 2025-10-11 03:49:58.717 2 DEBUG nova.virt.libvirt.driver [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:49:58 np0005480824 nova_compute[260089]: 2025-10-11 03:49:58.717 2 DEBUG nova.virt.libvirt.driver [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:49:58 np0005480824 nova_compute[260089]: 2025-10-11 03:49:58.718 2 DEBUG nova.virt.libvirt.driver [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:49:58 np0005480824 nova_compute[260089]: 2025-10-11 03:49:58.718 2 DEBUG nova.virt.libvirt.driver [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:49:58 np0005480824 nova_compute[260089]: 2025-10-11 03:49:58.718 2 DEBUG nova.virt.libvirt.driver [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:49:58 np0005480824 nova_compute[260089]: 2025-10-11 03:49:58.758 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 10 23:49:58 np0005480824 nova_compute[260089]: 2025-10-11 03:49:58.758 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760154598.6665394, ade49b15-ded3-459c-b92d-e98380bca4a4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:49:58 np0005480824 nova_compute[260089]: 2025-10-11 03:49:58.758 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] VM Paused (Lifecycle Event)#033[00m
Oct 10 23:49:58 np0005480824 nova_compute[260089]: 2025-10-11 03:49:58.796 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:49:58 np0005480824 nova_compute[260089]: 2025-10-11 03:49:58.805 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760154598.6703608, ade49b15-ded3-459c-b92d-e98380bca4a4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:49:58 np0005480824 nova_compute[260089]: 2025-10-11 03:49:58.806 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] VM Resumed (Lifecycle Event)#033[00m
Oct 10 23:49:58 np0005480824 nova_compute[260089]: 2025-10-11 03:49:58.811 2 INFO nova.compute.manager [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Took 6.76 seconds to spawn the instance on the hypervisor.#033[00m
Oct 10 23:49:58 np0005480824 nova_compute[260089]: 2025-10-11 03:49:58.811 2 DEBUG nova.compute.manager [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:49:58 np0005480824 nova_compute[260089]: 2025-10-11 03:49:58.825 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:49:58 np0005480824 nova_compute[260089]: 2025-10-11 03:49:58.830 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 10 23:49:58 np0005480824 nova_compute[260089]: 2025-10-11 03:49:58.855 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 10 23:49:58 np0005480824 nova_compute[260089]: 2025-10-11 03:49:58.877 2 INFO nova.compute.manager [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Took 7.68 seconds to build instance.#033[00m
Oct 10 23:49:58 np0005480824 nova_compute[260089]: 2025-10-11 03:49:58.893 2 DEBUG oslo_concurrency.lockutils [None req-e2f76f0b-ccb2-4846-9ca6-fa2c5c0a53d6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Lock "ade49b15-ded3-459c-b92d-e98380bca4a4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:49:58 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e193 do_prune osdmap full prune enabled
Oct 10 23:49:58 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e194 e194: 3 total, 3 up, 3 in
Oct 10 23:49:58 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e194: 3 total, 3 up, 3 in
Oct 10 23:49:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:49:59 np0005480824 nova_compute[260089]: 2025-10-11 03:49:59.786 2 DEBUG nova.compute.manager [req-1180d7d2-745a-4dc6-b355-67b9e0bfedd1 req-7a0dfbb0-780a-4281-9871-c875ccaa6f6d 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Received event network-vif-plugged-a8669959-69cb-4e7c-b708-25e90497b585 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:49:59 np0005480824 nova_compute[260089]: 2025-10-11 03:49:59.787 2 DEBUG oslo_concurrency.lockutils [req-1180d7d2-745a-4dc6-b355-67b9e0bfedd1 req-7a0dfbb0-780a-4281-9871-c875ccaa6f6d 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "ade49b15-ded3-459c-b92d-e98380bca4a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:49:59 np0005480824 nova_compute[260089]: 2025-10-11 03:49:59.787 2 DEBUG oslo_concurrency.lockutils [req-1180d7d2-745a-4dc6-b355-67b9e0bfedd1 req-7a0dfbb0-780a-4281-9871-c875ccaa6f6d 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "ade49b15-ded3-459c-b92d-e98380bca4a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:49:59 np0005480824 nova_compute[260089]: 2025-10-11 03:49:59.788 2 DEBUG oslo_concurrency.lockutils [req-1180d7d2-745a-4dc6-b355-67b9e0bfedd1 req-7a0dfbb0-780a-4281-9871-c875ccaa6f6d 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "ade49b15-ded3-459c-b92d-e98380bca4a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:49:59 np0005480824 nova_compute[260089]: 2025-10-11 03:49:59.788 2 DEBUG nova.compute.manager [req-1180d7d2-745a-4dc6-b355-67b9e0bfedd1 req-7a0dfbb0-780a-4281-9871-c875ccaa6f6d 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] No waiting events found dispatching network-vif-plugged-a8669959-69cb-4e7c-b708-25e90497b585 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 10 23:49:59 np0005480824 nova_compute[260089]: 2025-10-11 03:49:59.789 2 WARNING nova.compute.manager [req-1180d7d2-745a-4dc6-b355-67b9e0bfedd1 req-7a0dfbb0-780a-4281-9871-c875ccaa6f6d 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Received unexpected event network-vif-plugged-a8669959-69cb-4e7c-b708-25e90497b585 for instance with vm_state active and task_state None.#033[00m
Oct 10 23:49:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e194 do_prune osdmap full prune enabled
Oct 10 23:49:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e195 e195: 3 total, 3 up, 3 in
Oct 10 23:49:59 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e195: 3 total, 3 up, 3 in
Oct 10 23:50:00 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1102: 321 pgs: 321 active+clean; 167 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 3.3 MiB/s wr, 80 op/s
Oct 10 23:50:00 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e195 do_prune osdmap full prune enabled
Oct 10 23:50:00 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e196 e196: 3 total, 3 up, 3 in
Oct 10 23:50:00 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e196: 3 total, 3 up, 3 in
Oct 10 23:50:01 np0005480824 nova_compute[260089]: 2025-10-11 03:50:01.007 2 DEBUG nova.compute.manager [req-e6f7bf2e-bf32-42cb-9d9d-cabe9d707461 req-0d973ba2-6546-4eed-b153-1556b628c6a5 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Received event network-changed-a8669959-69cb-4e7c-b708-25e90497b585 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:50:01 np0005480824 nova_compute[260089]: 2025-10-11 03:50:01.007 2 DEBUG nova.compute.manager [req-e6f7bf2e-bf32-42cb-9d9d-cabe9d707461 req-0d973ba2-6546-4eed-b153-1556b628c6a5 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Refreshing instance network info cache due to event network-changed-a8669959-69cb-4e7c-b708-25e90497b585. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 10 23:50:01 np0005480824 nova_compute[260089]: 2025-10-11 03:50:01.007 2 DEBUG oslo_concurrency.lockutils [req-e6f7bf2e-bf32-42cb-9d9d-cabe9d707461 req-0d973ba2-6546-4eed-b153-1556b628c6a5 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "refresh_cache-ade49b15-ded3-459c-b92d-e98380bca4a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:50:01 np0005480824 nova_compute[260089]: 2025-10-11 03:50:01.008 2 DEBUG oslo_concurrency.lockutils [req-e6f7bf2e-bf32-42cb-9d9d-cabe9d707461 req-0d973ba2-6546-4eed-b153-1556b628c6a5 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquired lock "refresh_cache-ade49b15-ded3-459c-b92d-e98380bca4a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:50:01 np0005480824 nova_compute[260089]: 2025-10-11 03:50:01.008 2 DEBUG nova.network.neutron [req-e6f7bf2e-bf32-42cb-9d9d-cabe9d707461 req-0d973ba2-6546-4eed-b153-1556b628c6a5 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Refreshing network info cache for port a8669959-69cb-4e7c-b708-25e90497b585 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 10 23:50:01 np0005480824 nova_compute[260089]: 2025-10-11 03:50:01.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:50:01 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e196 do_prune osdmap full prune enabled
Oct 10 23:50:01 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e197 e197: 3 total, 3 up, 3 in
Oct 10 23:50:01 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e197: 3 total, 3 up, 3 in
Oct 10 23:50:02 np0005480824 nova_compute[260089]: 2025-10-11 03:50:02.295 2 DEBUG nova.network.neutron [req-e6f7bf2e-bf32-42cb-9d9d-cabe9d707461 req-0d973ba2-6546-4eed-b153-1556b628c6a5 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Updated VIF entry in instance network info cache for port a8669959-69cb-4e7c-b708-25e90497b585. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 10 23:50:02 np0005480824 nova_compute[260089]: 2025-10-11 03:50:02.296 2 DEBUG nova.network.neutron [req-e6f7bf2e-bf32-42cb-9d9d-cabe9d707461 req-0d973ba2-6546-4eed-b153-1556b628c6a5 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Updating instance_info_cache with network_info: [{"id": "a8669959-69cb-4e7c-b708-25e90497b585", "address": "fa:16:3e:23:e8:5f", "network": {"id": "ea784d9f-5fea-4b2f-8a0a-4232f32d0fff", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1590358830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "633027d5948949cdb842dbb20e321e57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8669959-69", "ovs_interfaceid": "a8669959-69cb-4e7c-b708-25e90497b585", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:50:02 np0005480824 nova_compute[260089]: 2025-10-11 03:50:02.314 2 DEBUG oslo_concurrency.lockutils [req-e6f7bf2e-bf32-42cb-9d9d-cabe9d707461 req-0d973ba2-6546-4eed-b153-1556b628c6a5 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Releasing lock "refresh_cache-ade49b15-ded3-459c-b92d-e98380bca4a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:50:02 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:50:02 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1888153485' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:50:02 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:50:02 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1888153485' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:50:02 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1105: 321 pgs: 321 active+clean; 167 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 47 KiB/s wr, 226 op/s
Oct 10 23:50:03 np0005480824 nova_compute[260089]: 2025-10-11 03:50:03.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:50:04 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:50:04 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1106: 321 pgs: 321 active+clean; 167 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 33 KiB/s wr, 237 op/s
Oct 10 23:50:04 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e197 do_prune osdmap full prune enabled
Oct 10 23:50:04 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e198 e198: 3 total, 3 up, 3 in
Oct 10 23:50:05 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e198: 3 total, 3 up, 3 in
Oct 10 23:50:05 np0005480824 podman[275107]: 2025-10-11 03:50:05.088631933 +0000 UTC m=+0.133070955 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 10 23:50:06 np0005480824 nova_compute[260089]: 2025-10-11 03:50:06.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:50:06 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1108: 321 pgs: 321 active+clean; 167 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 32 KiB/s wr, 225 op/s
Oct 10 23:50:06 np0005480824 nova_compute[260089]: 2025-10-11 03:50:06.989 2 DEBUG oslo_concurrency.lockutils [None req-8686a7d3-9069-4511-8504-eba335ecf988 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Acquiring lock "266aeb27-7f54-4255-9018-0b6092629b80" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:50:06 np0005480824 nova_compute[260089]: 2025-10-11 03:50:06.990 2 DEBUG oslo_concurrency.lockutils [None req-8686a7d3-9069-4511-8504-eba335ecf988 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Lock "266aeb27-7f54-4255-9018-0b6092629b80" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:50:07 np0005480824 nova_compute[260089]: 2025-10-11 03:50:07.007 2 DEBUG nova.objects.instance [None req-8686a7d3-9069-4511-8504-eba335ecf988 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Lazy-loading 'flavor' on Instance uuid 266aeb27-7f54-4255-9018-0b6092629b80 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:50:07 np0005480824 nova_compute[260089]: 2025-10-11 03:50:07.025 2 INFO nova.virt.libvirt.driver [None req-8686a7d3-9069-4511-8504-eba335ecf988 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Ignoring supplied device name: /dev/vdb#033[00m
Oct 10 23:50:07 np0005480824 nova_compute[260089]: 2025-10-11 03:50:07.049 2 DEBUG oslo_concurrency.lockutils [None req-8686a7d3-9069-4511-8504-eba335ecf988 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Lock "266aeb27-7f54-4255-9018-0b6092629b80" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.060s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:50:07 np0005480824 nova_compute[260089]: 2025-10-11 03:50:07.271 2 DEBUG oslo_concurrency.lockutils [None req-8686a7d3-9069-4511-8504-eba335ecf988 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Acquiring lock "266aeb27-7f54-4255-9018-0b6092629b80" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:50:07 np0005480824 nova_compute[260089]: 2025-10-11 03:50:07.274 2 DEBUG oslo_concurrency.lockutils [None req-8686a7d3-9069-4511-8504-eba335ecf988 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Lock "266aeb27-7f54-4255-9018-0b6092629b80" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:50:07 np0005480824 nova_compute[260089]: 2025-10-11 03:50:07.275 2 INFO nova.compute.manager [None req-8686a7d3-9069-4511-8504-eba335ecf988 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Attaching volume 684c48f3-e9e7-4919-b6d1-5fed84a6f167 to /dev/vdb#033[00m
Oct 10 23:50:07 np0005480824 nova_compute[260089]: 2025-10-11 03:50:07.517 2 DEBUG os_brick.utils [None req-8686a7d3-9069-4511-8504-eba335ecf988 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct 10 23:50:07 np0005480824 nova_compute[260089]: 2025-10-11 03:50:07.520 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:50:07 np0005480824 nova_compute[260089]: 2025-10-11 03:50:07.541 676 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:50:07 np0005480824 nova_compute[260089]: 2025-10-11 03:50:07.541 676 DEBUG oslo.privsep.daemon [-] privsep: reply[f2c93d53-fcee-4d47-b7b4-4417c9b31eee]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:50:07 np0005480824 nova_compute[260089]: 2025-10-11 03:50:07.545 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:50:07 np0005480824 nova_compute[260089]: 2025-10-11 03:50:07.560 676 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:50:07 np0005480824 nova_compute[260089]: 2025-10-11 03:50:07.561 676 DEBUG oslo.privsep.daemon [-] privsep: reply[1ec05ba7-832e-40d4-895a-a3cc8edc93c6]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d5d671ddab5a', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:50:07 np0005480824 nova_compute[260089]: 2025-10-11 03:50:07.563 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:50:07 np0005480824 nova_compute[260089]: 2025-10-11 03:50:07.580 676 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:50:07 np0005480824 nova_compute[260089]: 2025-10-11 03:50:07.581 676 DEBUG oslo.privsep.daemon [-] privsep: reply[00322c36-93d1-4cf7-8ef0-1cd26bafd132]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:50:07 np0005480824 nova_compute[260089]: 2025-10-11 03:50:07.583 676 DEBUG oslo.privsep.daemon [-] privsep: reply[a1960449-32be-43dc-a1d8-13ac74212360]: (4, 'fb3a2fb1-9efa-43f0-a057-bf422ac6b8d7') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:50:07 np0005480824 nova_compute[260089]: 2025-10-11 03:50:07.584 2 DEBUG oslo_concurrency.processutils [None req-8686a7d3-9069-4511-8504-eba335ecf988 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:50:07 np0005480824 nova_compute[260089]: 2025-10-11 03:50:07.626 2 DEBUG oslo_concurrency.processutils [None req-8686a7d3-9069-4511-8504-eba335ecf988 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] CMD "nvme version" returned: 0 in 0.041s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:50:07 np0005480824 nova_compute[260089]: 2025-10-11 03:50:07.633 2 DEBUG os_brick.initiator.connectors.lightos [None req-8686a7d3-9069-4511-8504-eba335ecf988 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct 10 23:50:07 np0005480824 nova_compute[260089]: 2025-10-11 03:50:07.633 2 DEBUG os_brick.initiator.connectors.lightos [None req-8686a7d3-9069-4511-8504-eba335ecf988 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct 10 23:50:07 np0005480824 nova_compute[260089]: 2025-10-11 03:50:07.634 2 DEBUG os_brick.initiator.connectors.lightos [None req-8686a7d3-9069-4511-8504-eba335ecf988 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct 10 23:50:07 np0005480824 nova_compute[260089]: 2025-10-11 03:50:07.635 2 DEBUG os_brick.utils [None req-8686a7d3-9069-4511-8504-eba335ecf988 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] <== get_connector_properties: return (116ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d5d671ddab5a', 'do_local_attach': False, 'nvme_hostid': '83042a20-0f72-4c47-8453-e72ead378624', 'system uuid': 'fb3a2fb1-9efa-43f0-a057-bf422ac6b8d7', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct 10 23:50:07 np0005480824 nova_compute[260089]: 2025-10-11 03:50:07.636 2 DEBUG nova.virt.block_device [None req-8686a7d3-9069-4511-8504-eba335ecf988 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Updating existing volume attachment record: 2f9d6abb-a89e-4bd3-84ca-36cdf8dc813a _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct 10 23:50:08 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:50:08 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2577791857' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:50:08 np0005480824 nova_compute[260089]: 2025-10-11 03:50:08.338 2 DEBUG nova.objects.instance [None req-8686a7d3-9069-4511-8504-eba335ecf988 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Lazy-loading 'flavor' on Instance uuid 266aeb27-7f54-4255-9018-0b6092629b80 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:50:08 np0005480824 nova_compute[260089]: 2025-10-11 03:50:08.361 2 DEBUG nova.virt.libvirt.driver [None req-8686a7d3-9069-4511-8504-eba335ecf988 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Attempting to attach volume 684c48f3-e9e7-4919-b6d1-5fed84a6f167 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Oct 10 23:50:08 np0005480824 nova_compute[260089]: 2025-10-11 03:50:08.365 2 DEBUG nova.virt.libvirt.guest [None req-8686a7d3-9069-4511-8504-eba335ecf988 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] attach device xml: <disk type="network" device="disk">
Oct 10 23:50:08 np0005480824 nova_compute[260089]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 10 23:50:08 np0005480824 nova_compute[260089]:  <source protocol="rbd" name="volumes/volume-684c48f3-e9e7-4919-b6d1-5fed84a6f167">
Oct 10 23:50:08 np0005480824 nova_compute[260089]:    <host name="192.168.122.100" port="6789"/>
Oct 10 23:50:08 np0005480824 nova_compute[260089]:  </source>
Oct 10 23:50:08 np0005480824 nova_compute[260089]:  <auth username="openstack">
Oct 10 23:50:08 np0005480824 nova_compute[260089]:    <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 10 23:50:08 np0005480824 nova_compute[260089]:  </auth>
Oct 10 23:50:08 np0005480824 nova_compute[260089]:  <target dev="vdb" bus="virtio"/>
Oct 10 23:50:08 np0005480824 nova_compute[260089]:  <serial>684c48f3-e9e7-4919-b6d1-5fed84a6f167</serial>
Oct 10 23:50:08 np0005480824 nova_compute[260089]: </disk>
Oct 10 23:50:08 np0005480824 nova_compute[260089]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct 10 23:50:08 np0005480824 nova_compute[260089]: 2025-10-11 03:50:08.479 2 DEBUG nova.virt.libvirt.driver [None req-8686a7d3-9069-4511-8504-eba335ecf988 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 23:50:08 np0005480824 nova_compute[260089]: 2025-10-11 03:50:08.480 2 DEBUG nova.virt.libvirt.driver [None req-8686a7d3-9069-4511-8504-eba335ecf988 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 23:50:08 np0005480824 nova_compute[260089]: 2025-10-11 03:50:08.480 2 DEBUG nova.virt.libvirt.driver [None req-8686a7d3-9069-4511-8504-eba335ecf988 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 23:50:08 np0005480824 nova_compute[260089]: 2025-10-11 03:50:08.481 2 DEBUG nova.virt.libvirt.driver [None req-8686a7d3-9069-4511-8504-eba335ecf988 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] No VIF found with MAC fa:16:3e:3f:63:b3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 10 23:50:08 np0005480824 nova_compute[260089]: 2025-10-11 03:50:08.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:50:08 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1109: 321 pgs: 321 active+clean; 167 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 29 KiB/s wr, 212 op/s
Oct 10 23:50:08 np0005480824 nova_compute[260089]: 2025-10-11 03:50:08.677 2 DEBUG oslo_concurrency.lockutils [None req-8686a7d3-9069-4511-8504-eba335ecf988 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Lock "266aeb27-7f54-4255-9018-0b6092629b80" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.403s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:50:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e198 do_prune osdmap full prune enabled
Oct 10 23:50:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e199 e199: 3 total, 3 up, 3 in
Oct 10 23:50:09 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e199: 3 total, 3 up, 3 in
Oct 10 23:50:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:50:09 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1853556905' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:50:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:50:09 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1853556905' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:50:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:50:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e199 do_prune osdmap full prune enabled
Oct 10 23:50:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e200 e200: 3 total, 3 up, 3 in
Oct 10 23:50:09 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e200: 3 total, 3 up, 3 in
Oct 10 23:50:10 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:50:10 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3890858794' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:50:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:50:10.491 162245 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:50:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:50:10.492 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:50:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:50:10.493 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:50:10 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:50:10 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2328295125' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:50:10 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:50:10 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2328295125' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:50:10 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1112: 321 pgs: 321 active+clean; 167 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 5.7 KiB/s wr, 46 op/s
Oct 10 23:50:10 np0005480824 ovn_controller[152667]: 2025-10-11T03:50:10Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:23:e8:5f 10.100.0.14
Oct 10 23:50:10 np0005480824 ovn_controller[152667]: 2025-10-11T03:50:10Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:23:e8:5f 10.100.0.14
Oct 10 23:50:11 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e200 do_prune osdmap full prune enabled
Oct 10 23:50:11 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e201 e201: 3 total, 3 up, 3 in
Oct 10 23:50:11 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e201: 3 total, 3 up, 3 in
Oct 10 23:50:11 np0005480824 nova_compute[260089]: 2025-10-11 03:50:11.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:50:12 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e201 do_prune osdmap full prune enabled
Oct 10 23:50:12 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e202 e202: 3 total, 3 up, 3 in
Oct 10 23:50:12 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e202: 3 total, 3 up, 3 in
Oct 10 23:50:12 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1115: 321 pgs: 321 active+clean; 191 MiB data, 351 MiB used, 60 GiB / 60 GiB avail; 771 KiB/s rd, 4.6 MiB/s wr, 293 op/s
Oct 10 23:50:13 np0005480824 nova_compute[260089]: 2025-10-11 03:50:13.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:50:14 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e202 do_prune osdmap full prune enabled
Oct 10 23:50:14 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e203 e203: 3 total, 3 up, 3 in
Oct 10 23:50:14 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e203: 3 total, 3 up, 3 in
Oct 10 23:50:14 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:50:14 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1117: 321 pgs: 321 active+clean; 200 MiB data, 358 MiB used, 60 GiB / 60 GiB avail; 909 KiB/s rd, 5.0 MiB/s wr, 325 op/s
Oct 10 23:50:15 np0005480824 nova_compute[260089]: 2025-10-11 03:50:15.035 2 DEBUG oslo_concurrency.lockutils [None req-3a079d55-8d64-4afc-b10d-1f5973258d6a 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Acquiring lock "266aeb27-7f54-4255-9018-0b6092629b80" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:50:15 np0005480824 nova_compute[260089]: 2025-10-11 03:50:15.036 2 DEBUG oslo_concurrency.lockutils [None req-3a079d55-8d64-4afc-b10d-1f5973258d6a 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Lock "266aeb27-7f54-4255-9018-0b6092629b80" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:50:15 np0005480824 nova_compute[260089]: 2025-10-11 03:50:15.054 2 INFO nova.compute.manager [None req-3a079d55-8d64-4afc-b10d-1f5973258d6a 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Detaching volume 684c48f3-e9e7-4919-b6d1-5fed84a6f167#033[00m
Oct 10 23:50:15 np0005480824 nova_compute[260089]: 2025-10-11 03:50:15.176 2 INFO nova.virt.block_device [None req-3a079d55-8d64-4afc-b10d-1f5973258d6a 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Attempting to driver detach volume 684c48f3-e9e7-4919-b6d1-5fed84a6f167 from mountpoint /dev/vdb#033[00m
Oct 10 23:50:15 np0005480824 nova_compute[260089]: 2025-10-11 03:50:15.187 2 DEBUG nova.virt.libvirt.driver [None req-3a079d55-8d64-4afc-b10d-1f5973258d6a 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Attempting to detach device vdb from instance 266aeb27-7f54-4255-9018-0b6092629b80 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct 10 23:50:15 np0005480824 nova_compute[260089]: 2025-10-11 03:50:15.188 2 DEBUG nova.virt.libvirt.guest [None req-3a079d55-8d64-4afc-b10d-1f5973258d6a 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] detach device xml: <disk type="network" device="disk">
Oct 10 23:50:15 np0005480824 nova_compute[260089]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 10 23:50:15 np0005480824 nova_compute[260089]:  <source protocol="rbd" name="volumes/volume-684c48f3-e9e7-4919-b6d1-5fed84a6f167">
Oct 10 23:50:15 np0005480824 nova_compute[260089]:    <host name="192.168.122.100" port="6789"/>
Oct 10 23:50:15 np0005480824 nova_compute[260089]:  </source>
Oct 10 23:50:15 np0005480824 nova_compute[260089]:  <target dev="vdb" bus="virtio"/>
Oct 10 23:50:15 np0005480824 nova_compute[260089]:  <serial>684c48f3-e9e7-4919-b6d1-5fed84a6f167</serial>
Oct 10 23:50:15 np0005480824 nova_compute[260089]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 10 23:50:15 np0005480824 nova_compute[260089]: </disk>
Oct 10 23:50:15 np0005480824 nova_compute[260089]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct 10 23:50:15 np0005480824 nova_compute[260089]: 2025-10-11 03:50:15.195 2 INFO nova.virt.libvirt.driver [None req-3a079d55-8d64-4afc-b10d-1f5973258d6a 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Successfully detached device vdb from instance 266aeb27-7f54-4255-9018-0b6092629b80 from the persistent domain config.#033[00m
Oct 10 23:50:15 np0005480824 nova_compute[260089]: 2025-10-11 03:50:15.196 2 DEBUG nova.virt.libvirt.driver [None req-3a079d55-8d64-4afc-b10d-1f5973258d6a 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 266aeb27-7f54-4255-9018-0b6092629b80 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Oct 10 23:50:15 np0005480824 nova_compute[260089]: 2025-10-11 03:50:15.196 2 DEBUG nova.virt.libvirt.guest [None req-3a079d55-8d64-4afc-b10d-1f5973258d6a 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] detach device xml: <disk type="network" device="disk">
Oct 10 23:50:15 np0005480824 nova_compute[260089]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 10 23:50:15 np0005480824 nova_compute[260089]:  <source protocol="rbd" name="volumes/volume-684c48f3-e9e7-4919-b6d1-5fed84a6f167">
Oct 10 23:50:15 np0005480824 nova_compute[260089]:    <host name="192.168.122.100" port="6789"/>
Oct 10 23:50:15 np0005480824 nova_compute[260089]:  </source>
Oct 10 23:50:15 np0005480824 nova_compute[260089]:  <target dev="vdb" bus="virtio"/>
Oct 10 23:50:15 np0005480824 nova_compute[260089]:  <serial>684c48f3-e9e7-4919-b6d1-5fed84a6f167</serial>
Oct 10 23:50:15 np0005480824 nova_compute[260089]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 10 23:50:15 np0005480824 nova_compute[260089]: </disk>
Oct 10 23:50:15 np0005480824 nova_compute[260089]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct 10 23:50:15 np0005480824 nova_compute[260089]: 2025-10-11 03:50:15.306 2 DEBUG nova.virt.libvirt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Received event <DeviceRemovedEvent: 1760154615.305918, 266aeb27-7f54-4255-9018-0b6092629b80 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Oct 10 23:50:15 np0005480824 nova_compute[260089]: 2025-10-11 03:50:15.307 2 DEBUG nova.virt.libvirt.driver [None req-3a079d55-8d64-4afc-b10d-1f5973258d6a 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 266aeb27-7f54-4255-9018-0b6092629b80 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Oct 10 23:50:15 np0005480824 nova_compute[260089]: 2025-10-11 03:50:15.309 2 INFO nova.virt.libvirt.driver [None req-3a079d55-8d64-4afc-b10d-1f5973258d6a 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Successfully detached device vdb from instance 266aeb27-7f54-4255-9018-0b6092629b80 from the live domain config.#033[00m
Oct 10 23:50:15 np0005480824 nova_compute[260089]: 2025-10-11 03:50:15.477 2 DEBUG nova.objects.instance [None req-3a079d55-8d64-4afc-b10d-1f5973258d6a 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Lazy-loading 'flavor' on Instance uuid 266aeb27-7f54-4255-9018-0b6092629b80 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:50:15 np0005480824 nova_compute[260089]: 2025-10-11 03:50:15.511 2 DEBUG oslo_concurrency.lockutils [None req-3a079d55-8d64-4afc-b10d-1f5973258d6a 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Lock "266aeb27-7f54-4255-9018-0b6092629b80" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.475s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:50:16 np0005480824 nova_compute[260089]: 2025-10-11 03:50:16.335 2 DEBUG oslo_concurrency.lockutils [None req-186ee2cf-b44e-4fe9-b647-dd7c6a37ef6a 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Acquiring lock "266aeb27-7f54-4255-9018-0b6092629b80" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:50:16 np0005480824 nova_compute[260089]: 2025-10-11 03:50:16.336 2 DEBUG oslo_concurrency.lockutils [None req-186ee2cf-b44e-4fe9-b647-dd7c6a37ef6a 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Lock "266aeb27-7f54-4255-9018-0b6092629b80" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:50:16 np0005480824 nova_compute[260089]: 2025-10-11 03:50:16.336 2 DEBUG oslo_concurrency.lockutils [None req-186ee2cf-b44e-4fe9-b647-dd7c6a37ef6a 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Acquiring lock "266aeb27-7f54-4255-9018-0b6092629b80-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:50:16 np0005480824 nova_compute[260089]: 2025-10-11 03:50:16.336 2 DEBUG oslo_concurrency.lockutils [None req-186ee2cf-b44e-4fe9-b647-dd7c6a37ef6a 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Lock "266aeb27-7f54-4255-9018-0b6092629b80-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:50:16 np0005480824 nova_compute[260089]: 2025-10-11 03:50:16.336 2 DEBUG oslo_concurrency.lockutils [None req-186ee2cf-b44e-4fe9-b647-dd7c6a37ef6a 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Lock "266aeb27-7f54-4255-9018-0b6092629b80-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:50:16 np0005480824 nova_compute[260089]: 2025-10-11 03:50:16.337 2 INFO nova.compute.manager [None req-186ee2cf-b44e-4fe9-b647-dd7c6a37ef6a 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Terminating instance#033[00m
Oct 10 23:50:16 np0005480824 nova_compute[260089]: 2025-10-11 03:50:16.338 2 DEBUG nova.compute.manager [None req-186ee2cf-b44e-4fe9-b647-dd7c6a37ef6a 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 10 23:50:16 np0005480824 nova_compute[260089]: 2025-10-11 03:50:16.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:50:16 np0005480824 kernel: tapc6b37039-6a (unregistering): left promiscuous mode
Oct 10 23:50:16 np0005480824 NetworkManager[44969]: <info>  [1760154616.4090] device (tapc6b37039-6a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 10 23:50:16 np0005480824 ovn_controller[152667]: 2025-10-11T03:50:16Z|00082|binding|INFO|Releasing lport c6b37039-6a92-4786-8d2b-febe3f3e7716 from this chassis (sb_readonly=0)
Oct 10 23:50:16 np0005480824 ovn_controller[152667]: 2025-10-11T03:50:16Z|00083|binding|INFO|Setting lport c6b37039-6a92-4786-8d2b-febe3f3e7716 down in Southbound
Oct 10 23:50:16 np0005480824 nova_compute[260089]: 2025-10-11 03:50:16.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:50:16 np0005480824 ovn_controller[152667]: 2025-10-11T03:50:16Z|00084|binding|INFO|Removing iface tapc6b37039-6a ovn-installed in OVS
Oct 10 23:50:16 np0005480824 nova_compute[260089]: 2025-10-11 03:50:16.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:50:16 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:50:16.438 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:63:b3 10.100.0.11'], port_security=['fa:16:3e:3f:63:b3 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '266aeb27-7f54-4255-9018-0b6092629b80', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '69ce475b5af645b7b89607f7ecc196d5', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f6b945b3-d5d1-471a-9062-b88150248abb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.217'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f8dd8adb-2052-443b-8fa5-01e320e55d02, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], logical_port=c6b37039-6a92-4786-8d2b-febe3f3e7716) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 23:50:16 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:50:16.441 162245 INFO neutron.agent.ovn.metadata.agent [-] Port c6b37039-6a92-4786-8d2b-febe3f3e7716 in datapath 53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0 unbound from our chassis#033[00m
Oct 10 23:50:16 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:50:16.444 162245 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 10 23:50:16 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:50:16.446 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[40d260a3-e9bf-4615-b0a2-2dd3b69ad104]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:50:16 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:50:16.447 162245 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0 namespace which is not needed anymore#033[00m
Oct 10 23:50:16 np0005480824 nova_compute[260089]: 2025-10-11 03:50:16.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:50:16 np0005480824 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Deactivated successfully.
Oct 10 23:50:16 np0005480824 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Consumed 15.359s CPU time.
Oct 10 23:50:16 np0005480824 nova_compute[260089]: 2025-10-11 03:50:16.516 2 DEBUG oslo_concurrency.lockutils [None req-1345f5c4-b5f9-467e-9766-6f7458c24a28 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Acquiring lock "ade49b15-ded3-459c-b92d-e98380bca4a4" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:50:16 np0005480824 nova_compute[260089]: 2025-10-11 03:50:16.517 2 DEBUG oslo_concurrency.lockutils [None req-1345f5c4-b5f9-467e-9766-6f7458c24a28 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Lock "ade49b15-ded3-459c-b92d-e98380bca4a4" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:50:16 np0005480824 systemd-machined[215071]: Machine qemu-6-instance-00000006 terminated.
Oct 10 23:50:16 np0005480824 nova_compute[260089]: 2025-10-11 03:50:16.536 2 DEBUG nova.objects.instance [None req-1345f5c4-b5f9-467e-9766-6f7458c24a28 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Lazy-loading 'flavor' on Instance uuid ade49b15-ded3-459c-b92d-e98380bca4a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:50:16 np0005480824 nova_compute[260089]: 2025-10-11 03:50:16.554 2 INFO nova.virt.libvirt.driver [None req-1345f5c4-b5f9-467e-9766-6f7458c24a28 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Ignoring supplied device name: /dev/vdb#033[00m
Oct 10 23:50:16 np0005480824 nova_compute[260089]: 2025-10-11 03:50:16.568 2 DEBUG oslo_concurrency.lockutils [None req-1345f5c4-b5f9-467e-9766-6f7458c24a28 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Lock "ade49b15-ded3-459c-b92d-e98380bca4a4" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.052s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:50:16 np0005480824 nova_compute[260089]: 2025-10-11 03:50:16.579 2 INFO nova.virt.libvirt.driver [-] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Instance destroyed successfully.#033[00m
Oct 10 23:50:16 np0005480824 nova_compute[260089]: 2025-10-11 03:50:16.580 2 DEBUG nova.objects.instance [None req-186ee2cf-b44e-4fe9-b647-dd7c6a37ef6a 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Lazy-loading 'resources' on Instance uuid 266aeb27-7f54-4255-9018-0b6092629b80 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:50:16 np0005480824 nova_compute[260089]: 2025-10-11 03:50:16.592 2 DEBUG nova.virt.libvirt.vif [None req-186ee2cf-b44e-4fe9-b647-dd7c6a37ef6a 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T03:49:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-280413383',display_name='tempest-VolumesBackupsTest-instance-280413383',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-280413383',id=6,image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBhgy0EIWp65igZG9TVEs9YGw1G6ZyMcaN2BBBudrmzwVtnKHdxyFsbmVQjmSOPEVBwShmZWx6lQroFGYQCanPMH+jPWV8YBMG0D0qW4UPxkpvOP9Msp1Asd/gfDFNTmzw==',key_name='tempest-keypair-1381245901',keypairs=<?>,launch_index=0,launched_at=2025-10-11T03:49:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='69ce475b5af645b7b89607f7ecc196d5',ramdisk_id='',reservation_id='r-or7yx0nk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesBackupsTest-1570005285',owner_user_name='tempest-VolumesBackupsTest-1570005285-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T03:49:31Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='0dd21dcc2e2e4870bd3a6eb5146bc451',uuid=266aeb27-7f54-4255-9018-0b6092629b80,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c6b37039-6a92-4786-8d2b-febe3f3e7716", "address": "fa:16:3e:3f:63:b3", "network": {"id": "53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-198655629-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "69ce475b5af645b7b89607f7ecc196d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6b37039-6a", "ovs_interfaceid": "c6b37039-6a92-4786-8d2b-febe3f3e7716", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 10 23:50:16 np0005480824 nova_compute[260089]: 2025-10-11 03:50:16.592 2 DEBUG nova.network.os_vif_util [None req-186ee2cf-b44e-4fe9-b647-dd7c6a37ef6a 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Converting VIF {"id": "c6b37039-6a92-4786-8d2b-febe3f3e7716", "address": "fa:16:3e:3f:63:b3", "network": {"id": "53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-198655629-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "69ce475b5af645b7b89607f7ecc196d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6b37039-6a", "ovs_interfaceid": "c6b37039-6a92-4786-8d2b-febe3f3e7716", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 23:50:16 np0005480824 nova_compute[260089]: 2025-10-11 03:50:16.593 2 DEBUG nova.network.os_vif_util [None req-186ee2cf-b44e-4fe9-b647-dd7c6a37ef6a 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3f:63:b3,bridge_name='br-int',has_traffic_filtering=True,id=c6b37039-6a92-4786-8d2b-febe3f3e7716,network=Network(53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6b37039-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 23:50:16 np0005480824 nova_compute[260089]: 2025-10-11 03:50:16.593 2 DEBUG os_vif [None req-186ee2cf-b44e-4fe9-b647-dd7c6a37ef6a 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3f:63:b3,bridge_name='br-int',has_traffic_filtering=True,id=c6b37039-6a92-4786-8d2b-febe3f3e7716,network=Network(53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6b37039-6a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 10 23:50:16 np0005480824 nova_compute[260089]: 2025-10-11 03:50:16.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:50:16 np0005480824 nova_compute[260089]: 2025-10-11 03:50:16.596 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc6b37039-6a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:50:16 np0005480824 nova_compute[260089]: 2025-10-11 03:50:16.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:50:16 np0005480824 nova_compute[260089]: 2025-10-11 03:50:16.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 10 23:50:16 np0005480824 nova_compute[260089]: 2025-10-11 03:50:16.603 2 INFO os_vif [None req-186ee2cf-b44e-4fe9-b647-dd7c6a37ef6a 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3f:63:b3,bridge_name='br-int',has_traffic_filtering=True,id=c6b37039-6a92-4786-8d2b-febe3f3e7716,network=Network(53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6b37039-6a')#033[00m
Oct 10 23:50:16 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:50:16 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2096236514' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:50:16 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1118: 321 pgs: 321 active+clean; 200 MiB data, 358 MiB used, 60 GiB / 60 GiB avail; 770 KiB/s rd, 4.3 MiB/s wr, 275 op/s
Oct 10 23:50:16 np0005480824 neutron-haproxy-ovnmeta-53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0[274289]: [NOTICE]   (274297) : haproxy version is 2.8.14-c23fe91
Oct 10 23:50:16 np0005480824 neutron-haproxy-ovnmeta-53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0[274289]: [NOTICE]   (274297) : path to executable is /usr/sbin/haproxy
Oct 10 23:50:16 np0005480824 neutron-haproxy-ovnmeta-53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0[274289]: [WARNING]  (274297) : Exiting Master process...
Oct 10 23:50:16 np0005480824 neutron-haproxy-ovnmeta-53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0[274289]: [ALERT]    (274297) : Current worker (274301) exited with code 143 (Terminated)
Oct 10 23:50:16 np0005480824 neutron-haproxy-ovnmeta-53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0[274289]: [WARNING]  (274297) : All workers exited. Exiting... (0)
Oct 10 23:50:16 np0005480824 systemd[1]: libpod-babbf34917722bf9c57fdaedd908f1a9c22b9251eb32c834e64fe492b310f708.scope: Deactivated successfully.
Oct 10 23:50:16 np0005480824 podman[275181]: 2025-10-11 03:50:16.64175101 +0000 UTC m=+0.064563597 container died babbf34917722bf9c57fdaedd908f1a9c22b9251eb32c834e64fe492b310f708 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 10 23:50:16 np0005480824 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-babbf34917722bf9c57fdaedd908f1a9c22b9251eb32c834e64fe492b310f708-userdata-shm.mount: Deactivated successfully.
Oct 10 23:50:16 np0005480824 systemd[1]: var-lib-containers-storage-overlay-c404156d618cba5d99ae7e5a5e52f23269730a7c1567e786ddf7484fbab665d6-merged.mount: Deactivated successfully.
Oct 10 23:50:16 np0005480824 podman[275181]: 2025-10-11 03:50:16.697705424 +0000 UTC m=+0.120518001 container cleanup babbf34917722bf9c57fdaedd908f1a9c22b9251eb32c834e64fe492b310f708 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3)
Oct 10 23:50:16 np0005480824 systemd[1]: libpod-conmon-babbf34917722bf9c57fdaedd908f1a9c22b9251eb32c834e64fe492b310f708.scope: Deactivated successfully.
Oct 10 23:50:16 np0005480824 podman[275238]: 2025-10-11 03:50:16.784339878 +0000 UTC m=+0.055142195 container remove babbf34917722bf9c57fdaedd908f1a9c22b9251eb32c834e64fe492b310f708 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3)
Oct 10 23:50:16 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:50:16.792 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[037f8710-e1d4-476d-907e-9d4d31b0ff72]: (4, ('Sat Oct 11 03:50:16 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0 (babbf34917722bf9c57fdaedd908f1a9c22b9251eb32c834e64fe492b310f708)\nbabbf34917722bf9c57fdaedd908f1a9c22b9251eb32c834e64fe492b310f708\nSat Oct 11 03:50:16 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0 (babbf34917722bf9c57fdaedd908f1a9c22b9251eb32c834e64fe492b310f708)\nbabbf34917722bf9c57fdaedd908f1a9c22b9251eb32c834e64fe492b310f708\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:50:16 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:50:16.795 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[e5614406-ecae-40b8-9911-edfa4ef1ec8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:50:16 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:50:16.796 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap53e5ffdf-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:50:16 np0005480824 kernel: tap53e5ffdf-10: left promiscuous mode
Oct 10 23:50:16 np0005480824 nova_compute[260089]: 2025-10-11 03:50:16.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:50:16 np0005480824 nova_compute[260089]: 2025-10-11 03:50:16.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:50:16 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:50:16.824 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[49f485a6-e4b5-4f5b-b1e2-f104dfe3a0a6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:50:16 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:50:16.852 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[d112c16d-b048-4f61-87fc-cd3f7b724bc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:50:16 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:50:16.853 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[2edee318-192f-449b-a676-9d3082725ee2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:50:16 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:50:16.872 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[1e7bda02-e73e-49b4-b49a-59fc2d0648a5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 403798, 'reachable_time': 25293, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275253, 'error': None, 'target': 'ovnmeta-53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:50:16 np0005480824 systemd[1]: run-netns-ovnmeta\x2d53e5ffdf\x2d1a4b\x2d4db5\x2db1e7\x2da9b6a7b01fd0.mount: Deactivated successfully.
Oct 10 23:50:16 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:50:16.876 162666 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 10 23:50:16 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:50:16.877 162666 DEBUG oslo.privsep.daemon [-] privsep: reply[13833201-2b63-4e00-84d6-df160ebec668]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:50:16 np0005480824 nova_compute[260089]: 2025-10-11 03:50:16.980 2 DEBUG oslo_concurrency.lockutils [None req-1345f5c4-b5f9-467e-9766-6f7458c24a28 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Acquiring lock "ade49b15-ded3-459c-b92d-e98380bca4a4" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:50:16 np0005480824 nova_compute[260089]: 2025-10-11 03:50:16.982 2 DEBUG oslo_concurrency.lockutils [None req-1345f5c4-b5f9-467e-9766-6f7458c24a28 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Lock "ade49b15-ded3-459c-b92d-e98380bca4a4" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:50:16 np0005480824 nova_compute[260089]: 2025-10-11 03:50:16.983 2 INFO nova.compute.manager [None req-1345f5c4-b5f9-467e-9766-6f7458c24a28 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Attaching volume 6000e96e-8ce4-4186-92df-f91f8f06d0e7 to /dev/vdb#033[00m
Oct 10 23:50:17 np0005480824 nova_compute[260089]: 2025-10-11 03:50:17.105 2 INFO nova.virt.libvirt.driver [None req-186ee2cf-b44e-4fe9-b647-dd7c6a37ef6a 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Deleting instance files /var/lib/nova/instances/266aeb27-7f54-4255-9018-0b6092629b80_del#033[00m
Oct 10 23:50:17 np0005480824 nova_compute[260089]: 2025-10-11 03:50:17.107 2 INFO nova.virt.libvirt.driver [None req-186ee2cf-b44e-4fe9-b647-dd7c6a37ef6a 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Deletion of /var/lib/nova/instances/266aeb27-7f54-4255-9018-0b6092629b80_del complete#033[00m
Oct 10 23:50:17 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e203 do_prune osdmap full prune enabled
Oct 10 23:50:17 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e204 e204: 3 total, 3 up, 3 in
Oct 10 23:50:17 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e204: 3 total, 3 up, 3 in
Oct 10 23:50:17 np0005480824 nova_compute[260089]: 2025-10-11 03:50:17.156 2 DEBUG nova.compute.manager [req-4a63119e-5fb8-4c9a-9eb4-7b409250daf2 req-39545c54-f17a-489d-9138-1db299e53cc8 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Received event network-vif-unplugged-c6b37039-6a92-4786-8d2b-febe3f3e7716 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:50:17 np0005480824 nova_compute[260089]: 2025-10-11 03:50:17.157 2 DEBUG oslo_concurrency.lockutils [req-4a63119e-5fb8-4c9a-9eb4-7b409250daf2 req-39545c54-f17a-489d-9138-1db299e53cc8 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "266aeb27-7f54-4255-9018-0b6092629b80-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:50:17 np0005480824 nova_compute[260089]: 2025-10-11 03:50:17.157 2 DEBUG oslo_concurrency.lockutils [req-4a63119e-5fb8-4c9a-9eb4-7b409250daf2 req-39545c54-f17a-489d-9138-1db299e53cc8 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "266aeb27-7f54-4255-9018-0b6092629b80-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:50:17 np0005480824 nova_compute[260089]: 2025-10-11 03:50:17.160 2 DEBUG oslo_concurrency.lockutils [req-4a63119e-5fb8-4c9a-9eb4-7b409250daf2 req-39545c54-f17a-489d-9138-1db299e53cc8 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "266aeb27-7f54-4255-9018-0b6092629b80-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:50:17 np0005480824 nova_compute[260089]: 2025-10-11 03:50:17.161 2 DEBUG nova.compute.manager [req-4a63119e-5fb8-4c9a-9eb4-7b409250daf2 req-39545c54-f17a-489d-9138-1db299e53cc8 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] No waiting events found dispatching network-vif-unplugged-c6b37039-6a92-4786-8d2b-febe3f3e7716 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 10 23:50:17 np0005480824 nova_compute[260089]: 2025-10-11 03:50:17.161 2 DEBUG nova.compute.manager [req-4a63119e-5fb8-4c9a-9eb4-7b409250daf2 req-39545c54-f17a-489d-9138-1db299e53cc8 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Received event network-vif-unplugged-c6b37039-6a92-4786-8d2b-febe3f3e7716 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 10 23:50:17 np0005480824 nova_compute[260089]: 2025-10-11 03:50:17.186 2 INFO nova.compute.manager [None req-186ee2cf-b44e-4fe9-b647-dd7c6a37ef6a 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Took 0.85 seconds to destroy the instance on the hypervisor.#033[00m
Oct 10 23:50:17 np0005480824 nova_compute[260089]: 2025-10-11 03:50:17.187 2 DEBUG oslo.service.loopingcall [None req-186ee2cf-b44e-4fe9-b647-dd7c6a37ef6a 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 10 23:50:17 np0005480824 nova_compute[260089]: 2025-10-11 03:50:17.187 2 DEBUG nova.compute.manager [-] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 10 23:50:17 np0005480824 nova_compute[260089]: 2025-10-11 03:50:17.187 2 DEBUG nova.network.neutron [-] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 10 23:50:17 np0005480824 nova_compute[260089]: 2025-10-11 03:50:17.209 2 DEBUG os_brick.utils [None req-1345f5c4-b5f9-467e-9766-6f7458c24a28 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct 10 23:50:17 np0005480824 nova_compute[260089]: 2025-10-11 03:50:17.211 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:50:17 np0005480824 nova_compute[260089]: 2025-10-11 03:50:17.236 676 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.025s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:50:17 np0005480824 nova_compute[260089]: 2025-10-11 03:50:17.237 676 DEBUG oslo.privsep.daemon [-] privsep: reply[35ad689c-ee43-47d0-ac81-4bb19be22aa6]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:50:17 np0005480824 nova_compute[260089]: 2025-10-11 03:50:17.239 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:50:17 np0005480824 nova_compute[260089]: 2025-10-11 03:50:17.255 676 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:50:17 np0005480824 nova_compute[260089]: 2025-10-11 03:50:17.256 676 DEBUG oslo.privsep.daemon [-] privsep: reply[aeb6802b-6d6f-4e52-b5f7-7b5689cd0e0a]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d5d671ddab5a', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:50:17 np0005480824 nova_compute[260089]: 2025-10-11 03:50:17.258 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:50:17 np0005480824 nova_compute[260089]: 2025-10-11 03:50:17.269 676 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:50:17 np0005480824 nova_compute[260089]: 2025-10-11 03:50:17.270 676 DEBUG oslo.privsep.daemon [-] privsep: reply[b37f695c-d535-4efe-8d70-83302d78fd7a]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:50:17 np0005480824 nova_compute[260089]: 2025-10-11 03:50:17.271 676 DEBUG oslo.privsep.daemon [-] privsep: reply[9957159a-7fe3-4579-9d37-2efda07eecaf]: (4, 'fb3a2fb1-9efa-43f0-a057-bf422ac6b8d7') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:50:17 np0005480824 nova_compute[260089]: 2025-10-11 03:50:17.272 2 DEBUG oslo_concurrency.processutils [None req-1345f5c4-b5f9-467e-9766-6f7458c24a28 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:50:17 np0005480824 nova_compute[260089]: 2025-10-11 03:50:17.312 2 DEBUG oslo_concurrency.processutils [None req-1345f5c4-b5f9-467e-9766-6f7458c24a28 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] CMD "nvme version" returned: 0 in 0.040s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:50:17 np0005480824 nova_compute[260089]: 2025-10-11 03:50:17.318 2 DEBUG os_brick.initiator.connectors.lightos [None req-1345f5c4-b5f9-467e-9766-6f7458c24a28 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct 10 23:50:17 np0005480824 nova_compute[260089]: 2025-10-11 03:50:17.318 2 DEBUG os_brick.initiator.connectors.lightos [None req-1345f5c4-b5f9-467e-9766-6f7458c24a28 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct 10 23:50:17 np0005480824 nova_compute[260089]: 2025-10-11 03:50:17.319 2 DEBUG os_brick.initiator.connectors.lightos [None req-1345f5c4-b5f9-467e-9766-6f7458c24a28 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct 10 23:50:17 np0005480824 nova_compute[260089]: 2025-10-11 03:50:17.320 2 DEBUG os_brick.utils [None req-1345f5c4-b5f9-467e-9766-6f7458c24a28 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] <== get_connector_properties: return (109ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d5d671ddab5a', 'do_local_attach': False, 'nvme_hostid': '83042a20-0f72-4c47-8453-e72ead378624', 'system uuid': 'fb3a2fb1-9efa-43f0-a057-bf422ac6b8d7', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct 10 23:50:17 np0005480824 nova_compute[260089]: 2025-10-11 03:50:17.320 2 DEBUG nova.virt.block_device [None req-1345f5c4-b5f9-467e-9766-6f7458c24a28 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Updating existing volume attachment record: c96405ca-a46d-4de1-89dc-6a55cd3d86de _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct 10 23:50:17 np0005480824 nova_compute[260089]: 2025-10-11 03:50:17.466 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:50:17 np0005480824 nova_compute[260089]: 2025-10-11 03:50:17.493 2 WARNING nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] While synchronizing instance power states, found 2 instances in the database and 1 instances on the hypervisor.#033[00m
Oct 10 23:50:17 np0005480824 nova_compute[260089]: 2025-10-11 03:50:17.493 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Triggering sync for uuid 266aeb27-7f54-4255-9018-0b6092629b80 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct 10 23:50:17 np0005480824 nova_compute[260089]: 2025-10-11 03:50:17.494 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Triggering sync for uuid ade49b15-ded3-459c-b92d-e98380bca4a4 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct 10 23:50:17 np0005480824 nova_compute[260089]: 2025-10-11 03:50:17.494 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "266aeb27-7f54-4255-9018-0b6092629b80" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:50:17 np0005480824 nova_compute[260089]: 2025-10-11 03:50:17.495 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "ade49b15-ded3-459c-b92d-e98380bca4a4" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:50:17 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:50:17 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1169552384' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:50:17 np0005480824 nova_compute[260089]: 2025-10-11 03:50:17.953 2 DEBUG nova.network.neutron [-] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:50:17 np0005480824 nova_compute[260089]: 2025-10-11 03:50:17.972 2 INFO nova.compute.manager [-] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Took 0.78 seconds to deallocate network for instance.#033[00m
Oct 10 23:50:18 np0005480824 nova_compute[260089]: 2025-10-11 03:50:18.022 2 DEBUG oslo_concurrency.lockutils [None req-186ee2cf-b44e-4fe9-b647-dd7c6a37ef6a 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:50:18 np0005480824 nova_compute[260089]: 2025-10-11 03:50:18.023 2 DEBUG oslo_concurrency.lockutils [None req-186ee2cf-b44e-4fe9-b647-dd7c6a37ef6a 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:50:18 np0005480824 nova_compute[260089]: 2025-10-11 03:50:18.060 2 DEBUG nova.objects.instance [None req-1345f5c4-b5f9-467e-9766-6f7458c24a28 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Lazy-loading 'flavor' on Instance uuid ade49b15-ded3-459c-b92d-e98380bca4a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:50:18 np0005480824 nova_compute[260089]: 2025-10-11 03:50:18.087 2 DEBUG nova.virt.libvirt.driver [None req-1345f5c4-b5f9-467e-9766-6f7458c24a28 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Attempting to attach volume 6000e96e-8ce4-4186-92df-f91f8f06d0e7 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Oct 10 23:50:18 np0005480824 nova_compute[260089]: 2025-10-11 03:50:18.091 2 DEBUG nova.virt.libvirt.guest [None req-1345f5c4-b5f9-467e-9766-6f7458c24a28 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] attach device xml: <disk type="network" device="disk">
Oct 10 23:50:18 np0005480824 nova_compute[260089]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 10 23:50:18 np0005480824 nova_compute[260089]:  <source protocol="rbd" name="volumes/volume-6000e96e-8ce4-4186-92df-f91f8f06d0e7">
Oct 10 23:50:18 np0005480824 nova_compute[260089]:    <host name="192.168.122.100" port="6789"/>
Oct 10 23:50:18 np0005480824 nova_compute[260089]:  </source>
Oct 10 23:50:18 np0005480824 nova_compute[260089]:  <auth username="openstack">
Oct 10 23:50:18 np0005480824 nova_compute[260089]:    <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 10 23:50:18 np0005480824 nova_compute[260089]:  </auth>
Oct 10 23:50:18 np0005480824 nova_compute[260089]:  <target dev="vdb" bus="virtio"/>
Oct 10 23:50:18 np0005480824 nova_compute[260089]:  <serial>6000e96e-8ce4-4186-92df-f91f8f06d0e7</serial>
Oct 10 23:50:18 np0005480824 nova_compute[260089]: </disk>
Oct 10 23:50:18 np0005480824 nova_compute[260089]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct 10 23:50:18 np0005480824 nova_compute[260089]: 2025-10-11 03:50:18.112 2 DEBUG oslo_concurrency.processutils [None req-186ee2cf-b44e-4fe9-b647-dd7c6a37ef6a 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:50:18 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e204 do_prune osdmap full prune enabled
Oct 10 23:50:18 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e205 e205: 3 total, 3 up, 3 in
Oct 10 23:50:18 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e205: 3 total, 3 up, 3 in
Oct 10 23:50:18 np0005480824 nova_compute[260089]: 2025-10-11 03:50:18.264 2 DEBUG nova.virt.libvirt.driver [None req-1345f5c4-b5f9-467e-9766-6f7458c24a28 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 10 23:50:18 np0005480824 nova_compute[260089]: 2025-10-11 03:50:18.265 2 DEBUG nova.virt.libvirt.driver [None req-1345f5c4-b5f9-467e-9766-6f7458c24a28 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 10 23:50:18 np0005480824 nova_compute[260089]: 2025-10-11 03:50:18.266 2 DEBUG nova.virt.libvirt.driver [None req-1345f5c4-b5f9-467e-9766-6f7458c24a28 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 10 23:50:18 np0005480824 nova_compute[260089]: 2025-10-11 03:50:18.266 2 DEBUG nova.virt.libvirt.driver [None req-1345f5c4-b5f9-467e-9766-6f7458c24a28 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] No VIF found with MAC fa:16:3e:23:e8:5f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 10 23:50:18 np0005480824 nova_compute[260089]: 2025-10-11 03:50:18.463 2 DEBUG oslo_concurrency.lockutils [None req-1345f5c4-b5f9-467e-9766-6f7458c24a28 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Lock "ade49b15-ded3-459c-b92d-e98380bca4a4" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.481s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 23:50:18 np0005480824 nova_compute[260089]: 2025-10-11 03:50:18.464 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "ade49b15-ded3-459c-b92d-e98380bca4a4" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.969s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 23:50:18 np0005480824 nova_compute[260089]: 2025-10-11 03:50:18.510 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "ade49b15-ded3-459c-b92d-e98380bca4a4" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.046s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 23:50:18 np0005480824 nova_compute[260089]: 2025-10-11 03:50:18.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 23:50:18 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:50:18 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/164280479' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:50:18 np0005480824 nova_compute[260089]: 2025-10-11 03:50:18.605 2 DEBUG oslo_concurrency.processutils [None req-186ee2cf-b44e-4fe9-b647-dd7c6a37ef6a 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 23:50:18 np0005480824 nova_compute[260089]: 2025-10-11 03:50:18.611 2 DEBUG nova.compute.provider_tree [None req-186ee2cf-b44e-4fe9-b647-dd7c6a37ef6a 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 23:50:18 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1121: 321 pgs: 321 active+clean; 121 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 1.2 MiB/s wr, 180 op/s
Oct 10 23:50:18 np0005480824 nova_compute[260089]: 2025-10-11 03:50:18.629 2 DEBUG nova.scheduler.client.report [None req-186ee2cf-b44e-4fe9-b647-dd7c6a37ef6a 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 23:50:18 np0005480824 nova_compute[260089]: 2025-10-11 03:50:18.648 2 DEBUG oslo_concurrency.lockutils [None req-186ee2cf-b44e-4fe9-b647-dd7c6a37ef6a 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.625s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 23:50:18 np0005480824 nova_compute[260089]: 2025-10-11 03:50:18.667 2 INFO nova.scheduler.client.report [None req-186ee2cf-b44e-4fe9-b647-dd7c6a37ef6a 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Deleted allocations for instance 266aeb27-7f54-4255-9018-0b6092629b80
Oct 10 23:50:18 np0005480824 nova_compute[260089]: 2025-10-11 03:50:18.724 2 DEBUG oslo_concurrency.lockutils [None req-186ee2cf-b44e-4fe9-b647-dd7c6a37ef6a 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Lock "266aeb27-7f54-4255-9018-0b6092629b80" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.389s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 23:50:18 np0005480824 nova_compute[260089]: 2025-10-11 03:50:18.726 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "266aeb27-7f54-4255-9018-0b6092629b80" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 1.231s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 23:50:18 np0005480824 nova_compute[260089]: 2025-10-11 03:50:18.726 2 INFO nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] During sync_power_state the instance has a pending task (deleting). Skip.
Oct 10 23:50:18 np0005480824 nova_compute[260089]: 2025-10-11 03:50:18.726 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "266aeb27-7f54-4255-9018-0b6092629b80" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 23:50:19 np0005480824 nova_compute[260089]: 2025-10-11 03:50:19.247 2 DEBUG nova.compute.manager [req-a8b044a0-4b8c-4788-a60b-092134a688ae req-1521845e-bea1-4bf0-bc72-dbefe59c67f0 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Received event network-vif-plugged-c6b37039-6a92-4786-8d2b-febe3f3e7716 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 23:50:19 np0005480824 nova_compute[260089]: 2025-10-11 03:50:19.247 2 DEBUG oslo_concurrency.lockutils [req-a8b044a0-4b8c-4788-a60b-092134a688ae req-1521845e-bea1-4bf0-bc72-dbefe59c67f0 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "266aeb27-7f54-4255-9018-0b6092629b80-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 23:50:19 np0005480824 nova_compute[260089]: 2025-10-11 03:50:19.248 2 DEBUG oslo_concurrency.lockutils [req-a8b044a0-4b8c-4788-a60b-092134a688ae req-1521845e-bea1-4bf0-bc72-dbefe59c67f0 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "266aeb27-7f54-4255-9018-0b6092629b80-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 23:50:19 np0005480824 nova_compute[260089]: 2025-10-11 03:50:19.248 2 DEBUG oslo_concurrency.lockutils [req-a8b044a0-4b8c-4788-a60b-092134a688ae req-1521845e-bea1-4bf0-bc72-dbefe59c67f0 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "266aeb27-7f54-4255-9018-0b6092629b80-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 23:50:19 np0005480824 nova_compute[260089]: 2025-10-11 03:50:19.248 2 DEBUG nova.compute.manager [req-a8b044a0-4b8c-4788-a60b-092134a688ae req-1521845e-bea1-4bf0-bc72-dbefe59c67f0 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] No waiting events found dispatching network-vif-plugged-c6b37039-6a92-4786-8d2b-febe3f3e7716 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 10 23:50:19 np0005480824 nova_compute[260089]: 2025-10-11 03:50:19.248 2 WARNING nova.compute.manager [req-a8b044a0-4b8c-4788-a60b-092134a688ae req-1521845e-bea1-4bf0-bc72-dbefe59c67f0 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Received unexpected event network-vif-plugged-c6b37039-6a92-4786-8d2b-febe3f3e7716 for instance with vm_state deleted and task_state None.
Oct 10 23:50:19 np0005480824 nova_compute[260089]: 2025-10-11 03:50:19.248 2 DEBUG nova.compute.manager [req-a8b044a0-4b8c-4788-a60b-092134a688ae req-1521845e-bea1-4bf0-bc72-dbefe59c67f0 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Received event network-vif-deleted-c6b37039-6a92-4786-8d2b-febe3f3e7716 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 23:50:19 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e205 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:50:19 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e205 do_prune osdmap full prune enabled
Oct 10 23:50:19 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e206 e206: 3 total, 3 up, 3 in
Oct 10 23:50:19 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e206: 3 total, 3 up, 3 in
Oct 10 23:50:20 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e206 do_prune osdmap full prune enabled
Oct 10 23:50:20 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e207 e207: 3 total, 3 up, 3 in
Oct 10 23:50:20 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e207: 3 total, 3 up, 3 in
Oct 10 23:50:20 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1124: 321 pgs: 321 active+clean; 121 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 103 KiB/s rd, 55 KiB/s wr, 150 op/s
Oct 10 23:50:21 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:50:21 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3738183759' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:50:21 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:50:21 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3738183759' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:50:21 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e207 do_prune osdmap full prune enabled
Oct 10 23:50:21 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e208 e208: 3 total, 3 up, 3 in
Oct 10 23:50:21 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e208: 3 total, 3 up, 3 in
Oct 10 23:50:21 np0005480824 nova_compute[260089]: 2025-10-11 03:50:21.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 23:50:22 np0005480824 podman[275305]: 2025-10-11 03:50:22.040283668 +0000 UTC m=+0.080890570 container health_status f2b19cad22d0fbdd185b264c3c8b6443d09a02319dc9fb0585dc69a0f24a758d (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009)
Oct 10 23:50:22 np0005480824 podman[275304]: 2025-10-11 03:50:22.050791974 +0000 UTC m=+0.096788253 container health_status 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Oct 10 23:50:22 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e208 do_prune osdmap full prune enabled
Oct 10 23:50:22 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e209 e209: 3 total, 3 up, 3 in
Oct 10 23:50:22 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e209: 3 total, 3 up, 3 in
Oct 10 23:50:22 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1127: 321 pgs: 321 active+clean; 121 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s rd, 2.7 KiB/s wr, 85 op/s
Oct 10 23:50:23 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:50:23 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/738312303' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:50:23 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:50:23 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/738312303' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:50:23 np0005480824 nova_compute[260089]: 2025-10-11 03:50:23.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 23:50:23 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e209 do_prune osdmap full prune enabled
Oct 10 23:50:23 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e210 e210: 3 total, 3 up, 3 in
Oct 10 23:50:23 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e210: 3 total, 3 up, 3 in
Oct 10 23:50:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:50:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:50:24 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2349397496' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:50:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:50:24 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2349397496' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:50:24 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1129: 321 pgs: 321 active+clean; 121 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 188 KiB/s rd, 9.1 KiB/s wr, 250 op/s
Oct 10 23:50:25 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e210 do_prune osdmap full prune enabled
Oct 10 23:50:25 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e211 e211: 3 total, 3 up, 3 in
Oct 10 23:50:25 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e211: 3 total, 3 up, 3 in
Oct 10 23:50:26 np0005480824 nova_compute[260089]: 2025-10-11 03:50:26.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 23:50:26 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1131: 321 pgs: 321 active+clean; 121 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 151 KiB/s rd, 7.3 KiB/s wr, 201 op/s
Oct 10 23:50:26 np0005480824 nova_compute[260089]: 2025-10-11 03:50:26.699 2 DEBUG oslo_concurrency.lockutils [None req-078093ea-a1b4-4fb9-8cf3-140eab559c82 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Acquiring lock "ade49b15-ded3-459c-b92d-e98380bca4a4" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 23:50:26 np0005480824 nova_compute[260089]: 2025-10-11 03:50:26.700 2 DEBUG oslo_concurrency.lockutils [None req-078093ea-a1b4-4fb9-8cf3-140eab559c82 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Lock "ade49b15-ded3-459c-b92d-e98380bca4a4" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 23:50:26 np0005480824 nova_compute[260089]: 2025-10-11 03:50:26.726 2 INFO nova.compute.manager [None req-078093ea-a1b4-4fb9-8cf3-140eab559c82 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Detaching volume 6000e96e-8ce4-4186-92df-f91f8f06d0e7
Oct 10 23:50:26 np0005480824 nova_compute[260089]: 2025-10-11 03:50:26.874 2 DEBUG oslo_concurrency.lockutils [None req-8e5a8696-1eee-4835-9dd5-7ca6db601fd6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Acquiring lock "ade49b15-ded3-459c-b92d-e98380bca4a4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 23:50:26 np0005480824 nova_compute[260089]: 2025-10-11 03:50:26.944 2 INFO nova.virt.block_device [None req-078093ea-a1b4-4fb9-8cf3-140eab559c82 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Attempting to driver detach volume 6000e96e-8ce4-4186-92df-f91f8f06d0e7 from mountpoint /dev/vdb
Oct 10 23:50:26 np0005480824 nova_compute[260089]: 2025-10-11 03:50:26.958 2 DEBUG nova.virt.libvirt.driver [None req-078093ea-a1b4-4fb9-8cf3-140eab559c82 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Attempting to detach device vdb from instance ade49b15-ded3-459c-b92d-e98380bca4a4 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 10 23:50:26 np0005480824 nova_compute[260089]: 2025-10-11 03:50:26.959 2 DEBUG nova.virt.libvirt.guest [None req-078093ea-a1b4-4fb9-8cf3-140eab559c82 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] detach device xml: <disk type="network" device="disk">
Oct 10 23:50:26 np0005480824 nova_compute[260089]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 10 23:50:26 np0005480824 nova_compute[260089]:  <source protocol="rbd" name="volumes/volume-6000e96e-8ce4-4186-92df-f91f8f06d0e7">
Oct 10 23:50:26 np0005480824 nova_compute[260089]:    <host name="192.168.122.100" port="6789"/>
Oct 10 23:50:26 np0005480824 nova_compute[260089]:  </source>
Oct 10 23:50:26 np0005480824 nova_compute[260089]:  <target dev="vdb" bus="virtio"/>
Oct 10 23:50:26 np0005480824 nova_compute[260089]:  <serial>6000e96e-8ce4-4186-92df-f91f8f06d0e7</serial>
Oct 10 23:50:26 np0005480824 nova_compute[260089]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 10 23:50:26 np0005480824 nova_compute[260089]: </disk>
Oct 10 23:50:26 np0005480824 nova_compute[260089]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 10 23:50:26 np0005480824 nova_compute[260089]: 2025-10-11 03:50:26.972 2 INFO nova.virt.libvirt.driver [None req-078093ea-a1b4-4fb9-8cf3-140eab559c82 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Successfully detached device vdb from instance ade49b15-ded3-459c-b92d-e98380bca4a4 from the persistent domain config.
Oct 10 23:50:26 np0005480824 nova_compute[260089]: 2025-10-11 03:50:26.973 2 DEBUG nova.virt.libvirt.driver [None req-078093ea-a1b4-4fb9-8cf3-140eab559c82 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance ade49b15-ded3-459c-b92d-e98380bca4a4 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 10 23:50:26 np0005480824 nova_compute[260089]: 2025-10-11 03:50:26.974 2 DEBUG nova.virt.libvirt.guest [None req-078093ea-a1b4-4fb9-8cf3-140eab559c82 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] detach device xml: <disk type="network" device="disk">
Oct 10 23:50:26 np0005480824 nova_compute[260089]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 10 23:50:26 np0005480824 nova_compute[260089]:  <source protocol="rbd" name="volumes/volume-6000e96e-8ce4-4186-92df-f91f8f06d0e7">
Oct 10 23:50:26 np0005480824 nova_compute[260089]:    <host name="192.168.122.100" port="6789"/>
Oct 10 23:50:26 np0005480824 nova_compute[260089]:  </source>
Oct 10 23:50:26 np0005480824 nova_compute[260089]:  <target dev="vdb" bus="virtio"/>
Oct 10 23:50:26 np0005480824 nova_compute[260089]:  <serial>6000e96e-8ce4-4186-92df-f91f8f06d0e7</serial>
Oct 10 23:50:26 np0005480824 nova_compute[260089]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 10 23:50:26 np0005480824 nova_compute[260089]: </disk>
Oct 10 23:50:26 np0005480824 nova_compute[260089]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 10 23:50:27 np0005480824 nova_compute[260089]: 2025-10-11 03:50:27.030 2 DEBUG nova.virt.libvirt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Received event <DeviceRemovedEvent: 1760154627.0299351, ade49b15-ded3-459c-b92d-e98380bca4a4 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 10 23:50:27 np0005480824 nova_compute[260089]: 2025-10-11 03:50:27.032 2 DEBUG nova.virt.libvirt.driver [None req-078093ea-a1b4-4fb9-8cf3-140eab559c82 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance ade49b15-ded3-459c-b92d-e98380bca4a4 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 10 23:50:27 np0005480824 nova_compute[260089]: 2025-10-11 03:50:27.034 2 INFO nova.virt.libvirt.driver [None req-078093ea-a1b4-4fb9-8cf3-140eab559c82 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Successfully detached device vdb from instance ade49b15-ded3-459c-b92d-e98380bca4a4 from the live domain config.
Oct 10 23:50:27 np0005480824 nova_compute[260089]: 2025-10-11 03:50:27.238 2 DEBUG nova.objects.instance [None req-078093ea-a1b4-4fb9-8cf3-140eab559c82 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Lazy-loading 'flavor' on Instance uuid ade49b15-ded3-459c-b92d-e98380bca4a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 10 23:50:27 np0005480824 nova_compute[260089]: 2025-10-11 03:50:27.276 2 DEBUG oslo_concurrency.lockutils [None req-078093ea-a1b4-4fb9-8cf3-140eab559c82 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Lock "ade49b15-ded3-459c-b92d-e98380bca4a4" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.576s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 23:50:27 np0005480824 nova_compute[260089]: 2025-10-11 03:50:27.277 2 DEBUG oslo_concurrency.lockutils [None req-8e5a8696-1eee-4835-9dd5-7ca6db601fd6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Lock "ade49b15-ded3-459c-b92d-e98380bca4a4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.403s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 23:50:27 np0005480824 nova_compute[260089]: 2025-10-11 03:50:27.277 2 DEBUG oslo_concurrency.lockutils [None req-8e5a8696-1eee-4835-9dd5-7ca6db601fd6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Acquiring lock "ade49b15-ded3-459c-b92d-e98380bca4a4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 23:50:27 np0005480824 nova_compute[260089]: 2025-10-11 03:50:27.278 2 DEBUG oslo_concurrency.lockutils [None req-8e5a8696-1eee-4835-9dd5-7ca6db601fd6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Lock "ade49b15-ded3-459c-b92d-e98380bca4a4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 23:50:27 np0005480824 nova_compute[260089]: 2025-10-11 03:50:27.278 2 DEBUG oslo_concurrency.lockutils [None req-8e5a8696-1eee-4835-9dd5-7ca6db601fd6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Lock "ade49b15-ded3-459c-b92d-e98380bca4a4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 23:50:27 np0005480824 nova_compute[260089]: 2025-10-11 03:50:27.279 2 INFO nova.compute.manager [None req-8e5a8696-1eee-4835-9dd5-7ca6db601fd6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Terminating instance
Oct 10 23:50:27 np0005480824 nova_compute[260089]: 2025-10-11 03:50:27.280 2 DEBUG nova.compute.manager [None req-8e5a8696-1eee-4835-9dd5-7ca6db601fd6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 10 23:50:27 np0005480824 nova_compute[260089]: 2025-10-11 03:50:27.325 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 23:50:27 np0005480824 kernel: tapa8669959-69 (unregistering): left promiscuous mode
Oct 10 23:50:27 np0005480824 NetworkManager[44969]: <info>  [1760154627.3442] device (tapa8669959-69): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 10 23:50:27 np0005480824 ovn_controller[152667]: 2025-10-11T03:50:27Z|00085|binding|INFO|Releasing lport a8669959-69cb-4e7c-b708-25e90497b585 from this chassis (sb_readonly=0)
Oct 10 23:50:27 np0005480824 ovn_controller[152667]: 2025-10-11T03:50:27Z|00086|binding|INFO|Setting lport a8669959-69cb-4e7c-b708-25e90497b585 down in Southbound
Oct 10 23:50:27 np0005480824 ovn_controller[152667]: 2025-10-11T03:50:27Z|00087|binding|INFO|Removing iface tapa8669959-69 ovn-installed in OVS
Oct 10 23:50:27 np0005480824 nova_compute[260089]: 2025-10-11 03:50:27.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 23:50:27 np0005480824 nova_compute[260089]: 2025-10-11 03:50:27.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 23:50:27 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:50:27.369 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:23:e8:5f 10.100.0.14'], port_security=['fa:16:3e:23:e8:5f 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'ade49b15-ded3-459c-b92d-e98380bca4a4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ea784d9f-5fea-4b2f-8a0a-4232f32d0fff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '633027d5948949cdb842dbb20e321e57', 'neutron:revision_number': '4', 'neutron:security_group_ids': '876c3a36-fc02-41b6-9ce4-5e10b6cd49ce', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.178'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8373a8b6-48b7-4c53-8c59-c606fca3db1d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], logical_port=a8669959-69cb-4e7c-b708-25e90497b585) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 10 23:50:27 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:50:27.370 162245 INFO neutron.agent.ovn.metadata.agent [-] Port a8669959-69cb-4e7c-b708-25e90497b585 in datapath ea784d9f-5fea-4b2f-8a0a-4232f32d0fff unbound from our chassis
Oct 10 23:50:27 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:50:27.371 162245 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ea784d9f-5fea-4b2f-8a0a-4232f32d0fff, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 10 23:50:27 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:50:27.373 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[20a08bdb-7026-441f-9376-e5465c2e9dbf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 23:50:27 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:50:27.373 162245 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ea784d9f-5fea-4b2f-8a0a-4232f32d0fff namespace which is not needed anymore
Oct 10 23:50:27 np0005480824 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Deactivated successfully.
Oct 10 23:50:27 np0005480824 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Consumed 14.402s CPU time.
Oct 10 23:50:27 np0005480824 nova_compute[260089]: 2025-10-11 03:50:27.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:50:27 np0005480824 systemd-machined[215071]: Machine qemu-7-instance-00000007 terminated.
Oct 10 23:50:27 np0005480824 nova_compute[260089]: 2025-10-11 03:50:27.533 2 INFO nova.virt.libvirt.driver [-] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Instance destroyed successfully.#033[00m
Oct 10 23:50:27 np0005480824 nova_compute[260089]: 2025-10-11 03:50:27.534 2 DEBUG nova.objects.instance [None req-8e5a8696-1eee-4835-9dd5-7ca6db601fd6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Lazy-loading 'resources' on Instance uuid ade49b15-ded3-459c-b92d-e98380bca4a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:50:27 np0005480824 nova_compute[260089]: 2025-10-11 03:50:27.579 2 DEBUG nova.virt.libvirt.vif [None req-8e5a8696-1eee-4835-9dd5-7ca6db601fd6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T03:49:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-1978752910',display_name='tempest-VolumesSnapshotTestJSON-instance-1978752910',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-1978752910',id=7,image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMU8TCWLsVgu9X2ftp+Ng4IlphsYIWSdZw5JSMp7bjp02XLW3tAVs9W9/OXkfeMr9/+RjE/RYUYyzgUoj2YF/yumt6KiJd52M+1yL9i3IcErJEAiSBWGAJXyrEDA+yRBvw==',key_name='tempest-keypair-1420679357',keypairs=<?>,launch_index=0,launched_at=2025-10-11T03:49:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='633027d5948949cdb842dbb20e321e57',ramdisk_id='',reservation_id='r-d06lmbi6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesSnapshotTestJSON-62208921',owner_user_name='tempest-VolumesSnapshotTestJSON-62208921-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T03:49:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d2bb1c00b7ba4686bb710314548ea5af',uuid=ade49b15-ded3-459c-b92d-e98380bca4a4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a8669959-69cb-4e7c-b708-25e90497b585", "address": "fa:16:3e:23:e8:5f", "network": {"id": "ea784d9f-5fea-4b2f-8a0a-4232f32d0fff", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1590358830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "633027d5948949cdb842dbb20e321e57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8669959-69", "ovs_interfaceid": "a8669959-69cb-4e7c-b708-25e90497b585", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 10 23:50:27 np0005480824 nova_compute[260089]: 2025-10-11 03:50:27.579 2 DEBUG nova.network.os_vif_util [None req-8e5a8696-1eee-4835-9dd5-7ca6db601fd6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Converting VIF {"id": "a8669959-69cb-4e7c-b708-25e90497b585", "address": "fa:16:3e:23:e8:5f", "network": {"id": "ea784d9f-5fea-4b2f-8a0a-4232f32d0fff", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1590358830-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "633027d5948949cdb842dbb20e321e57", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8669959-69", "ovs_interfaceid": "a8669959-69cb-4e7c-b708-25e90497b585", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 23:50:27 np0005480824 nova_compute[260089]: 2025-10-11 03:50:27.580 2 DEBUG nova.network.os_vif_util [None req-8e5a8696-1eee-4835-9dd5-7ca6db601fd6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:23:e8:5f,bridge_name='br-int',has_traffic_filtering=True,id=a8669959-69cb-4e7c-b708-25e90497b585,network=Network(ea784d9f-5fea-4b2f-8a0a-4232f32d0fff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8669959-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 23:50:27 np0005480824 nova_compute[260089]: 2025-10-11 03:50:27.580 2 DEBUG os_vif [None req-8e5a8696-1eee-4835-9dd5-7ca6db601fd6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:23:e8:5f,bridge_name='br-int',has_traffic_filtering=True,id=a8669959-69cb-4e7c-b708-25e90497b585,network=Network(ea784d9f-5fea-4b2f-8a0a-4232f32d0fff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8669959-69') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 10 23:50:27 np0005480824 nova_compute[260089]: 2025-10-11 03:50:27.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:50:27 np0005480824 nova_compute[260089]: 2025-10-11 03:50:27.585 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa8669959-69, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:50:27 np0005480824 nova_compute[260089]: 2025-10-11 03:50:27.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:50:27 np0005480824 nova_compute[260089]: 2025-10-11 03:50:27.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:50:27 np0005480824 neutron-haproxy-ovnmeta-ea784d9f-5fea-4b2f-8a0a-4232f32d0fff[275092]: [NOTICE]   (275096) : haproxy version is 2.8.14-c23fe91
Oct 10 23:50:27 np0005480824 neutron-haproxy-ovnmeta-ea784d9f-5fea-4b2f-8a0a-4232f32d0fff[275092]: [NOTICE]   (275096) : path to executable is /usr/sbin/haproxy
Oct 10 23:50:27 np0005480824 neutron-haproxy-ovnmeta-ea784d9f-5fea-4b2f-8a0a-4232f32d0fff[275092]: [WARNING]  (275096) : Exiting Master process...
Oct 10 23:50:27 np0005480824 neutron-haproxy-ovnmeta-ea784d9f-5fea-4b2f-8a0a-4232f32d0fff[275092]: [WARNING]  (275096) : Exiting Master process...
Oct 10 23:50:27 np0005480824 nova_compute[260089]: 2025-10-11 03:50:27.597 2 INFO os_vif [None req-8e5a8696-1eee-4835-9dd5-7ca6db601fd6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:23:e8:5f,bridge_name='br-int',has_traffic_filtering=True,id=a8669959-69cb-4e7c-b708-25e90497b585,network=Network(ea784d9f-5fea-4b2f-8a0a-4232f32d0fff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8669959-69')#033[00m
Oct 10 23:50:27 np0005480824 neutron-haproxy-ovnmeta-ea784d9f-5fea-4b2f-8a0a-4232f32d0fff[275092]: [ALERT]    (275096) : Current worker (275098) exited with code 143 (Terminated)
Oct 10 23:50:27 np0005480824 neutron-haproxy-ovnmeta-ea784d9f-5fea-4b2f-8a0a-4232f32d0fff[275092]: [WARNING]  (275096) : All workers exited. Exiting... (0)
Oct 10 23:50:27 np0005480824 systemd[1]: libpod-6f91355e0af036f70431a16a047497a73a5d7f41ecb4fde2f7d533eb5221298b.scope: Deactivated successfully.
Oct 10 23:50:27 np0005480824 podman[275381]: 2025-10-11 03:50:27.606157605 +0000 UTC m=+0.060158154 container died 6f91355e0af036f70431a16a047497a73a5d7f41ecb4fde2f7d533eb5221298b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ea784d9f-5fea-4b2f-8a0a-4232f32d0fff, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009)
Oct 10 23:50:27 np0005480824 nova_compute[260089]: 2025-10-11 03:50:27.646 2 DEBUG nova.compute.manager [req-d6c92987-9c7d-4f8f-adad-40961bc9135b req-b8b83e06-5643-4149-bd16-3023474c4727 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Received event network-vif-unplugged-a8669959-69cb-4e7c-b708-25e90497b585 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:50:27 np0005480824 nova_compute[260089]: 2025-10-11 03:50:27.647 2 DEBUG oslo_concurrency.lockutils [req-d6c92987-9c7d-4f8f-adad-40961bc9135b req-b8b83e06-5643-4149-bd16-3023474c4727 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "ade49b15-ded3-459c-b92d-e98380bca4a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:50:27 np0005480824 nova_compute[260089]: 2025-10-11 03:50:27.647 2 DEBUG oslo_concurrency.lockutils [req-d6c92987-9c7d-4f8f-adad-40961bc9135b req-b8b83e06-5643-4149-bd16-3023474c4727 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "ade49b15-ded3-459c-b92d-e98380bca4a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:50:27 np0005480824 nova_compute[260089]: 2025-10-11 03:50:27.647 2 DEBUG oslo_concurrency.lockutils [req-d6c92987-9c7d-4f8f-adad-40961bc9135b req-b8b83e06-5643-4149-bd16-3023474c4727 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "ade49b15-ded3-459c-b92d-e98380bca4a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:50:27 np0005480824 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6f91355e0af036f70431a16a047497a73a5d7f41ecb4fde2f7d533eb5221298b-userdata-shm.mount: Deactivated successfully.
Oct 10 23:50:27 np0005480824 nova_compute[260089]: 2025-10-11 03:50:27.647 2 DEBUG nova.compute.manager [req-d6c92987-9c7d-4f8f-adad-40961bc9135b req-b8b83e06-5643-4149-bd16-3023474c4727 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] No waiting events found dispatching network-vif-unplugged-a8669959-69cb-4e7c-b708-25e90497b585 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 10 23:50:27 np0005480824 nova_compute[260089]: 2025-10-11 03:50:27.648 2 DEBUG nova.compute.manager [req-d6c92987-9c7d-4f8f-adad-40961bc9135b req-b8b83e06-5643-4149-bd16-3023474c4727 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Received event network-vif-unplugged-a8669959-69cb-4e7c-b708-25e90497b585 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 10 23:50:27 np0005480824 systemd[1]: var-lib-containers-storage-overlay-5d6e727cbdc95d1bd4701ae22a951ead4506e73a01a132a47fc1b6b53095d069-merged.mount: Deactivated successfully.
Oct 10 23:50:27 np0005480824 podman[275381]: 2025-10-11 03:50:27.655578275 +0000 UTC m=+0.109578824 container cleanup 6f91355e0af036f70431a16a047497a73a5d7f41ecb4fde2f7d533eb5221298b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ea784d9f-5fea-4b2f-8a0a-4232f32d0fff, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3)
Oct 10 23:50:27 np0005480824 nova_compute[260089]: 2025-10-11 03:50:27.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:50:27 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:50:27.698 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:30:f4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'fe:89:7c:57:3f:71'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 23:50:27 np0005480824 systemd[1]: libpod-conmon-6f91355e0af036f70431a16a047497a73a5d7f41ecb4fde2f7d533eb5221298b.scope: Deactivated successfully.
Oct 10 23:50:27 np0005480824 podman[275427]: 2025-10-11 03:50:27.746725886 +0000 UTC m=+0.062676743 container remove 6f91355e0af036f70431a16a047497a73a5d7f41ecb4fde2f7d533eb5221298b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ea784d9f-5fea-4b2f-8a0a-4232f32d0fff, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 10 23:50:27 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:50:27.755 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[c1cfbd84-8530-4547-9451-38f69d1881ae]: (4, ('Sat Oct 11 03:50:27 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ea784d9f-5fea-4b2f-8a0a-4232f32d0fff (6f91355e0af036f70431a16a047497a73a5d7f41ecb4fde2f7d533eb5221298b)\n6f91355e0af036f70431a16a047497a73a5d7f41ecb4fde2f7d533eb5221298b\nSat Oct 11 03:50:27 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ea784d9f-5fea-4b2f-8a0a-4232f32d0fff (6f91355e0af036f70431a16a047497a73a5d7f41ecb4fde2f7d533eb5221298b)\n6f91355e0af036f70431a16a047497a73a5d7f41ecb4fde2f7d533eb5221298b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:50:27 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:50:27.757 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[e10e356f-2a09-4f00-ae65-8f2179273576]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:50:27 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:50:27.758 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapea784d9f-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:50:27 np0005480824 kernel: tapea784d9f-50: left promiscuous mode
Oct 10 23:50:27 np0005480824 nova_compute[260089]: 2025-10-11 03:50:27.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:50:27 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:50:27.767 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[66c9dc82-7188-4e41-971a-950084c4417e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:50:27 np0005480824 nova_compute[260089]: 2025-10-11 03:50:27.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:50:27 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:50:27.799 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[0be8159c-1d3b-4da2-a56f-55350daddadc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:50:27 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:50:27.801 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[39f59e11-befb-4de4-a118-11d7ee4dfc8b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:50:27 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:50:27.827 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[74735bbc-5163-496c-89d3-4c01ae7be717]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 406513, 'reachable_time': 18704, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275443, 'error': None, 'target': 'ovnmeta-ea784d9f-5fea-4b2f-8a0a-4232f32d0fff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:50:27 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:50:27.831 162666 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ea784d9f-5fea-4b2f-8a0a-4232f32d0fff deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 10 23:50:27 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:50:27.831 162666 DEBUG oslo.privsep.daemon [-] privsep: reply[738fef54-0686-486d-912c-4f8fd00e47b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:50:27 np0005480824 systemd[1]: run-netns-ovnmeta\x2dea784d9f\x2d5fea\x2d4b2f\x2d8a0a\x2d4232f32d0fff.mount: Deactivated successfully.
Oct 10 23:50:27 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:50:27.832 162245 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 10 23:50:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:50:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:50:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:50:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:50:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:50:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:50:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Optimize plan auto_2025-10-11_03:50:27
Oct 10 23:50:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 23:50:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] do_upmap
Oct 10 23:50:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'default.rgw.control', 'backups', 'vms', 'default.rgw.log', 'images', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.meta', 'volumes']
Oct 10 23:50:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] prepared 0/10 changes
Oct 10 23:50:28 np0005480824 nova_compute[260089]: 2025-10-11 03:50:28.033 2 INFO nova.virt.libvirt.driver [None req-8e5a8696-1eee-4835-9dd5-7ca6db601fd6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Deleting instance files /var/lib/nova/instances/ade49b15-ded3-459c-b92d-e98380bca4a4_del#033[00m
Oct 10 23:50:28 np0005480824 nova_compute[260089]: 2025-10-11 03:50:28.034 2 INFO nova.virt.libvirt.driver [None req-8e5a8696-1eee-4835-9dd5-7ca6db601fd6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Deletion of /var/lib/nova/instances/ade49b15-ded3-459c-b92d-e98380bca4a4_del complete#033[00m
Oct 10 23:50:28 np0005480824 nova_compute[260089]: 2025-10-11 03:50:28.095 2 INFO nova.compute.manager [None req-8e5a8696-1eee-4835-9dd5-7ca6db601fd6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Took 0.81 seconds to destroy the instance on the hypervisor.#033[00m
Oct 10 23:50:28 np0005480824 nova_compute[260089]: 2025-10-11 03:50:28.095 2 DEBUG oslo.service.loopingcall [None req-8e5a8696-1eee-4835-9dd5-7ca6db601fd6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 10 23:50:28 np0005480824 nova_compute[260089]: 2025-10-11 03:50:28.096 2 DEBUG nova.compute.manager [-] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 10 23:50:28 np0005480824 nova_compute[260089]: 2025-10-11 03:50:28.096 2 DEBUG nova.network.neutron [-] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 10 23:50:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 23:50:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:50:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 23:50:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:50:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:50:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:50:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:50:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:50:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:50:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:50:28 np0005480824 nova_compute[260089]: 2025-10-11 03:50:28.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:50:28 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1132: 321 pgs: 321 active+clean; 121 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 7.6 KiB/s wr, 198 op/s
Oct 10 23:50:28 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:50:28.834 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=14b06507-d00b-4e27-a47d-46a5c2644635, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:50:29 np0005480824 podman[275445]: 2025-10-11 03:50:29.116336632 +0000 UTC m=+0.155034182 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 10 23:50:29 np0005480824 nova_compute[260089]: 2025-10-11 03:50:29.125 2 DEBUG nova.network.neutron [-] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:50:29 np0005480824 nova_compute[260089]: 2025-10-11 03:50:29.141 2 INFO nova.compute.manager [-] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Took 1.04 seconds to deallocate network for instance.#033[00m
Oct 10 23:50:29 np0005480824 nova_compute[260089]: 2025-10-11 03:50:29.237 2 WARNING nova.volume.cinder [None req-8e5a8696-1eee-4835-9dd5-7ca6db601fd6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Attachment c96405ca-a46d-4de1-89dc-6a55cd3d86de does not exist. Ignoring.: cinderclient.exceptions.NotFound: Volume attachment could not be found with filter: attachment_id = c96405ca-a46d-4de1-89dc-6a55cd3d86de. (HTTP 404) (Request-ID: req-6028c08a-cc7d-43d7-a2e9-ac803f6dcfc1)#033[00m
Oct 10 23:50:29 np0005480824 nova_compute[260089]: 2025-10-11 03:50:29.238 2 INFO nova.compute.manager [None req-8e5a8696-1eee-4835-9dd5-7ca6db601fd6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Took 0.10 seconds to detach 1 volumes for instance.#033[00m
Oct 10 23:50:29 np0005480824 nova_compute[260089]: 2025-10-11 03:50:29.288 2 DEBUG oslo_concurrency.lockutils [None req-8e5a8696-1eee-4835-9dd5-7ca6db601fd6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:50:29 np0005480824 nova_compute[260089]: 2025-10-11 03:50:29.289 2 DEBUG oslo_concurrency.lockutils [None req-8e5a8696-1eee-4835-9dd5-7ca6db601fd6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:50:29 np0005480824 nova_compute[260089]: 2025-10-11 03:50:29.296 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:50:29 np0005480824 nova_compute[260089]: 2025-10-11 03:50:29.353 2 DEBUG oslo_concurrency.processutils [None req-8e5a8696-1eee-4835-9dd5-7ca6db601fd6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:50:29 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e211 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:50:29 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e211 do_prune osdmap full prune enabled
Oct 10 23:50:29 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e212 e212: 3 total, 3 up, 3 in
Oct 10 23:50:29 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e212: 3 total, 3 up, 3 in
Oct 10 23:50:29 np0005480824 nova_compute[260089]: 2025-10-11 03:50:29.730 2 DEBUG nova.compute.manager [req-4991c8bd-0a03-4329-a5c9-71da3f163093 req-463eb5d2-84d9-461b-9b79-0dd9c9e11313 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Received event network-vif-plugged-a8669959-69cb-4e7c-b708-25e90497b585 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:50:29 np0005480824 nova_compute[260089]: 2025-10-11 03:50:29.731 2 DEBUG oslo_concurrency.lockutils [req-4991c8bd-0a03-4329-a5c9-71da3f163093 req-463eb5d2-84d9-461b-9b79-0dd9c9e11313 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "ade49b15-ded3-459c-b92d-e98380bca4a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:50:29 np0005480824 nova_compute[260089]: 2025-10-11 03:50:29.731 2 DEBUG oslo_concurrency.lockutils [req-4991c8bd-0a03-4329-a5c9-71da3f163093 req-463eb5d2-84d9-461b-9b79-0dd9c9e11313 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "ade49b15-ded3-459c-b92d-e98380bca4a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:50:29 np0005480824 nova_compute[260089]: 2025-10-11 03:50:29.732 2 DEBUG oslo_concurrency.lockutils [req-4991c8bd-0a03-4329-a5c9-71da3f163093 req-463eb5d2-84d9-461b-9b79-0dd9c9e11313 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "ade49b15-ded3-459c-b92d-e98380bca4a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:50:29 np0005480824 nova_compute[260089]: 2025-10-11 03:50:29.732 2 DEBUG nova.compute.manager [req-4991c8bd-0a03-4329-a5c9-71da3f163093 req-463eb5d2-84d9-461b-9b79-0dd9c9e11313 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] No waiting events found dispatching network-vif-plugged-a8669959-69cb-4e7c-b708-25e90497b585 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 10 23:50:29 np0005480824 nova_compute[260089]: 2025-10-11 03:50:29.732 2 WARNING nova.compute.manager [req-4991c8bd-0a03-4329-a5c9-71da3f163093 req-463eb5d2-84d9-461b-9b79-0dd9c9e11313 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Received unexpected event network-vif-plugged-a8669959-69cb-4e7c-b708-25e90497b585 for instance with vm_state deleted and task_state None.#033[00m
Oct 10 23:50:29 np0005480824 nova_compute[260089]: 2025-10-11 03:50:29.732 2 DEBUG nova.compute.manager [req-4991c8bd-0a03-4329-a5c9-71da3f163093 req-463eb5d2-84d9-461b-9b79-0dd9c9e11313 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Received event network-vif-deleted-a8669959-69cb-4e7c-b708-25e90497b585 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:50:29 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:50:29 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1711969246' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:50:29 np0005480824 nova_compute[260089]: 2025-10-11 03:50:29.829 2 DEBUG oslo_concurrency.processutils [None req-8e5a8696-1eee-4835-9dd5-7ca6db601fd6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:50:29 np0005480824 nova_compute[260089]: 2025-10-11 03:50:29.835 2 DEBUG nova.compute.provider_tree [None req-8e5a8696-1eee-4835-9dd5-7ca6db601fd6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 23:50:29 np0005480824 nova_compute[260089]: 2025-10-11 03:50:29.854 2 DEBUG nova.scheduler.client.report [None req-8e5a8696-1eee-4835-9dd5-7ca6db601fd6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 23:50:29 np0005480824 nova_compute[260089]: 2025-10-11 03:50:29.878 2 DEBUG oslo_concurrency.lockutils [None req-8e5a8696-1eee-4835-9dd5-7ca6db601fd6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.589s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:50:29 np0005480824 nova_compute[260089]: 2025-10-11 03:50:29.910 2 INFO nova.scheduler.client.report [None req-8e5a8696-1eee-4835-9dd5-7ca6db601fd6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Deleted allocations for instance ade49b15-ded3-459c-b92d-e98380bca4a4#033[00m
Oct 10 23:50:29 np0005480824 ceph-osd[88325]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 10 23:50:29 np0005480824 ceph-osd[88325]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 11K writes, 48K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s#012Cumulative WAL: 11K writes, 3506 syncs, 3.41 writes per sync, written: 0.03 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 6324 writes, 24K keys, 6324 commit groups, 1.0 writes per commit group, ingest: 14.96 MB, 0.02 MB/s#012Interval WAL: 6324 writes, 2622 syncs, 2.41 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 10 23:50:29 np0005480824 nova_compute[260089]: 2025-10-11 03:50:29.973 2 DEBUG oslo_concurrency.lockutils [None req-8e5a8696-1eee-4835-9dd5-7ca6db601fd6 d2bb1c00b7ba4686bb710314548ea5af 633027d5948949cdb842dbb20e321e57 - - default default] Lock "ade49b15-ded3-459c-b92d-e98380bca4a4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.696s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:50:30 np0005480824 nova_compute[260089]: 2025-10-11 03:50:30.292 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:50:30 np0005480824 nova_compute[260089]: 2025-10-11 03:50:30.296 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:50:30 np0005480824 nova_compute[260089]: 2025-10-11 03:50:30.296 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:50:30 np0005480824 nova_compute[260089]: 2025-10-11 03:50:30.296 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:50:30 np0005480824 nova_compute[260089]: 2025-10-11 03:50:30.297 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 10 23:50:30 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:50:30 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2153731740' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:50:30 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1134: 321 pgs: 321 active+clean; 121 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.8 KiB/s wr, 74 op/s
Oct 10 23:50:31 np0005480824 nova_compute[260089]: 2025-10-11 03:50:31.576 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760154616.5747032, 266aeb27-7f54-4255-9018-0b6092629b80 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:50:31 np0005480824 nova_compute[260089]: 2025-10-11 03:50:31.577 2 INFO nova.compute.manager [-] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] VM Stopped (Lifecycle Event)#033[00m
Oct 10 23:50:31 np0005480824 nova_compute[260089]: 2025-10-11 03:50:31.593 2 DEBUG nova.compute.manager [None req-c6bf93f1-4baf-4805-94d2-f856e91b32bb - - - - - -] [instance: 266aeb27-7f54-4255-9018-0b6092629b80] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:50:31 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e212 do_prune osdmap full prune enabled
Oct 10 23:50:31 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e213 e213: 3 total, 3 up, 3 in
Oct 10 23:50:31 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e213: 3 total, 3 up, 3 in
Oct 10 23:50:32 np0005480824 nova_compute[260089]: 2025-10-11 03:50:32.296 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:50:32 np0005480824 nova_compute[260089]: 2025-10-11 03:50:32.297 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 10 23:50:32 np0005480824 nova_compute[260089]: 2025-10-11 03:50:32.297 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 10 23:50:32 np0005480824 nova_compute[260089]: 2025-10-11 03:50:32.318 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 10 23:50:32 np0005480824 nova_compute[260089]: 2025-10-11 03:50:32.318 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:50:32 np0005480824 nova_compute[260089]: 2025-10-11 03:50:32.337 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:50:32 np0005480824 nova_compute[260089]: 2025-10-11 03:50:32.337 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:50:32 np0005480824 nova_compute[260089]: 2025-10-11 03:50:32.338 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:50:32 np0005480824 nova_compute[260089]: 2025-10-11 03:50:32.338 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 10 23:50:32 np0005480824 nova_compute[260089]: 2025-10-11 03:50:32.338 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:50:32 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 10 23:50:32 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:50:32 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 10 23:50:32 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:50:32 np0005480824 nova_compute[260089]: 2025-10-11 03:50:32.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:50:32 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1136: 321 pgs: 321 active+clean; 123 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 8.0 MiB/s rd, 3.7 MiB/s wr, 190 op/s
Oct 10 23:50:32 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e213 do_prune osdmap full prune enabled
Oct 10 23:50:32 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e214 e214: 3 total, 3 up, 3 in
Oct 10 23:50:32 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e214: 3 total, 3 up, 3 in
Oct 10 23:50:32 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:50:32 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1011616641' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:50:32 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:50:32 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:50:32 np0005480824 nova_compute[260089]: 2025-10-11 03:50:32.763 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:50:32 np0005480824 nova_compute[260089]: 2025-10-11 03:50:32.986 2 WARNING nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 10 23:50:32 np0005480824 nova_compute[260089]: 2025-10-11 03:50:32.988 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4591MB free_disk=59.94271469116211GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 10 23:50:32 np0005480824 nova_compute[260089]: 2025-10-11 03:50:32.989 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:50:32 np0005480824 nova_compute[260089]: 2025-10-11 03:50:32.989 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:50:33 np0005480824 nova_compute[260089]: 2025-10-11 03:50:33.044 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 10 23:50:33 np0005480824 nova_compute[260089]: 2025-10-11 03:50:33.045 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 10 23:50:33 np0005480824 nova_compute[260089]: 2025-10-11 03:50:33.067 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:50:33 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:50:33 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/475630765' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:50:33 np0005480824 nova_compute[260089]: 2025-10-11 03:50:33.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:50:33 np0005480824 nova_compute[260089]: 2025-10-11 03:50:33.548 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:50:33 np0005480824 nova_compute[260089]: 2025-10-11 03:50:33.560 2 DEBUG nova.compute.provider_tree [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 23:50:33 np0005480824 nova_compute[260089]: 2025-10-11 03:50:33.581 2 DEBUG nova.scheduler.client.report [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 23:50:33 np0005480824 nova_compute[260089]: 2025-10-11 03:50:33.604 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 10 23:50:33 np0005480824 nova_compute[260089]: 2025-10-11 03:50:33.605 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:50:33 np0005480824 ceph-mon[74326]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Oct 10 23:50:33 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:50:33.755050) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 23:50:33 np0005480824 ceph-mon[74326]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Oct 10 23:50:33 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154633755094, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2384, "num_deletes": 270, "total_data_size": 3355876, "memory_usage": 3421136, "flush_reason": "Manual Compaction"}
Oct 10 23:50:33 np0005480824 ceph-mon[74326]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Oct 10 23:50:33 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154633770038, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 3289189, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21213, "largest_seqno": 23596, "table_properties": {"data_size": 3278352, "index_size": 6947, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2821, "raw_key_size": 23694, "raw_average_key_size": 21, "raw_value_size": 3256200, "raw_average_value_size": 2936, "num_data_blocks": 304, "num_entries": 1109, "num_filter_entries": 1109, "num_deletions": 270, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760154469, "oldest_key_time": 1760154469, "file_creation_time": 1760154633, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc2c00b6-74ab-4bd1-957a-6c6a75fb61ca", "db_session_id": "RJ9TM4FJNNQ2AWDFT4YB", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Oct 10 23:50:33 np0005480824 ceph-mon[74326]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 15041 microseconds, and 7172 cpu microseconds.
Oct 10 23:50:33 np0005480824 ceph-mon[74326]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 23:50:33 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:50:33.770090) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 3289189 bytes OK
Oct 10 23:50:33 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:50:33.770114) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Oct 10 23:50:33 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:50:33.772678) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Oct 10 23:50:33 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:50:33.772696) EVENT_LOG_v1 {"time_micros": 1760154633772689, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 23:50:33 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:50:33.772716) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 23:50:33 np0005480824 ceph-mon[74326]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 3345530, prev total WAL file size 3345530, number of live WAL files 2.
Oct 10 23:50:33 np0005480824 ceph-mon[74326]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 23:50:33 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:50:33.774571) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Oct 10 23:50:33 np0005480824 ceph-mon[74326]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 23:50:33 np0005480824 ceph-mon[74326]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(3212KB)], [50(7267KB)]
Oct 10 23:50:33 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154633774646, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 10730750, "oldest_snapshot_seqno": -1}
Oct 10 23:50:33 np0005480824 ceph-mon[74326]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 5032 keys, 8984233 bytes, temperature: kUnknown
Oct 10 23:50:33 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154633818495, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 8984233, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8947246, "index_size": 23309, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12613, "raw_key_size": 123969, "raw_average_key_size": 24, "raw_value_size": 8853075, "raw_average_value_size": 1759, "num_data_blocks": 967, "num_entries": 5032, "num_filter_entries": 5032, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760152715, "oldest_key_time": 0, "file_creation_time": 1760154633, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc2c00b6-74ab-4bd1-957a-6c6a75fb61ca", "db_session_id": "RJ9TM4FJNNQ2AWDFT4YB", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Oct 10 23:50:33 np0005480824 ceph-mon[74326]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 23:50:33 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:50:33.818719) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 8984233 bytes
Oct 10 23:50:33 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:50:33.820150) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 244.3 rd, 204.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 7.1 +0.0 blob) out(8.6 +0.0 blob), read-write-amplify(6.0) write-amplify(2.7) OK, records in: 5571, records dropped: 539 output_compression: NoCompression
Oct 10 23:50:33 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:50:33.820172) EVENT_LOG_v1 {"time_micros": 1760154633820161, "job": 26, "event": "compaction_finished", "compaction_time_micros": 43920, "compaction_time_cpu_micros": 25389, "output_level": 6, "num_output_files": 1, "total_output_size": 8984233, "num_input_records": 5571, "num_output_records": 5032, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 23:50:33 np0005480824 ceph-mon[74326]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 23:50:33 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154633820954, "job": 26, "event": "table_file_deletion", "file_number": 52}
Oct 10 23:50:33 np0005480824 ceph-mon[74326]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 23:50:33 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154633822463, "job": 26, "event": "table_file_deletion", "file_number": 50}
Oct 10 23:50:33 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:50:33.774470) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:50:33 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:50:33.822557) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:50:33 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:50:33.822564) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:50:33 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:50:33.822567) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:50:33 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:50:33.822569) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:50:33 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:50:33.822572) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:50:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 10 23:50:34 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:50:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 10 23:50:34 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:50:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:50:34 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:50:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 10 23:50:34 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:50:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 10 23:50:34 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:50:34 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev dbe2ae65-60ed-4342-8284-98598ae43ed3 does not exist
Oct 10 23:50:34 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev ec0dd1d1-8366-409d-acae-98e2ed2678a4 does not exist
Oct 10 23:50:34 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev b546113b-9b08-41a1-91c9-f1bdac8c7bc7 does not exist
Oct 10 23:50:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 10 23:50:34 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 23:50:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 10 23:50:34 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:50:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:50:34 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:50:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:50:34 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1138: 321 pgs: 321 active+clean; 124 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 7.0 MiB/s rd, 6.1 MiB/s wr, 187 op/s
Oct 10 23:50:34 np0005480824 podman[276046]: 2025-10-11 03:50:34.963204896 +0000 UTC m=+0.080013930 container create 7507e07f8cfb527bcee40da8f6344ffd42affaf97e21a413169d85e30e8a8693 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_noether, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:50:35 np0005480824 systemd[1]: Started libpod-conmon-7507e07f8cfb527bcee40da8f6344ffd42affaf97e21a413169d85e30e8a8693.scope.
Oct 10 23:50:35 np0005480824 podman[276046]: 2025-10-11 03:50:34.937244086 +0000 UTC m=+0.054053210 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:50:35 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:50:35 np0005480824 podman[276046]: 2025-10-11 03:50:35.092841619 +0000 UTC m=+0.209650733 container init 7507e07f8cfb527bcee40da8f6344ffd42affaf97e21a413169d85e30e8a8693 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_noether, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:50:35 np0005480824 podman[276046]: 2025-10-11 03:50:35.109460709 +0000 UTC m=+0.226269773 container start 7507e07f8cfb527bcee40da8f6344ffd42affaf97e21a413169d85e30e8a8693 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_noether, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 10 23:50:35 np0005480824 podman[276046]: 2025-10-11 03:50:35.114481158 +0000 UTC m=+0.231290222 container attach 7507e07f8cfb527bcee40da8f6344ffd42affaf97e21a413169d85e30e8a8693 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 10 23:50:35 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:50:35 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:50:35 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:50:35 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:50:35 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:50:35 np0005480824 beautiful_noether[276062]: 167 167
Oct 10 23:50:35 np0005480824 systemd[1]: libpod-7507e07f8cfb527bcee40da8f6344ffd42affaf97e21a413169d85e30e8a8693.scope: Deactivated successfully.
Oct 10 23:50:35 np0005480824 podman[276046]: 2025-10-11 03:50:35.125310241 +0000 UTC m=+0.242119315 container died 7507e07f8cfb527bcee40da8f6344ffd42affaf97e21a413169d85e30e8a8693 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_noether, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 10 23:50:35 np0005480824 systemd[1]: var-lib-containers-storage-overlay-c41bdd8126ace9e1ffc3f01ebc2a30038233e7eb16e4fc8e2a10e44cbb43667b-merged.mount: Deactivated successfully.
Oct 10 23:50:35 np0005480824 podman[276046]: 2025-10-11 03:50:35.184589474 +0000 UTC m=+0.301398538 container remove 7507e07f8cfb527bcee40da8f6344ffd42affaf97e21a413169d85e30e8a8693 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_noether, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 10 23:50:35 np0005480824 systemd[1]: libpod-conmon-7507e07f8cfb527bcee40da8f6344ffd42affaf97e21a413169d85e30e8a8693.scope: Deactivated successfully.
Oct 10 23:50:35 np0005480824 podman[276068]: 2025-10-11 03:50:35.267832237 +0000 UTC m=+0.096699781 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent)
Oct 10 23:50:35 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:50:35 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/832926978' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:50:35 np0005480824 podman[276105]: 2025-10-11 03:50:35.410269272 +0000 UTC m=+0.056995540 container create d13aff6dc13abd855416adf31b8b3b3254df67d002d8518c97c354a6cb13dcb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Oct 10 23:50:35 np0005480824 ceph-osd[89401]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 10 23:50:35 np0005480824 ceph-osd[89401]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 13K writes, 53K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s#012Cumulative WAL: 13K writes, 3844 syncs, 3.39 writes per sync, written: 0.03 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 6135 writes, 24K keys, 6135 commit groups, 1.0 writes per commit group, ingest: 13.38 MB, 0.02 MB/s#012Interval WAL: 6135 writes, 2623 syncs, 2.34 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 10 23:50:35 np0005480824 systemd[1]: Started libpod-conmon-d13aff6dc13abd855416adf31b8b3b3254df67d002d8518c97c354a6cb13dcb8.scope.
Oct 10 23:50:35 np0005480824 podman[276105]: 2025-10-11 03:50:35.381862355 +0000 UTC m=+0.028588683 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:50:35 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:50:35 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9de2dcd752966570d31a67d449e0ed659c7ba30b7d927518d85864210d72b097/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:50:35 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9de2dcd752966570d31a67d449e0ed659c7ba30b7d927518d85864210d72b097/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:50:35 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9de2dcd752966570d31a67d449e0ed659c7ba30b7d927518d85864210d72b097/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:50:35 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9de2dcd752966570d31a67d449e0ed659c7ba30b7d927518d85864210d72b097/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:50:35 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9de2dcd752966570d31a67d449e0ed659c7ba30b7d927518d85864210d72b097/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 23:50:35 np0005480824 podman[276105]: 2025-10-11 03:50:35.515025202 +0000 UTC m=+0.161751450 container init d13aff6dc13abd855416adf31b8b3b3254df67d002d8518c97c354a6cb13dcb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_bohr, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:50:35 np0005480824 podman[276105]: 2025-10-11 03:50:35.53198278 +0000 UTC m=+0.178709038 container start d13aff6dc13abd855416adf31b8b3b3254df67d002d8518c97c354a6cb13dcb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_bohr, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 10 23:50:35 np0005480824 podman[276105]: 2025-10-11 03:50:35.536172978 +0000 UTC m=+0.182899206 container attach d13aff6dc13abd855416adf31b8b3b3254df67d002d8518c97c354a6cb13dcb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_bohr, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True)
Oct 10 23:50:36 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e214 do_prune osdmap full prune enabled
Oct 10 23:50:36 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e215 e215: 3 total, 3 up, 3 in
Oct 10 23:50:36 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e215: 3 total, 3 up, 3 in
Oct 10 23:50:36 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1140: 321 pgs: 321 active+clean; 124 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 7.0 MiB/s rd, 6.1 MiB/s wr, 187 op/s
Oct 10 23:50:36 np0005480824 youthful_bohr[276122]: --> passed data devices: 0 physical, 3 LVM
Oct 10 23:50:36 np0005480824 youthful_bohr[276122]: --> relative data size: 1.0
Oct 10 23:50:36 np0005480824 youthful_bohr[276122]: --> All data devices are unavailable
Oct 10 23:50:36 np0005480824 systemd[1]: libpod-d13aff6dc13abd855416adf31b8b3b3254df67d002d8518c97c354a6cb13dcb8.scope: Deactivated successfully.
Oct 10 23:50:36 np0005480824 systemd[1]: libpod-d13aff6dc13abd855416adf31b8b3b3254df67d002d8518c97c354a6cb13dcb8.scope: Consumed 1.173s CPU time.
Oct 10 23:50:36 np0005480824 podman[276105]: 2025-10-11 03:50:36.804374173 +0000 UTC m=+1.451100401 container died d13aff6dc13abd855416adf31b8b3b3254df67d002d8518c97c354a6cb13dcb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_bohr, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:50:36 np0005480824 systemd[1]: var-lib-containers-storage-overlay-9de2dcd752966570d31a67d449e0ed659c7ba30b7d927518d85864210d72b097-merged.mount: Deactivated successfully.
Oct 10 23:50:36 np0005480824 podman[276105]: 2025-10-11 03:50:36.889812539 +0000 UTC m=+1.536538767 container remove d13aff6dc13abd855416adf31b8b3b3254df67d002d8518c97c354a6cb13dcb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_bohr, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:50:36 np0005480824 systemd[1]: libpod-conmon-d13aff6dc13abd855416adf31b8b3b3254df67d002d8518c97c354a6cb13dcb8.scope: Deactivated successfully.
Oct 10 23:50:37 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:50:37 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/399177484' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:50:37 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:50:37 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/399177484' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:50:37 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e215 do_prune osdmap full prune enabled
Oct 10 23:50:37 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e216 e216: 3 total, 3 up, 3 in
Oct 10 23:50:37 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e216: 3 total, 3 up, 3 in
Oct 10 23:50:37 np0005480824 nova_compute[260089]: 2025-10-11 03:50:37.585 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:50:37 np0005480824 nova_compute[260089]: 2025-10-11 03:50:37.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:50:37 np0005480824 podman[276302]: 2025-10-11 03:50:37.770949336 +0000 UTC m=+0.064075075 container create 5d9969dc2645cb28b6cb8ece354b744b5ffe890dda22e218f7166e090f8b7af9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_diffie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 10 23:50:37 np0005480824 systemd[1]: Started libpod-conmon-5d9969dc2645cb28b6cb8ece354b744b5ffe890dda22e218f7166e090f8b7af9.scope.
Oct 10 23:50:37 np0005480824 podman[276302]: 2025-10-11 03:50:37.752706028 +0000 UTC m=+0.045831787 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:50:37 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:50:37 np0005480824 podman[276302]: 2025-10-11 03:50:37.886320155 +0000 UTC m=+0.179445944 container init 5d9969dc2645cb28b6cb8ece354b744b5ffe890dda22e218f7166e090f8b7af9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:50:37 np0005480824 podman[276302]: 2025-10-11 03:50:37.894824594 +0000 UTC m=+0.187950333 container start 5d9969dc2645cb28b6cb8ece354b744b5ffe890dda22e218f7166e090f8b7af9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_diffie, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 10 23:50:37 np0005480824 podman[276302]: 2025-10-11 03:50:37.8988869 +0000 UTC m=+0.192012699 container attach 5d9969dc2645cb28b6cb8ece354b744b5ffe890dda22e218f7166e090f8b7af9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 10 23:50:37 np0005480824 cranky_diffie[276318]: 167 167
Oct 10 23:50:37 np0005480824 systemd[1]: libpod-5d9969dc2645cb28b6cb8ece354b744b5ffe890dda22e218f7166e090f8b7af9.scope: Deactivated successfully.
Oct 10 23:50:37 np0005480824 podman[276302]: 2025-10-11 03:50:37.904068812 +0000 UTC m=+0.197194561 container died 5d9969dc2645cb28b6cb8ece354b744b5ffe890dda22e218f7166e090f8b7af9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_diffie, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Oct 10 23:50:37 np0005480824 systemd[1]: var-lib-containers-storage-overlay-3452e55017f6a2432191b54f94bc49831a1d955a8e4bb9fc13092433c39fcf9b-merged.mount: Deactivated successfully.
Oct 10 23:50:37 np0005480824 podman[276302]: 2025-10-11 03:50:37.951481595 +0000 UTC m=+0.244607344 container remove 5d9969dc2645cb28b6cb8ece354b744b5ffe890dda22e218f7166e090f8b7af9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_diffie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:50:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 23:50:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:50:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 23:50:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:50:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 23:50:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:50:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0003934357092856678 of space, bias 1.0, pg target 0.11803071278570033 quantized to 32 (current 32)
Oct 10 23:50:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:50:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0003460606319593671 of space, bias 1.0, pg target 0.10381818958781013 quantized to 32 (current 32)
Oct 10 23:50:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:50:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 10 23:50:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:50:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 23:50:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:50:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:50:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:50:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 10 23:50:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:50:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 23:50:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:50:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:50:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:50:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 10 23:50:37 np0005480824 systemd[1]: libpod-conmon-5d9969dc2645cb28b6cb8ece354b744b5ffe890dda22e218f7166e090f8b7af9.scope: Deactivated successfully.
Oct 10 23:50:38 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:50:38 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1085624078' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:50:38 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:50:38 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1085624078' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:50:38 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e216 do_prune osdmap full prune enabled
Oct 10 23:50:38 np0005480824 podman[276342]: 2025-10-11 03:50:38.185853757 +0000 UTC m=+0.056935147 container create 822cd5725d1d09ed273b476596ff70ec41aea85645145c3f8fe605d0193a2064 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:50:38 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e217 e217: 3 total, 3 up, 3 in
Oct 10 23:50:38 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e217: 3 total, 3 up, 3 in
Oct 10 23:50:38 np0005480824 systemd[1]: Started libpod-conmon-822cd5725d1d09ed273b476596ff70ec41aea85645145c3f8fe605d0193a2064.scope.
Oct 10 23:50:38 np0005480824 podman[276342]: 2025-10-11 03:50:38.15914266 +0000 UTC m=+0.030224050 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:50:38 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:50:38 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/414b10de0920620e83f2a34cd05b3b9d004ff045f3682633198adc2895f33eea/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:50:38 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/414b10de0920620e83f2a34cd05b3b9d004ff045f3682633198adc2895f33eea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:50:38 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/414b10de0920620e83f2a34cd05b3b9d004ff045f3682633198adc2895f33eea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:50:38 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/414b10de0920620e83f2a34cd05b3b9d004ff045f3682633198adc2895f33eea/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:50:38 np0005480824 podman[276342]: 2025-10-11 03:50:38.291782825 +0000 UTC m=+0.162864195 container init 822cd5725d1d09ed273b476596ff70ec41aea85645145c3f8fe605d0193a2064 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:50:38 np0005480824 podman[276342]: 2025-10-11 03:50:38.306648664 +0000 UTC m=+0.177730034 container start 822cd5725d1d09ed273b476596ff70ec41aea85645145c3f8fe605d0193a2064 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:50:38 np0005480824 podman[276342]: 2025-10-11 03:50:38.312256416 +0000 UTC m=+0.183337766 container attach 822cd5725d1d09ed273b476596ff70ec41aea85645145c3f8fe605d0193a2064 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_cartwright, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:50:38 np0005480824 nova_compute[260089]: 2025-10-11 03:50:38.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 23:50:38 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1143: 321 pgs: 321 active+clean; 205 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 6.5 MiB/s wr, 232 op/s
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]: {
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:    "0": [
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:        {
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:            "devices": [
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:                "/dev/loop3"
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:            ],
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:            "lv_name": "ceph_lv0",
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:            "lv_size": "21470642176",
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0d82ce-20ea-470d-959e-f67202028a60,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:            "lv_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:            "name": "ceph_lv0",
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:            "tags": {
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:                "ceph.block_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:                "ceph.cluster_name": "ceph",
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:                "ceph.crush_device_class": "",
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:                "ceph.encrypted": "0",
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:                "ceph.osd_fsid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:                "ceph.osd_id": "0",
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:                "ceph.type": "block",
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:                "ceph.vdo": "0"
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:            },
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:            "type": "block",
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:            "vg_name": "ceph_vg0"
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:        }
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:    ],
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:    "1": [
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:        {
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:            "devices": [
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:                "/dev/loop4"
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:            ],
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:            "lv_name": "ceph_lv1",
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:            "lv_size": "21470642176",
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6875119e-c210-4ad1-aca9-6a8084a5ecc8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:            "lv_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:            "name": "ceph_lv1",
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:            "tags": {
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:                "ceph.block_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:                "ceph.cluster_name": "ceph",
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:                "ceph.crush_device_class": "",
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:                "ceph.encrypted": "0",
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:                "ceph.osd_fsid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:                "ceph.osd_id": "1",
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:                "ceph.type": "block",
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:                "ceph.vdo": "0"
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:            },
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:            "type": "block",
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:            "vg_name": "ceph_vg1"
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:        }
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:    ],
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:    "2": [
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:        {
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:            "devices": [
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:                "/dev/loop5"
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:            ],
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:            "lv_name": "ceph_lv2",
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:            "lv_size": "21470642176",
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e86945e8-6909-4584-9098-cee0dfe9add4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:            "lv_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:            "name": "ceph_lv2",
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:            "tags": {
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:                "ceph.block_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:                "ceph.cluster_name": "ceph",
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:                "ceph.crush_device_class": "",
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:                "ceph.encrypted": "0",
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:                "ceph.osd_fsid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:                "ceph.osd_id": "2",
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:                "ceph.type": "block",
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:                "ceph.vdo": "0"
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:            },
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:            "type": "block",
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:            "vg_name": "ceph_vg2"
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:        }
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]:    ]
Oct 10 23:50:39 np0005480824 eloquent_cartwright[276358]: }
Oct 10 23:50:39 np0005480824 systemd[1]: libpod-822cd5725d1d09ed273b476596ff70ec41aea85645145c3f8fe605d0193a2064.scope: Deactivated successfully.
Oct 10 23:50:39 np0005480824 podman[276342]: 2025-10-11 03:50:39.133969007 +0000 UTC m=+1.005050367 container died 822cd5725d1d09ed273b476596ff70ec41aea85645145c3f8fe605d0193a2064 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_cartwright, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 10 23:50:39 np0005480824 systemd[1]: var-lib-containers-storage-overlay-414b10de0920620e83f2a34cd05b3b9d004ff045f3682633198adc2895f33eea-merged.mount: Deactivated successfully.
Oct 10 23:50:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e217 do_prune osdmap full prune enabled
Oct 10 23:50:39 np0005480824 podman[276342]: 2025-10-11 03:50:39.209359348 +0000 UTC m=+1.080440708 container remove 822cd5725d1d09ed273b476596ff70ec41aea85645145c3f8fe605d0193a2064 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_cartwright, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 10 23:50:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e218 e218: 3 total, 3 up, 3 in
Oct 10 23:50:39 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e218: 3 total, 3 up, 3 in
Oct 10 23:50:39 np0005480824 systemd[1]: libpod-conmon-822cd5725d1d09ed273b476596ff70ec41aea85645145c3f8fe605d0193a2064.scope: Deactivated successfully.
Oct 10 23:50:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:50:39 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3164936943' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:50:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:50:39 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3164936943' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:50:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:50:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:50:39 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/93034612' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:50:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:50:39 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/93034612' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:50:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:50:39 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3712498702' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:50:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:50:39 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3712498702' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:50:40 np0005480824 podman[276521]: 2025-10-11 03:50:40.194844125 +0000 UTC m=+0.070662269 container create c93261a5f6b0afbd1b8290ec45b39a7f6f92b67e95d8ad3a2adc6b827fe643d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef)
Oct 10 23:50:40 np0005480824 systemd[1]: Started libpod-conmon-c93261a5f6b0afbd1b8290ec45b39a7f6f92b67e95d8ad3a2adc6b827fe643d4.scope.
Oct 10 23:50:40 np0005480824 podman[276521]: 2025-10-11 03:50:40.172509881 +0000 UTC m=+0.048328015 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:50:40 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:50:40 np0005480824 podman[276521]: 2025-10-11 03:50:40.298655142 +0000 UTC m=+0.174473276 container init c93261a5f6b0afbd1b8290ec45b39a7f6f92b67e95d8ad3a2adc6b827fe643d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_stonebraker, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True)
Oct 10 23:50:40 np0005480824 podman[276521]: 2025-10-11 03:50:40.305230757 +0000 UTC m=+0.181048861 container start c93261a5f6b0afbd1b8290ec45b39a7f6f92b67e95d8ad3a2adc6b827fe643d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:50:40 np0005480824 podman[276521]: 2025-10-11 03:50:40.309023296 +0000 UTC m=+0.184841400 container attach c93261a5f6b0afbd1b8290ec45b39a7f6f92b67e95d8ad3a2adc6b827fe643d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_stonebraker, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 10 23:50:40 np0005480824 condescending_stonebraker[276537]: 167 167
Oct 10 23:50:40 np0005480824 systemd[1]: libpod-c93261a5f6b0afbd1b8290ec45b39a7f6f92b67e95d8ad3a2adc6b827fe643d4.scope: Deactivated successfully.
Oct 10 23:50:40 np0005480824 conmon[276537]: conmon c93261a5f6b0afbd1b82 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c93261a5f6b0afbd1b8290ec45b39a7f6f92b67e95d8ad3a2adc6b827fe643d4.scope/container/memory.events
Oct 10 23:50:40 np0005480824 podman[276521]: 2025-10-11 03:50:40.318688273 +0000 UTC m=+0.194506437 container died c93261a5f6b0afbd1b8290ec45b39a7f6f92b67e95d8ad3a2adc6b827fe643d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_stonebraker, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 10 23:50:40 np0005480824 systemd[1]: var-lib-containers-storage-overlay-6bfdf19237af49012ce0524560a69cc40b6a6a7cda9f7c43dcbdd2644809d17b-merged.mount: Deactivated successfully.
Oct 10 23:50:40 np0005480824 podman[276521]: 2025-10-11 03:50:40.361855226 +0000 UTC m=+0.237673330 container remove c93261a5f6b0afbd1b8290ec45b39a7f6f92b67e95d8ad3a2adc6b827fe643d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_stonebraker, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 10 23:50:40 np0005480824 systemd[1]: libpod-conmon-c93261a5f6b0afbd1b8290ec45b39a7f6f92b67e95d8ad3a2adc6b827fe643d4.scope: Deactivated successfully.
Oct 10 23:50:40 np0005480824 ceph-osd[90443]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 10 23:50:40 np0005480824 ceph-osd[90443]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 10K writes, 44K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s#012Cumulative WAL: 10K writes, 3006 syncs, 3.54 writes per sync, written: 0.03 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4991 writes, 20K keys, 4991 commit groups, 1.0 writes per commit group, ingest: 10.91 MB, 0.02 MB/s#012Interval WAL: 4991 writes, 2137 syncs, 2.34 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 10 23:50:40 np0005480824 podman[276561]: 2025-10-11 03:50:40.549154584 +0000 UTC m=+0.071120331 container create afc35f71720e2497ef9192a0fb8a2d0ddb94a9f5341b417d685497cca788fb81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_kare, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 10 23:50:40 np0005480824 systemd[1]: Started libpod-conmon-afc35f71720e2497ef9192a0fb8a2d0ddb94a9f5341b417d685497cca788fb81.scope.
Oct 10 23:50:40 np0005480824 podman[276561]: 2025-10-11 03:50:40.518626107 +0000 UTC m=+0.040591894 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:50:40 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:50:40 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/395734f7cbb93bbb6e8caec4b1e1831c14c50792587126d9549de90c334e0196/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:50:40 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/395734f7cbb93bbb6e8caec4b1e1831c14c50792587126d9549de90c334e0196/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:50:40 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/395734f7cbb93bbb6e8caec4b1e1831c14c50792587126d9549de90c334e0196/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:50:40 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/395734f7cbb93bbb6e8caec4b1e1831c14c50792587126d9549de90c334e0196/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:50:40 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1145: 321 pgs: 321 active+clean; 205 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 8.6 MiB/s wr, 306 op/s
Oct 10 23:50:40 np0005480824 podman[276561]: 2025-10-11 03:50:40.641267997 +0000 UTC m=+0.163233704 container init afc35f71720e2497ef9192a0fb8a2d0ddb94a9f5341b417d685497cca788fb81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:50:40 np0005480824 podman[276561]: 2025-10-11 03:50:40.652803447 +0000 UTC m=+0.174769154 container start afc35f71720e2497ef9192a0fb8a2d0ddb94a9f5341b417d685497cca788fb81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 10 23:50:40 np0005480824 podman[276561]: 2025-10-11 03:50:40.656438553 +0000 UTC m=+0.178404260 container attach afc35f71720e2497ef9192a0fb8a2d0ddb94a9f5341b417d685497cca788fb81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:50:41 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e218 do_prune osdmap full prune enabled
Oct 10 23:50:41 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e219 e219: 3 total, 3 up, 3 in
Oct 10 23:50:41 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e219: 3 total, 3 up, 3 in
Oct 10 23:50:41 np0005480824 distracted_kare[276577]: {
Oct 10 23:50:41 np0005480824 distracted_kare[276577]:    "1d0d82ce-20ea-470d-959e-f67202028a60": {
Oct 10 23:50:41 np0005480824 distracted_kare[276577]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:50:41 np0005480824 distracted_kare[276577]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 10 23:50:41 np0005480824 distracted_kare[276577]:        "osd_id": 0,
Oct 10 23:50:41 np0005480824 distracted_kare[276577]:        "osd_uuid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:50:41 np0005480824 distracted_kare[276577]:        "type": "bluestore"
Oct 10 23:50:41 np0005480824 distracted_kare[276577]:    },
Oct 10 23:50:41 np0005480824 distracted_kare[276577]:    "6875119e-c210-4ad1-aca9-6a8084a5ecc8": {
Oct 10 23:50:41 np0005480824 distracted_kare[276577]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:50:41 np0005480824 distracted_kare[276577]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 10 23:50:41 np0005480824 distracted_kare[276577]:        "osd_id": 1,
Oct 10 23:50:41 np0005480824 distracted_kare[276577]:        "osd_uuid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:50:41 np0005480824 distracted_kare[276577]:        "type": "bluestore"
Oct 10 23:50:41 np0005480824 distracted_kare[276577]:    },
Oct 10 23:50:41 np0005480824 distracted_kare[276577]:    "e86945e8-6909-4584-9098-cee0dfe9add4": {
Oct 10 23:50:41 np0005480824 distracted_kare[276577]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:50:41 np0005480824 distracted_kare[276577]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 10 23:50:41 np0005480824 distracted_kare[276577]:        "osd_id": 2,
Oct 10 23:50:41 np0005480824 distracted_kare[276577]:        "osd_uuid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:50:41 np0005480824 distracted_kare[276577]:        "type": "bluestore"
Oct 10 23:50:41 np0005480824 distracted_kare[276577]:    }
Oct 10 23:50:41 np0005480824 distracted_kare[276577]: }
Oct 10 23:50:41 np0005480824 systemd[1]: libpod-afc35f71720e2497ef9192a0fb8a2d0ddb94a9f5341b417d685497cca788fb81.scope: Deactivated successfully.
Oct 10 23:50:41 np0005480824 systemd[1]: libpod-afc35f71720e2497ef9192a0fb8a2d0ddb94a9f5341b417d685497cca788fb81.scope: Consumed 1.083s CPU time.
Oct 10 23:50:41 np0005480824 podman[276561]: 2025-10-11 03:50:41.741170201 +0000 UTC m=+1.263135908 container died afc35f71720e2497ef9192a0fb8a2d0ddb94a9f5341b417d685497cca788fb81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_kare, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:50:41 np0005480824 systemd[1]: var-lib-containers-storage-overlay-395734f7cbb93bbb6e8caec4b1e1831c14c50792587126d9549de90c334e0196-merged.mount: Deactivated successfully.
Oct 10 23:50:41 np0005480824 podman[276561]: 2025-10-11 03:50:41.79654441 +0000 UTC m=+1.318510127 container remove afc35f71720e2497ef9192a0fb8a2d0ddb94a9f5341b417d685497cca788fb81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_kare, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 10 23:50:41 np0005480824 systemd[1]: libpod-conmon-afc35f71720e2497ef9192a0fb8a2d0ddb94a9f5341b417d685497cca788fb81.scope: Deactivated successfully.
Oct 10 23:50:41 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 10 23:50:41 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:50:41 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 10 23:50:41 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:50:41 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 2bfaab6d-f7be-4b92-a0c4-fcbaa1ea52ab does not exist
Oct 10 23:50:41 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev d92ea66b-9409-489e-b0d0-b8a6abe57d03 does not exist
Oct 10 23:50:42 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:50:42 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:50:42 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e219 do_prune osdmap full prune enabled
Oct 10 23:50:42 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e220 e220: 3 total, 3 up, 3 in
Oct 10 23:50:42 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e220: 3 total, 3 up, 3 in
Oct 10 23:50:42 np0005480824 nova_compute[260089]: 2025-10-11 03:50:42.531 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760154627.5298965, ade49b15-ded3-459c-b92d-e98380bca4a4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:50:42 np0005480824 nova_compute[260089]: 2025-10-11 03:50:42.533 2 INFO nova.compute.manager [-] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] VM Stopped (Lifecycle Event)#033[00m
Oct 10 23:50:42 np0005480824 nova_compute[260089]: 2025-10-11 03:50:42.552 2 DEBUG nova.compute.manager [None req-b6a933aa-8d88-4447-9035-f407580e22cb - - - - - -] [instance: ade49b15-ded3-459c-b92d-e98380bca4a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:50:42 np0005480824 nova_compute[260089]: 2025-10-11 03:50:42.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:50:42 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1148: 321 pgs: 321 active+clean; 165 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 2.3 MiB/s wr, 227 op/s
Oct 10 23:50:42 np0005480824 ceph-mgr[74617]: [devicehealth INFO root] Check health
Oct 10 23:50:43 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e220 do_prune osdmap full prune enabled
Oct 10 23:50:43 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e221 e221: 3 total, 3 up, 3 in
Oct 10 23:50:43 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e221: 3 total, 3 up, 3 in
Oct 10 23:50:43 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:50:43 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/368581107' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:50:43 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:50:43 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/368581107' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:50:43 np0005480824 nova_compute[260089]: 2025-10-11 03:50:43.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:50:43 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:50:43 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2088432791' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:50:43 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:50:43 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2088432791' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:50:44 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:50:44 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1150: 321 pgs: 321 active+clean; 134 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 894 KiB/s rd, 1.8 MiB/s wr, 251 op/s
Oct 10 23:50:44 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:50:44 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1211236224' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:50:44 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:50:44 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1211236224' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:50:46 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:50:46 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2982908919' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:50:46 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:50:46 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2982908919' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:50:46 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:50:46 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3159264180' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:50:46 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1151: 321 pgs: 321 active+clean; 134 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 805 KiB/s rd, 1.7 MiB/s wr, 226 op/s
Oct 10 23:50:47 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e221 do_prune osdmap full prune enabled
Oct 10 23:50:47 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e222 e222: 3 total, 3 up, 3 in
Oct 10 23:50:47 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e222: 3 total, 3 up, 3 in
Oct 10 23:50:47 np0005480824 nova_compute[260089]: 2025-10-11 03:50:47.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:50:48 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e222 do_prune osdmap full prune enabled
Oct 10 23:50:48 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e223 e223: 3 total, 3 up, 3 in
Oct 10 23:50:48 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e223: 3 total, 3 up, 3 in
Oct 10 23:50:48 np0005480824 nova_compute[260089]: 2025-10-11 03:50:48.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:50:48 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1154: 321 pgs: 321 active+clean; 88 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 201 KiB/s rd, 12 KiB/s wr, 270 op/s
Oct 10 23:50:49 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e223 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:50:49 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e223 do_prune osdmap full prune enabled
Oct 10 23:50:49 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e224 e224: 3 total, 3 up, 3 in
Oct 10 23:50:49 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e224: 3 total, 3 up, 3 in
Oct 10 23:50:50 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:50:50 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1609389978' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:50:50 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:50:50 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1609389978' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:50:50 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:50:50 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3385405896' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:50:50 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:50:50 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3385405896' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:50:50 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1156: 321 pgs: 321 active+clean; 88 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 157 KiB/s rd, 10 KiB/s wr, 212 op/s
Oct 10 23:50:51 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:50:51 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3294902720' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:50:51 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:50:51 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1785645343' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:50:51 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:50:51 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1785645343' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:50:52 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e224 do_prune osdmap full prune enabled
Oct 10 23:50:52 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e225 e225: 3 total, 3 up, 3 in
Oct 10 23:50:52 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e225: 3 total, 3 up, 3 in
Oct 10 23:50:52 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1158: 321 pgs: 321 active+clean; 88 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 149 KiB/s rd, 7.3 KiB/s wr, 199 op/s
Oct 10 23:50:52 np0005480824 nova_compute[260089]: 2025-10-11 03:50:52.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:50:53 np0005480824 podman[276675]: 2025-10-11 03:50:53.048802014 +0000 UTC m=+0.090726702 container health_status f2b19cad22d0fbdd185b264c3c8b6443d09a02319dc9fb0585dc69a0f24a758d (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:50:53 np0005480824 podman[276674]: 2025-10-11 03:50:53.055653925 +0000 UTC m=+0.103127693 container health_status 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 10 23:50:53 np0005480824 nova_compute[260089]: 2025-10-11 03:50:53.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:50:53 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e225 do_prune osdmap full prune enabled
Oct 10 23:50:53 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e226 e226: 3 total, 3 up, 3 in
Oct 10 23:50:53 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e226: 3 total, 3 up, 3 in
Oct 10 23:50:54 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:50:54 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3264474279' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:50:54 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:50:54 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3264474279' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:50:54 np0005480824 nova_compute[260089]: 2025-10-11 03:50:54.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:50:54 np0005480824 nova_compute[260089]: 2025-10-11 03:50:54.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:50:54 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:50:54 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e226 do_prune osdmap full prune enabled
Oct 10 23:50:54 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e227 e227: 3 total, 3 up, 3 in
Oct 10 23:50:54 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e227: 3 total, 3 up, 3 in
Oct 10 23:50:54 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1161: 321 pgs: 321 active+clean; 88 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 126 KiB/s rd, 6.1 KiB/s wr, 167 op/s
Oct 10 23:50:55 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e227 do_prune osdmap full prune enabled
Oct 10 23:50:55 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e228 e228: 3 total, 3 up, 3 in
Oct 10 23:50:55 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e228: 3 total, 3 up, 3 in
Oct 10 23:50:56 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1163: 321 pgs: 321 active+clean; 88 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:50:57 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e228 do_prune osdmap full prune enabled
Oct 10 23:50:57 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e229 e229: 3 total, 3 up, 3 in
Oct 10 23:50:57 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e229: 3 total, 3 up, 3 in
Oct 10 23:50:57 np0005480824 nova_compute[260089]: 2025-10-11 03:50:57.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:50:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:50:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:50:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:50:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:50:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:50:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:50:58 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:50:58 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2775070323' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:50:58 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:50:58 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2775070323' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:50:58 np0005480824 nova_compute[260089]: 2025-10-11 03:50:58.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:50:58 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1165: 321 pgs: 321 active+clean; 88 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 93 KiB/s rd, 3.4 KiB/s wr, 122 op/s
Oct 10 23:50:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e229 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:50:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e229 do_prune osdmap full prune enabled
Oct 10 23:50:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e230 e230: 3 total, 3 up, 3 in
Oct 10 23:50:59 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e230: 3 total, 3 up, 3 in
Oct 10 23:51:00 np0005480824 podman[276715]: 2025-10-11 03:51:00.109251261 +0000 UTC m=+0.159849804 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:51:00 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1167: 321 pgs: 321 active+clean; 88 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 2.8 KiB/s wr, 103 op/s
Oct 10 23:51:01 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e230 do_prune osdmap full prune enabled
Oct 10 23:51:01 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e231 e231: 3 total, 3 up, 3 in
Oct 10 23:51:01 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e231: 3 total, 3 up, 3 in
Oct 10 23:51:02 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1169: 321 pgs: 321 active+clean; 103 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 872 KiB/s wr, 224 op/s
Oct 10 23:51:02 np0005480824 nova_compute[260089]: 2025-10-11 03:51:02.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:51:02 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:51:02 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2377841585' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:51:02 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:51:02 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2377841585' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:51:03 np0005480824 nova_compute[260089]: 2025-10-11 03:51:03.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:51:04 np0005480824 nova_compute[260089]: 2025-10-11 03:51:04.460 2 DEBUG oslo_concurrency.lockutils [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Acquiring lock "b11faa30-2b52-45e0-b5f2-dd05b5050493" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:51:04 np0005480824 nova_compute[260089]: 2025-10-11 03:51:04.460 2 DEBUG oslo_concurrency.lockutils [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Lock "b11faa30-2b52-45e0-b5f2-dd05b5050493" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:51:04 np0005480824 nova_compute[260089]: 2025-10-11 03:51:04.483 2 DEBUG nova.compute.manager [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 10 23:51:04 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:51:04 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e231 do_prune osdmap full prune enabled
Oct 10 23:51:04 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e232 e232: 3 total, 3 up, 3 in
Oct 10 23:51:04 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e232: 3 total, 3 up, 3 in
Oct 10 23:51:04 np0005480824 nova_compute[260089]: 2025-10-11 03:51:04.594 2 DEBUG oslo_concurrency.lockutils [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:51:04 np0005480824 nova_compute[260089]: 2025-10-11 03:51:04.595 2 DEBUG oslo_concurrency.lockutils [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:51:04 np0005480824 nova_compute[260089]: 2025-10-11 03:51:04.606 2 DEBUG nova.virt.hardware [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 10 23:51:04 np0005480824 nova_compute[260089]: 2025-10-11 03:51:04.607 2 INFO nova.compute.claims [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 10 23:51:04 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1171: 321 pgs: 321 active+clean; 103 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 869 KiB/s wr, 121 op/s
Oct 10 23:51:04 np0005480824 nova_compute[260089]: 2025-10-11 03:51:04.763 2 DEBUG oslo_concurrency.processutils [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:51:05 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:51:05 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2561950790' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:51:05 np0005480824 nova_compute[260089]: 2025-10-11 03:51:05.219 2 DEBUG oslo_concurrency.processutils [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:51:05 np0005480824 nova_compute[260089]: 2025-10-11 03:51:05.228 2 DEBUG nova.compute.provider_tree [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 23:51:05 np0005480824 nova_compute[260089]: 2025-10-11 03:51:05.243 2 DEBUG nova.scheduler.client.report [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 23:51:05 np0005480824 nova_compute[260089]: 2025-10-11 03:51:05.260 2 DEBUG oslo_concurrency.lockutils [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:51:05 np0005480824 nova_compute[260089]: 2025-10-11 03:51:05.260 2 DEBUG nova.compute.manager [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 10 23:51:05 np0005480824 nova_compute[260089]: 2025-10-11 03:51:05.318 2 DEBUG nova.compute.manager [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 10 23:51:05 np0005480824 nova_compute[260089]: 2025-10-11 03:51:05.319 2 DEBUG nova.network.neutron [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 10 23:51:05 np0005480824 nova_compute[260089]: 2025-10-11 03:51:05.338 2 INFO nova.virt.libvirt.driver [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 10 23:51:05 np0005480824 nova_compute[260089]: 2025-10-11 03:51:05.363 2 DEBUG nova.compute.manager [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 10 23:51:05 np0005480824 nova_compute[260089]: 2025-10-11 03:51:05.460 2 DEBUG nova.compute.manager [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 10 23:51:05 np0005480824 nova_compute[260089]: 2025-10-11 03:51:05.463 2 DEBUG nova.virt.libvirt.driver [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 10 23:51:05 np0005480824 nova_compute[260089]: 2025-10-11 03:51:05.463 2 INFO nova.virt.libvirt.driver [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Creating image(s)#033[00m
Oct 10 23:51:05 np0005480824 nova_compute[260089]: 2025-10-11 03:51:05.497 2 DEBUG nova.storage.rbd_utils [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] rbd image b11faa30-2b52-45e0-b5f2-dd05b5050493_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:51:05 np0005480824 nova_compute[260089]: 2025-10-11 03:51:05.529 2 DEBUG nova.storage.rbd_utils [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] rbd image b11faa30-2b52-45e0-b5f2-dd05b5050493_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:51:05 np0005480824 nova_compute[260089]: 2025-10-11 03:51:05.552 2 DEBUG nova.storage.rbd_utils [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] rbd image b11faa30-2b52-45e0-b5f2-dd05b5050493_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:51:05 np0005480824 nova_compute[260089]: 2025-10-11 03:51:05.555 2 DEBUG oslo_concurrency.processutils [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cfffd1283a157d100c77a9cb8e3d536b83503a4e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:51:05 np0005480824 nova_compute[260089]: 2025-10-11 03:51:05.635 2 DEBUG oslo_concurrency.processutils [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cfffd1283a157d100c77a9cb8e3d536b83503a4e --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:51:05 np0005480824 nova_compute[260089]: 2025-10-11 03:51:05.636 2 DEBUG oslo_concurrency.lockutils [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Acquiring lock "cfffd1283a157d100c77a9cb8e3d536b83503a4e" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:51:05 np0005480824 nova_compute[260089]: 2025-10-11 03:51:05.637 2 DEBUG oslo_concurrency.lockutils [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Lock "cfffd1283a157d100c77a9cb8e3d536b83503a4e" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:51:05 np0005480824 nova_compute[260089]: 2025-10-11 03:51:05.637 2 DEBUG oslo_concurrency.lockutils [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Lock "cfffd1283a157d100c77a9cb8e3d536b83503a4e" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:51:05 np0005480824 nova_compute[260089]: 2025-10-11 03:51:05.655 2 DEBUG nova.storage.rbd_utils [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] rbd image b11faa30-2b52-45e0-b5f2-dd05b5050493_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:51:05 np0005480824 nova_compute[260089]: 2025-10-11 03:51:05.659 2 DEBUG oslo_concurrency.processutils [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/cfffd1283a157d100c77a9cb8e3d536b83503a4e b11faa30-2b52-45e0-b5f2-dd05b5050493_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:51:05 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:51:05 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2963390358' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:51:05 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:51:05 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2963390358' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:51:05 np0005480824 nova_compute[260089]: 2025-10-11 03:51:05.784 2 DEBUG nova.policy [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0dd21dcc2e2e4870bd3a6eb5146bc451', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '69ce475b5af645b7b89607f7ecc196d5', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 10 23:51:05 np0005480824 nova_compute[260089]: 2025-10-11 03:51:05.949 2 DEBUG oslo_concurrency.processutils [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/cfffd1283a157d100c77a9cb8e3d536b83503a4e b11faa30-2b52-45e0-b5f2-dd05b5050493_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.290s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:51:06 np0005480824 podman[276857]: 2025-10-11 03:51:06.001393869 +0000 UTC m=+0.055238299 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 10 23:51:06 np0005480824 nova_compute[260089]: 2025-10-11 03:51:06.028 2 DEBUG nova.storage.rbd_utils [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] resizing rbd image b11faa30-2b52-45e0-b5f2-dd05b5050493_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 10 23:51:06 np0005480824 nova_compute[260089]: 2025-10-11 03:51:06.127 2 DEBUG nova.objects.instance [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Lazy-loading 'migration_context' on Instance uuid b11faa30-2b52-45e0-b5f2-dd05b5050493 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:51:06 np0005480824 nova_compute[260089]: 2025-10-11 03:51:06.144 2 DEBUG nova.virt.libvirt.driver [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 10 23:51:06 np0005480824 nova_compute[260089]: 2025-10-11 03:51:06.144 2 DEBUG nova.virt.libvirt.driver [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Ensure instance console log exists: /var/lib/nova/instances/b11faa30-2b52-45e0-b5f2-dd05b5050493/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 10 23:51:06 np0005480824 nova_compute[260089]: 2025-10-11 03:51:06.145 2 DEBUG oslo_concurrency.lockutils [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:51:06 np0005480824 nova_compute[260089]: 2025-10-11 03:51:06.146 2 DEBUG oslo_concurrency.lockutils [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:51:06 np0005480824 nova_compute[260089]: 2025-10-11 03:51:06.146 2 DEBUG oslo_concurrency.lockutils [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:51:06 np0005480824 nova_compute[260089]: 2025-10-11 03:51:06.380 2 DEBUG nova.network.neutron [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Successfully created port: 3c111b42-0da1-4752-9b36-2df6a9486510 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 10 23:51:06 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1172: 321 pgs: 321 active+clean; 103 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 746 KiB/s wr, 104 op/s
Oct 10 23:51:07 np0005480824 nova_compute[260089]: 2025-10-11 03:51:07.077 2 DEBUG nova.network.neutron [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Successfully updated port: 3c111b42-0da1-4752-9b36-2df6a9486510 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 10 23:51:07 np0005480824 nova_compute[260089]: 2025-10-11 03:51:07.103 2 DEBUG oslo_concurrency.lockutils [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Acquiring lock "refresh_cache-b11faa30-2b52-45e0-b5f2-dd05b5050493" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:51:07 np0005480824 nova_compute[260089]: 2025-10-11 03:51:07.104 2 DEBUG oslo_concurrency.lockutils [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Acquired lock "refresh_cache-b11faa30-2b52-45e0-b5f2-dd05b5050493" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:51:07 np0005480824 nova_compute[260089]: 2025-10-11 03:51:07.104 2 DEBUG nova.network.neutron [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 10 23:51:07 np0005480824 nova_compute[260089]: 2025-10-11 03:51:07.246 2 DEBUG nova.compute.manager [req-20f24f01-6551-4c84-a5ee-b3085ab15f54 req-167c6968-6a05-4f98-996a-a889bc5edf8c 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Received event network-changed-3c111b42-0da1-4752-9b36-2df6a9486510 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:51:07 np0005480824 nova_compute[260089]: 2025-10-11 03:51:07.247 2 DEBUG nova.compute.manager [req-20f24f01-6551-4c84-a5ee-b3085ab15f54 req-167c6968-6a05-4f98-996a-a889bc5edf8c 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Refreshing instance network info cache due to event network-changed-3c111b42-0da1-4752-9b36-2df6a9486510. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 10 23:51:07 np0005480824 nova_compute[260089]: 2025-10-11 03:51:07.248 2 DEBUG oslo_concurrency.lockutils [req-20f24f01-6551-4c84-a5ee-b3085ab15f54 req-167c6968-6a05-4f98-996a-a889bc5edf8c 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "refresh_cache-b11faa30-2b52-45e0-b5f2-dd05b5050493" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:51:07 np0005480824 nova_compute[260089]: 2025-10-11 03:51:07.293 2 DEBUG nova.network.neutron [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 10 23:51:07 np0005480824 nova_compute[260089]: 2025-10-11 03:51:07.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:51:08 np0005480824 nova_compute[260089]: 2025-10-11 03:51:08.000 2 DEBUG nova.network.neutron [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Updating instance_info_cache with network_info: [{"id": "3c111b42-0da1-4752-9b36-2df6a9486510", "address": "fa:16:3e:84:9a:72", "network": {"id": "53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-198655629-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "69ce475b5af645b7b89607f7ecc196d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c111b42-0d", "ovs_interfaceid": "3c111b42-0da1-4752-9b36-2df6a9486510", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:51:08 np0005480824 nova_compute[260089]: 2025-10-11 03:51:08.026 2 DEBUG oslo_concurrency.lockutils [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Releasing lock "refresh_cache-b11faa30-2b52-45e0-b5f2-dd05b5050493" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:51:08 np0005480824 nova_compute[260089]: 2025-10-11 03:51:08.027 2 DEBUG nova.compute.manager [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Instance network_info: |[{"id": "3c111b42-0da1-4752-9b36-2df6a9486510", "address": "fa:16:3e:84:9a:72", "network": {"id": "53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-198655629-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "69ce475b5af645b7b89607f7ecc196d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c111b42-0d", "ovs_interfaceid": "3c111b42-0da1-4752-9b36-2df6a9486510", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 10 23:51:08 np0005480824 nova_compute[260089]: 2025-10-11 03:51:08.028 2 DEBUG oslo_concurrency.lockutils [req-20f24f01-6551-4c84-a5ee-b3085ab15f54 req-167c6968-6a05-4f98-996a-a889bc5edf8c 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquired lock "refresh_cache-b11faa30-2b52-45e0-b5f2-dd05b5050493" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:51:08 np0005480824 nova_compute[260089]: 2025-10-11 03:51:08.028 2 DEBUG nova.network.neutron [req-20f24f01-6551-4c84-a5ee-b3085ab15f54 req-167c6968-6a05-4f98-996a-a889bc5edf8c 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Refreshing network info cache for port 3c111b42-0da1-4752-9b36-2df6a9486510 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 10 23:51:08 np0005480824 nova_compute[260089]: 2025-10-11 03:51:08.034 2 DEBUG nova.virt.libvirt.driver [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Start _get_guest_xml network_info=[{"id": "3c111b42-0da1-4752-9b36-2df6a9486510", "address": "fa:16:3e:84:9a:72", "network": {"id": "53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-198655629-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "69ce475b5af645b7b89607f7ecc196d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c111b42-0d", "ovs_interfaceid": "3c111b42-0da1-4752-9b36-2df6a9486510", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T03:44:59Z,direct_url=<?>,disk_format='qcow2',id=7caca022-7dcc-40a9-8bd8-eb7d91b29390,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a9b71164a3274fcfb966194e51cb4849',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T03:45:02Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'device_type': 'disk', 'image_id': '7caca022-7dcc-40a9-8bd8-eb7d91b29390'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 10 23:51:08 np0005480824 nova_compute[260089]: 2025-10-11 03:51:08.042 2 WARNING nova.virt.libvirt.driver [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 10 23:51:08 np0005480824 nova_compute[260089]: 2025-10-11 03:51:08.048 2 DEBUG nova.virt.libvirt.host [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 10 23:51:08 np0005480824 nova_compute[260089]: 2025-10-11 03:51:08.048 2 DEBUG nova.virt.libvirt.host [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 10 23:51:08 np0005480824 nova_compute[260089]: 2025-10-11 03:51:08.056 2 DEBUG nova.virt.libvirt.host [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 10 23:51:08 np0005480824 nova_compute[260089]: 2025-10-11 03:51:08.057 2 DEBUG nova.virt.libvirt.host [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 10 23:51:08 np0005480824 nova_compute[260089]: 2025-10-11 03:51:08.058 2 DEBUG nova.virt.libvirt.driver [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 10 23:51:08 np0005480824 nova_compute[260089]: 2025-10-11 03:51:08.058 2 DEBUG nova.virt.hardware [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T03:44:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6707ecae-2ae2-4c2d-86dc-409bac38f6a5',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T03:44:59Z,direct_url=<?>,disk_format='qcow2',id=7caca022-7dcc-40a9-8bd8-eb7d91b29390,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a9b71164a3274fcfb966194e51cb4849',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T03:45:02Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 10 23:51:08 np0005480824 nova_compute[260089]: 2025-10-11 03:51:08.059 2 DEBUG nova.virt.hardware [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 10 23:51:08 np0005480824 nova_compute[260089]: 2025-10-11 03:51:08.059 2 DEBUG nova.virt.hardware [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 10 23:51:08 np0005480824 nova_compute[260089]: 2025-10-11 03:51:08.060 2 DEBUG nova.virt.hardware [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 10 23:51:08 np0005480824 nova_compute[260089]: 2025-10-11 03:51:08.060 2 DEBUG nova.virt.hardware [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 10 23:51:08 np0005480824 nova_compute[260089]: 2025-10-11 03:51:08.060 2 DEBUG nova.virt.hardware [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 10 23:51:08 np0005480824 nova_compute[260089]: 2025-10-11 03:51:08.061 2 DEBUG nova.virt.hardware [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 10 23:51:08 np0005480824 nova_compute[260089]: 2025-10-11 03:51:08.061 2 DEBUG nova.virt.hardware [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 10 23:51:08 np0005480824 nova_compute[260089]: 2025-10-11 03:51:08.062 2 DEBUG nova.virt.hardware [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 10 23:51:08 np0005480824 nova_compute[260089]: 2025-10-11 03:51:08.062 2 DEBUG nova.virt.hardware [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 10 23:51:08 np0005480824 nova_compute[260089]: 2025-10-11 03:51:08.062 2 DEBUG nova.virt.hardware [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 10 23:51:08 np0005480824 nova_compute[260089]: 2025-10-11 03:51:08.066 2 DEBUG oslo_concurrency.processutils [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:51:08 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:51:08 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3840864259' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:51:08 np0005480824 nova_compute[260089]: 2025-10-11 03:51:08.564 2 DEBUG oslo_concurrency.processutils [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:51:08 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e232 do_prune osdmap full prune enabled
Oct 10 23:51:08 np0005480824 nova_compute[260089]: 2025-10-11 03:51:08.595 2 DEBUG nova.storage.rbd_utils [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] rbd image b11faa30-2b52-45e0-b5f2-dd05b5050493_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:51:08 np0005480824 nova_compute[260089]: 2025-10-11 03:51:08.602 2 DEBUG oslo_concurrency.processutils [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:51:08 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e233 e233: 3 total, 3 up, 3 in
Oct 10 23:51:08 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e233: 3 total, 3 up, 3 in
Oct 10 23:51:08 np0005480824 nova_compute[260089]: 2025-10-11 03:51:08.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:51:08 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1174: 321 pgs: 321 active+clean; 134 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 6.1 MiB/s wr, 219 op/s
Oct 10 23:51:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:51:09 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2471168215' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:51:09 np0005480824 nova_compute[260089]: 2025-10-11 03:51:09.046 2 DEBUG oslo_concurrency.processutils [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:51:09 np0005480824 nova_compute[260089]: 2025-10-11 03:51:09.050 2 DEBUG nova.virt.libvirt.vif [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T03:51:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-840258588',display_name='tempest-VolumesBackupsTest-instance-840258588',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-840258588',id=8,image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJtSOTVNRsdtWhisafwSlo870EnSG9pK9SQO/x/iRe7bsz603dHApUhtqM/qxiNKJaYNpJ6pOnwb0vEkRahc2fbOAUYyeOiooHGledRT7nCnxhw4o4XzozntA+vU4Zea9g==',key_name='tempest-keypair-1624285471',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='69ce475b5af645b7b89607f7ecc196d5',ramdisk_id='',reservation_id='r-n2u0milr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-1570005285',owner_user_name='tempest-VolumesBackupsTest-1570005285-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T03:51:05Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='0dd21dcc2e2e4870bd3a6eb5146bc451',uuid=b11faa30-2b52-45e0-b5f2-dd05b5050493,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3c111b42-0da1-4752-9b36-2df6a9486510", "address": "fa:16:3e:84:9a:72", "network": {"id": "53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-198655629-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "69ce475b5af645b7b89607f7ecc196d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c111b42-0d", "ovs_interfaceid": "3c111b42-0da1-4752-9b36-2df6a9486510", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 10 23:51:09 np0005480824 nova_compute[260089]: 2025-10-11 03:51:09.051 2 DEBUG nova.network.os_vif_util [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Converting VIF {"id": "3c111b42-0da1-4752-9b36-2df6a9486510", "address": "fa:16:3e:84:9a:72", "network": {"id": "53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-198655629-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "69ce475b5af645b7b89607f7ecc196d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c111b42-0d", "ovs_interfaceid": "3c111b42-0da1-4752-9b36-2df6a9486510", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 23:51:09 np0005480824 nova_compute[260089]: 2025-10-11 03:51:09.052 2 DEBUG nova.network.os_vif_util [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:84:9a:72,bridge_name='br-int',has_traffic_filtering=True,id=3c111b42-0da1-4752-9b36-2df6a9486510,network=Network(53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c111b42-0d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 23:51:09 np0005480824 nova_compute[260089]: 2025-10-11 03:51:09.055 2 DEBUG nova.objects.instance [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Lazy-loading 'pci_devices' on Instance uuid b11faa30-2b52-45e0-b5f2-dd05b5050493 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:51:09 np0005480824 nova_compute[260089]: 2025-10-11 03:51:09.060 2 DEBUG nova.network.neutron [req-20f24f01-6551-4c84-a5ee-b3085ab15f54 req-167c6968-6a05-4f98-996a-a889bc5edf8c 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Updated VIF entry in instance network info cache for port 3c111b42-0da1-4752-9b36-2df6a9486510. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 10 23:51:09 np0005480824 nova_compute[260089]: 2025-10-11 03:51:09.061 2 DEBUG nova.network.neutron [req-20f24f01-6551-4c84-a5ee-b3085ab15f54 req-167c6968-6a05-4f98-996a-a889bc5edf8c 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Updating instance_info_cache with network_info: [{"id": "3c111b42-0da1-4752-9b36-2df6a9486510", "address": "fa:16:3e:84:9a:72", "network": {"id": "53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-198655629-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "69ce475b5af645b7b89607f7ecc196d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c111b42-0d", "ovs_interfaceid": "3c111b42-0da1-4752-9b36-2df6a9486510", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:51:09 np0005480824 nova_compute[260089]: 2025-10-11 03:51:09.089 2 DEBUG nova.virt.libvirt.driver [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] End _get_guest_xml xml=<domain type="kvm">
Oct 10 23:51:09 np0005480824 nova_compute[260089]:  <uuid>b11faa30-2b52-45e0-b5f2-dd05b5050493</uuid>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:  <name>instance-00000008</name>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:  <memory>131072</memory>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:  <vcpu>1</vcpu>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:  <metadata>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 10 23:51:09 np0005480824 nova_compute[260089]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:      <nova:name>tempest-VolumesBackupsTest-instance-840258588</nova:name>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:      <nova:creationTime>2025-10-11 03:51:08</nova:creationTime>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:      <nova:flavor name="m1.nano">
Oct 10 23:51:09 np0005480824 nova_compute[260089]:        <nova:memory>128</nova:memory>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:        <nova:disk>1</nova:disk>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:        <nova:swap>0</nova:swap>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:        <nova:ephemeral>0</nova:ephemeral>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:        <nova:vcpus>1</nova:vcpus>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:      </nova:flavor>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:      <nova:owner>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:        <nova:user uuid="0dd21dcc2e2e4870bd3a6eb5146bc451">tempest-VolumesBackupsTest-1570005285-project-member</nova:user>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:        <nova:project uuid="69ce475b5af645b7b89607f7ecc196d5">tempest-VolumesBackupsTest-1570005285</nova:project>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:      </nova:owner>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:      <nova:root type="image" uuid="7caca022-7dcc-40a9-8bd8-eb7d91b29390"/>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:      <nova:ports>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:        <nova:port uuid="3c111b42-0da1-4752-9b36-2df6a9486510">
Oct 10 23:51:09 np0005480824 nova_compute[260089]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:        </nova:port>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:      </nova:ports>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:    </nova:instance>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:  </metadata>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:  <sysinfo type="smbios">
Oct 10 23:51:09 np0005480824 nova_compute[260089]:    <system>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:      <entry name="manufacturer">RDO</entry>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:      <entry name="product">OpenStack Compute</entry>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:      <entry name="serial">b11faa30-2b52-45e0-b5f2-dd05b5050493</entry>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:      <entry name="uuid">b11faa30-2b52-45e0-b5f2-dd05b5050493</entry>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:      <entry name="family">Virtual Machine</entry>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:    </system>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:  </sysinfo>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:  <os>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:    <boot dev="hd"/>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:    <smbios mode="sysinfo"/>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:  </os>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:  <features>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:    <acpi/>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:    <apic/>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:    <vmcoreinfo/>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:  </features>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:  <clock offset="utc">
Oct 10 23:51:09 np0005480824 nova_compute[260089]:    <timer name="pit" tickpolicy="delay"/>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:    <timer name="hpet" present="no"/>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:  </clock>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:  <cpu mode="host-model" match="exact">
Oct 10 23:51:09 np0005480824 nova_compute[260089]:    <topology sockets="1" cores="1" threads="1"/>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:  </cpu>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:  <devices>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:    <disk type="network" device="disk">
Oct 10 23:51:09 np0005480824 nova_compute[260089]:      <driver type="raw" cache="none"/>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:      <source protocol="rbd" name="vms/b11faa30-2b52-45e0-b5f2-dd05b5050493_disk">
Oct 10 23:51:09 np0005480824 nova_compute[260089]:        <host name="192.168.122.100" port="6789"/>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:      </source>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:      <auth username="openstack">
Oct 10 23:51:09 np0005480824 nova_compute[260089]:        <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:      </auth>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:      <target dev="vda" bus="virtio"/>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:    </disk>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:    <disk type="network" device="cdrom">
Oct 10 23:51:09 np0005480824 nova_compute[260089]:      <driver type="raw" cache="none"/>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:      <source protocol="rbd" name="vms/b11faa30-2b52-45e0-b5f2-dd05b5050493_disk.config">
Oct 10 23:51:09 np0005480824 nova_compute[260089]:        <host name="192.168.122.100" port="6789"/>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:      </source>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:      <auth username="openstack">
Oct 10 23:51:09 np0005480824 nova_compute[260089]:        <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:      </auth>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:      <target dev="sda" bus="sata"/>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:    </disk>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:    <interface type="ethernet">
Oct 10 23:51:09 np0005480824 nova_compute[260089]:      <mac address="fa:16:3e:84:9a:72"/>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:      <model type="virtio"/>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:      <driver name="vhost" rx_queue_size="512"/>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:      <mtu size="1442"/>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:      <target dev="tap3c111b42-0d"/>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:    </interface>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:    <serial type="pty">
Oct 10 23:51:09 np0005480824 nova_compute[260089]:      <log file="/var/lib/nova/instances/b11faa30-2b52-45e0-b5f2-dd05b5050493/console.log" append="off"/>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:    </serial>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:    <video>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:      <model type="virtio"/>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:    </video>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:    <input type="tablet" bus="usb"/>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:    <rng model="virtio">
Oct 10 23:51:09 np0005480824 nova_compute[260089]:      <backend model="random">/dev/urandom</backend>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:    </rng>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root"/>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:    <controller type="usb" index="0"/>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:    <memballoon model="virtio">
Oct 10 23:51:09 np0005480824 nova_compute[260089]:      <stats period="10"/>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:    </memballoon>
Oct 10 23:51:09 np0005480824 nova_compute[260089]:  </devices>
Oct 10 23:51:09 np0005480824 nova_compute[260089]: </domain>
Oct 10 23:51:09 np0005480824 nova_compute[260089]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 10 23:51:09 np0005480824 nova_compute[260089]: 2025-10-11 03:51:09.090 2 DEBUG nova.compute.manager [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Preparing to wait for external event network-vif-plugged-3c111b42-0da1-4752-9b36-2df6a9486510 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 10 23:51:09 np0005480824 nova_compute[260089]: 2025-10-11 03:51:09.091 2 DEBUG oslo_concurrency.lockutils [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Acquiring lock "b11faa30-2b52-45e0-b5f2-dd05b5050493-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 23:51:09 np0005480824 nova_compute[260089]: 2025-10-11 03:51:09.091 2 DEBUG oslo_concurrency.lockutils [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Lock "b11faa30-2b52-45e0-b5f2-dd05b5050493-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 23:51:09 np0005480824 nova_compute[260089]: 2025-10-11 03:51:09.092 2 DEBUG oslo_concurrency.lockutils [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Lock "b11faa30-2b52-45e0-b5f2-dd05b5050493-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 23:51:09 np0005480824 nova_compute[260089]: 2025-10-11 03:51:09.093 2 DEBUG nova.virt.libvirt.vif [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T03:51:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-840258588',display_name='tempest-VolumesBackupsTest-instance-840258588',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-840258588',id=8,image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJtSOTVNRsdtWhisafwSlo870EnSG9pK9SQO/x/iRe7bsz603dHApUhtqM/qxiNKJaYNpJ6pOnwb0vEkRahc2fbOAUYyeOiooHGledRT7nCnxhw4o4XzozntA+vU4Zea9g==',key_name='tempest-keypair-1624285471',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='69ce475b5af645b7b89607f7ecc196d5',ramdisk_id='',reservation_id='r-n2u0milr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-1570005285',owner_user_name='tempest-VolumesBackupsTest-1570005285-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T03:51:05Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='0dd21dcc2e2e4870bd3a6eb5146bc451',uuid=b11faa30-2b52-45e0-b5f2-dd05b5050493,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3c111b42-0da1-4752-9b36-2df6a9486510", "address": "fa:16:3e:84:9a:72", "network": {"id": "53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-198655629-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "69ce475b5af645b7b89607f7ecc196d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c111b42-0d", "ovs_interfaceid": "3c111b42-0da1-4752-9b36-2df6a9486510", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 10 23:51:09 np0005480824 nova_compute[260089]: 2025-10-11 03:51:09.094 2 DEBUG nova.network.os_vif_util [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Converting VIF {"id": "3c111b42-0da1-4752-9b36-2df6a9486510", "address": "fa:16:3e:84:9a:72", "network": {"id": "53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-198655629-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "69ce475b5af645b7b89607f7ecc196d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c111b42-0d", "ovs_interfaceid": "3c111b42-0da1-4752-9b36-2df6a9486510", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 10 23:51:09 np0005480824 nova_compute[260089]: 2025-10-11 03:51:09.095 2 DEBUG nova.network.os_vif_util [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:84:9a:72,bridge_name='br-int',has_traffic_filtering=True,id=3c111b42-0da1-4752-9b36-2df6a9486510,network=Network(53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c111b42-0d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 10 23:51:09 np0005480824 nova_compute[260089]: 2025-10-11 03:51:09.095 2 DEBUG os_vif [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:9a:72,bridge_name='br-int',has_traffic_filtering=True,id=3c111b42-0da1-4752-9b36-2df6a9486510,network=Network(53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c111b42-0d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 10 23:51:09 np0005480824 nova_compute[260089]: 2025-10-11 03:51:09.096 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 23:51:09 np0005480824 nova_compute[260089]: 2025-10-11 03:51:09.097 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 23:51:09 np0005480824 nova_compute[260089]: 2025-10-11 03:51:09.098 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 10 23:51:09 np0005480824 nova_compute[260089]: 2025-10-11 03:51:09.100 2 DEBUG oslo_concurrency.lockutils [req-20f24f01-6551-4c84-a5ee-b3085ab15f54 req-167c6968-6a05-4f98-996a-a889bc5edf8c 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Releasing lock "refresh_cache-b11faa30-2b52-45e0-b5f2-dd05b5050493" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 10 23:51:09 np0005480824 nova_compute[260089]: 2025-10-11 03:51:09.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 23:51:09 np0005480824 nova_compute[260089]: 2025-10-11 03:51:09.104 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3c111b42-0d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 23:51:09 np0005480824 nova_compute[260089]: 2025-10-11 03:51:09.105 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3c111b42-0d, col_values=(('external_ids', {'iface-id': '3c111b42-0da1-4752-9b36-2df6a9486510', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:84:9a:72', 'vm-uuid': 'b11faa30-2b52-45e0-b5f2-dd05b5050493'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 23:51:09 np0005480824 nova_compute[260089]: 2025-10-11 03:51:09.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 23:51:09 np0005480824 NetworkManager[44969]: <info>  [1760154669.1128] manager: (tap3c111b42-0d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/57)
Oct 10 23:51:09 np0005480824 nova_compute[260089]: 2025-10-11 03:51:09.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 10 23:51:09 np0005480824 nova_compute[260089]: 2025-10-11 03:51:09.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 23:51:09 np0005480824 nova_compute[260089]: 2025-10-11 03:51:09.125 2 INFO os_vif [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:9a:72,bridge_name='br-int',has_traffic_filtering=True,id=3c111b42-0da1-4752-9b36-2df6a9486510,network=Network(53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c111b42-0d')
Oct 10 23:51:09 np0005480824 nova_compute[260089]: 2025-10-11 03:51:09.198 2 DEBUG nova.virt.libvirt.driver [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 10 23:51:09 np0005480824 nova_compute[260089]: 2025-10-11 03:51:09.199 2 DEBUG nova.virt.libvirt.driver [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 10 23:51:09 np0005480824 nova_compute[260089]: 2025-10-11 03:51:09.200 2 DEBUG nova.virt.libvirt.driver [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] No VIF found with MAC fa:16:3e:84:9a:72, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 10 23:51:09 np0005480824 nova_compute[260089]: 2025-10-11 03:51:09.201 2 INFO nova.virt.libvirt.driver [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Using config drive
Oct 10 23:51:09 np0005480824 nova_compute[260089]: 2025-10-11 03:51:09.239 2 DEBUG nova.storage.rbd_utils [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] rbd image b11faa30-2b52-45e0-b5f2-dd05b5050493_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 23:51:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e233 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:51:09 np0005480824 nova_compute[260089]: 2025-10-11 03:51:09.568 2 INFO nova.virt.libvirt.driver [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Creating config drive at /var/lib/nova/instances/b11faa30-2b52-45e0-b5f2-dd05b5050493/disk.config
Oct 10 23:51:09 np0005480824 nova_compute[260089]: 2025-10-11 03:51:09.574 2 DEBUG oslo_concurrency.processutils [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b11faa30-2b52-45e0-b5f2-dd05b5050493/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3na0kyn2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 23:51:09 np0005480824 nova_compute[260089]: 2025-10-11 03:51:09.714 2 DEBUG oslo_concurrency.processutils [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b11faa30-2b52-45e0-b5f2-dd05b5050493/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3na0kyn2" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 23:51:09 np0005480824 nova_compute[260089]: 2025-10-11 03:51:09.744 2 DEBUG nova.storage.rbd_utils [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] rbd image b11faa30-2b52-45e0-b5f2-dd05b5050493_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 23:51:09 np0005480824 nova_compute[260089]: 2025-10-11 03:51:09.748 2 DEBUG oslo_concurrency.processutils [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b11faa30-2b52-45e0-b5f2-dd05b5050493/disk.config b11faa30-2b52-45e0-b5f2-dd05b5050493_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 23:51:09 np0005480824 nova_compute[260089]: 2025-10-11 03:51:09.928 2 DEBUG oslo_concurrency.processutils [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b11faa30-2b52-45e0-b5f2-dd05b5050493/disk.config b11faa30-2b52-45e0-b5f2-dd05b5050493_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.180s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 23:51:09 np0005480824 nova_compute[260089]: 2025-10-11 03:51:09.929 2 INFO nova.virt.libvirt.driver [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Deleting local config drive /var/lib/nova/instances/b11faa30-2b52-45e0-b5f2-dd05b5050493/disk.config because it was imported into RBD.
Oct 10 23:51:09 np0005480824 kernel: tap3c111b42-0d: entered promiscuous mode
Oct 10 23:51:09 np0005480824 NetworkManager[44969]: <info>  [1760154669.9911] manager: (tap3c111b42-0d): new Tun device (/org/freedesktop/NetworkManager/Devices/58)
Oct 10 23:51:09 np0005480824 ovn_controller[152667]: 2025-10-11T03:51:09Z|00088|binding|INFO|Claiming lport 3c111b42-0da1-4752-9b36-2df6a9486510 for this chassis.
Oct 10 23:51:09 np0005480824 ovn_controller[152667]: 2025-10-11T03:51:09Z|00089|binding|INFO|3c111b42-0da1-4752-9b36-2df6a9486510: Claiming fa:16:3e:84:9a:72 10.100.0.9
Oct 10 23:51:09 np0005480824 nova_compute[260089]: 2025-10-11 03:51:09.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 23:51:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:10.009 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:84:9a:72 10.100.0.9'], port_security=['fa:16:3e:84:9a:72 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'b11faa30-2b52-45e0-b5f2-dd05b5050493', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '69ce475b5af645b7b89607f7ecc196d5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4f5c97de-169f-4c73-b4fd-2b99fea347b8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f8dd8adb-2052-443b-8fa5-01e320e55d02, chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], logical_port=3c111b42-0da1-4752-9b36-2df6a9486510) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 10 23:51:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:10.010 162245 INFO neutron.agent.ovn.metadata.agent [-] Port 3c111b42-0da1-4752-9b36-2df6a9486510 in datapath 53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0 bound to our chassis
Oct 10 23:51:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:10.011 162245 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0
Oct 10 23:51:10 np0005480824 systemd-machined[215071]: New machine qemu-8-instance-00000008.
Oct 10 23:51:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:10.033 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[85ea9651-5af0-4689-84a6-b3ac6db497f7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 23:51:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:10.034 162245 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap53e5ffdf-11 in ovnmeta-53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 10 23:51:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:10.037 267859 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap53e5ffdf-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 10 23:51:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:10.038 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[57fb61fa-7631-4cd0-ab1a-3d02583914b8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 23:51:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:10.039 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[944ac3cc-1879-41d8-88ce-b3cd24d505bb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 23:51:10 np0005480824 systemd[1]: Started Virtual Machine qemu-8-instance-00000008.
Oct 10 23:51:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:10.067 162666 DEBUG oslo.privsep.daemon [-] privsep: reply[37824e16-1b51-4514-8bb2-28f98fe62c0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 23:51:10 np0005480824 systemd-udevd[277085]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 23:51:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:10.092 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[16e4780e-44ee-49c0-a43f-590b7e6049b8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 23:51:10 np0005480824 NetworkManager[44969]: <info>  [1760154670.1109] device (tap3c111b42-0d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 10 23:51:10 np0005480824 NetworkManager[44969]: <info>  [1760154670.1118] device (tap3c111b42-0d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 10 23:51:10 np0005480824 nova_compute[260089]: 2025-10-11 03:51:10.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:51:10 np0005480824 ovn_controller[152667]: 2025-10-11T03:51:10Z|00090|binding|INFO|Setting lport 3c111b42-0da1-4752-9b36-2df6a9486510 ovn-installed in OVS
Oct 10 23:51:10 np0005480824 ovn_controller[152667]: 2025-10-11T03:51:10Z|00091|binding|INFO|Setting lport 3c111b42-0da1-4752-9b36-2df6a9486510 up in Southbound
Oct 10 23:51:10 np0005480824 nova_compute[260089]: 2025-10-11 03:51:10.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:51:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:10.135 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[6cefd373-ee59-40ba-9e47-6e81628779f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:51:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:10.143 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[db810828-3417-46fb-8c7f-1b7d422b57be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:51:10 np0005480824 NetworkManager[44969]: <info>  [1760154670.1456] manager: (tap53e5ffdf-10): new Veth device (/org/freedesktop/NetworkManager/Devices/59)
Oct 10 23:51:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:10.196 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[76ca4943-c1a2-4262-a72f-c31b263bd6e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:51:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:10.200 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[fb1b2bfd-1439-4c53-a3d8-62aa4c528df8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:51:10 np0005480824 NetworkManager[44969]: <info>  [1760154670.2410] device (tap53e5ffdf-10): carrier: link connected
Oct 10 23:51:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:10.252 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[88748979-e2a6-4eeb-bc14-628a9b3ffad1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:51:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:10.282 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[875ff17e-afa5-43c9-bf22-f9d906e2bdb1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap53e5ffdf-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:e0:43'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 413790, 'reachable_time': 38062, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277115, 'error': None, 'target': 'ovnmeta-53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:51:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:10.315 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[a8877304-63f9-42e5-9cf9-348b11d7685e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe01:e043'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 413790, 'tstamp': 413790}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 277116, 'error': None, 'target': 'ovnmeta-53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:51:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:10.345 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[33ae455b-de13-49ac-a879-15685b3f6941]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap53e5ffdf-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:e0:43'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 413790, 'reachable_time': 38062, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 277117, 'error': None, 'target': 'ovnmeta-53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:51:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:10.390 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[84ea9504-54f9-4191-9b22-b1d60eaaf3b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:51:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:10.484 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[e71b23e8-af46-4e1c-aefc-d2a37be32c7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:51:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:10.487 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap53e5ffdf-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:51:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:10.487 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 10 23:51:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:10.488 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap53e5ffdf-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:51:10 np0005480824 NetworkManager[44969]: <info>  [1760154670.4921] manager: (tap53e5ffdf-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/60)
Oct 10 23:51:10 np0005480824 kernel: tap53e5ffdf-10: entered promiscuous mode
Oct 10 23:51:10 np0005480824 nova_compute[260089]: 2025-10-11 03:51:10.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:51:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:10.497 162245 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:51:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:10.498 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:51:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:10.498 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:51:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:10.499 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap53e5ffdf-10, col_values=(('external_ids', {'iface-id': 'e3d8cf16-8a21-4a19-8fd9-2779fca0c5ae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:51:10 np0005480824 nova_compute[260089]: 2025-10-11 03:51:10.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:51:10 np0005480824 ovn_controller[152667]: 2025-10-11T03:51:10Z|00092|binding|INFO|Releasing lport e3d8cf16-8a21-4a19-8fd9-2779fca0c5ae from this chassis (sb_readonly=0)
Oct 10 23:51:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:10.504 162245 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 10 23:51:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:10.505 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[14efccf4-ed82-4c58-9ee2-9a44eb5f84e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:51:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:10.507 162245 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 10 23:51:10 np0005480824 ovn_metadata_agent[162240]: global
Oct 10 23:51:10 np0005480824 ovn_metadata_agent[162240]:    log         /dev/log local0 debug
Oct 10 23:51:10 np0005480824 ovn_metadata_agent[162240]:    log-tag     haproxy-metadata-proxy-53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0
Oct 10 23:51:10 np0005480824 ovn_metadata_agent[162240]:    user        root
Oct 10 23:51:10 np0005480824 ovn_metadata_agent[162240]:    group       root
Oct 10 23:51:10 np0005480824 ovn_metadata_agent[162240]:    maxconn     1024
Oct 10 23:51:10 np0005480824 ovn_metadata_agent[162240]:    pidfile     /var/lib/neutron/external/pids/53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0.pid.haproxy
Oct 10 23:51:10 np0005480824 ovn_metadata_agent[162240]:    daemon
Oct 10 23:51:10 np0005480824 ovn_metadata_agent[162240]: 
Oct 10 23:51:10 np0005480824 ovn_metadata_agent[162240]: defaults
Oct 10 23:51:10 np0005480824 ovn_metadata_agent[162240]:    log global
Oct 10 23:51:10 np0005480824 ovn_metadata_agent[162240]:    mode http
Oct 10 23:51:10 np0005480824 ovn_metadata_agent[162240]:    option httplog
Oct 10 23:51:10 np0005480824 ovn_metadata_agent[162240]:    option dontlognull
Oct 10 23:51:10 np0005480824 ovn_metadata_agent[162240]:    option http-server-close
Oct 10 23:51:10 np0005480824 ovn_metadata_agent[162240]:    option forwardfor
Oct 10 23:51:10 np0005480824 ovn_metadata_agent[162240]:    retries                 3
Oct 10 23:51:10 np0005480824 ovn_metadata_agent[162240]:    timeout http-request    30s
Oct 10 23:51:10 np0005480824 ovn_metadata_agent[162240]:    timeout connect         30s
Oct 10 23:51:10 np0005480824 ovn_metadata_agent[162240]:    timeout client          32s
Oct 10 23:51:10 np0005480824 ovn_metadata_agent[162240]:    timeout server          32s
Oct 10 23:51:10 np0005480824 ovn_metadata_agent[162240]:    timeout http-keep-alive 30s
Oct 10 23:51:10 np0005480824 ovn_metadata_agent[162240]: 
Oct 10 23:51:10 np0005480824 ovn_metadata_agent[162240]: 
Oct 10 23:51:10 np0005480824 ovn_metadata_agent[162240]: listen listener
Oct 10 23:51:10 np0005480824 ovn_metadata_agent[162240]:    bind 169.254.169.254:80
Oct 10 23:51:10 np0005480824 ovn_metadata_agent[162240]:    server metadata /var/lib/neutron/metadata_proxy
Oct 10 23:51:10 np0005480824 ovn_metadata_agent[162240]:    http-request add-header X-OVN-Network-ID 53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0
Oct 10 23:51:10 np0005480824 ovn_metadata_agent[162240]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 10 23:51:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:10.511 162245 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0', 'env', 'PROCESS_TAG=haproxy-53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 10 23:51:10 np0005480824 nova_compute[260089]: 2025-10-11 03:51:10.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:51:10 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1175: 321 pgs: 321 active+clean; 134 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 76 KiB/s rd, 4.7 MiB/s wr, 112 op/s
Oct 10 23:51:10 np0005480824 podman[277149]: 2025-10-11 03:51:10.995526481 +0000 UTC m=+0.064849143 container create d6130f0e932e3e796c70d8885498316dcadfa1dcf2de952907715ce71819451d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct 10 23:51:11 np0005480824 systemd[1]: Started libpod-conmon-d6130f0e932e3e796c70d8885498316dcadfa1dcf2de952907715ce71819451d.scope.
Oct 10 23:51:11 np0005480824 podman[277149]: 2025-10-11 03:51:10.960912639 +0000 UTC m=+0.030235301 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 10 23:51:11 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:51:11 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7121b1bb49edad82ae55f53a91e24a72d90dfdc61139b1c966efd24ef6a7edf1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 10 23:51:11 np0005480824 podman[277149]: 2025-10-11 03:51:11.085562886 +0000 UTC m=+0.154885568 container init d6130f0e932e3e796c70d8885498316dcadfa1dcf2de952907715ce71819451d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true)
Oct 10 23:51:11 np0005480824 podman[277149]: 2025-10-11 03:51:11.092087898 +0000 UTC m=+0.161410560 container start d6130f0e932e3e796c70d8885498316dcadfa1dcf2de952907715ce71819451d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 10 23:51:11 np0005480824 neutron-haproxy-ovnmeta-53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0[277206]: [NOTICE]   (277211) : New worker (277213) forked
Oct 10 23:51:11 np0005480824 neutron-haproxy-ovnmeta-53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0[277206]: [NOTICE]   (277211) : Loading success.
Oct 10 23:51:11 np0005480824 nova_compute[260089]: 2025-10-11 03:51:11.560 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760154671.5601275, b11faa30-2b52-45e0-b5f2-dd05b5050493 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:51:11 np0005480824 nova_compute[260089]: 2025-10-11 03:51:11.561 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] VM Started (Lifecycle Event)#033[00m
Oct 10 23:51:11 np0005480824 nova_compute[260089]: 2025-10-11 03:51:11.691 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:51:11 np0005480824 nova_compute[260089]: 2025-10-11 03:51:11.697 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760154671.5602684, b11faa30-2b52-45e0-b5f2-dd05b5050493 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:51:11 np0005480824 nova_compute[260089]: 2025-10-11 03:51:11.697 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] VM Paused (Lifecycle Event)#033[00m
Oct 10 23:51:11 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e233 do_prune osdmap full prune enabled
Oct 10 23:51:11 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e234 e234: 3 total, 3 up, 3 in
Oct 10 23:51:11 np0005480824 nova_compute[260089]: 2025-10-11 03:51:11.719 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:51:11 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e234: 3 total, 3 up, 3 in
Oct 10 23:51:11 np0005480824 nova_compute[260089]: 2025-10-11 03:51:11.726 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 10 23:51:11 np0005480824 nova_compute[260089]: 2025-10-11 03:51:11.750 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 10 23:51:11 np0005480824 nova_compute[260089]: 2025-10-11 03:51:11.859 2 DEBUG nova.compute.manager [req-c6226eb1-66ab-477f-9dbb-78f4f895ea53 req-60378cfa-2943-4e2f-adaf-da0aa4a77e26 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Received event network-vif-plugged-3c111b42-0da1-4752-9b36-2df6a9486510 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:51:11 np0005480824 nova_compute[260089]: 2025-10-11 03:51:11.860 2 DEBUG oslo_concurrency.lockutils [req-c6226eb1-66ab-477f-9dbb-78f4f895ea53 req-60378cfa-2943-4e2f-adaf-da0aa4a77e26 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "b11faa30-2b52-45e0-b5f2-dd05b5050493-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:51:11 np0005480824 nova_compute[260089]: 2025-10-11 03:51:11.860 2 DEBUG oslo_concurrency.lockutils [req-c6226eb1-66ab-477f-9dbb-78f4f895ea53 req-60378cfa-2943-4e2f-adaf-da0aa4a77e26 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "b11faa30-2b52-45e0-b5f2-dd05b5050493-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:51:11 np0005480824 nova_compute[260089]: 2025-10-11 03:51:11.861 2 DEBUG oslo_concurrency.lockutils [req-c6226eb1-66ab-477f-9dbb-78f4f895ea53 req-60378cfa-2943-4e2f-adaf-da0aa4a77e26 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "b11faa30-2b52-45e0-b5f2-dd05b5050493-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:51:11 np0005480824 nova_compute[260089]: 2025-10-11 03:51:11.861 2 DEBUG nova.compute.manager [req-c6226eb1-66ab-477f-9dbb-78f4f895ea53 req-60378cfa-2943-4e2f-adaf-da0aa4a77e26 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Processing event network-vif-plugged-3c111b42-0da1-4752-9b36-2df6a9486510 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 10 23:51:11 np0005480824 nova_compute[260089]: 2025-10-11 03:51:11.862 2 DEBUG nova.compute.manager [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 10 23:51:11 np0005480824 nova_compute[260089]: 2025-10-11 03:51:11.867 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760154671.8668242, b11faa30-2b52-45e0-b5f2-dd05b5050493 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 10 23:51:11 np0005480824 nova_compute[260089]: 2025-10-11 03:51:11.867 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] VM Resumed (Lifecycle Event)
Oct 10 23:51:11 np0005480824 nova_compute[260089]: 2025-10-11 03:51:11.871 2 DEBUG nova.virt.libvirt.driver [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 10 23:51:11 np0005480824 nova_compute[260089]: 2025-10-11 03:51:11.878 2 INFO nova.virt.libvirt.driver [-] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Instance spawned successfully.
Oct 10 23:51:11 np0005480824 nova_compute[260089]: 2025-10-11 03:51:11.879 2 DEBUG nova.virt.libvirt.driver [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 10 23:51:11 np0005480824 nova_compute[260089]: 2025-10-11 03:51:11.905 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 23:51:11 np0005480824 nova_compute[260089]: 2025-10-11 03:51:11.915 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 10 23:51:11 np0005480824 nova_compute[260089]: 2025-10-11 03:51:11.922 2 DEBUG nova.virt.libvirt.driver [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 23:51:11 np0005480824 nova_compute[260089]: 2025-10-11 03:51:11.923 2 DEBUG nova.virt.libvirt.driver [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 23:51:11 np0005480824 nova_compute[260089]: 2025-10-11 03:51:11.924 2 DEBUG nova.virt.libvirt.driver [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 23:51:11 np0005480824 nova_compute[260089]: 2025-10-11 03:51:11.925 2 DEBUG nova.virt.libvirt.driver [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 23:51:11 np0005480824 nova_compute[260089]: 2025-10-11 03:51:11.926 2 DEBUG nova.virt.libvirt.driver [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 23:51:11 np0005480824 nova_compute[260089]: 2025-10-11 03:51:11.927 2 DEBUG nova.virt.libvirt.driver [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 23:51:11 np0005480824 nova_compute[260089]: 2025-10-11 03:51:11.937 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 10 23:51:11 np0005480824 nova_compute[260089]: 2025-10-11 03:51:11.982 2 INFO nova.compute.manager [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Took 6.52 seconds to spawn the instance on the hypervisor.
Oct 10 23:51:11 np0005480824 nova_compute[260089]: 2025-10-11 03:51:11.983 2 DEBUG nova.compute.manager [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 23:51:12 np0005480824 nova_compute[260089]: 2025-10-11 03:51:12.073 2 INFO nova.compute.manager [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Took 7.52 seconds to build instance.
Oct 10 23:51:12 np0005480824 nova_compute[260089]: 2025-10-11 03:51:12.093 2 DEBUG oslo_concurrency.lockutils [None req-5a1d1630-faf4-47ff-a77f-96a65ce8f0f2 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Lock "b11faa30-2b52-45e0-b5f2-dd05b5050493" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 23:51:12 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:51:12 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3201950149' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:51:12 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:51:12 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3201950149' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:51:12 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1177: 321 pgs: 321 active+clean; 134 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 108 KiB/s rd, 4.7 MiB/s wr, 156 op/s
Oct 10 23:51:13 np0005480824 nova_compute[260089]: 2025-10-11 03:51:13.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 23:51:14 np0005480824 nova_compute[260089]: 2025-10-11 03:51:14.110 2 DEBUG nova.compute.manager [req-65e813f5-21db-4c1f-929d-6f8838690e0b req-10cbad1c-f4fb-49a5-bc10-c885fe8b929e 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Received event network-vif-plugged-3c111b42-0da1-4752-9b36-2df6a9486510 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 23:51:14 np0005480824 nova_compute[260089]: 2025-10-11 03:51:14.111 2 DEBUG oslo_concurrency.lockutils [req-65e813f5-21db-4c1f-929d-6f8838690e0b req-10cbad1c-f4fb-49a5-bc10-c885fe8b929e 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "b11faa30-2b52-45e0-b5f2-dd05b5050493-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 23:51:14 np0005480824 nova_compute[260089]: 2025-10-11 03:51:14.111 2 DEBUG oslo_concurrency.lockutils [req-65e813f5-21db-4c1f-929d-6f8838690e0b req-10cbad1c-f4fb-49a5-bc10-c885fe8b929e 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "b11faa30-2b52-45e0-b5f2-dd05b5050493-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 23:51:14 np0005480824 nova_compute[260089]: 2025-10-11 03:51:14.111 2 DEBUG oslo_concurrency.lockutils [req-65e813f5-21db-4c1f-929d-6f8838690e0b req-10cbad1c-f4fb-49a5-bc10-c885fe8b929e 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "b11faa30-2b52-45e0-b5f2-dd05b5050493-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 23:51:14 np0005480824 nova_compute[260089]: 2025-10-11 03:51:14.112 2 DEBUG nova.compute.manager [req-65e813f5-21db-4c1f-929d-6f8838690e0b req-10cbad1c-f4fb-49a5-bc10-c885fe8b929e 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] No waiting events found dispatching network-vif-plugged-3c111b42-0da1-4752-9b36-2df6a9486510 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 10 23:51:14 np0005480824 nova_compute[260089]: 2025-10-11 03:51:14.112 2 WARNING nova.compute.manager [req-65e813f5-21db-4c1f-929d-6f8838690e0b req-10cbad1c-f4fb-49a5-bc10-c885fe8b929e 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Received unexpected event network-vif-plugged-3c111b42-0da1-4752-9b36-2df6a9486510 for instance with vm_state active and task_state None.
Oct 10 23:51:14 np0005480824 nova_compute[260089]: 2025-10-11 03:51:14.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 23:51:14 np0005480824 nova_compute[260089]: 2025-10-11 03:51:14.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 23:51:14 np0005480824 NetworkManager[44969]: <info>  [1760154674.2300] manager: (patch-br-int-to-provnet-e62e0ad0-b027-41f2-91f0-70373ec97251): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/61)
Oct 10 23:51:14 np0005480824 NetworkManager[44969]: <info>  [1760154674.2322] manager: (patch-provnet-e62e0ad0-b027-41f2-91f0-70373ec97251-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/62)
Oct 10 23:51:14 np0005480824 nova_compute[260089]: 2025-10-11 03:51:14.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 23:51:14 np0005480824 ovn_controller[152667]: 2025-10-11T03:51:14Z|00093|binding|INFO|Releasing lport e3d8cf16-8a21-4a19-8fd9-2779fca0c5ae from this chassis (sb_readonly=0)
Oct 10 23:51:14 np0005480824 nova_compute[260089]: 2025-10-11 03:51:14.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 23:51:14 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:51:14 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e234 do_prune osdmap full prune enabled
Oct 10 23:51:14 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e235 e235: 3 total, 3 up, 3 in
Oct 10 23:51:14 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e235: 3 total, 3 up, 3 in
Oct 10 23:51:14 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1179: 321 pgs: 321 active+clean; 134 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 31 KiB/s wr, 58 op/s
Oct 10 23:51:16 np0005480824 nova_compute[260089]: 2025-10-11 03:51:16.321 2 DEBUG nova.compute.manager [req-577a3b02-bce0-4fe6-9f32-9fa872971f9c req-dc308a1f-82f2-4ed8-8aca-1a9339230372 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Received event network-changed-3c111b42-0da1-4752-9b36-2df6a9486510 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 23:51:16 np0005480824 nova_compute[260089]: 2025-10-11 03:51:16.321 2 DEBUG nova.compute.manager [req-577a3b02-bce0-4fe6-9f32-9fa872971f9c req-dc308a1f-82f2-4ed8-8aca-1a9339230372 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Refreshing instance network info cache due to event network-changed-3c111b42-0da1-4752-9b36-2df6a9486510. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 10 23:51:16 np0005480824 nova_compute[260089]: 2025-10-11 03:51:16.322 2 DEBUG oslo_concurrency.lockutils [req-577a3b02-bce0-4fe6-9f32-9fa872971f9c req-dc308a1f-82f2-4ed8-8aca-1a9339230372 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "refresh_cache-b11faa30-2b52-45e0-b5f2-dd05b5050493" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 10 23:51:16 np0005480824 nova_compute[260089]: 2025-10-11 03:51:16.322 2 DEBUG oslo_concurrency.lockutils [req-577a3b02-bce0-4fe6-9f32-9fa872971f9c req-dc308a1f-82f2-4ed8-8aca-1a9339230372 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquired lock "refresh_cache-b11faa30-2b52-45e0-b5f2-dd05b5050493" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 10 23:51:16 np0005480824 nova_compute[260089]: 2025-10-11 03:51:16.322 2 DEBUG nova.network.neutron [req-577a3b02-bce0-4fe6-9f32-9fa872971f9c req-dc308a1f-82f2-4ed8-8aca-1a9339230372 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Refreshing network info cache for port 3c111b42-0da1-4752-9b36-2df6a9486510 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 10 23:51:16 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1180: 321 pgs: 321 active+clean; 134 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 23 KiB/s wr, 43 op/s
Oct 10 23:51:17 np0005480824 nova_compute[260089]: 2025-10-11 03:51:17.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 23:51:17 np0005480824 nova_compute[260089]: 2025-10-11 03:51:17.466 2 DEBUG nova.network.neutron [req-577a3b02-bce0-4fe6-9f32-9fa872971f9c req-dc308a1f-82f2-4ed8-8aca-1a9339230372 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Updated VIF entry in instance network info cache for port 3c111b42-0da1-4752-9b36-2df6a9486510. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 10 23:51:17 np0005480824 nova_compute[260089]: 2025-10-11 03:51:17.469 2 DEBUG nova.network.neutron [req-577a3b02-bce0-4fe6-9f32-9fa872971f9c req-dc308a1f-82f2-4ed8-8aca-1a9339230372 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Updating instance_info_cache with network_info: [{"id": "3c111b42-0da1-4752-9b36-2df6a9486510", "address": "fa:16:3e:84:9a:72", "network": {"id": "53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-198655629-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "69ce475b5af645b7b89607f7ecc196d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c111b42-0d", "ovs_interfaceid": "3c111b42-0da1-4752-9b36-2df6a9486510", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 23:51:17 np0005480824 nova_compute[260089]: 2025-10-11 03:51:17.505 2 DEBUG oslo_concurrency.lockutils [req-577a3b02-bce0-4fe6-9f32-9fa872971f9c req-dc308a1f-82f2-4ed8-8aca-1a9339230372 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Releasing lock "refresh_cache-b11faa30-2b52-45e0-b5f2-dd05b5050493" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 10 23:51:18 np0005480824 nova_compute[260089]: 2025-10-11 03:51:18.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 23:51:18 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1181: 321 pgs: 321 active+clean; 134 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 24 KiB/s wr, 161 op/s
Oct 10 23:51:19 np0005480824 nova_compute[260089]: 2025-10-11 03:51:19.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 23:51:19 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e235 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:51:19 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e235 do_prune osdmap full prune enabled
Oct 10 23:51:19 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e236 e236: 3 total, 3 up, 3 in
Oct 10 23:51:19 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e236: 3 total, 3 up, 3 in
Oct 10 23:51:20 np0005480824 nova_compute[260089]: 2025-10-11 03:51:20.105 2 DEBUG oslo_concurrency.lockutils [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] Acquiring lock "c0950410-c7b0-4cc6-9994-7a0380d77b7e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 23:51:20 np0005480824 nova_compute[260089]: 2025-10-11 03:51:20.106 2 DEBUG oslo_concurrency.lockutils [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] Lock "c0950410-c7b0-4cc6-9994-7a0380d77b7e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 23:51:20 np0005480824 nova_compute[260089]: 2025-10-11 03:51:20.127 2 DEBUG nova.compute.manager [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] [instance: c0950410-c7b0-4cc6-9994-7a0380d77b7e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 10 23:51:20 np0005480824 nova_compute[260089]: 2025-10-11 03:51:20.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 23:51:20 np0005480824 nova_compute[260089]: 2025-10-11 03:51:20.215 2 DEBUG oslo_concurrency.lockutils [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 23:51:20 np0005480824 nova_compute[260089]: 2025-10-11 03:51:20.216 2 DEBUG oslo_concurrency.lockutils [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 23:51:20 np0005480824 nova_compute[260089]: 2025-10-11 03:51:20.227 2 DEBUG nova.virt.hardware [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 10 23:51:20 np0005480824 nova_compute[260089]: 2025-10-11 03:51:20.228 2 INFO nova.compute.claims [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] [instance: c0950410-c7b0-4cc6-9994-7a0380d77b7e] Claim successful on node compute-0.ctlplane.example.com
Oct 10 23:51:20 np0005480824 nova_compute[260089]: 2025-10-11 03:51:20.366 2 DEBUG oslo_concurrency.processutils [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 23:51:20 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1183: 321 pgs: 321 active+clean; 134 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 767 B/s wr, 117 op/s
Oct 10 23:51:20 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:51:20 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2408213983' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:51:20 np0005480824 nova_compute[260089]: 2025-10-11 03:51:20.842 2 DEBUG oslo_concurrency.processutils [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 23:51:20 np0005480824 nova_compute[260089]: 2025-10-11 03:51:20.855 2 DEBUG nova.compute.provider_tree [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 23:51:20 np0005480824 nova_compute[260089]: 2025-10-11 03:51:20.878 2 DEBUG nova.scheduler.client.report [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 23:51:20 np0005480824 nova_compute[260089]: 2025-10-11 03:51:20.904 2 DEBUG oslo_concurrency.lockutils [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 23:51:20 np0005480824 nova_compute[260089]: 2025-10-11 03:51:20.906 2 DEBUG nova.compute.manager [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] [instance: c0950410-c7b0-4cc6-9994-7a0380d77b7e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 10 23:51:20 np0005480824 nova_compute[260089]: 2025-10-11 03:51:20.957 2 DEBUG nova.compute.manager [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] [instance: c0950410-c7b0-4cc6-9994-7a0380d77b7e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 10 23:51:20 np0005480824 nova_compute[260089]: 2025-10-11 03:51:20.958 2 DEBUG nova.network.neutron [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] [instance: c0950410-c7b0-4cc6-9994-7a0380d77b7e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 10 23:51:20 np0005480824 nova_compute[260089]: 2025-10-11 03:51:20.986 2 INFO nova.virt.libvirt.driver [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] [instance: c0950410-c7b0-4cc6-9994-7a0380d77b7e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 10 23:51:21 np0005480824 nova_compute[260089]: 2025-10-11 03:51:21.007 2 DEBUG nova.compute.manager [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] [instance: c0950410-c7b0-4cc6-9994-7a0380d77b7e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 10 23:51:21 np0005480824 nova_compute[260089]: 2025-10-11 03:51:21.104 2 DEBUG nova.compute.manager [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] [instance: c0950410-c7b0-4cc6-9994-7a0380d77b7e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 10 23:51:21 np0005480824 nova_compute[260089]: 2025-10-11 03:51:21.106 2 DEBUG nova.virt.libvirt.driver [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] [instance: c0950410-c7b0-4cc6-9994-7a0380d77b7e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 10 23:51:21 np0005480824 nova_compute[260089]: 2025-10-11 03:51:21.107 2 INFO nova.virt.libvirt.driver [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] [instance: c0950410-c7b0-4cc6-9994-7a0380d77b7e] Creating image(s)
Oct 10 23:51:21 np0005480824 nova_compute[260089]: 2025-10-11 03:51:21.143 2 DEBUG nova.storage.rbd_utils [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] rbd image c0950410-c7b0-4cc6-9994-7a0380d77b7e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 23:51:21 np0005480824 nova_compute[260089]: 2025-10-11 03:51:21.185 2 DEBUG nova.storage.rbd_utils [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] rbd image c0950410-c7b0-4cc6-9994-7a0380d77b7e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 23:51:21 np0005480824 nova_compute[260089]: 2025-10-11 03:51:21.226 2 DEBUG nova.storage.rbd_utils [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] rbd image c0950410-c7b0-4cc6-9994-7a0380d77b7e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 23:51:21 np0005480824 nova_compute[260089]: 2025-10-11 03:51:21.230 2 DEBUG oslo_concurrency.processutils [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cfffd1283a157d100c77a9cb8e3d536b83503a4e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 23:51:21 np0005480824 nova_compute[260089]: 2025-10-11 03:51:21.319 2 DEBUG oslo_concurrency.processutils [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cfffd1283a157d100c77a9cb8e3d536b83503a4e --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 23:51:21 np0005480824 nova_compute[260089]: 2025-10-11 03:51:21.321 2 DEBUG oslo_concurrency.lockutils [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] Acquiring lock "cfffd1283a157d100c77a9cb8e3d536b83503a4e" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 23:51:21 np0005480824 nova_compute[260089]: 2025-10-11 03:51:21.322 2 DEBUG oslo_concurrency.lockutils [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] Lock "cfffd1283a157d100c77a9cb8e3d536b83503a4e" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 23:51:21 np0005480824 nova_compute[260089]: 2025-10-11 03:51:21.322 2 DEBUG oslo_concurrency.lockutils [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] Lock "cfffd1283a157d100c77a9cb8e3d536b83503a4e" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 23:51:21 np0005480824 nova_compute[260089]: 2025-10-11 03:51:21.356 2 DEBUG nova.storage.rbd_utils [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] rbd image c0950410-c7b0-4cc6-9994-7a0380d77b7e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 23:51:21 np0005480824 nova_compute[260089]: 2025-10-11 03:51:21.361 2 DEBUG oslo_concurrency.processutils [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/cfffd1283a157d100c77a9cb8e3d536b83503a4e c0950410-c7b0-4cc6-9994-7a0380d77b7e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 23:51:21 np0005480824 nova_compute[260089]: 2025-10-11 03:51:21.429 2 DEBUG nova.network.neutron [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] [instance: c0950410-c7b0-4cc6-9994-7a0380d77b7e] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 10 23:51:21 np0005480824 nova_compute[260089]: 2025-10-11 03:51:21.430 2 DEBUG nova.compute.manager [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] [instance: c0950410-c7b0-4cc6-9994-7a0380d77b7e] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 10 23:51:21 np0005480824 nova_compute[260089]: 2025-10-11 03:51:21.651 2 DEBUG oslo_concurrency.processutils [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/cfffd1283a157d100c77a9cb8e3d536b83503a4e c0950410-c7b0-4cc6-9994-7a0380d77b7e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.290s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 23:51:21 np0005480824 nova_compute[260089]: 2025-10-11 03:51:21.748 2 DEBUG nova.storage.rbd_utils [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] resizing rbd image c0950410-c7b0-4cc6-9994-7a0380d77b7e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 10 23:51:21 np0005480824 nova_compute[260089]: 2025-10-11 03:51:21.867 2 DEBUG nova.objects.instance [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] Lazy-loading 'migration_context' on Instance uuid c0950410-c7b0-4cc6-9994-7a0380d77b7e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 10 23:51:21 np0005480824 nova_compute[260089]: 2025-10-11 03:51:21.884 2 DEBUG nova.virt.libvirt.driver [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] [instance: c0950410-c7b0-4cc6-9994-7a0380d77b7e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 10 23:51:21 np0005480824 nova_compute[260089]: 2025-10-11 03:51:21.885 2 DEBUG nova.virt.libvirt.driver [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] [instance: c0950410-c7b0-4cc6-9994-7a0380d77b7e] Ensure instance console log exists: /var/lib/nova/instances/c0950410-c7b0-4cc6-9994-7a0380d77b7e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 10 23:51:21 np0005480824 nova_compute[260089]: 2025-10-11 03:51:21.885 2 DEBUG oslo_concurrency.lockutils [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:51:21 np0005480824 nova_compute[260089]: 2025-10-11 03:51:21.885 2 DEBUG oslo_concurrency.lockutils [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:51:21 np0005480824 nova_compute[260089]: 2025-10-11 03:51:21.886 2 DEBUG oslo_concurrency.lockutils [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:51:21 np0005480824 nova_compute[260089]: 2025-10-11 03:51:21.887 2 DEBUG nova.virt.libvirt.driver [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] [instance: c0950410-c7b0-4cc6-9994-7a0380d77b7e] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T03:44:59Z,direct_url=<?>,disk_format='qcow2',id=7caca022-7dcc-40a9-8bd8-eb7d91b29390,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a9b71164a3274fcfb966194e51cb4849',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T03:45:02Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'device_type': 'disk', 'image_id': '7caca022-7dcc-40a9-8bd8-eb7d91b29390'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 10 23:51:21 np0005480824 nova_compute[260089]: 2025-10-11 03:51:21.892 2 WARNING nova.virt.libvirt.driver [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 10 23:51:21 np0005480824 nova_compute[260089]: 2025-10-11 03:51:21.897 2 DEBUG nova.virt.libvirt.host [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 10 23:51:21 np0005480824 nova_compute[260089]: 2025-10-11 03:51:21.898 2 DEBUG nova.virt.libvirt.host [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 10 23:51:21 np0005480824 nova_compute[260089]: 2025-10-11 03:51:21.904 2 DEBUG nova.virt.libvirt.host [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 10 23:51:21 np0005480824 nova_compute[260089]: 2025-10-11 03:51:21.905 2 DEBUG nova.virt.libvirt.host [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 10 23:51:21 np0005480824 nova_compute[260089]: 2025-10-11 03:51:21.905 2 DEBUG nova.virt.libvirt.driver [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 10 23:51:21 np0005480824 nova_compute[260089]: 2025-10-11 03:51:21.905 2 DEBUG nova.virt.hardware [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T03:44:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6707ecae-2ae2-4c2d-86dc-409bac38f6a5',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T03:44:59Z,direct_url=<?>,disk_format='qcow2',id=7caca022-7dcc-40a9-8bd8-eb7d91b29390,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a9b71164a3274fcfb966194e51cb4849',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T03:45:02Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 10 23:51:21 np0005480824 nova_compute[260089]: 2025-10-11 03:51:21.906 2 DEBUG nova.virt.hardware [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 10 23:51:21 np0005480824 nova_compute[260089]: 2025-10-11 03:51:21.906 2 DEBUG nova.virt.hardware [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 10 23:51:21 np0005480824 nova_compute[260089]: 2025-10-11 03:51:21.906 2 DEBUG nova.virt.hardware [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 10 23:51:21 np0005480824 nova_compute[260089]: 2025-10-11 03:51:21.906 2 DEBUG nova.virt.hardware [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 10 23:51:21 np0005480824 nova_compute[260089]: 2025-10-11 03:51:21.907 2 DEBUG nova.virt.hardware [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 10 23:51:21 np0005480824 nova_compute[260089]: 2025-10-11 03:51:21.907 2 DEBUG nova.virt.hardware [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 10 23:51:21 np0005480824 nova_compute[260089]: 2025-10-11 03:51:21.907 2 DEBUG nova.virt.hardware [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 10 23:51:21 np0005480824 nova_compute[260089]: 2025-10-11 03:51:21.907 2 DEBUG nova.virt.hardware [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 10 23:51:21 np0005480824 nova_compute[260089]: 2025-10-11 03:51:21.907 2 DEBUG nova.virt.hardware [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 10 23:51:21 np0005480824 nova_compute[260089]: 2025-10-11 03:51:21.908 2 DEBUG nova.virt.hardware [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 10 23:51:21 np0005480824 nova_compute[260089]: 2025-10-11 03:51:21.910 2 DEBUG oslo_concurrency.processutils [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:51:22 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:51:22 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1956893591' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:51:22 np0005480824 nova_compute[260089]: 2025-10-11 03:51:22.339 2 DEBUG oslo_concurrency.processutils [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 23:51:22 np0005480824 nova_compute[260089]: 2025-10-11 03:51:22.372 2 DEBUG nova.storage.rbd_utils [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] rbd image c0950410-c7b0-4cc6-9994-7a0380d77b7e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 23:51:22 np0005480824 nova_compute[260089]: 2025-10-11 03:51:22.377 2 DEBUG oslo_concurrency.processutils [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 23:51:22 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1184: 321 pgs: 321 active+clean; 180 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.6 MiB/s wr, 142 op/s
Oct 10 23:51:22 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:51:22 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/22379183' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:51:22 np0005480824 nova_compute[260089]: 2025-10-11 03:51:22.878 2 DEBUG oslo_concurrency.processutils [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 23:51:22 np0005480824 nova_compute[260089]: 2025-10-11 03:51:22.882 2 DEBUG nova.objects.instance [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] Lazy-loading 'pci_devices' on Instance uuid c0950410-c7b0-4cc6-9994-7a0380d77b7e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 10 23:51:22 np0005480824 nova_compute[260089]: 2025-10-11 03:51:22.900 2 DEBUG nova.virt.libvirt.driver [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] [instance: c0950410-c7b0-4cc6-9994-7a0380d77b7e] End _get_guest_xml xml=<domain type="kvm">
Oct 10 23:51:22 np0005480824 nova_compute[260089]:  <uuid>c0950410-c7b0-4cc6-9994-7a0380d77b7e</uuid>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:  <name>instance-00000009</name>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:  <memory>131072</memory>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:  <vcpu>1</vcpu>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:  <metadata>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 10 23:51:22 np0005480824 nova_compute[260089]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:      <nova:name>tempest-VolumesNegativeTest-instance-320259494</nova:name>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:      <nova:creationTime>2025-10-11 03:51:21</nova:creationTime>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:      <nova:flavor name="m1.nano">
Oct 10 23:51:22 np0005480824 nova_compute[260089]:        <nova:memory>128</nova:memory>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:        <nova:disk>1</nova:disk>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:        <nova:swap>0</nova:swap>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:        <nova:ephemeral>0</nova:ephemeral>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:        <nova:vcpus>1</nova:vcpus>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:      </nova:flavor>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:      <nova:owner>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:        <nova:user uuid="70a18b1ecfd548ea86473882331f5b33">tempest-VolumesNegativeTest-1700306362-project-member</nova:user>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:        <nova:project uuid="cb9ae9a8620b4a37884fe3681c39d1f0">tempest-VolumesNegativeTest-1700306362</nova:project>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:      </nova:owner>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:      <nova:root type="image" uuid="7caca022-7dcc-40a9-8bd8-eb7d91b29390"/>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:      <nova:ports/>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:    </nova:instance>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:  </metadata>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:  <sysinfo type="smbios">
Oct 10 23:51:22 np0005480824 nova_compute[260089]:    <system>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:      <entry name="manufacturer">RDO</entry>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:      <entry name="product">OpenStack Compute</entry>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:      <entry name="serial">c0950410-c7b0-4cc6-9994-7a0380d77b7e</entry>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:      <entry name="uuid">c0950410-c7b0-4cc6-9994-7a0380d77b7e</entry>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:      <entry name="family">Virtual Machine</entry>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:    </system>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:  </sysinfo>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:  <os>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:    <boot dev="hd"/>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:    <smbios mode="sysinfo"/>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:  </os>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:  <features>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:    <acpi/>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:    <apic/>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:    <vmcoreinfo/>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:  </features>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:  <clock offset="utc">
Oct 10 23:51:22 np0005480824 nova_compute[260089]:    <timer name="pit" tickpolicy="delay"/>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:    <timer name="hpet" present="no"/>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:  </clock>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:  <cpu mode="host-model" match="exact">
Oct 10 23:51:22 np0005480824 nova_compute[260089]:    <topology sockets="1" cores="1" threads="1"/>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:  </cpu>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:  <devices>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:    <disk type="network" device="disk">
Oct 10 23:51:22 np0005480824 nova_compute[260089]:      <driver type="raw" cache="none"/>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:      <source protocol="rbd" name="vms/c0950410-c7b0-4cc6-9994-7a0380d77b7e_disk">
Oct 10 23:51:22 np0005480824 nova_compute[260089]:        <host name="192.168.122.100" port="6789"/>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:      </source>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:      <auth username="openstack">
Oct 10 23:51:22 np0005480824 nova_compute[260089]:        <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:      </auth>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:      <target dev="vda" bus="virtio"/>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:    </disk>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:    <disk type="network" device="cdrom">
Oct 10 23:51:22 np0005480824 nova_compute[260089]:      <driver type="raw" cache="none"/>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:      <source protocol="rbd" name="vms/c0950410-c7b0-4cc6-9994-7a0380d77b7e_disk.config">
Oct 10 23:51:22 np0005480824 nova_compute[260089]:        <host name="192.168.122.100" port="6789"/>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:      </source>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:      <auth username="openstack">
Oct 10 23:51:22 np0005480824 nova_compute[260089]:        <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:      </auth>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:      <target dev="sda" bus="sata"/>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:    </disk>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:    <serial type="pty">
Oct 10 23:51:22 np0005480824 nova_compute[260089]:      <log file="/var/lib/nova/instances/c0950410-c7b0-4cc6-9994-7a0380d77b7e/console.log" append="off"/>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:    </serial>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:    <video>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:      <model type="virtio"/>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:    </video>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:    <input type="tablet" bus="usb"/>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:    <rng model="virtio">
Oct 10 23:51:22 np0005480824 nova_compute[260089]:      <backend model="random">/dev/urandom</backend>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:    </rng>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root"/>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:    <controller type="usb" index="0"/>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:    <memballoon model="virtio">
Oct 10 23:51:22 np0005480824 nova_compute[260089]:      <stats period="10"/>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:    </memballoon>
Oct 10 23:51:22 np0005480824 nova_compute[260089]:  </devices>
Oct 10 23:51:22 np0005480824 nova_compute[260089]: </domain>
Oct 10 23:51:22 np0005480824 nova_compute[260089]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 10 23:51:22 np0005480824 nova_compute[260089]: 2025-10-11 03:51:22.972 2 DEBUG nova.virt.libvirt.driver [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 10 23:51:22 np0005480824 nova_compute[260089]: 2025-10-11 03:51:22.973 2 DEBUG nova.virt.libvirt.driver [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 10 23:51:22 np0005480824 nova_compute[260089]: 2025-10-11 03:51:22.974 2 INFO nova.virt.libvirt.driver [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] [instance: c0950410-c7b0-4cc6-9994-7a0380d77b7e] Using config drive
Oct 10 23:51:23 np0005480824 nova_compute[260089]: 2025-10-11 03:51:23.013 2 DEBUG nova.storage.rbd_utils [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] rbd image c0950410-c7b0-4cc6-9994-7a0380d77b7e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 23:51:23 np0005480824 nova_compute[260089]: 2025-10-11 03:51:23.302 2 INFO nova.virt.libvirt.driver [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] [instance: c0950410-c7b0-4cc6-9994-7a0380d77b7e] Creating config drive at /var/lib/nova/instances/c0950410-c7b0-4cc6-9994-7a0380d77b7e/disk.config
Oct 10 23:51:23 np0005480824 nova_compute[260089]: 2025-10-11 03:51:23.312 2 DEBUG oslo_concurrency.processutils [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c0950410-c7b0-4cc6-9994-7a0380d77b7e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprx_oixo0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 23:51:23 np0005480824 nova_compute[260089]: 2025-10-11 03:51:23.461 2 DEBUG oslo_concurrency.processutils [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c0950410-c7b0-4cc6-9994-7a0380d77b7e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprx_oixo0" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 23:51:23 np0005480824 nova_compute[260089]: 2025-10-11 03:51:23.500 2 DEBUG nova.storage.rbd_utils [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] rbd image c0950410-c7b0-4cc6-9994-7a0380d77b7e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 23:51:23 np0005480824 nova_compute[260089]: 2025-10-11 03:51:23.504 2 DEBUG oslo_concurrency.processutils [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c0950410-c7b0-4cc6-9994-7a0380d77b7e/disk.config c0950410-c7b0-4cc6-9994-7a0380d77b7e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 23:51:23 np0005480824 nova_compute[260089]: 2025-10-11 03:51:23.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 23:51:23 np0005480824 nova_compute[260089]: 2025-10-11 03:51:23.702 2 DEBUG oslo_concurrency.processutils [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c0950410-c7b0-4cc6-9994-7a0380d77b7e/disk.config c0950410-c7b0-4cc6-9994-7a0380d77b7e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.198s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 23:51:23 np0005480824 nova_compute[260089]: 2025-10-11 03:51:23.703 2 INFO nova.virt.libvirt.driver [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] [instance: c0950410-c7b0-4cc6-9994-7a0380d77b7e] Deleting local config drive /var/lib/nova/instances/c0950410-c7b0-4cc6-9994-7a0380d77b7e/disk.config because it was imported into RBD.
Oct 10 23:51:23 np0005480824 systemd-machined[215071]: New machine qemu-9-instance-00000009.
Oct 10 23:51:23 np0005480824 systemd[1]: Started Virtual Machine qemu-9-instance-00000009.
Oct 10 23:51:23 np0005480824 podman[277542]: 2025-10-11 03:51:23.866368107 +0000 UTC m=+0.087366822 container health_status f2b19cad22d0fbdd185b264c3c8b6443d09a02319dc9fb0585dc69a0f24a758d (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 10 23:51:23 np0005480824 podman[277541]: 2025-10-11 03:51:23.900571859 +0000 UTC m=+0.118285167 container health_status 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 10 23:51:24 np0005480824 nova_compute[260089]: 2025-10-11 03:51:24.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 23:51:24 np0005480824 ovn_controller[152667]: 2025-10-11T03:51:24Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:84:9a:72 10.100.0.9
Oct 10 23:51:24 np0005480824 ovn_controller[152667]: 2025-10-11T03:51:24Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:84:9a:72 10.100.0.9
Oct 10 23:51:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:51:24 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2399436272' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:51:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:51:24 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2399436272' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:51:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:51:24 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1185: 321 pgs: 321 active+clean; 180 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 115 op/s
Oct 10 23:51:25 np0005480824 nova_compute[260089]: 2025-10-11 03:51:25.008 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760154685.0079055, c0950410-c7b0-4cc6-9994-7a0380d77b7e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 10 23:51:25 np0005480824 nova_compute[260089]: 2025-10-11 03:51:25.009 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: c0950410-c7b0-4cc6-9994-7a0380d77b7e] VM Resumed (Lifecycle Event)
Oct 10 23:51:25 np0005480824 nova_compute[260089]: 2025-10-11 03:51:25.017 2 DEBUG nova.compute.manager [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] [instance: c0950410-c7b0-4cc6-9994-7a0380d77b7e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 10 23:51:25 np0005480824 nova_compute[260089]: 2025-10-11 03:51:25.019 2 DEBUG nova.virt.libvirt.driver [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] [instance: c0950410-c7b0-4cc6-9994-7a0380d77b7e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 10 23:51:25 np0005480824 nova_compute[260089]: 2025-10-11 03:51:25.025 2 INFO nova.virt.libvirt.driver [-] [instance: c0950410-c7b0-4cc6-9994-7a0380d77b7e] Instance spawned successfully.
Oct 10 23:51:25 np0005480824 nova_compute[260089]: 2025-10-11 03:51:25.026 2 DEBUG nova.virt.libvirt.driver [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] [instance: c0950410-c7b0-4cc6-9994-7a0380d77b7e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 10 23:51:25 np0005480824 nova_compute[260089]: 2025-10-11 03:51:25.036 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: c0950410-c7b0-4cc6-9994-7a0380d77b7e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 23:51:25 np0005480824 nova_compute[260089]: 2025-10-11 03:51:25.041 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: c0950410-c7b0-4cc6-9994-7a0380d77b7e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 10 23:51:25 np0005480824 nova_compute[260089]: 2025-10-11 03:51:25.055 2 DEBUG nova.virt.libvirt.driver [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] [instance: c0950410-c7b0-4cc6-9994-7a0380d77b7e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 23:51:25 np0005480824 nova_compute[260089]: 2025-10-11 03:51:25.056 2 DEBUG nova.virt.libvirt.driver [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] [instance: c0950410-c7b0-4cc6-9994-7a0380d77b7e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 23:51:25 np0005480824 nova_compute[260089]: 2025-10-11 03:51:25.057 2 DEBUG nova.virt.libvirt.driver [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] [instance: c0950410-c7b0-4cc6-9994-7a0380d77b7e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 23:51:25 np0005480824 nova_compute[260089]: 2025-10-11 03:51:25.058 2 DEBUG nova.virt.libvirt.driver [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] [instance: c0950410-c7b0-4cc6-9994-7a0380d77b7e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 23:51:25 np0005480824 nova_compute[260089]: 2025-10-11 03:51:25.059 2 DEBUG nova.virt.libvirt.driver [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] [instance: c0950410-c7b0-4cc6-9994-7a0380d77b7e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 23:51:25 np0005480824 nova_compute[260089]: 2025-10-11 03:51:25.060 2 DEBUG nova.virt.libvirt.driver [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] [instance: c0950410-c7b0-4cc6-9994-7a0380d77b7e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 23:51:25 np0005480824 nova_compute[260089]: 2025-10-11 03:51:25.067 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: c0950410-c7b0-4cc6-9994-7a0380d77b7e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 10 23:51:25 np0005480824 nova_compute[260089]: 2025-10-11 03:51:25.068 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760154685.0143063, c0950410-c7b0-4cc6-9994-7a0380d77b7e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 10 23:51:25 np0005480824 nova_compute[260089]: 2025-10-11 03:51:25.068 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: c0950410-c7b0-4cc6-9994-7a0380d77b7e] VM Started (Lifecycle Event)
Oct 10 23:51:25 np0005480824 nova_compute[260089]: 2025-10-11 03:51:25.100 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: c0950410-c7b0-4cc6-9994-7a0380d77b7e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 23:51:25 np0005480824 nova_compute[260089]: 2025-10-11 03:51:25.105 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: c0950410-c7b0-4cc6-9994-7a0380d77b7e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 10 23:51:25 np0005480824 nova_compute[260089]: 2025-10-11 03:51:25.151 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: c0950410-c7b0-4cc6-9994-7a0380d77b7e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 10 23:51:25 np0005480824 nova_compute[260089]: 2025-10-11 03:51:25.162 2 INFO nova.compute.manager [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] [instance: c0950410-c7b0-4cc6-9994-7a0380d77b7e] Took 4.06 seconds to spawn the instance on the hypervisor.
Oct 10 23:51:25 np0005480824 nova_compute[260089]: 2025-10-11 03:51:25.162 2 DEBUG nova.compute.manager [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] [instance: c0950410-c7b0-4cc6-9994-7a0380d77b7e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 23:51:25 np0005480824 nova_compute[260089]: 2025-10-11 03:51:25.239 2 INFO nova.compute.manager [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] [instance: c0950410-c7b0-4cc6-9994-7a0380d77b7e] Took 5.05 seconds to build instance.
Oct 10 23:51:25 np0005480824 nova_compute[260089]: 2025-10-11 03:51:25.255 2 DEBUG oslo_concurrency.lockutils [None req-adc4e2a9-4f6e-4171-8f28-5ab9a03927b3 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] Lock "c0950410-c7b0-4cc6-9994-7a0380d77b7e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.149s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 23:51:26 np0005480824 nova_compute[260089]: 2025-10-11 03:51:26.577 2 DEBUG oslo_concurrency.lockutils [None req-763d5663-8780-4f40-b0b0-4a7e46405d12 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] Acquiring lock "c0950410-c7b0-4cc6-9994-7a0380d77b7e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 23:51:26 np0005480824 nova_compute[260089]: 2025-10-11 03:51:26.578 2 DEBUG oslo_concurrency.lockutils [None req-763d5663-8780-4f40-b0b0-4a7e46405d12 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] Lock "c0950410-c7b0-4cc6-9994-7a0380d77b7e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 23:51:26 np0005480824 nova_compute[260089]: 2025-10-11 03:51:26.578 2 DEBUG oslo_concurrency.lockutils [None req-763d5663-8780-4f40-b0b0-4a7e46405d12 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] Acquiring lock "c0950410-c7b0-4cc6-9994-7a0380d77b7e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 23:51:26 np0005480824 nova_compute[260089]: 2025-10-11 03:51:26.579 2 DEBUG oslo_concurrency.lockutils [None req-763d5663-8780-4f40-b0b0-4a7e46405d12 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] Lock "c0950410-c7b0-4cc6-9994-7a0380d77b7e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 23:51:26 np0005480824 nova_compute[260089]: 2025-10-11 03:51:26.579 2 DEBUG oslo_concurrency.lockutils [None req-763d5663-8780-4f40-b0b0-4a7e46405d12 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] Lock "c0950410-c7b0-4cc6-9994-7a0380d77b7e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 23:51:26 np0005480824 nova_compute[260089]: 2025-10-11 03:51:26.581 2 INFO nova.compute.manager [None req-763d5663-8780-4f40-b0b0-4a7e46405d12 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] [instance: c0950410-c7b0-4cc6-9994-7a0380d77b7e] Terminating instance
Oct 10 23:51:26 np0005480824 nova_compute[260089]: 2025-10-11 03:51:26.583 2 DEBUG oslo_concurrency.lockutils [None req-763d5663-8780-4f40-b0b0-4a7e46405d12 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] Acquiring lock "refresh_cache-c0950410-c7b0-4cc6-9994-7a0380d77b7e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 10 23:51:26 np0005480824 nova_compute[260089]: 2025-10-11 03:51:26.584 2 DEBUG oslo_concurrency.lockutils [None req-763d5663-8780-4f40-b0b0-4a7e46405d12 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] Acquired lock "refresh_cache-c0950410-c7b0-4cc6-9994-7a0380d77b7e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 10 23:51:26 np0005480824 nova_compute[260089]: 2025-10-11 03:51:26.584 2 DEBUG nova.network.neutron [None req-763d5663-8780-4f40-b0b0-4a7e46405d12 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] [instance: c0950410-c7b0-4cc6-9994-7a0380d77b7e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 10 23:51:26 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1186: 321 pgs: 321 active+clean; 180 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 115 op/s
Oct 10 23:51:26 np0005480824 nova_compute[260089]: 2025-10-11 03:51:26.913 2 DEBUG nova.network.neutron [None req-763d5663-8780-4f40-b0b0-4a7e46405d12 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] [instance: c0950410-c7b0-4cc6-9994-7a0380d77b7e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 10 23:51:27 np0005480824 nova_compute[260089]: 2025-10-11 03:51:27.213 2 DEBUG nova.network.neutron [None req-763d5663-8780-4f40-b0b0-4a7e46405d12 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] [instance: c0950410-c7b0-4cc6-9994-7a0380d77b7e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 23:51:27 np0005480824 nova_compute[260089]: 2025-10-11 03:51:27.233 2 DEBUG oslo_concurrency.lockutils [None req-763d5663-8780-4f40-b0b0-4a7e46405d12 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] Releasing lock "refresh_cache-c0950410-c7b0-4cc6-9994-7a0380d77b7e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 10 23:51:27 np0005480824 nova_compute[260089]: 2025-10-11 03:51:27.234 2 DEBUG nova.compute.manager [None req-763d5663-8780-4f40-b0b0-4a7e46405d12 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] [instance: c0950410-c7b0-4cc6-9994-7a0380d77b7e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 10 23:51:27 np0005480824 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Deactivated successfully.
Oct 10 23:51:27 np0005480824 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Consumed 3.457s CPU time.
Oct 10 23:51:27 np0005480824 systemd-machined[215071]: Machine qemu-9-instance-00000009 terminated.
Oct 10 23:51:27 np0005480824 nova_compute[260089]: 2025-10-11 03:51:27.463 2 INFO nova.virt.libvirt.driver [-] [instance: c0950410-c7b0-4cc6-9994-7a0380d77b7e] Instance destroyed successfully.
Oct 10 23:51:27 np0005480824 nova_compute[260089]: 2025-10-11 03:51:27.465 2 DEBUG nova.objects.instance [None req-763d5663-8780-4f40-b0b0-4a7e46405d12 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] Lazy-loading 'resources' on Instance uuid c0950410-c7b0-4cc6-9994-7a0380d77b7e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 10 23:51:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:51:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:51:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:51:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:51:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:51:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:51:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Optimize plan auto_2025-10-11_03:51:27
Oct 10 23:51:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 23:51:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] do_upmap
Oct 10 23:51:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] pools ['images', 'volumes', '.mgr', 'vms', 'default.rgw.log', 'backups', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data']
Oct 10 23:51:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] prepared 0/10 changes
Oct 10 23:51:27 np0005480824 nova_compute[260089]: 2025-10-11 03:51:27.953 2 INFO nova.virt.libvirt.driver [None req-763d5663-8780-4f40-b0b0-4a7e46405d12 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] [instance: c0950410-c7b0-4cc6-9994-7a0380d77b7e] Deleting instance files /var/lib/nova/instances/c0950410-c7b0-4cc6-9994-7a0380d77b7e_del
Oct 10 23:51:27 np0005480824 nova_compute[260089]: 2025-10-11 03:51:27.954 2 INFO nova.virt.libvirt.driver [None req-763d5663-8780-4f40-b0b0-4a7e46405d12 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] [instance: c0950410-c7b0-4cc6-9994-7a0380d77b7e] Deletion of /var/lib/nova/instances/c0950410-c7b0-4cc6-9994-7a0380d77b7e_del complete
Oct 10 23:51:28 np0005480824 nova_compute[260089]: 2025-10-11 03:51:28.014 2 INFO nova.compute.manager [None req-763d5663-8780-4f40-b0b0-4a7e46405d12 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] [instance: c0950410-c7b0-4cc6-9994-7a0380d77b7e] Took 0.78 seconds to destroy the instance on the hypervisor.
Oct 10 23:51:28 np0005480824 nova_compute[260089]: 2025-10-11 03:51:28.015 2 DEBUG oslo.service.loopingcall [None req-763d5663-8780-4f40-b0b0-4a7e46405d12 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 10 23:51:28 np0005480824 nova_compute[260089]: 2025-10-11 03:51:28.016 2 DEBUG nova.compute.manager [-] [instance: c0950410-c7b0-4cc6-9994-7a0380d77b7e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 10 23:51:28 np0005480824 nova_compute[260089]: 2025-10-11 03:51:28.016 2 DEBUG nova.network.neutron [-] [instance: c0950410-c7b0-4cc6-9994-7a0380d77b7e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 10 23:51:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 23:51:28 np0005480824 nova_compute[260089]: 2025-10-11 03:51:28.144 2 DEBUG nova.network.neutron [-] [instance: c0950410-c7b0-4cc6-9994-7a0380d77b7e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 10 23:51:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:51:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 23:51:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:51:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:51:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:51:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:51:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:51:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:51:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:51:28 np0005480824 nova_compute[260089]: 2025-10-11 03:51:28.160 2 DEBUG nova.network.neutron [-] [instance: c0950410-c7b0-4cc6-9994-7a0380d77b7e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 23:51:28 np0005480824 nova_compute[260089]: 2025-10-11 03:51:28.174 2 INFO nova.compute.manager [-] [instance: c0950410-c7b0-4cc6-9994-7a0380d77b7e] Took 0.16 seconds to deallocate network for instance.
Oct 10 23:51:28 np0005480824 nova_compute[260089]: 2025-10-11 03:51:28.212 2 DEBUG oslo_concurrency.lockutils [None req-763d5663-8780-4f40-b0b0-4a7e46405d12 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 23:51:28 np0005480824 nova_compute[260089]: 2025-10-11 03:51:28.213 2 DEBUG oslo_concurrency.lockutils [None req-763d5663-8780-4f40-b0b0-4a7e46405d12 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 23:51:28 np0005480824 nova_compute[260089]: 2025-10-11 03:51:28.284 2 DEBUG oslo_concurrency.processutils [None req-763d5663-8780-4f40-b0b0-4a7e46405d12 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 23:51:28 np0005480824 nova_compute[260089]: 2025-10-11 03:51:28.315 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 23:51:28 np0005480824 nova_compute[260089]: 2025-10-11 03:51:28.486 2 DEBUG oslo_concurrency.lockutils [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Acquiring lock "d22b35e9-badc-40d1-952e-60cdfd60decb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 23:51:28 np0005480824 nova_compute[260089]: 2025-10-11 03:51:28.486 2 DEBUG oslo_concurrency.lockutils [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Lock "d22b35e9-badc-40d1-952e-60cdfd60decb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 23:51:28 np0005480824 nova_compute[260089]: 2025-10-11 03:51:28.518 2 DEBUG nova.compute.manager [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 10 23:51:28 np0005480824 nova_compute[260089]: 2025-10-11 03:51:28.583 2 DEBUG oslo_concurrency.lockutils [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 23:51:28 np0005480824 nova_compute[260089]: 2025-10-11 03:51:28.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 23:51:28 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1187: 321 pgs: 321 active+clean; 213 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 4.7 MiB/s wr, 200 op/s
Oct 10 23:51:28 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:51:28 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/600260987' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:51:28 np0005480824 nova_compute[260089]: 2025-10-11 03:51:28.764 2 DEBUG oslo_concurrency.processutils [None req-763d5663-8780-4f40-b0b0-4a7e46405d12 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 23:51:28 np0005480824 nova_compute[260089]: 2025-10-11 03:51:28.771 2 DEBUG nova.compute.provider_tree [None req-763d5663-8780-4f40-b0b0-4a7e46405d12 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 23:51:28 np0005480824 nova_compute[260089]: 2025-10-11 03:51:28.787 2 DEBUG nova.scheduler.client.report [None req-763d5663-8780-4f40-b0b0-4a7e46405d12 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 23:51:28 np0005480824 nova_compute[260089]: 2025-10-11 03:51:28.815 2 DEBUG oslo_concurrency.lockutils [None req-763d5663-8780-4f40-b0b0-4a7e46405d12 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 23:51:28 np0005480824 nova_compute[260089]: 2025-10-11 03:51:28.819 2 DEBUG oslo_concurrency.lockutils [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.236s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 23:51:28 np0005480824 nova_compute[260089]: 2025-10-11 03:51:28.827 2 DEBUG nova.virt.hardware [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 10 23:51:28 np0005480824 nova_compute[260089]: 2025-10-11 03:51:28.827 2 INFO nova.compute.claims [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Claim successful on node compute-0.ctlplane.example.com
Oct 10 23:51:28 np0005480824 nova_compute[260089]: 2025-10-11 03:51:28.843 2 INFO nova.scheduler.client.report [None req-763d5663-8780-4f40-b0b0-4a7e46405d12 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] Deleted allocations for instance c0950410-c7b0-4cc6-9994-7a0380d77b7e
Oct 10 23:51:28 np0005480824 nova_compute[260089]: 2025-10-11 03:51:28.911 2 DEBUG oslo_concurrency.lockutils [None req-763d5663-8780-4f40-b0b0-4a7e46405d12 70a18b1ecfd548ea86473882331f5b33 cb9ae9a8620b4a37884fe3681c39d1f0 - - default default] Lock "c0950410-c7b0-4cc6-9994-7a0380d77b7e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.333s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 23:51:28 np0005480824 nova_compute[260089]: 2025-10-11 03:51:28.954 2 DEBUG oslo_concurrency.processutils [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 23:51:29 np0005480824 nova_compute[260089]: 2025-10-11 03:51:29.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 23:51:29 np0005480824 nova_compute[260089]: 2025-10-11 03:51:29.298 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 23:51:29 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:51:29 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4253095825' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:51:29 np0005480824 nova_compute[260089]: 2025-10-11 03:51:29.417 2 DEBUG oslo_concurrency.processutils [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 23:51:29 np0005480824 nova_compute[260089]: 2025-10-11 03:51:29.425 2 DEBUG nova.compute.provider_tree [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 23:51:29 np0005480824 nova_compute[260089]: 2025-10-11 03:51:29.445 2 DEBUG nova.scheduler.client.report [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 23:51:29 np0005480824 nova_compute[260089]: 2025-10-11 03:51:29.471 2 DEBUG oslo_concurrency.lockutils [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 23:51:29 np0005480824 nova_compute[260089]: 2025-10-11 03:51:29.472 2 DEBUG nova.compute.manager [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 10 23:51:29 np0005480824 nova_compute[260089]: 2025-10-11 03:51:29.525 2 DEBUG nova.compute.manager [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 10 23:51:29 np0005480824 nova_compute[260089]: 2025-10-11 03:51:29.525 2 DEBUG nova.network.neutron [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 10 23:51:29 np0005480824 nova_compute[260089]: 2025-10-11 03:51:29.551 2 INFO nova.virt.libvirt.driver [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 10 23:51:29 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:51:29 np0005480824 nova_compute[260089]: 2025-10-11 03:51:29.569 2 DEBUG nova.compute.manager [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 10 23:51:29 np0005480824 nova_compute[260089]: 2025-10-11 03:51:29.681 2 DEBUG nova.policy [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd6596329d9c842b78638fdbcf50b8ec8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '944395b4a11c4a9182fda518dc7bd2d8', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 10 23:51:29 np0005480824 nova_compute[260089]: 2025-10-11 03:51:29.688 2 DEBUG nova.compute.manager [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 10 23:51:29 np0005480824 nova_compute[260089]: 2025-10-11 03:51:29.690 2 DEBUG nova.virt.libvirt.driver [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 10 23:51:29 np0005480824 nova_compute[260089]: 2025-10-11 03:51:29.691 2 INFO nova.virt.libvirt.driver [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Creating image(s)
Oct 10 23:51:29 np0005480824 nova_compute[260089]: 2025-10-11 03:51:29.730 2 DEBUG nova.storage.rbd_utils [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] rbd image d22b35e9-badc-40d1-952e-60cdfd60decb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 23:51:29 np0005480824 nova_compute[260089]: 2025-10-11 03:51:29.771 2 DEBUG nova.storage.rbd_utils [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] rbd image d22b35e9-badc-40d1-952e-60cdfd60decb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 23:51:29 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e236 do_prune osdmap full prune enabled
Oct 10 23:51:29 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e237 e237: 3 total, 3 up, 3 in
Oct 10 23:51:29 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e237: 3 total, 3 up, 3 in
Oct 10 23:51:29 np0005480824 nova_compute[260089]: 2025-10-11 03:51:29.842 2 DEBUG nova.storage.rbd_utils [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] rbd image d22b35e9-badc-40d1-952e-60cdfd60decb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 23:51:29 np0005480824 nova_compute[260089]: 2025-10-11 03:51:29.853 2 DEBUG oslo_concurrency.processutils [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cfffd1283a157d100c77a9cb8e3d536b83503a4e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 23:51:29 np0005480824 nova_compute[260089]: 2025-10-11 03:51:29.951 2 DEBUG oslo_concurrency.processutils [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cfffd1283a157d100c77a9cb8e3d536b83503a4e --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 23:51:29 np0005480824 nova_compute[260089]: 2025-10-11 03:51:29.953 2 DEBUG oslo_concurrency.lockutils [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Acquiring lock "cfffd1283a157d100c77a9cb8e3d536b83503a4e" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 23:51:29 np0005480824 nova_compute[260089]: 2025-10-11 03:51:29.953 2 DEBUG oslo_concurrency.lockutils [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Lock "cfffd1283a157d100c77a9cb8e3d536b83503a4e" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 23:51:29 np0005480824 nova_compute[260089]: 2025-10-11 03:51:29.954 2 DEBUG oslo_concurrency.lockutils [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Lock "cfffd1283a157d100c77a9cb8e3d536b83503a4e" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 23:51:29 np0005480824 nova_compute[260089]: 2025-10-11 03:51:29.982 2 DEBUG nova.storage.rbd_utils [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] rbd image d22b35e9-badc-40d1-952e-60cdfd60decb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 23:51:29 np0005480824 nova_compute[260089]: 2025-10-11 03:51:29.987 2 DEBUG oslo_concurrency.processutils [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/cfffd1283a157d100c77a9cb8e3d536b83503a4e d22b35e9-badc-40d1-952e-60cdfd60decb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 23:51:30 np0005480824 nova_compute[260089]: 2025-10-11 03:51:30.216 2 DEBUG oslo_concurrency.lockutils [None req-fec1c634-3321-4c24-8692-e531938bd7ff 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Acquiring lock "b11faa30-2b52-45e0-b5f2-dd05b5050493" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 23:51:30 np0005480824 nova_compute[260089]: 2025-10-11 03:51:30.217 2 DEBUG oslo_concurrency.lockutils [None req-fec1c634-3321-4c24-8692-e531938bd7ff 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Lock "b11faa30-2b52-45e0-b5f2-dd05b5050493" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 23:51:30 np0005480824 nova_compute[260089]: 2025-10-11 03:51:30.249 2 DEBUG nova.objects.instance [None req-fec1c634-3321-4c24-8692-e531938bd7ff 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Lazy-loading 'flavor' on Instance uuid b11faa30-2b52-45e0-b5f2-dd05b5050493 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 10 23:51:30 np0005480824 nova_compute[260089]: 2025-10-11 03:51:30.272 2 INFO nova.virt.libvirt.driver [None req-fec1c634-3321-4c24-8692-e531938bd7ff 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Ignoring supplied device name: /dev/vdb
Oct 10 23:51:30 np0005480824 nova_compute[260089]: 2025-10-11 03:51:30.285 2 DEBUG oslo_concurrency.lockutils [None req-fec1c634-3321-4c24-8692-e531938bd7ff 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Lock "b11faa30-2b52-45e0-b5f2-dd05b5050493" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.068s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 23:51:30 np0005480824 nova_compute[260089]: 2025-10-11 03:51:30.291 2 DEBUG oslo_concurrency.processutils [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/cfffd1283a157d100c77a9cb8e3d536b83503a4e d22b35e9-badc-40d1-952e-60cdfd60decb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.304s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 23:51:30 np0005480824 nova_compute[260089]: 2025-10-11 03:51:30.327 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 23:51:30 np0005480824 nova_compute[260089]: 2025-10-11 03:51:30.328 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 23:51:30 np0005480824 nova_compute[260089]: 2025-10-11 03:51:30.328 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 10 23:51:30 np0005480824 nova_compute[260089]: 2025-10-11 03:51:30.369 2 DEBUG nova.storage.rbd_utils [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] resizing rbd image d22b35e9-badc-40d1-952e-60cdfd60decb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 10 23:51:30 np0005480824 nova_compute[260089]: 2025-10-11 03:51:30.483 2 DEBUG nova.objects.instance [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Lazy-loading 'migration_context' on Instance uuid d22b35e9-badc-40d1-952e-60cdfd60decb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 10 23:51:30 np0005480824 nova_compute[260089]: 2025-10-11 03:51:30.491 2 DEBUG nova.network.neutron [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Successfully created port: a6d0ac82-b500-4962-8bfd-d36ef3ba2948 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 10 23:51:30 np0005480824 nova_compute[260089]: 2025-10-11 03:51:30.510 2 DEBUG nova.virt.libvirt.driver [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 10 23:51:30 np0005480824 nova_compute[260089]: 2025-10-11 03:51:30.511 2 DEBUG nova.virt.libvirt.driver [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Ensure instance console log exists: /var/lib/nova/instances/d22b35e9-badc-40d1-952e-60cdfd60decb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 10 23:51:30 np0005480824 nova_compute[260089]: 2025-10-11 03:51:30.511 2 DEBUG oslo_concurrency.lockutils [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 23:51:30 np0005480824 nova_compute[260089]: 2025-10-11 03:51:30.511 2 DEBUG oslo_concurrency.lockutils [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 23:51:30 np0005480824 nova_compute[260089]: 2025-10-11 03:51:30.512 2 DEBUG oslo_concurrency.lockutils [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 23:51:30 np0005480824 nova_compute[260089]: 2025-10-11 03:51:30.513 2 DEBUG oslo_concurrency.lockutils [None req-fec1c634-3321-4c24-8692-e531938bd7ff 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Acquiring lock "b11faa30-2b52-45e0-b5f2-dd05b5050493" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 23:51:30 np0005480824 nova_compute[260089]: 2025-10-11 03:51:30.513 2 DEBUG oslo_concurrency.lockutils [None req-fec1c634-3321-4c24-8692-e531938bd7ff 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Lock "b11faa30-2b52-45e0-b5f2-dd05b5050493" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 23:51:30 np0005480824 nova_compute[260089]: 2025-10-11 03:51:30.513 2 INFO nova.compute.manager [None req-fec1c634-3321-4c24-8692-e531938bd7ff 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Attaching volume b5cac214-f769-4b37-ac35-25810f98302d to /dev/vdb
Oct 10 23:51:30 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1189: 321 pgs: 321 active+clean; 213 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 4.7 MiB/s wr, 200 op/s
Oct 10 23:51:30 np0005480824 nova_compute[260089]: 2025-10-11 03:51:30.684 2 DEBUG os_brick.utils [None req-fec1c634-3321-4c24-8692-e531938bd7ff 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 10 23:51:30 np0005480824 nova_compute[260089]: 2025-10-11 03:51:30.686 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 23:51:30 np0005480824 nova_compute[260089]: 2025-10-11 03:51:30.708 676 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 23:51:30 np0005480824 nova_compute[260089]: 2025-10-11 03:51:30.709 676 DEBUG oslo.privsep.daemon [-] privsep: reply[e864e164-8aa2-4fbd-a0b4-249bd7c3bc34]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 23:51:30 np0005480824 nova_compute[260089]: 2025-10-11 03:51:30.711 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 23:51:30 np0005480824 nova_compute[260089]: 2025-10-11 03:51:30.723 676 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 23:51:30 np0005480824 nova_compute[260089]: 2025-10-11 03:51:30.724 676 DEBUG oslo.privsep.daemon [-] privsep: reply[b7d901f1-2e92-4c40-ab78-e46b8fac8fa5]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d5d671ddab5a', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 23:51:30 np0005480824 nova_compute[260089]: 2025-10-11 03:51:30.726 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 23:51:30 np0005480824 nova_compute[260089]: 2025-10-11 03:51:30.740 676 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 23:51:30 np0005480824 nova_compute[260089]: 2025-10-11 03:51:30.740 676 DEBUG oslo.privsep.daemon [-] privsep: reply[ca918fab-7e54-486d-8e28-4eed77be3fb3]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 23:51:30 np0005480824 nova_compute[260089]: 2025-10-11 03:51:30.749 676 DEBUG oslo.privsep.daemon [-] privsep: reply[0bc0ad69-e311-44c2-a611-aac844f0a773]: (4, 'fb3a2fb1-9efa-43f0-a057-bf422ac6b8d7') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 23:51:30 np0005480824 nova_compute[260089]: 2025-10-11 03:51:30.750 2 DEBUG oslo_concurrency.processutils [None req-fec1c634-3321-4c24-8692-e531938bd7ff 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 23:51:30 np0005480824 nova_compute[260089]: 2025-10-11 03:51:30.787 2 DEBUG oslo_concurrency.processutils [None req-fec1c634-3321-4c24-8692-e531938bd7ff 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] CMD "nvme version" returned: 0 in 0.037s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 23:51:30 np0005480824 nova_compute[260089]: 2025-10-11 03:51:30.793 2 DEBUG os_brick.initiator.connectors.lightos [None req-fec1c634-3321-4c24-8692-e531938bd7ff 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 10 23:51:30 np0005480824 nova_compute[260089]: 2025-10-11 03:51:30.794 2 DEBUG os_brick.initiator.connectors.lightos [None req-fec1c634-3321-4c24-8692-e531938bd7ff 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 10 23:51:30 np0005480824 nova_compute[260089]: 2025-10-11 03:51:30.794 2 DEBUG os_brick.initiator.connectors.lightos [None req-fec1c634-3321-4c24-8692-e531938bd7ff 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 10 23:51:30 np0005480824 nova_compute[260089]: 2025-10-11 03:51:30.795 2 DEBUG os_brick.utils [None req-fec1c634-3321-4c24-8692-e531938bd7ff 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] <== get_connector_properties: return (109ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d5d671ddab5a', 'do_local_attach': False, 'nvme_hostid': '83042a20-0f72-4c47-8453-e72ead378624', 'system uuid': 'fb3a2fb1-9efa-43f0-a057-bf422ac6b8d7', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 10 23:51:30 np0005480824 nova_compute[260089]: 2025-10-11 03:51:30.795 2 DEBUG nova.virt.block_device [None req-fec1c634-3321-4c24-8692-e531938bd7ff 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Updating existing volume attachment record: 4b0b2ab0-e503-4522-9335-beb858bcce9f _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 10 23:51:31 np0005480824 podman[277872]: 2025-10-11 03:51:31.091496611 +0000 UTC m=+0.139999108 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 10 23:51:31 np0005480824 nova_compute[260089]: 2025-10-11 03:51:31.229 2 DEBUG nova.network.neutron [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Successfully updated port: a6d0ac82-b500-4962-8bfd-d36ef3ba2948 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 10 23:51:31 np0005480824 nova_compute[260089]: 2025-10-11 03:51:31.244 2 DEBUG oslo_concurrency.lockutils [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Acquiring lock "refresh_cache-d22b35e9-badc-40d1-952e-60cdfd60decb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:51:31 np0005480824 nova_compute[260089]: 2025-10-11 03:51:31.244 2 DEBUG oslo_concurrency.lockutils [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Acquired lock "refresh_cache-d22b35e9-badc-40d1-952e-60cdfd60decb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:51:31 np0005480824 nova_compute[260089]: 2025-10-11 03:51:31.245 2 DEBUG nova.network.neutron [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 10 23:51:31 np0005480824 nova_compute[260089]: 2025-10-11 03:51:31.297 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:51:31 np0005480824 nova_compute[260089]: 2025-10-11 03:51:31.308 2 DEBUG nova.compute.manager [req-7adf0d88-5f09-4d30-8be2-437d9a3adc24 req-df00d80a-dc40-4a40-8813-0efddb6b8e79 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Received event network-changed-a6d0ac82-b500-4962-8bfd-d36ef3ba2948 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:51:31 np0005480824 nova_compute[260089]: 2025-10-11 03:51:31.309 2 DEBUG nova.compute.manager [req-7adf0d88-5f09-4d30-8be2-437d9a3adc24 req-df00d80a-dc40-4a40-8813-0efddb6b8e79 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Refreshing instance network info cache due to event network-changed-a6d0ac82-b500-4962-8bfd-d36ef3ba2948. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 10 23:51:31 np0005480824 nova_compute[260089]: 2025-10-11 03:51:31.309 2 DEBUG oslo_concurrency.lockutils [req-7adf0d88-5f09-4d30-8be2-437d9a3adc24 req-df00d80a-dc40-4a40-8813-0efddb6b8e79 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "refresh_cache-d22b35e9-badc-40d1-952e-60cdfd60decb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:51:31 np0005480824 nova_compute[260089]: 2025-10-11 03:51:31.400 2 DEBUG nova.network.neutron [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 10 23:51:31 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:51:31 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1855936300' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:51:31 np0005480824 nova_compute[260089]: 2025-10-11 03:51:31.715 2 DEBUG nova.objects.instance [None req-fec1c634-3321-4c24-8692-e531938bd7ff 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Lazy-loading 'flavor' on Instance uuid b11faa30-2b52-45e0-b5f2-dd05b5050493 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:51:31 np0005480824 nova_compute[260089]: 2025-10-11 03:51:31.741 2 DEBUG nova.virt.libvirt.driver [None req-fec1c634-3321-4c24-8692-e531938bd7ff 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Attempting to attach volume b5cac214-f769-4b37-ac35-25810f98302d with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Oct 10 23:51:31 np0005480824 nova_compute[260089]: 2025-10-11 03:51:31.745 2 DEBUG nova.virt.libvirt.guest [None req-fec1c634-3321-4c24-8692-e531938bd7ff 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] attach device xml: <disk type="network" device="disk">
Oct 10 23:51:31 np0005480824 nova_compute[260089]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 10 23:51:31 np0005480824 nova_compute[260089]:  <source protocol="rbd" name="volumes/volume-b5cac214-f769-4b37-ac35-25810f98302d">
Oct 10 23:51:31 np0005480824 nova_compute[260089]:    <host name="192.168.122.100" port="6789"/>
Oct 10 23:51:31 np0005480824 nova_compute[260089]:  </source>
Oct 10 23:51:31 np0005480824 nova_compute[260089]:  <auth username="openstack">
Oct 10 23:51:31 np0005480824 nova_compute[260089]:    <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 10 23:51:31 np0005480824 nova_compute[260089]:  </auth>
Oct 10 23:51:31 np0005480824 nova_compute[260089]:  <target dev="vdb" bus="virtio"/>
Oct 10 23:51:31 np0005480824 nova_compute[260089]:  <serial>b5cac214-f769-4b37-ac35-25810f98302d</serial>
Oct 10 23:51:31 np0005480824 nova_compute[260089]: </disk>
Oct 10 23:51:31 np0005480824 nova_compute[260089]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct 10 23:51:31 np0005480824 nova_compute[260089]: 2025-10-11 03:51:31.868 2 DEBUG nova.virt.libvirt.driver [None req-fec1c634-3321-4c24-8692-e531938bd7ff 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 23:51:31 np0005480824 nova_compute[260089]: 2025-10-11 03:51:31.869 2 DEBUG nova.virt.libvirt.driver [None req-fec1c634-3321-4c24-8692-e531938bd7ff 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 23:51:31 np0005480824 nova_compute[260089]: 2025-10-11 03:51:31.869 2 DEBUG nova.virt.libvirt.driver [None req-fec1c634-3321-4c24-8692-e531938bd7ff 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 23:51:31 np0005480824 nova_compute[260089]: 2025-10-11 03:51:31.869 2 DEBUG nova.virt.libvirt.driver [None req-fec1c634-3321-4c24-8692-e531938bd7ff 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] No VIF found with MAC fa:16:3e:84:9a:72, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 10 23:51:32 np0005480824 nova_compute[260089]: 2025-10-11 03:51:32.092 2 DEBUG oslo_concurrency.lockutils [None req-fec1c634-3321-4c24-8692-e531938bd7ff 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Lock "b11faa30-2b52-45e0-b5f2-dd05b5050493" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.579s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:51:32 np0005480824 nova_compute[260089]: 2025-10-11 03:51:32.276 2 DEBUG nova.network.neutron [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Updating instance_info_cache with network_info: [{"id": "a6d0ac82-b500-4962-8bfd-d36ef3ba2948", "address": "fa:16:3e:10:2b:86", "network": {"id": "f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e", "bridge": "br-int", "label": "tempest-TestStampPattern-337427362-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "944395b4a11c4a9182fda518dc7bd2d8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6d0ac82-b5", "ovs_interfaceid": "a6d0ac82-b500-4962-8bfd-d36ef3ba2948", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:51:32 np0005480824 nova_compute[260089]: 2025-10-11 03:51:32.293 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:51:32 np0005480824 nova_compute[260089]: 2025-10-11 03:51:32.295 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:51:32 np0005480824 nova_compute[260089]: 2025-10-11 03:51:32.296 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 10 23:51:32 np0005480824 nova_compute[260089]: 2025-10-11 03:51:32.296 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 10 23:51:32 np0005480824 nova_compute[260089]: 2025-10-11 03:51:32.300 2 DEBUG oslo_concurrency.lockutils [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Releasing lock "refresh_cache-d22b35e9-badc-40d1-952e-60cdfd60decb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:51:32 np0005480824 nova_compute[260089]: 2025-10-11 03:51:32.300 2 DEBUG nova.compute.manager [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Instance network_info: |[{"id": "a6d0ac82-b500-4962-8bfd-d36ef3ba2948", "address": "fa:16:3e:10:2b:86", "network": {"id": "f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e", "bridge": "br-int", "label": "tempest-TestStampPattern-337427362-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "944395b4a11c4a9182fda518dc7bd2d8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6d0ac82-b5", "ovs_interfaceid": "a6d0ac82-b500-4962-8bfd-d36ef3ba2948", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 10 23:51:32 np0005480824 nova_compute[260089]: 2025-10-11 03:51:32.301 2 DEBUG oslo_concurrency.lockutils [req-7adf0d88-5f09-4d30-8be2-437d9a3adc24 req-df00d80a-dc40-4a40-8813-0efddb6b8e79 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquired lock "refresh_cache-d22b35e9-badc-40d1-952e-60cdfd60decb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:51:32 np0005480824 nova_compute[260089]: 2025-10-11 03:51:32.302 2 DEBUG nova.network.neutron [req-7adf0d88-5f09-4d30-8be2-437d9a3adc24 req-df00d80a-dc40-4a40-8813-0efddb6b8e79 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Refreshing network info cache for port a6d0ac82-b500-4962-8bfd-d36ef3ba2948 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 10 23:51:32 np0005480824 nova_compute[260089]: 2025-10-11 03:51:32.307 2 DEBUG nova.virt.libvirt.driver [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Start _get_guest_xml network_info=[{"id": "a6d0ac82-b500-4962-8bfd-d36ef3ba2948", "address": "fa:16:3e:10:2b:86", "network": {"id": "f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e", "bridge": "br-int", "label": "tempest-TestStampPattern-337427362-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "944395b4a11c4a9182fda518dc7bd2d8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6d0ac82-b5", "ovs_interfaceid": "a6d0ac82-b500-4962-8bfd-d36ef3ba2948", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T03:44:59Z,direct_url=<?>,disk_format='qcow2',id=7caca022-7dcc-40a9-8bd8-eb7d91b29390,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a9b71164a3274fcfb966194e51cb4849',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T03:45:02Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'device_type': 'disk', 'image_id': '7caca022-7dcc-40a9-8bd8-eb7d91b29390'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 10 23:51:32 np0005480824 nova_compute[260089]: 2025-10-11 03:51:32.312 2 WARNING nova.virt.libvirt.driver [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 10 23:51:32 np0005480824 nova_compute[260089]: 2025-10-11 03:51:32.322 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct 10 23:51:32 np0005480824 nova_compute[260089]: 2025-10-11 03:51:32.329 2 DEBUG nova.virt.libvirt.host [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 10 23:51:32 np0005480824 nova_compute[260089]: 2025-10-11 03:51:32.330 2 DEBUG nova.virt.libvirt.host [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 10 23:51:32 np0005480824 nova_compute[260089]: 2025-10-11 03:51:32.343 2 DEBUG nova.virt.libvirt.host [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 10 23:51:32 np0005480824 nova_compute[260089]: 2025-10-11 03:51:32.344 2 DEBUG nova.virt.libvirt.host [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 10 23:51:32 np0005480824 nova_compute[260089]: 2025-10-11 03:51:32.345 2 DEBUG nova.virt.libvirt.driver [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 10 23:51:32 np0005480824 nova_compute[260089]: 2025-10-11 03:51:32.345 2 DEBUG nova.virt.hardware [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T03:44:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6707ecae-2ae2-4c2d-86dc-409bac38f6a5',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T03:44:59Z,direct_url=<?>,disk_format='qcow2',id=7caca022-7dcc-40a9-8bd8-eb7d91b29390,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a9b71164a3274fcfb966194e51cb4849',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T03:45:02Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 10 23:51:32 np0005480824 nova_compute[260089]: 2025-10-11 03:51:32.346 2 DEBUG nova.virt.hardware [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 10 23:51:32 np0005480824 nova_compute[260089]: 2025-10-11 03:51:32.347 2 DEBUG nova.virt.hardware [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 10 23:51:32 np0005480824 nova_compute[260089]: 2025-10-11 03:51:32.347 2 DEBUG nova.virt.hardware [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 10 23:51:32 np0005480824 nova_compute[260089]: 2025-10-11 03:51:32.348 2 DEBUG nova.virt.hardware [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 10 23:51:32 np0005480824 nova_compute[260089]: 2025-10-11 03:51:32.348 2 DEBUG nova.virt.hardware [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 10 23:51:32 np0005480824 nova_compute[260089]: 2025-10-11 03:51:32.349 2 DEBUG nova.virt.hardware [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 10 23:51:32 np0005480824 nova_compute[260089]: 2025-10-11 03:51:32.349 2 DEBUG nova.virt.hardware [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 10 23:51:32 np0005480824 nova_compute[260089]: 2025-10-11 03:51:32.350 2 DEBUG nova.virt.hardware [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 10 23:51:32 np0005480824 nova_compute[260089]: 2025-10-11 03:51:32.350 2 DEBUG nova.virt.hardware [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 10 23:51:32 np0005480824 nova_compute[260089]: 2025-10-11 03:51:32.351 2 DEBUG nova.virt.hardware [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 10 23:51:32 np0005480824 nova_compute[260089]: 2025-10-11 03:51:32.356 2 DEBUG oslo_concurrency.processutils [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:51:32 np0005480824 nova_compute[260089]: 2025-10-11 03:51:32.459 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "refresh_cache-b11faa30-2b52-45e0-b5f2-dd05b5050493" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:51:32 np0005480824 nova_compute[260089]: 2025-10-11 03:51:32.460 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquired lock "refresh_cache-b11faa30-2b52-45e0-b5f2-dd05b5050493" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:51:32 np0005480824 nova_compute[260089]: 2025-10-11 03:51:32.461 2 DEBUG nova.network.neutron [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 10 23:51:32 np0005480824 nova_compute[260089]: 2025-10-11 03:51:32.461 2 DEBUG nova.objects.instance [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b11faa30-2b52-45e0-b5f2-dd05b5050493 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:51:32 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1190: 321 pgs: 321 active+clean; 213 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 4.7 MiB/s wr, 269 op/s
Oct 10 23:51:32 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:51:32 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4118195179' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:51:32 np0005480824 nova_compute[260089]: 2025-10-11 03:51:32.826 2 DEBUG oslo_concurrency.processutils [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:51:32 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e237 do_prune osdmap full prune enabled
Oct 10 23:51:32 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e238 e238: 3 total, 3 up, 3 in
Oct 10 23:51:32 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e238: 3 total, 3 up, 3 in
Oct 10 23:51:32 np0005480824 nova_compute[260089]: 2025-10-11 03:51:32.867 2 DEBUG nova.storage.rbd_utils [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] rbd image d22b35e9-badc-40d1-952e-60cdfd60decb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:51:32 np0005480824 nova_compute[260089]: 2025-10-11 03:51:32.874 2 DEBUG oslo_concurrency.processutils [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:51:33 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:51:33 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2291375807' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:51:33 np0005480824 nova_compute[260089]: 2025-10-11 03:51:33.361 2 DEBUG oslo_concurrency.processutils [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:51:33 np0005480824 nova_compute[260089]: 2025-10-11 03:51:33.363 2 DEBUG nova.virt.libvirt.vif [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T03:51:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-1402368014',display_name='tempest-TestStampPattern-server-1402368014',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-1402368014',id=10,image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAimCn5RB/FvLKTbWbetTfaBYWY7YsxfCSNDCqy+0n9wsCRn+L8WUumxgKvSs5fbSkxaZ0JLw7ssb691wNMVrABVHOJ2APu3cO2oHOABFF8LDk8lk3BSAJi4zZFoYj4Rjw==',key_name='tempest-TestStampPattern-1826930411',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='944395b4a11c4a9182fda518dc7bd2d8',ramdisk_id='',reservation_id='r-m0jh1bgj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestStampPattern-358096571',owner_user_name='tempest-TestStampPattern-358096571-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T03:51:29Z,user_data=None,user_id='d6596329d9c842b78638fdbcf50b8ec8',uuid=d22b35e9-badc-40d1-952e-60cdfd60decb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a6d0ac82-b500-4962-8bfd-d36ef3ba2948", "address": "fa:16:3e:10:2b:86", "network": {"id": "f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e", "bridge": "br-int", "label": "tempest-TestStampPattern-337427362-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "944395b4a11c4a9182fda518dc7bd2d8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6d0ac82-b5", "ovs_interfaceid": "a6d0ac82-b500-4962-8bfd-d36ef3ba2948", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 10 23:51:33 np0005480824 nova_compute[260089]: 2025-10-11 03:51:33.363 2 DEBUG nova.network.os_vif_util [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Converting VIF {"id": "a6d0ac82-b500-4962-8bfd-d36ef3ba2948", "address": "fa:16:3e:10:2b:86", "network": {"id": "f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e", "bridge": "br-int", "label": "tempest-TestStampPattern-337427362-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "944395b4a11c4a9182fda518dc7bd2d8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6d0ac82-b5", "ovs_interfaceid": "a6d0ac82-b500-4962-8bfd-d36ef3ba2948", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 23:51:33 np0005480824 nova_compute[260089]: 2025-10-11 03:51:33.364 2 DEBUG nova.network.os_vif_util [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:10:2b:86,bridge_name='br-int',has_traffic_filtering=True,id=a6d0ac82-b500-4962-8bfd-d36ef3ba2948,network=Network(f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6d0ac82-b5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 23:51:33 np0005480824 nova_compute[260089]: 2025-10-11 03:51:33.365 2 DEBUG nova.objects.instance [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Lazy-loading 'pci_devices' on Instance uuid d22b35e9-badc-40d1-952e-60cdfd60decb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:51:33 np0005480824 nova_compute[260089]: 2025-10-11 03:51:33.382 2 DEBUG nova.virt.libvirt.driver [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] End _get_guest_xml xml=<domain type="kvm">
Oct 10 23:51:33 np0005480824 nova_compute[260089]:  <uuid>d22b35e9-badc-40d1-952e-60cdfd60decb</uuid>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:  <name>instance-0000000a</name>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:  <memory>131072</memory>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:  <vcpu>1</vcpu>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:  <metadata>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 10 23:51:33 np0005480824 nova_compute[260089]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:      <nova:name>tempest-TestStampPattern-server-1402368014</nova:name>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:      <nova:creationTime>2025-10-11 03:51:32</nova:creationTime>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:      <nova:flavor name="m1.nano">
Oct 10 23:51:33 np0005480824 nova_compute[260089]:        <nova:memory>128</nova:memory>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:        <nova:disk>1</nova:disk>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:        <nova:swap>0</nova:swap>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:        <nova:ephemeral>0</nova:ephemeral>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:        <nova:vcpus>1</nova:vcpus>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:      </nova:flavor>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:      <nova:owner>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:        <nova:user uuid="d6596329d9c842b78638fdbcf50b8ec8">tempest-TestStampPattern-358096571-project-member</nova:user>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:        <nova:project uuid="944395b4a11c4a9182fda518dc7bd2d8">tempest-TestStampPattern-358096571</nova:project>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:      </nova:owner>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:      <nova:root type="image" uuid="7caca022-7dcc-40a9-8bd8-eb7d91b29390"/>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:      <nova:ports>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:        <nova:port uuid="a6d0ac82-b500-4962-8bfd-d36ef3ba2948">
Oct 10 23:51:33 np0005480824 nova_compute[260089]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:        </nova:port>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:      </nova:ports>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:    </nova:instance>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:  </metadata>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:  <sysinfo type="smbios">
Oct 10 23:51:33 np0005480824 nova_compute[260089]:    <system>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:      <entry name="manufacturer">RDO</entry>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:      <entry name="product">OpenStack Compute</entry>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:      <entry name="serial">d22b35e9-badc-40d1-952e-60cdfd60decb</entry>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:      <entry name="uuid">d22b35e9-badc-40d1-952e-60cdfd60decb</entry>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:      <entry name="family">Virtual Machine</entry>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:    </system>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:  </sysinfo>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:  <os>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:    <boot dev="hd"/>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:    <smbios mode="sysinfo"/>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:  </os>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:  <features>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:    <acpi/>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:    <apic/>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:    <vmcoreinfo/>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:  </features>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:  <clock offset="utc">
Oct 10 23:51:33 np0005480824 nova_compute[260089]:    <timer name="pit" tickpolicy="delay"/>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:    <timer name="hpet" present="no"/>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:  </clock>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:  <cpu mode="host-model" match="exact">
Oct 10 23:51:33 np0005480824 nova_compute[260089]:    <topology sockets="1" cores="1" threads="1"/>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:  </cpu>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:  <devices>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:    <disk type="network" device="disk">
Oct 10 23:51:33 np0005480824 nova_compute[260089]:      <driver type="raw" cache="none"/>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:      <source protocol="rbd" name="vms/d22b35e9-badc-40d1-952e-60cdfd60decb_disk">
Oct 10 23:51:33 np0005480824 nova_compute[260089]:        <host name="192.168.122.100" port="6789"/>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:      </source>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:      <auth username="openstack">
Oct 10 23:51:33 np0005480824 nova_compute[260089]:        <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:      </auth>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:      <target dev="vda" bus="virtio"/>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:    </disk>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:    <disk type="network" device="cdrom">
Oct 10 23:51:33 np0005480824 nova_compute[260089]:      <driver type="raw" cache="none"/>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:      <source protocol="rbd" name="vms/d22b35e9-badc-40d1-952e-60cdfd60decb_disk.config">
Oct 10 23:51:33 np0005480824 nova_compute[260089]:        <host name="192.168.122.100" port="6789"/>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:      </source>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:      <auth username="openstack">
Oct 10 23:51:33 np0005480824 nova_compute[260089]:        <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:      </auth>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:      <target dev="sda" bus="sata"/>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:    </disk>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:    <interface type="ethernet">
Oct 10 23:51:33 np0005480824 nova_compute[260089]:      <mac address="fa:16:3e:10:2b:86"/>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:      <model type="virtio"/>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:      <driver name="vhost" rx_queue_size="512"/>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:      <mtu size="1442"/>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:      <target dev="tapa6d0ac82-b5"/>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:    </interface>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:    <serial type="pty">
Oct 10 23:51:33 np0005480824 nova_compute[260089]:      <log file="/var/lib/nova/instances/d22b35e9-badc-40d1-952e-60cdfd60decb/console.log" append="off"/>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:    </serial>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:    <video>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:      <model type="virtio"/>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:    </video>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:    <input type="tablet" bus="usb"/>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:    <rng model="virtio">
Oct 10 23:51:33 np0005480824 nova_compute[260089]:      <backend model="random">/dev/urandom</backend>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:    </rng>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root"/>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:    <controller type="usb" index="0"/>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:    <memballoon model="virtio">
Oct 10 23:51:33 np0005480824 nova_compute[260089]:      <stats period="10"/>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:    </memballoon>
Oct 10 23:51:33 np0005480824 nova_compute[260089]:  </devices>
Oct 10 23:51:33 np0005480824 nova_compute[260089]: </domain>
Oct 10 23:51:33 np0005480824 nova_compute[260089]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 10 23:51:33 np0005480824 nova_compute[260089]: 2025-10-11 03:51:33.383 2 DEBUG nova.compute.manager [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Preparing to wait for external event network-vif-plugged-a6d0ac82-b500-4962-8bfd-d36ef3ba2948 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 10 23:51:33 np0005480824 nova_compute[260089]: 2025-10-11 03:51:33.383 2 DEBUG oslo_concurrency.lockutils [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Acquiring lock "d22b35e9-badc-40d1-952e-60cdfd60decb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:51:33 np0005480824 nova_compute[260089]: 2025-10-11 03:51:33.384 2 DEBUG oslo_concurrency.lockutils [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Lock "d22b35e9-badc-40d1-952e-60cdfd60decb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:51:33 np0005480824 nova_compute[260089]: 2025-10-11 03:51:33.384 2 DEBUG oslo_concurrency.lockutils [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Lock "d22b35e9-badc-40d1-952e-60cdfd60decb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:51:33 np0005480824 nova_compute[260089]: 2025-10-11 03:51:33.385 2 DEBUG nova.virt.libvirt.vif [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T03:51:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-1402368014',display_name='tempest-TestStampPattern-server-1402368014',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-1402368014',id=10,image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAimCn5RB/FvLKTbWbetTfaBYWY7YsxfCSNDCqy+0n9wsCRn+L8WUumxgKvSs5fbSkxaZ0JLw7ssb691wNMVrABVHOJ2APu3cO2oHOABFF8LDk8lk3BSAJi4zZFoYj4Rjw==',key_name='tempest-TestStampPattern-1826930411',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='944395b4a11c4a9182fda518dc7bd2d8',ramdisk_id='',reservation_id='r-m0jh1bgj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestStampPattern-358096571',owner_user_name='tempest-TestStampPattern-358096571-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T03:51:29Z,user_data=None,user_id='d6596329d9c842b78638fdbcf50b8ec8',uuid=d22b35e9-badc-40d1-952e-60cdfd60decb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a6d0ac82-b500-4962-8bfd-d36ef3ba2948", "address": "fa:16:3e:10:2b:86", "network": {"id": "f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e", "bridge": "br-int", "label": "tempest-TestStampPattern-337427362-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "944395b4a11c4a9182fda518dc7bd2d8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6d0ac82-b5", "ovs_interfaceid": "a6d0ac82-b500-4962-8bfd-d36ef3ba2948", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 10 23:51:33 np0005480824 nova_compute[260089]: 2025-10-11 03:51:33.385 2 DEBUG nova.network.os_vif_util [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Converting VIF {"id": "a6d0ac82-b500-4962-8bfd-d36ef3ba2948", "address": "fa:16:3e:10:2b:86", "network": {"id": "f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e", "bridge": "br-int", "label": "tempest-TestStampPattern-337427362-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "944395b4a11c4a9182fda518dc7bd2d8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6d0ac82-b5", "ovs_interfaceid": "a6d0ac82-b500-4962-8bfd-d36ef3ba2948", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 23:51:33 np0005480824 nova_compute[260089]: 2025-10-11 03:51:33.385 2 DEBUG nova.network.os_vif_util [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:10:2b:86,bridge_name='br-int',has_traffic_filtering=True,id=a6d0ac82-b500-4962-8bfd-d36ef3ba2948,network=Network(f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6d0ac82-b5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 23:51:33 np0005480824 nova_compute[260089]: 2025-10-11 03:51:33.386 2 DEBUG os_vif [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:10:2b:86,bridge_name='br-int',has_traffic_filtering=True,id=a6d0ac82-b500-4962-8bfd-d36ef3ba2948,network=Network(f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6d0ac82-b5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 10 23:51:33 np0005480824 nova_compute[260089]: 2025-10-11 03:51:33.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:51:33 np0005480824 nova_compute[260089]: 2025-10-11 03:51:33.387 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:51:33 np0005480824 nova_compute[260089]: 2025-10-11 03:51:33.387 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 10 23:51:33 np0005480824 nova_compute[260089]: 2025-10-11 03:51:33.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:51:33 np0005480824 nova_compute[260089]: 2025-10-11 03:51:33.391 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa6d0ac82-b5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:51:33 np0005480824 nova_compute[260089]: 2025-10-11 03:51:33.391 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa6d0ac82-b5, col_values=(('external_ids', {'iface-id': 'a6d0ac82-b500-4962-8bfd-d36ef3ba2948', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:10:2b:86', 'vm-uuid': 'd22b35e9-badc-40d1-952e-60cdfd60decb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:51:33 np0005480824 nova_compute[260089]: 2025-10-11 03:51:33.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:51:33 np0005480824 NetworkManager[44969]: <info>  [1760154693.3949] manager: (tapa6d0ac82-b5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/63)
Oct 10 23:51:33 np0005480824 nova_compute[260089]: 2025-10-11 03:51:33.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 10 23:51:33 np0005480824 nova_compute[260089]: 2025-10-11 03:51:33.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:51:33 np0005480824 nova_compute[260089]: 2025-10-11 03:51:33.404 2 INFO os_vif [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:10:2b:86,bridge_name='br-int',has_traffic_filtering=True,id=a6d0ac82-b500-4962-8bfd-d36ef3ba2948,network=Network(f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6d0ac82-b5')#033[00m
Oct 10 23:51:33 np0005480824 nova_compute[260089]: 2025-10-11 03:51:33.469 2 DEBUG nova.virt.libvirt.driver [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 23:51:33 np0005480824 nova_compute[260089]: 2025-10-11 03:51:33.469 2 DEBUG nova.virt.libvirt.driver [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 23:51:33 np0005480824 nova_compute[260089]: 2025-10-11 03:51:33.470 2 DEBUG nova.virt.libvirt.driver [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] No VIF found with MAC fa:16:3e:10:2b:86, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 10 23:51:33 np0005480824 nova_compute[260089]: 2025-10-11 03:51:33.470 2 INFO nova.virt.libvirt.driver [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Using config drive#033[00m
Oct 10 23:51:33 np0005480824 nova_compute[260089]: 2025-10-11 03:51:33.497 2 DEBUG nova.storage.rbd_utils [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] rbd image d22b35e9-badc-40d1-952e-60cdfd60decb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:51:33 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:51:33 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1195838666' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:51:33 np0005480824 nova_compute[260089]: 2025-10-11 03:51:33.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:51:33 np0005480824 nova_compute[260089]: 2025-10-11 03:51:33.705 2 DEBUG nova.network.neutron [req-7adf0d88-5f09-4d30-8be2-437d9a3adc24 req-df00d80a-dc40-4a40-8813-0efddb6b8e79 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Updated VIF entry in instance network info cache for port a6d0ac82-b500-4962-8bfd-d36ef3ba2948. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 10 23:51:33 np0005480824 nova_compute[260089]: 2025-10-11 03:51:33.706 2 DEBUG nova.network.neutron [req-7adf0d88-5f09-4d30-8be2-437d9a3adc24 req-df00d80a-dc40-4a40-8813-0efddb6b8e79 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Updating instance_info_cache with network_info: [{"id": "a6d0ac82-b500-4962-8bfd-d36ef3ba2948", "address": "fa:16:3e:10:2b:86", "network": {"id": "f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e", "bridge": "br-int", "label": "tempest-TestStampPattern-337427362-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "944395b4a11c4a9182fda518dc7bd2d8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6d0ac82-b5", "ovs_interfaceid": "a6d0ac82-b500-4962-8bfd-d36ef3ba2948", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:51:33 np0005480824 nova_compute[260089]: 2025-10-11 03:51:33.729 2 DEBUG oslo_concurrency.lockutils [req-7adf0d88-5f09-4d30-8be2-437d9a3adc24 req-df00d80a-dc40-4a40-8813-0efddb6b8e79 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Releasing lock "refresh_cache-d22b35e9-badc-40d1-952e-60cdfd60decb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:51:33 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e238 do_prune osdmap full prune enabled
Oct 10 23:51:33 np0005480824 nova_compute[260089]: 2025-10-11 03:51:33.853 2 INFO nova.virt.libvirt.driver [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Creating config drive at /var/lib/nova/instances/d22b35e9-badc-40d1-952e-60cdfd60decb/disk.config#033[00m
Oct 10 23:51:33 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e239 e239: 3 total, 3 up, 3 in
Oct 10 23:51:33 np0005480824 nova_compute[260089]: 2025-10-11 03:51:33.864 2 DEBUG oslo_concurrency.processutils [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d22b35e9-badc-40d1-952e-60cdfd60decb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp86zdou2b execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:51:33 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e239: 3 total, 3 up, 3 in
Oct 10 23:51:33 np0005480824 nova_compute[260089]: 2025-10-11 03:51:33.899 2 DEBUG nova.network.neutron [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Updating instance_info_cache with network_info: [{"id": "3c111b42-0da1-4752-9b36-2df6a9486510", "address": "fa:16:3e:84:9a:72", "network": {"id": "53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-198655629-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "69ce475b5af645b7b89607f7ecc196d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c111b42-0d", "ovs_interfaceid": "3c111b42-0da1-4752-9b36-2df6a9486510", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:51:33 np0005480824 nova_compute[260089]: 2025-10-11 03:51:33.919 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Releasing lock "refresh_cache-b11faa30-2b52-45e0-b5f2-dd05b5050493" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:51:33 np0005480824 nova_compute[260089]: 2025-10-11 03:51:33.919 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 10 23:51:33 np0005480824 nova_compute[260089]: 2025-10-11 03:51:33.920 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:51:33 np0005480824 nova_compute[260089]: 2025-10-11 03:51:33.949 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:51:33 np0005480824 nova_compute[260089]: 2025-10-11 03:51:33.949 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:51:33 np0005480824 nova_compute[260089]: 2025-10-11 03:51:33.949 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:51:33 np0005480824 nova_compute[260089]: 2025-10-11 03:51:33.949 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 10 23:51:33 np0005480824 nova_compute[260089]: 2025-10-11 03:51:33.950 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:51:34 np0005480824 nova_compute[260089]: 2025-10-11 03:51:34.012 2 DEBUG oslo_concurrency.processutils [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d22b35e9-badc-40d1-952e-60cdfd60decb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp86zdou2b" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:51:34 np0005480824 nova_compute[260089]: 2025-10-11 03:51:34.047 2 DEBUG nova.storage.rbd_utils [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] rbd image d22b35e9-badc-40d1-952e-60cdfd60decb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:51:34 np0005480824 nova_compute[260089]: 2025-10-11 03:51:34.052 2 DEBUG oslo_concurrency.processutils [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d22b35e9-badc-40d1-952e-60cdfd60decb/disk.config d22b35e9-badc-40d1-952e-60cdfd60decb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:51:34 np0005480824 nova_compute[260089]: 2025-10-11 03:51:34.233 2 DEBUG oslo_concurrency.processutils [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d22b35e9-badc-40d1-952e-60cdfd60decb/disk.config d22b35e9-badc-40d1-952e-60cdfd60decb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:51:34 np0005480824 nova_compute[260089]: 2025-10-11 03:51:34.234 2 INFO nova.virt.libvirt.driver [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Deleting local config drive /var/lib/nova/instances/d22b35e9-badc-40d1-952e-60cdfd60decb/disk.config because it was imported into RBD.#033[00m
Oct 10 23:51:34 np0005480824 kernel: tapa6d0ac82-b5: entered promiscuous mode
Oct 10 23:51:34 np0005480824 NetworkManager[44969]: <info>  [1760154694.3032] manager: (tapa6d0ac82-b5): new Tun device (/org/freedesktop/NetworkManager/Devices/64)
Oct 10 23:51:34 np0005480824 ovn_controller[152667]: 2025-10-11T03:51:34Z|00094|binding|INFO|Claiming lport a6d0ac82-b500-4962-8bfd-d36ef3ba2948 for this chassis.
Oct 10 23:51:34 np0005480824 ovn_controller[152667]: 2025-10-11T03:51:34Z|00095|binding|INFO|a6d0ac82-b500-4962-8bfd-d36ef3ba2948: Claiming fa:16:3e:10:2b:86 10.100.0.11
Oct 10 23:51:34 np0005480824 nova_compute[260089]: 2025-10-11 03:51:34.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:51:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:34.318 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:10:2b:86 10.100.0.11'], port_security=['fa:16:3e:10:2b:86 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'd22b35e9-badc-40d1-952e-60cdfd60decb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '944395b4a11c4a9182fda518dc7bd2d8', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e77eea50-c642-4f6c-8fc0-1335adf52ced', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9edb3820-196e-493d-adad-15b8aa8d51aa, chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], logical_port=a6d0ac82-b500-4962-8bfd-d36ef3ba2948) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 23:51:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:34.320 162245 INFO neutron.agent.ovn.metadata.agent [-] Port a6d0ac82-b500-4962-8bfd-d36ef3ba2948 in datapath f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e bound to our chassis#033[00m
Oct 10 23:51:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:34.323 162245 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e#033[00m
Oct 10 23:51:34 np0005480824 ovn_controller[152667]: 2025-10-11T03:51:34Z|00096|binding|INFO|Setting lport a6d0ac82-b500-4962-8bfd-d36ef3ba2948 up in Southbound
Oct 10 23:51:34 np0005480824 ovn_controller[152667]: 2025-10-11T03:51:34Z|00097|binding|INFO|Setting lport a6d0ac82-b500-4962-8bfd-d36ef3ba2948 ovn-installed in OVS
Oct 10 23:51:34 np0005480824 nova_compute[260089]: 2025-10-11 03:51:34.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:51:34 np0005480824 nova_compute[260089]: 2025-10-11 03:51:34.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:51:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:34.341 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[1643f9e0-31b3-418a-8cd0-8bc20f99d18f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:51:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:34.342 162245 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf0e7e6a7-11 in ovnmeta-f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 10 23:51:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:34.345 267859 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf0e7e6a7-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 10 23:51:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:34.346 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[f6b2836d-44c8-4cd1-94ab-284eb6f69d40]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:51:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:34.350 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[77949bde-a06e-4af6-ae0c-83d7ad7aa81d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:51:34 np0005480824 systemd-machined[215071]: New machine qemu-10-instance-0000000a.
Oct 10 23:51:34 np0005480824 systemd-udevd[278074]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 23:51:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:51:34 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1595025121' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:51:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:34.368 162666 DEBUG oslo.privsep.daemon [-] privsep: reply[9f29423d-c514-40f6-977e-078c1465e61d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:51:34 np0005480824 systemd[1]: Started Virtual Machine qemu-10-instance-0000000a.
Oct 10 23:51:34 np0005480824 NetworkManager[44969]: <info>  [1760154694.3739] device (tapa6d0ac82-b5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 10 23:51:34 np0005480824 NetworkManager[44969]: <info>  [1760154694.3746] device (tapa6d0ac82-b5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 10 23:51:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:34.387 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[284e57f2-b203-4dc6-b90a-36106e0a5f5b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:51:34 np0005480824 nova_compute[260089]: 2025-10-11 03:51:34.398 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:51:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:34.416 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[77149f0d-9cbf-4b74-96c8-97099222bf7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:51:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:34.421 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[eb620798-c39c-4860-a494-a2769640dcc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:51:34 np0005480824 systemd-udevd[278079]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 23:51:34 np0005480824 NetworkManager[44969]: <info>  [1760154694.4237] manager: (tapf0e7e6a7-10): new Veth device (/org/freedesktop/NetworkManager/Devices/65)
Oct 10 23:51:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:34.454 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[bc7374cb-122b-4d8a-8c7f-28a37fab0b2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:51:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:34.458 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[6441601b-88b4-4dca-8642-32a309f47a88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:51:34 np0005480824 NetworkManager[44969]: <info>  [1760154694.4803] device (tapf0e7e6a7-10): carrier: link connected
Oct 10 23:51:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:34.485 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[a4014298-450d-4ba1-bcbe-d89dbf86503c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:51:34 np0005480824 nova_compute[260089]: 2025-10-11 03:51:34.494 2 DEBUG nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 10 23:51:34 np0005480824 nova_compute[260089]: 2025-10-11 03:51:34.494 2 DEBUG nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 10 23:51:34 np0005480824 nova_compute[260089]: 2025-10-11 03:51:34.494 2 DEBUG nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 10 23:51:34 np0005480824 nova_compute[260089]: 2025-10-11 03:51:34.497 2 DEBUG nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 10 23:51:34 np0005480824 nova_compute[260089]: 2025-10-11 03:51:34.497 2 DEBUG nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 10 23:51:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:34.503 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[bac5f7c9-060d-4d96-a18c-63fc18afc3b6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf0e7e6a7-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:23:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 38], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 416214, 'reachable_time': 37142, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278108, 'error': None, 'target': 'ovnmeta-f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:51:34 np0005480824 nova_compute[260089]: 2025-10-11 03:51:34.504 2 DEBUG nova.compute.manager [req-127f825d-6ef8-4ac9-8a2d-a49aec45bfa7 req-19b39b0a-5804-4727-a5ea-31753159fa12 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Received event network-vif-plugged-a6d0ac82-b500-4962-8bfd-d36ef3ba2948 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:51:34 np0005480824 nova_compute[260089]: 2025-10-11 03:51:34.505 2 DEBUG oslo_concurrency.lockutils [req-127f825d-6ef8-4ac9-8a2d-a49aec45bfa7 req-19b39b0a-5804-4727-a5ea-31753159fa12 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "d22b35e9-badc-40d1-952e-60cdfd60decb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:51:34 np0005480824 nova_compute[260089]: 2025-10-11 03:51:34.505 2 DEBUG oslo_concurrency.lockutils [req-127f825d-6ef8-4ac9-8a2d-a49aec45bfa7 req-19b39b0a-5804-4727-a5ea-31753159fa12 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "d22b35e9-badc-40d1-952e-60cdfd60decb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:51:34 np0005480824 nova_compute[260089]: 2025-10-11 03:51:34.505 2 DEBUG oslo_concurrency.lockutils [req-127f825d-6ef8-4ac9-8a2d-a49aec45bfa7 req-19b39b0a-5804-4727-a5ea-31753159fa12 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "d22b35e9-badc-40d1-952e-60cdfd60decb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:51:34 np0005480824 nova_compute[260089]: 2025-10-11 03:51:34.506 2 DEBUG nova.compute.manager [req-127f825d-6ef8-4ac9-8a2d-a49aec45bfa7 req-19b39b0a-5804-4727-a5ea-31753159fa12 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Processing event network-vif-plugged-a6d0ac82-b500-4962-8bfd-d36ef3ba2948 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 10 23:51:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:34.521 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[7d65807f-20eb-4209-9997-8a1f332ae565]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe44:2376'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 416214, 'tstamp': 416214}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278109, 'error': None, 'target': 'ovnmeta-f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:51:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:34.542 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[83c78841-044c-426b-b337-af804b7bf8db]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf0e7e6a7-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:23:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 38], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 416214, 'reachable_time': 37142, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 278110, 'error': None, 'target': 'ovnmeta-f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:51:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:51:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:34.578 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[a6a8f732-30b4-44f0-a73f-c76fcd35d467]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:51:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:34.632 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[effb99e3-8cb2-4c10-9c39-7198beb8fbc4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:51:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:34.634 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf0e7e6a7-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:51:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:34.635 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 10 23:51:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:34.635 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf0e7e6a7-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:51:34 np0005480824 NetworkManager[44969]: <info>  [1760154694.6384] manager: (tapf0e7e6a7-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/66)
Oct 10 23:51:34 np0005480824 kernel: tapf0e7e6a7-10: entered promiscuous mode
Oct 10 23:51:34 np0005480824 nova_compute[260089]: 2025-10-11 03:51:34.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:51:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:34.640 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf0e7e6a7-10, col_values=(('external_ids', {'iface-id': 'fd35b05a-29b5-4478-aa1a-5883664f9c48'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:51:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:34.642 162245 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 10 23:51:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:34.643 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[98e1f023-6a43-430d-9dca-d73a9df78ffc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:51:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:34.644 162245 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 10 23:51:34 np0005480824 ovn_metadata_agent[162240]: global
Oct 10 23:51:34 np0005480824 ovn_metadata_agent[162240]:    log         /dev/log local0 debug
Oct 10 23:51:34 np0005480824 ovn_metadata_agent[162240]:    log-tag     haproxy-metadata-proxy-f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e
Oct 10 23:51:34 np0005480824 ovn_metadata_agent[162240]:    user        root
Oct 10 23:51:34 np0005480824 ovn_metadata_agent[162240]:    group       root
Oct 10 23:51:34 np0005480824 ovn_metadata_agent[162240]:    maxconn     1024
Oct 10 23:51:34 np0005480824 ovn_metadata_agent[162240]:    pidfile     /var/lib/neutron/external/pids/f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e.pid.haproxy
Oct 10 23:51:34 np0005480824 ovn_metadata_agent[162240]:    daemon
Oct 10 23:51:34 np0005480824 ovn_metadata_agent[162240]: 
Oct 10 23:51:34 np0005480824 ovn_metadata_agent[162240]: defaults
Oct 10 23:51:34 np0005480824 ovn_metadata_agent[162240]:    log global
Oct 10 23:51:34 np0005480824 ovn_metadata_agent[162240]:    mode http
Oct 10 23:51:34 np0005480824 ovn_metadata_agent[162240]:    option httplog
Oct 10 23:51:34 np0005480824 ovn_metadata_agent[162240]:    option dontlognull
Oct 10 23:51:34 np0005480824 ovn_metadata_agent[162240]:    option http-server-close
Oct 10 23:51:34 np0005480824 ovn_metadata_agent[162240]:    option forwardfor
Oct 10 23:51:34 np0005480824 ovn_metadata_agent[162240]:    retries                 3
Oct 10 23:51:34 np0005480824 ovn_metadata_agent[162240]:    timeout http-request    30s
Oct 10 23:51:34 np0005480824 ovn_metadata_agent[162240]:    timeout connect         30s
Oct 10 23:51:34 np0005480824 ovn_metadata_agent[162240]:    timeout client          32s
Oct 10 23:51:34 np0005480824 ovn_metadata_agent[162240]:    timeout server          32s
Oct 10 23:51:34 np0005480824 ovn_metadata_agent[162240]:    timeout http-keep-alive 30s
Oct 10 23:51:34 np0005480824 ovn_metadata_agent[162240]: 
Oct 10 23:51:34 np0005480824 ovn_metadata_agent[162240]: 
Oct 10 23:51:34 np0005480824 ovn_metadata_agent[162240]: listen listener
Oct 10 23:51:34 np0005480824 ovn_metadata_agent[162240]:    bind 169.254.169.254:80
Oct 10 23:51:34 np0005480824 ovn_metadata_agent[162240]:    server metadata /var/lib/neutron/metadata_proxy
Oct 10 23:51:34 np0005480824 ovn_metadata_agent[162240]:    http-request add-header X-OVN-Network-ID f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e
Oct 10 23:51:34 np0005480824 ovn_metadata_agent[162240]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 10 23:51:34 np0005480824 ovn_controller[152667]: 2025-10-11T03:51:34Z|00098|binding|INFO|Releasing lport fd35b05a-29b5-4478-aa1a-5883664f9c48 from this chassis (sb_readonly=0)
Oct 10 23:51:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:34.646 162245 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e', 'env', 'PROCESS_TAG=haproxy-f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 10 23:51:34 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1193: 321 pgs: 321 active+clean; 213 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 104 KiB/s rd, 3.6 MiB/s wr, 150 op/s
Oct 10 23:51:34 np0005480824 nova_compute[260089]: 2025-10-11 03:51:34.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:51:34 np0005480824 nova_compute[260089]: 2025-10-11 03:51:34.710 2 WARNING nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 10 23:51:34 np0005480824 nova_compute[260089]: 2025-10-11 03:51:34.712 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4387MB free_disk=59.92196273803711GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 10 23:51:34 np0005480824 nova_compute[260089]: 2025-10-11 03:51:34.712 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:51:34 np0005480824 nova_compute[260089]: 2025-10-11 03:51:34.712 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:51:34 np0005480824 nova_compute[260089]: 2025-10-11 03:51:34.782 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Instance b11faa30-2b52-45e0-b5f2-dd05b5050493 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 10 23:51:34 np0005480824 nova_compute[260089]: 2025-10-11 03:51:34.782 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Instance d22b35e9-badc-40d1-952e-60cdfd60decb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 10 23:51:34 np0005480824 nova_compute[260089]: 2025-10-11 03:51:34.782 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 10 23:51:34 np0005480824 nova_compute[260089]: 2025-10-11 03:51:34.783 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 10 23:51:34 np0005480824 nova_compute[260089]: 2025-10-11 03:51:34.797 2 DEBUG nova.scheduler.client.report [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Refreshing inventories for resource provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct 10 23:51:34 np0005480824 nova_compute[260089]: 2025-10-11 03:51:34.813 2 DEBUG nova.scheduler.client.report [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Updating ProviderTree inventory for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct 10 23:51:34 np0005480824 nova_compute[260089]: 2025-10-11 03:51:34.813 2 DEBUG nova.compute.provider_tree [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Updating inventory in ProviderTree for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 10 23:51:34 np0005480824 nova_compute[260089]: 2025-10-11 03:51:34.828 2 DEBUG nova.scheduler.client.report [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Refreshing aggregate associations for resource provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct 10 23:51:34 np0005480824 nova_compute[260089]: 2025-10-11 03:51:34.847 2 DEBUG nova.scheduler.client.report [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Refreshing trait associations for resource provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72, traits: COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE,HW_CPU_X86_SSE41,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_MMX,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_DEVICE_TAGGING,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_AVX,HW_CPU_X86_SHA,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSSE3,HW_CPU_X86_BMI2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_FDC,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_CLMUL,HW_CPU_X86_AVX2,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_PCNET _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct 10 23:51:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e239 do_prune osdmap full prune enabled
Oct 10 23:51:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e240 e240: 3 total, 3 up, 3 in
Oct 10 23:51:34 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e240: 3 total, 3 up, 3 in
Oct 10 23:51:34 np0005480824 nova_compute[260089]: 2025-10-11 03:51:34.911 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:51:35 np0005480824 podman[278184]: 2025-10-11 03:51:35.068203698 +0000 UTC m=+0.090764333 container create 64ace6697da668597146157833bc20a7df2b1a93ed900f6b4a91dd115a16d1ba (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team)
Oct 10 23:51:35 np0005480824 podman[278184]: 2025-10-11 03:51:35.005522206 +0000 UTC m=+0.028082881 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 10 23:51:35 np0005480824 systemd[1]: Started libpod-conmon-64ace6697da668597146157833bc20a7df2b1a93ed900f6b4a91dd115a16d1ba.scope.
Oct 10 23:51:35 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:51:35 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2595de1d409fa70f4f87ad56ca9e3dfec7671a9510145d36a7f3cdba77f5e0ac/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 10 23:51:35 np0005480824 podman[278184]: 2025-10-11 03:51:35.19311268 +0000 UTC m=+0.215673335 container init 64ace6697da668597146157833bc20a7df2b1a93ed900f6b4a91dd115a16d1ba (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 10 23:51:35 np0005480824 podman[278184]: 2025-10-11 03:51:35.199127301 +0000 UTC m=+0.221687926 container start 64ace6697da668597146157833bc20a7df2b1a93ed900f6b4a91dd115a16d1ba (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 10 23:51:35 np0005480824 neutron-haproxy-ovnmeta-f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e[278218]: [NOTICE]   (278222) : New worker (278224) forked
Oct 10 23:51:35 np0005480824 neutron-haproxy-ovnmeta-f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e[278218]: [NOTICE]   (278222) : Loading success.
Oct 10 23:51:35 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:51:35 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3053437517' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:51:35 np0005480824 nova_compute[260089]: 2025-10-11 03:51:35.425 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:51:35 np0005480824 nova_compute[260089]: 2025-10-11 03:51:35.433 2 DEBUG nova.compute.provider_tree [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 23:51:35 np0005480824 nova_compute[260089]: 2025-10-11 03:51:35.447 2 DEBUG nova.scheduler.client.report [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 23:51:35 np0005480824 nova_compute[260089]: 2025-10-11 03:51:35.464 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 10 23:51:35 np0005480824 nova_compute[260089]: 2025-10-11 03:51:35.464 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:51:35 np0005480824 nova_compute[260089]: 2025-10-11 03:51:35.494 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760154695.4935613, d22b35e9-badc-40d1-952e-60cdfd60decb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:51:35 np0005480824 nova_compute[260089]: 2025-10-11 03:51:35.495 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] VM Started (Lifecycle Event)#033[00m
Oct 10 23:51:35 np0005480824 nova_compute[260089]: 2025-10-11 03:51:35.498 2 DEBUG nova.compute.manager [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 10 23:51:35 np0005480824 nova_compute[260089]: 2025-10-11 03:51:35.503 2 DEBUG nova.virt.libvirt.driver [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 10 23:51:35 np0005480824 nova_compute[260089]: 2025-10-11 03:51:35.509 2 INFO nova.virt.libvirt.driver [-] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Instance spawned successfully.#033[00m
Oct 10 23:51:35 np0005480824 nova_compute[260089]: 2025-10-11 03:51:35.510 2 DEBUG nova.virt.libvirt.driver [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 10 23:51:35 np0005480824 nova_compute[260089]: 2025-10-11 03:51:35.516 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:51:35 np0005480824 nova_compute[260089]: 2025-10-11 03:51:35.528 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 10 23:51:35 np0005480824 nova_compute[260089]: 2025-10-11 03:51:35.540 2 DEBUG nova.virt.libvirt.driver [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:51:35 np0005480824 nova_compute[260089]: 2025-10-11 03:51:35.541 2 DEBUG nova.virt.libvirt.driver [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:51:35 np0005480824 nova_compute[260089]: 2025-10-11 03:51:35.542 2 DEBUG nova.virt.libvirt.driver [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:51:35 np0005480824 nova_compute[260089]: 2025-10-11 03:51:35.543 2 DEBUG nova.virt.libvirt.driver [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:51:35 np0005480824 nova_compute[260089]: 2025-10-11 03:51:35.544 2 DEBUG nova.virt.libvirt.driver [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:51:35 np0005480824 nova_compute[260089]: 2025-10-11 03:51:35.545 2 DEBUG nova.virt.libvirt.driver [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:51:35 np0005480824 nova_compute[260089]: 2025-10-11 03:51:35.554 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 10 23:51:35 np0005480824 nova_compute[260089]: 2025-10-11 03:51:35.554 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760154695.4938772, d22b35e9-badc-40d1-952e-60cdfd60decb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:51:35 np0005480824 nova_compute[260089]: 2025-10-11 03:51:35.555 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] VM Paused (Lifecycle Event)#033[00m
Oct 10 23:51:35 np0005480824 nova_compute[260089]: 2025-10-11 03:51:35.577 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:51:35 np0005480824 nova_compute[260089]: 2025-10-11 03:51:35.582 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760154695.502395, d22b35e9-badc-40d1-952e-60cdfd60decb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:51:35 np0005480824 nova_compute[260089]: 2025-10-11 03:51:35.583 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] VM Resumed (Lifecycle Event)#033[00m
Oct 10 23:51:35 np0005480824 nova_compute[260089]: 2025-10-11 03:51:35.602 2 INFO nova.compute.manager [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Took 5.91 seconds to spawn the instance on the hypervisor.#033[00m
Oct 10 23:51:35 np0005480824 nova_compute[260089]: 2025-10-11 03:51:35.603 2 DEBUG nova.compute.manager [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:51:35 np0005480824 nova_compute[260089]: 2025-10-11 03:51:35.607 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:51:35 np0005480824 nova_compute[260089]: 2025-10-11 03:51:35.623 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 10 23:51:35 np0005480824 nova_compute[260089]: 2025-10-11 03:51:35.651 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 10 23:51:35 np0005480824 nova_compute[260089]: 2025-10-11 03:51:35.683 2 INFO nova.compute.manager [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Took 7.12 seconds to build instance.#033[00m
Oct 10 23:51:35 np0005480824 nova_compute[260089]: 2025-10-11 03:51:35.699 2 DEBUG oslo_concurrency.lockutils [None req-45e5f2cd-3d70-4611-92cd-f228615b90ec d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Lock "d22b35e9-badc-40d1-952e-60cdfd60decb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.213s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:51:35 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e240 do_prune osdmap full prune enabled
Oct 10 23:51:35 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e241 e241: 3 total, 3 up, 3 in
Oct 10 23:51:35 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e241: 3 total, 3 up, 3 in
Oct 10 23:51:36 np0005480824 nova_compute[260089]: 2025-10-11 03:51:36.461 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:51:36 np0005480824 nova_compute[260089]: 2025-10-11 03:51:36.481 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:51:36 np0005480824 nova_compute[260089]: 2025-10-11 03:51:36.642 2 DEBUG nova.compute.manager [req-6cf9b279-7dbf-4156-a6f0-73ad15e2f974 req-ef4981bc-c783-40fd-8f43-2216ff485b8e 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Received event network-vif-plugged-a6d0ac82-b500-4962-8bfd-d36ef3ba2948 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:51:36 np0005480824 nova_compute[260089]: 2025-10-11 03:51:36.643 2 DEBUG oslo_concurrency.lockutils [req-6cf9b279-7dbf-4156-a6f0-73ad15e2f974 req-ef4981bc-c783-40fd-8f43-2216ff485b8e 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "d22b35e9-badc-40d1-952e-60cdfd60decb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:51:36 np0005480824 nova_compute[260089]: 2025-10-11 03:51:36.643 2 DEBUG oslo_concurrency.lockutils [req-6cf9b279-7dbf-4156-a6f0-73ad15e2f974 req-ef4981bc-c783-40fd-8f43-2216ff485b8e 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "d22b35e9-badc-40d1-952e-60cdfd60decb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:51:36 np0005480824 nova_compute[260089]: 2025-10-11 03:51:36.644 2 DEBUG oslo_concurrency.lockutils [req-6cf9b279-7dbf-4156-a6f0-73ad15e2f974 req-ef4981bc-c783-40fd-8f43-2216ff485b8e 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "d22b35e9-badc-40d1-952e-60cdfd60decb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:51:36 np0005480824 nova_compute[260089]: 2025-10-11 03:51:36.645 2 DEBUG nova.compute.manager [req-6cf9b279-7dbf-4156-a6f0-73ad15e2f974 req-ef4981bc-c783-40fd-8f43-2216ff485b8e 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] No waiting events found dispatching network-vif-plugged-a6d0ac82-b500-4962-8bfd-d36ef3ba2948 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 10 23:51:36 np0005480824 nova_compute[260089]: 2025-10-11 03:51:36.645 2 WARNING nova.compute.manager [req-6cf9b279-7dbf-4156-a6f0-73ad15e2f974 req-ef4981bc-c783-40fd-8f43-2216ff485b8e 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Received unexpected event network-vif-plugged-a6d0ac82-b500-4962-8bfd-d36ef3ba2948 for instance with vm_state active and task_state None.#033[00m
Oct 10 23:51:36 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1196: 321 pgs: 321 active+clean; 213 MiB data, 336 MiB used, 60 GiB / 60 GiB avail
Oct 10 23:51:37 np0005480824 podman[278235]: 2025-10-11 03:51:37.02079194 +0000 UTC m=+0.072565655 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:51:37 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:51:37 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1133812293' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:51:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 23:51:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:51:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 23:51:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:51:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011055244554600508 of space, bias 1.0, pg target 0.3316573366380153 quantized to 32 (current 32)
Oct 10 23:51:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:51:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0003470144925766751 of space, bias 1.0, pg target 0.10410434777300254 quantized to 32 (current 32)
Oct 10 23:51:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:51:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 23:51:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:51:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 10 23:51:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:51:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 23:51:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:51:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:51:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:51:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 10 23:51:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:51:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 23:51:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:51:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:51:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:51:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 10 23:51:38 np0005480824 nova_compute[260089]: 2025-10-11 03:51:38.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:51:38 np0005480824 nova_compute[260089]: 2025-10-11 03:51:38.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:51:38 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1197: 321 pgs: 321 active+clean; 213 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 37 KiB/s wr, 236 op/s
Oct 10 23:51:38 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e241 do_prune osdmap full prune enabled
Oct 10 23:51:38 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e242 e242: 3 total, 3 up, 3 in
Oct 10 23:51:38 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e242: 3 total, 3 up, 3 in
Oct 10 23:51:39 np0005480824 nova_compute[260089]: 2025-10-11 03:51:39.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:51:39 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:39.072 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:30:f4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'fe:89:7c:57:3f:71'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 23:51:39 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:39.075 162245 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 10 23:51:39 np0005480824 nova_compute[260089]: 2025-10-11 03:51:39.128 2 DEBUG nova.compute.manager [req-05846bac-4917-4159-93d2-1e15201d3119 req-551717d7-a21f-4558-a1d8-e6f2e7d0c792 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Received event network-changed-a6d0ac82-b500-4962-8bfd-d36ef3ba2948 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:51:39 np0005480824 nova_compute[260089]: 2025-10-11 03:51:39.129 2 DEBUG nova.compute.manager [req-05846bac-4917-4159-93d2-1e15201d3119 req-551717d7-a21f-4558-a1d8-e6f2e7d0c792 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Refreshing instance network info cache due to event network-changed-a6d0ac82-b500-4962-8bfd-d36ef3ba2948. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 10 23:51:39 np0005480824 nova_compute[260089]: 2025-10-11 03:51:39.129 2 DEBUG oslo_concurrency.lockutils [req-05846bac-4917-4159-93d2-1e15201d3119 req-551717d7-a21f-4558-a1d8-e6f2e7d0c792 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "refresh_cache-d22b35e9-badc-40d1-952e-60cdfd60decb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:51:39 np0005480824 nova_compute[260089]: 2025-10-11 03:51:39.129 2 DEBUG oslo_concurrency.lockutils [req-05846bac-4917-4159-93d2-1e15201d3119 req-551717d7-a21f-4558-a1d8-e6f2e7d0c792 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquired lock "refresh_cache-d22b35e9-badc-40d1-952e-60cdfd60decb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:51:39 np0005480824 nova_compute[260089]: 2025-10-11 03:51:39.129 2 DEBUG nova.network.neutron [req-05846bac-4917-4159-93d2-1e15201d3119 req-551717d7-a21f-4558-a1d8-e6f2e7d0c792 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Refreshing network info cache for port a6d0ac82-b500-4962-8bfd-d36ef3ba2948 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 10 23:51:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:51:39 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4185190736' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:51:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:51:39 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4185190736' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:51:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:51:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e242 do_prune osdmap full prune enabled
Oct 10 23:51:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e243 e243: 3 total, 3 up, 3 in
Oct 10 23:51:39 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e243: 3 total, 3 up, 3 in
Oct 10 23:51:40 np0005480824 nova_compute[260089]: 2025-10-11 03:51:40.169 2 DEBUG nova.network.neutron [req-05846bac-4917-4159-93d2-1e15201d3119 req-551717d7-a21f-4558-a1d8-e6f2e7d0c792 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Updated VIF entry in instance network info cache for port a6d0ac82-b500-4962-8bfd-d36ef3ba2948. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 10 23:51:40 np0005480824 nova_compute[260089]: 2025-10-11 03:51:40.171 2 DEBUG nova.network.neutron [req-05846bac-4917-4159-93d2-1e15201d3119 req-551717d7-a21f-4558-a1d8-e6f2e7d0c792 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Updating instance_info_cache with network_info: [{"id": "a6d0ac82-b500-4962-8bfd-d36ef3ba2948", "address": "fa:16:3e:10:2b:86", "network": {"id": "f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e", "bridge": "br-int", "label": "tempest-TestStampPattern-337427362-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "944395b4a11c4a9182fda518dc7bd2d8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6d0ac82-b5", "ovs_interfaceid": "a6d0ac82-b500-4962-8bfd-d36ef3ba2948", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:51:40 np0005480824 nova_compute[260089]: 2025-10-11 03:51:40.190 2 DEBUG oslo_concurrency.lockutils [req-05846bac-4917-4159-93d2-1e15201d3119 req-551717d7-a21f-4558-a1d8-e6f2e7d0c792 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Releasing lock "refresh_cache-d22b35e9-badc-40d1-952e-60cdfd60decb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:51:40 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1200: 321 pgs: 321 active+clean; 213 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 37 KiB/s wr, 238 op/s
Oct 10 23:51:41 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e243 do_prune osdmap full prune enabled
Oct 10 23:51:41 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e244 e244: 3 total, 3 up, 3 in
Oct 10 23:51:41 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e244: 3 total, 3 up, 3 in
Oct 10 23:51:42 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:42.078 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=14b06507-d00b-4e27-a47d-46a5c2644635, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:51:42 np0005480824 nova_compute[260089]: 2025-10-11 03:51:42.460 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760154687.459192, c0950410-c7b0-4cc6-9994-7a0380d77b7e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:51:42 np0005480824 nova_compute[260089]: 2025-10-11 03:51:42.460 2 INFO nova.compute.manager [-] [instance: c0950410-c7b0-4cc6-9994-7a0380d77b7e] VM Stopped (Lifecycle Event)#033[00m
Oct 10 23:51:42 np0005480824 nova_compute[260089]: 2025-10-11 03:51:42.487 2 DEBUG nova.compute.manager [None req-7e4817f1-e23e-4e60-8551-52483efec6b4 - - - - - -] [instance: c0950410-c7b0-4cc6-9994-7a0380d77b7e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:51:42 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1202: 321 pgs: 321 active+clean; 213 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 38 KiB/s wr, 354 op/s
Oct 10 23:51:43 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:51:43 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3613483385' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:51:43 np0005480824 nova_compute[260089]: 2025-10-11 03:51:43.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:51:43 np0005480824 nova_compute[260089]: 2025-10-11 03:51:43.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:51:43 np0005480824 podman[278526]: 2025-10-11 03:51:43.900329931 +0000 UTC m=+0.112148595 container create bcc4a43e7dd28ce05940843a970fcf96b8c771f636ee0659956024d78d34bd8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_sammet, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:51:43 np0005480824 podman[278526]: 2025-10-11 03:51:43.818285664 +0000 UTC m=+0.030104348 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:51:43 np0005480824 systemd[1]: Started libpod-conmon-bcc4a43e7dd28ce05940843a970fcf96b8c771f636ee0659956024d78d34bd8e.scope.
Oct 10 23:51:43 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:51:44 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e244 do_prune osdmap full prune enabled
Oct 10 23:51:44 np0005480824 podman[278526]: 2025-10-11 03:51:44.014797737 +0000 UTC m=+0.226616421 container init bcc4a43e7dd28ce05940843a970fcf96b8c771f636ee0659956024d78d34bd8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_sammet, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 10 23:51:44 np0005480824 podman[278526]: 2025-10-11 03:51:44.023724928 +0000 UTC m=+0.235543592 container start bcc4a43e7dd28ce05940843a970fcf96b8c771f636ee0659956024d78d34bd8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_sammet, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 10 23:51:44 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e245 e245: 3 total, 3 up, 3 in
Oct 10 23:51:44 np0005480824 podman[278526]: 2025-10-11 03:51:44.029456662 +0000 UTC m=+0.241275326 container attach bcc4a43e7dd28ce05940843a970fcf96b8c771f636ee0659956024d78d34bd8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 10 23:51:44 np0005480824 festive_sammet[278543]: 167 167
Oct 10 23:51:44 np0005480824 systemd[1]: libpod-bcc4a43e7dd28ce05940843a970fcf96b8c771f636ee0659956024d78d34bd8e.scope: Deactivated successfully.
Oct 10 23:51:44 np0005480824 conmon[278543]: conmon bcc4a43e7dd28ce05940 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bcc4a43e7dd28ce05940843a970fcf96b8c771f636ee0659956024d78d34bd8e.scope/container/memory.events
Oct 10 23:51:44 np0005480824 podman[278526]: 2025-10-11 03:51:44.032005962 +0000 UTC m=+0.243824656 container died bcc4a43e7dd28ce05940843a970fcf96b8c771f636ee0659956024d78d34bd8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_sammet, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 10 23:51:44 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e245: 3 total, 3 up, 3 in
Oct 10 23:51:44 np0005480824 systemd[1]: var-lib-containers-storage-overlay-afd5126717afbd9367f1dd358ba4b423ec44e2f2a615c575a06107320d133804-merged.mount: Deactivated successfully.
Oct 10 23:51:44 np0005480824 podman[278526]: 2025-10-11 03:51:44.098530713 +0000 UTC m=+0.310349407 container remove bcc4a43e7dd28ce05940843a970fcf96b8c771f636ee0659956024d78d34bd8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_sammet, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 10 23:51:44 np0005480824 systemd[1]: libpod-conmon-bcc4a43e7dd28ce05940843a970fcf96b8c771f636ee0659956024d78d34bd8e.scope: Deactivated successfully.
Oct 10 23:51:44 np0005480824 podman[278567]: 2025-10-11 03:51:44.31517177 +0000 UTC m=+0.030881036 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:51:44 np0005480824 podman[278567]: 2025-10-11 03:51:44.457812729 +0000 UTC m=+0.173521945 container create f26ea95010a04b5ede2fc3ff431197070783f9d89e4ca5533742ba990812241d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_lumiere, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 10 23:51:44 np0005480824 systemd[1]: Started libpod-conmon-f26ea95010a04b5ede2fc3ff431197070783f9d89e4ca5533742ba990812241d.scope.
Oct 10 23:51:44 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:51:44 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b370b22ccebb8daf3b52292651287459379ab1eca452c17abdc2c5d7d05f2fa2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:51:44 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b370b22ccebb8daf3b52292651287459379ab1eca452c17abdc2c5d7d05f2fa2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:51:44 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b370b22ccebb8daf3b52292651287459379ab1eca452c17abdc2c5d7d05f2fa2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:51:44 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b370b22ccebb8daf3b52292651287459379ab1eca452c17abdc2c5d7d05f2fa2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:51:44 np0005480824 podman[278567]: 2025-10-11 03:51:44.563054 +0000 UTC m=+0.278763276 container init f26ea95010a04b5ede2fc3ff431197070783f9d89e4ca5533742ba990812241d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_lumiere, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 10 23:51:44 np0005480824 podman[278567]: 2025-10-11 03:51:44.572258475 +0000 UTC m=+0.287967691 container start f26ea95010a04b5ede2fc3ff431197070783f9d89e4ca5533742ba990812241d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_lumiere, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:51:44 np0005480824 podman[278567]: 2025-10-11 03:51:44.578356849 +0000 UTC m=+0.294066165 container attach f26ea95010a04b5ede2fc3ff431197070783f9d89e4ca5533742ba990812241d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_lumiere, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:51:44 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:51:44 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1204: 321 pgs: 321 active+clean; 213 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.6 KiB/s wr, 132 op/s
Oct 10 23:51:45 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e245 do_prune osdmap full prune enabled
Oct 10 23:51:45 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e246 e246: 3 total, 3 up, 3 in
Oct 10 23:51:45 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e246: 3 total, 3 up, 3 in
Oct 10 23:51:46 np0005480824 quizzical_lumiere[278584]: [
Oct 10 23:51:46 np0005480824 quizzical_lumiere[278584]:    {
Oct 10 23:51:46 np0005480824 quizzical_lumiere[278584]:        "available": false,
Oct 10 23:51:46 np0005480824 quizzical_lumiere[278584]:        "ceph_device": false,
Oct 10 23:51:46 np0005480824 quizzical_lumiere[278584]:        "device_id": "QEMU_DVD-ROM_QM00001",
Oct 10 23:51:46 np0005480824 quizzical_lumiere[278584]:        "lsm_data": {},
Oct 10 23:51:46 np0005480824 quizzical_lumiere[278584]:        "lvs": [],
Oct 10 23:51:46 np0005480824 quizzical_lumiere[278584]:        "path": "/dev/sr0",
Oct 10 23:51:46 np0005480824 quizzical_lumiere[278584]:        "rejected_reasons": [
Oct 10 23:51:46 np0005480824 quizzical_lumiere[278584]:            "Has a FileSystem",
Oct 10 23:51:46 np0005480824 quizzical_lumiere[278584]:            "Insufficient space (<5GB)"
Oct 10 23:51:46 np0005480824 quizzical_lumiere[278584]:        ],
Oct 10 23:51:46 np0005480824 quizzical_lumiere[278584]:        "sys_api": {
Oct 10 23:51:46 np0005480824 quizzical_lumiere[278584]:            "actuators": null,
Oct 10 23:51:46 np0005480824 quizzical_lumiere[278584]:            "device_nodes": "sr0",
Oct 10 23:51:46 np0005480824 quizzical_lumiere[278584]:            "devname": "sr0",
Oct 10 23:51:46 np0005480824 quizzical_lumiere[278584]:            "human_readable_size": "482.00 KB",
Oct 10 23:51:46 np0005480824 quizzical_lumiere[278584]:            "id_bus": "ata",
Oct 10 23:51:46 np0005480824 quizzical_lumiere[278584]:            "model": "QEMU DVD-ROM",
Oct 10 23:51:46 np0005480824 quizzical_lumiere[278584]:            "nr_requests": "2",
Oct 10 23:51:46 np0005480824 quizzical_lumiere[278584]:            "parent": "/dev/sr0",
Oct 10 23:51:46 np0005480824 quizzical_lumiere[278584]:            "partitions": {},
Oct 10 23:51:46 np0005480824 quizzical_lumiere[278584]:            "path": "/dev/sr0",
Oct 10 23:51:46 np0005480824 quizzical_lumiere[278584]:            "removable": "1",
Oct 10 23:51:46 np0005480824 quizzical_lumiere[278584]:            "rev": "2.5+",
Oct 10 23:51:46 np0005480824 quizzical_lumiere[278584]:            "ro": "0",
Oct 10 23:51:46 np0005480824 quizzical_lumiere[278584]:            "rotational": "0",
Oct 10 23:51:46 np0005480824 quizzical_lumiere[278584]:            "sas_address": "",
Oct 10 23:51:46 np0005480824 quizzical_lumiere[278584]:            "sas_device_handle": "",
Oct 10 23:51:46 np0005480824 quizzical_lumiere[278584]:            "scheduler_mode": "mq-deadline",
Oct 10 23:51:46 np0005480824 quizzical_lumiere[278584]:            "sectors": 0,
Oct 10 23:51:46 np0005480824 quizzical_lumiere[278584]:            "sectorsize": "2048",
Oct 10 23:51:46 np0005480824 quizzical_lumiere[278584]:            "size": 493568.0,
Oct 10 23:51:46 np0005480824 quizzical_lumiere[278584]:            "support_discard": "2048",
Oct 10 23:51:46 np0005480824 quizzical_lumiere[278584]:            "type": "disk",
Oct 10 23:51:46 np0005480824 quizzical_lumiere[278584]:            "vendor": "QEMU"
Oct 10 23:51:46 np0005480824 quizzical_lumiere[278584]:        }
Oct 10 23:51:46 np0005480824 quizzical_lumiere[278584]:    }
Oct 10 23:51:46 np0005480824 quizzical_lumiere[278584]: ]
Oct 10 23:51:46 np0005480824 systemd[1]: libpod-f26ea95010a04b5ede2fc3ff431197070783f9d89e4ca5533742ba990812241d.scope: Deactivated successfully.
Oct 10 23:51:46 np0005480824 podman[278567]: 2025-10-11 03:51:46.178496248 +0000 UTC m=+1.894205464 container died f26ea95010a04b5ede2fc3ff431197070783f9d89e4ca5533742ba990812241d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_lumiere, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 10 23:51:46 np0005480824 systemd[1]: libpod-f26ea95010a04b5ede2fc3ff431197070783f9d89e4ca5533742ba990812241d.scope: Consumed 1.555s CPU time.
Oct 10 23:51:46 np0005480824 systemd[1]: var-lib-containers-storage-overlay-b370b22ccebb8daf3b52292651287459379ab1eca452c17abdc2c5d7d05f2fa2-merged.mount: Deactivated successfully.
Oct 10 23:51:46 np0005480824 podman[278567]: 2025-10-11 03:51:46.2714751 +0000 UTC m=+1.987184316 container remove f26ea95010a04b5ede2fc3ff431197070783f9d89e4ca5533742ba990812241d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_lumiere, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 10 23:51:46 np0005480824 systemd[1]: libpod-conmon-f26ea95010a04b5ede2fc3ff431197070783f9d89e4ca5533742ba990812241d.scope: Deactivated successfully.
Oct 10 23:51:46 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 10 23:51:46 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:51:46 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 10 23:51:46 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:51:46 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 10 23:51:46 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 10 23:51:46 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:51:46 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:51:46 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 10 23:51:46 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:51:46 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 10 23:51:46 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:51:46 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 49011754-9f5f-4849-b481-08370191e428 does not exist
Oct 10 23:51:46 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev c81f6cab-e023-4940-a3c8-2861a56022d4 does not exist
Oct 10 23:51:46 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev e51a30da-cbcc-4bfb-90b2-f8a1d3115352 does not exist
Oct 10 23:51:46 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 10 23:51:46 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 23:51:46 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 10 23:51:46 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:51:46 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:51:46 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:51:46 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1206: 321 pgs: 321 active+clean; 213 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.5 KiB/s wr, 125 op/s
Oct 10 23:51:47 np0005480824 podman[281103]: 2025-10-11 03:51:47.304893193 +0000 UTC m=+0.052140625 container create 547d3eb488799c92313628431a07064f3f0810fa267e5491a51bec2338a4b36b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_payne, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 10 23:51:47 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:51:47 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:51:47 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 10 23:51:47 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:51:47 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:51:47 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:51:47 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e246 do_prune osdmap full prune enabled
Oct 10 23:51:47 np0005480824 systemd[1]: Started libpod-conmon-547d3eb488799c92313628431a07064f3f0810fa267e5491a51bec2338a4b36b.scope.
Oct 10 23:51:47 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e247 e247: 3 total, 3 up, 3 in
Oct 10 23:51:47 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e247: 3 total, 3 up, 3 in
Oct 10 23:51:47 np0005480824 podman[281103]: 2025-10-11 03:51:47.283586763 +0000 UTC m=+0.030834225 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:51:47 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:51:47 np0005480824 podman[281103]: 2025-10-11 03:51:47.417014826 +0000 UTC m=+0.164262288 container init 547d3eb488799c92313628431a07064f3f0810fa267e5491a51bec2338a4b36b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_payne, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 10 23:51:47 np0005480824 podman[281103]: 2025-10-11 03:51:47.429643982 +0000 UTC m=+0.176891414 container start 547d3eb488799c92313628431a07064f3f0810fa267e5491a51bec2338a4b36b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_payne, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:51:47 np0005480824 zen_payne[281119]: 167 167
Oct 10 23:51:47 np0005480824 systemd[1]: libpod-547d3eb488799c92313628431a07064f3f0810fa267e5491a51bec2338a4b36b.scope: Deactivated successfully.
Oct 10 23:51:47 np0005480824 podman[281103]: 2025-10-11 03:51:47.435982251 +0000 UTC m=+0.183229713 container attach 547d3eb488799c92313628431a07064f3f0810fa267e5491a51bec2338a4b36b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_payne, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:51:47 np0005480824 conmon[281119]: conmon 547d3eb488799c923136 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-547d3eb488799c92313628431a07064f3f0810fa267e5491a51bec2338a4b36b.scope/container/memory.events
Oct 10 23:51:47 np0005480824 podman[281103]: 2025-10-11 03:51:47.437804094 +0000 UTC m=+0.185051526 container died 547d3eb488799c92313628431a07064f3f0810fa267e5491a51bec2338a4b36b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_payne, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:51:47 np0005480824 systemd[1]: var-lib-containers-storage-overlay-9e5e3c5d04fc7fb2d093a8f9b1d4b98d7bb4d3b207d02f3b9899d480cb47eae0-merged.mount: Deactivated successfully.
Oct 10 23:51:47 np0005480824 podman[281103]: 2025-10-11 03:51:47.481955731 +0000 UTC m=+0.229203163 container remove 547d3eb488799c92313628431a07064f3f0810fa267e5491a51bec2338a4b36b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_payne, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 10 23:51:47 np0005480824 systemd[1]: libpod-conmon-547d3eb488799c92313628431a07064f3f0810fa267e5491a51bec2338a4b36b.scope: Deactivated successfully.
Oct 10 23:51:47 np0005480824 podman[281142]: 2025-10-11 03:51:47.713722712 +0000 UTC m=+0.060509422 container create 87046bd8ecc9e2ac1685510291d700fe0c9369249ff011e453926cebbb2b30f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:51:47 np0005480824 systemd[1]: Started libpod-conmon-87046bd8ecc9e2ac1685510291d700fe0c9369249ff011e453926cebbb2b30f6.scope.
Oct 10 23:51:47 np0005480824 podman[281142]: 2025-10-11 03:51:47.695347071 +0000 UTC m=+0.042133801 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:51:47 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:51:47 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/428a7e313acf646e0e17fc82a2e1577ee859b8ad78f54b3ad4f4b15d9b79f43d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:51:47 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/428a7e313acf646e0e17fc82a2e1577ee859b8ad78f54b3ad4f4b15d9b79f43d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:51:47 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/428a7e313acf646e0e17fc82a2e1577ee859b8ad78f54b3ad4f4b15d9b79f43d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:51:47 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/428a7e313acf646e0e17fc82a2e1577ee859b8ad78f54b3ad4f4b15d9b79f43d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:51:47 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/428a7e313acf646e0e17fc82a2e1577ee859b8ad78f54b3ad4f4b15d9b79f43d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 23:51:47 np0005480824 podman[281142]: 2025-10-11 03:51:47.837668602 +0000 UTC m=+0.184455332 container init 87046bd8ecc9e2ac1685510291d700fe0c9369249ff011e453926cebbb2b30f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_shamir, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 10 23:51:47 np0005480824 podman[281142]: 2025-10-11 03:51:47.856794031 +0000 UTC m=+0.203580771 container start 87046bd8ecc9e2ac1685510291d700fe0c9369249ff011e453926cebbb2b30f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 10 23:51:47 np0005480824 podman[281142]: 2025-10-11 03:51:47.861275117 +0000 UTC m=+0.208061857 container attach 87046bd8ecc9e2ac1685510291d700fe0c9369249ff011e453926cebbb2b30f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_shamir, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:51:48 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e247 do_prune osdmap full prune enabled
Oct 10 23:51:48 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e248 e248: 3 total, 3 up, 3 in
Oct 10 23:51:48 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e248: 3 total, 3 up, 3 in
Oct 10 23:51:48 np0005480824 nova_compute[260089]: 2025-10-11 03:51:48.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:51:48 np0005480824 nova_compute[260089]: 2025-10-11 03:51:48.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:51:48 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1209: 321 pgs: 321 active+clean; 228 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 120 KiB/s rd, 3.3 MiB/s wr, 123 op/s
Oct 10 23:51:49 np0005480824 bold_shamir[281156]: --> passed data devices: 0 physical, 3 LVM
Oct 10 23:51:49 np0005480824 bold_shamir[281156]: --> relative data size: 1.0
Oct 10 23:51:49 np0005480824 bold_shamir[281156]: --> All data devices are unavailable
Oct 10 23:51:49 np0005480824 systemd[1]: libpod-87046bd8ecc9e2ac1685510291d700fe0c9369249ff011e453926cebbb2b30f6.scope: Deactivated successfully.
Oct 10 23:51:49 np0005480824 systemd[1]: libpod-87046bd8ecc9e2ac1685510291d700fe0c9369249ff011e453926cebbb2b30f6.scope: Consumed 1.081s CPU time.
Oct 10 23:51:49 np0005480824 podman[281142]: 2025-10-11 03:51:49.037714957 +0000 UTC m=+1.384501707 container died 87046bd8ecc9e2ac1685510291d700fe0c9369249ff011e453926cebbb2b30f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:51:49 np0005480824 systemd[1]: var-lib-containers-storage-overlay-428a7e313acf646e0e17fc82a2e1577ee859b8ad78f54b3ad4f4b15d9b79f43d-merged.mount: Deactivated successfully.
Oct 10 23:51:49 np0005480824 podman[281142]: 2025-10-11 03:51:49.105081329 +0000 UTC m=+1.451868049 container remove 87046bd8ecc9e2ac1685510291d700fe0c9369249ff011e453926cebbb2b30f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 10 23:51:49 np0005480824 systemd[1]: libpod-conmon-87046bd8ecc9e2ac1685510291d700fe0c9369249ff011e453926cebbb2b30f6.scope: Deactivated successfully.
Oct 10 23:51:49 np0005480824 ovn_controller[152667]: 2025-10-11T03:51:49Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:10:2b:86 10.100.0.11
Oct 10 23:51:49 np0005480824 ovn_controller[152667]: 2025-10-11T03:51:49Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:10:2b:86 10.100.0.11
Oct 10 23:51:49 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:51:49 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e248 do_prune osdmap full prune enabled
Oct 10 23:51:49 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e249 e249: 3 total, 3 up, 3 in
Oct 10 23:51:49 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e249: 3 total, 3 up, 3 in
Oct 10 23:51:49 np0005480824 ceph-mon[74326]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Oct 10 23:51:49 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:51:49.667455) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 23:51:49 np0005480824 ceph-mon[74326]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Oct 10 23:51:49 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154709667566, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 1469, "num_deletes": 268, "total_data_size": 1860636, "memory_usage": 1888752, "flush_reason": "Manual Compaction"}
Oct 10 23:51:49 np0005480824 ceph-mon[74326]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Oct 10 23:51:49 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154709681773, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 1813262, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23597, "largest_seqno": 25065, "table_properties": {"data_size": 1806179, "index_size": 4094, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15727, "raw_average_key_size": 20, "raw_value_size": 1791573, "raw_average_value_size": 2351, "num_data_blocks": 180, "num_entries": 762, "num_filter_entries": 762, "num_deletions": 268, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760154634, "oldest_key_time": 1760154634, "file_creation_time": 1760154709, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc2c00b6-74ab-4bd1-957a-6c6a75fb61ca", "db_session_id": "RJ9TM4FJNNQ2AWDFT4YB", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Oct 10 23:51:49 np0005480824 ceph-mon[74326]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 14364 microseconds, and 9649 cpu microseconds.
Oct 10 23:51:49 np0005480824 ceph-mon[74326]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 23:51:49 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:51:49.681841) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 1813262 bytes OK
Oct 10 23:51:49 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:51:49.681870) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Oct 10 23:51:49 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:51:49.684407) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Oct 10 23:51:49 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:51:49.684477) EVENT_LOG_v1 {"time_micros": 1760154709684462, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 23:51:49 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:51:49.684515) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 23:51:49 np0005480824 ceph-mon[74326]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1853745, prev total WAL file size 1853745, number of live WAL files 2.
Oct 10 23:51:49 np0005480824 ceph-mon[74326]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 23:51:49 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:51:49.685903) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353032' seq:72057594037927935, type:22 .. '6C6F676D00373534' seq:0, type:0; will stop at (end)
Oct 10 23:51:49 np0005480824 ceph-mon[74326]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 23:51:49 np0005480824 ceph-mon[74326]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(1770KB)], [53(8773KB)]
Oct 10 23:51:49 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154709685957, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 10797495, "oldest_snapshot_seqno": -1}
Oct 10 23:51:49 np0005480824 ceph-mon[74326]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 5251 keys, 10702190 bytes, temperature: kUnknown
Oct 10 23:51:49 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154709758346, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 10702190, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10660824, "index_size": 27112, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13189, "raw_key_size": 130251, "raw_average_key_size": 24, "raw_value_size": 10560038, "raw_average_value_size": 2011, "num_data_blocks": 1126, "num_entries": 5251, "num_filter_entries": 5251, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760152715, "oldest_key_time": 0, "file_creation_time": 1760154709, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc2c00b6-74ab-4bd1-957a-6c6a75fb61ca", "db_session_id": "RJ9TM4FJNNQ2AWDFT4YB", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Oct 10 23:51:49 np0005480824 ceph-mon[74326]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 23:51:49 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:51:49.758761) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 10702190 bytes
Oct 10 23:51:49 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:51:49.760501) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 148.9 rd, 147.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 8.6 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(11.9) write-amplify(5.9) OK, records in: 5794, records dropped: 543 output_compression: NoCompression
Oct 10 23:51:49 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:51:49.760523) EVENT_LOG_v1 {"time_micros": 1760154709760512, "job": 28, "event": "compaction_finished", "compaction_time_micros": 72531, "compaction_time_cpu_micros": 45653, "output_level": 6, "num_output_files": 1, "total_output_size": 10702190, "num_input_records": 5794, "num_output_records": 5251, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 23:51:49 np0005480824 ceph-mon[74326]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 23:51:49 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154709761207, "job": 28, "event": "table_file_deletion", "file_number": 55}
Oct 10 23:51:49 np0005480824 ceph-mon[74326]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 23:51:49 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154709764257, "job": 28, "event": "table_file_deletion", "file_number": 53}
Oct 10 23:51:49 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:51:49.685735) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:51:49 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:51:49.764500) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:51:49 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:51:49.764518) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:51:49 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:51:49.764523) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:51:49 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:51:49.764527) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:51:49 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:51:49.764531) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:51:49 np0005480824 podman[281338]: 2025-10-11 03:51:49.924997268 +0000 UTC m=+0.059579059 container create ed214377e880bc1e6e61ee718b79b3d4453cb0d093751691c3a0aa19c1255616 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_dirac, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:51:49 np0005480824 systemd[1]: Started libpod-conmon-ed214377e880bc1e6e61ee718b79b3d4453cb0d093751691c3a0aa19c1255616.scope.
Oct 10 23:51:49 np0005480824 nova_compute[260089]: 2025-10-11 03:51:49.968 2 DEBUG oslo_concurrency.lockutils [None req-01db80bc-4574-4c6d-9d9d-02e30ad15f82 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Acquiring lock "b11faa30-2b52-45e0-b5f2-dd05b5050493" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 23:51:49 np0005480824 nova_compute[260089]: 2025-10-11 03:51:49.971 2 DEBUG oslo_concurrency.lockutils [None req-01db80bc-4574-4c6d-9d9d-02e30ad15f82 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Lock "b11faa30-2b52-45e0-b5f2-dd05b5050493" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 23:51:49 np0005480824 nova_compute[260089]: 2025-10-11 03:51:49.988 2 INFO nova.compute.manager [None req-01db80bc-4574-4c6d-9d9d-02e30ad15f82 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Detaching volume b5cac214-f769-4b37-ac35-25810f98302d
Oct 10 23:51:49 np0005480824 podman[281338]: 2025-10-11 03:51:49.896452959 +0000 UTC m=+0.031034830 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:51:49 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:51:50 np0005480824 podman[281338]: 2025-10-11 03:51:50.007584427 +0000 UTC m=+0.142166228 container init ed214377e880bc1e6e61ee718b79b3d4453cb0d093751691c3a0aa19c1255616 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Oct 10 23:51:50 np0005480824 podman[281338]: 2025-10-11 03:51:50.018837502 +0000 UTC m=+0.153419313 container start ed214377e880bc1e6e61ee718b79b3d4453cb0d093751691c3a0aa19c1255616 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 10 23:51:50 np0005480824 podman[281338]: 2025-10-11 03:51:50.022190351 +0000 UTC m=+0.156772152 container attach ed214377e880bc1e6e61ee718b79b3d4453cb0d093751691c3a0aa19c1255616 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_dirac, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:51:50 np0005480824 cranky_dirac[281354]: 167 167
Oct 10 23:51:50 np0005480824 systemd[1]: libpod-ed214377e880bc1e6e61ee718b79b3d4453cb0d093751691c3a0aa19c1255616.scope: Deactivated successfully.
Oct 10 23:51:50 np0005480824 podman[281338]: 2025-10-11 03:51:50.025895048 +0000 UTC m=+0.160476869 container died ed214377e880bc1e6e61ee718b79b3d4453cb0d093751691c3a0aa19c1255616 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_dirac, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 10 23:51:50 np0005480824 systemd[1]: var-lib-containers-storage-overlay-34568580f45ec548099e24c58bbf5a471d94e4a353fdfe17f05c0a390955fbea-merged.mount: Deactivated successfully.
Oct 10 23:51:50 np0005480824 podman[281338]: 2025-10-11 03:51:50.063646854 +0000 UTC m=+0.198228665 container remove ed214377e880bc1e6e61ee718b79b3d4453cb0d093751691c3a0aa19c1255616 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_dirac, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:51:50 np0005480824 systemd[1]: libpod-conmon-ed214377e880bc1e6e61ee718b79b3d4453cb0d093751691c3a0aa19c1255616.scope: Deactivated successfully.
Oct 10 23:51:50 np0005480824 nova_compute[260089]: 2025-10-11 03:51:50.145 2 INFO nova.virt.block_device [None req-01db80bc-4574-4c6d-9d9d-02e30ad15f82 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Attempting to driver detach volume b5cac214-f769-4b37-ac35-25810f98302d from mountpoint /dev/vdb
Oct 10 23:51:50 np0005480824 nova_compute[260089]: 2025-10-11 03:51:50.158 2 DEBUG nova.virt.libvirt.driver [None req-01db80bc-4574-4c6d-9d9d-02e30ad15f82 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Attempting to detach device vdb from instance b11faa30-2b52-45e0-b5f2-dd05b5050493 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 10 23:51:50 np0005480824 nova_compute[260089]: 2025-10-11 03:51:50.159 2 DEBUG nova.virt.libvirt.guest [None req-01db80bc-4574-4c6d-9d9d-02e30ad15f82 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] detach device xml: <disk type="network" device="disk">
Oct 10 23:51:50 np0005480824 nova_compute[260089]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 10 23:51:50 np0005480824 nova_compute[260089]:  <source protocol="rbd" name="volumes/volume-b5cac214-f769-4b37-ac35-25810f98302d">
Oct 10 23:51:50 np0005480824 nova_compute[260089]:    <host name="192.168.122.100" port="6789"/>
Oct 10 23:51:50 np0005480824 nova_compute[260089]:  </source>
Oct 10 23:51:50 np0005480824 nova_compute[260089]:  <target dev="vdb" bus="virtio"/>
Oct 10 23:51:50 np0005480824 nova_compute[260089]:  <serial>b5cac214-f769-4b37-ac35-25810f98302d</serial>
Oct 10 23:51:50 np0005480824 nova_compute[260089]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 10 23:51:50 np0005480824 nova_compute[260089]: </disk>
Oct 10 23:51:50 np0005480824 nova_compute[260089]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 10 23:51:50 np0005480824 nova_compute[260089]: 2025-10-11 03:51:50.172 2 INFO nova.virt.libvirt.driver [None req-01db80bc-4574-4c6d-9d9d-02e30ad15f82 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Successfully detached device vdb from instance b11faa30-2b52-45e0-b5f2-dd05b5050493 from the persistent domain config.
Oct 10 23:51:50 np0005480824 nova_compute[260089]: 2025-10-11 03:51:50.173 2 DEBUG nova.virt.libvirt.driver [None req-01db80bc-4574-4c6d-9d9d-02e30ad15f82 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance b11faa30-2b52-45e0-b5f2-dd05b5050493 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 10 23:51:50 np0005480824 nova_compute[260089]: 2025-10-11 03:51:50.173 2 DEBUG nova.virt.libvirt.guest [None req-01db80bc-4574-4c6d-9d9d-02e30ad15f82 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] detach device xml: <disk type="network" device="disk">
Oct 10 23:51:50 np0005480824 nova_compute[260089]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 10 23:51:50 np0005480824 nova_compute[260089]:  <source protocol="rbd" name="volumes/volume-b5cac214-f769-4b37-ac35-25810f98302d">
Oct 10 23:51:50 np0005480824 nova_compute[260089]:    <host name="192.168.122.100" port="6789"/>
Oct 10 23:51:50 np0005480824 nova_compute[260089]:  </source>
Oct 10 23:51:50 np0005480824 nova_compute[260089]:  <target dev="vdb" bus="virtio"/>
Oct 10 23:51:50 np0005480824 nova_compute[260089]:  <serial>b5cac214-f769-4b37-ac35-25810f98302d</serial>
Oct 10 23:51:50 np0005480824 nova_compute[260089]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 10 23:51:50 np0005480824 nova_compute[260089]: </disk>
Oct 10 23:51:50 np0005480824 nova_compute[260089]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 10 23:51:50 np0005480824 nova_compute[260089]: 2025-10-11 03:51:50.302 2 DEBUG nova.virt.libvirt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Received event <DeviceRemovedEvent: 1760154710.3019478, b11faa30-2b52-45e0-b5f2-dd05b5050493 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 10 23:51:50 np0005480824 nova_compute[260089]: 2025-10-11 03:51:50.309 2 DEBUG nova.virt.libvirt.driver [None req-01db80bc-4574-4c6d-9d9d-02e30ad15f82 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance b11faa30-2b52-45e0-b5f2-dd05b5050493 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 10 23:51:50 np0005480824 nova_compute[260089]: 2025-10-11 03:51:50.315 2 INFO nova.virt.libvirt.driver [None req-01db80bc-4574-4c6d-9d9d-02e30ad15f82 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Successfully detached device vdb from instance b11faa30-2b52-45e0-b5f2-dd05b5050493 from the live domain config.
Oct 10 23:51:50 np0005480824 podman[281378]: 2025-10-11 03:51:50.333760716 +0000 UTC m=+0.071198222 container create 6d9723c7dd0731ae08c47a91301e11af084167a61ef79393fa1f3e218ca8cc0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_germain, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:51:50 np0005480824 systemd[1]: Started libpod-conmon-6d9723c7dd0731ae08c47a91301e11af084167a61ef79393fa1f3e218ca8cc0a.scope.
Oct 10 23:51:50 np0005480824 podman[281378]: 2025-10-11 03:51:50.296912191 +0000 UTC m=+0.034349747 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:51:50 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:51:50 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c888e25faf7f18561b41fe5bd18872461e1c157c6f81e560303b6059695c06d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:51:50 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c888e25faf7f18561b41fe5bd18872461e1c157c6f81e560303b6059695c06d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:51:50 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c888e25faf7f18561b41fe5bd18872461e1c157c6f81e560303b6059695c06d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:51:50 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c888e25faf7f18561b41fe5bd18872461e1c157c6f81e560303b6059695c06d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:51:50 np0005480824 podman[281378]: 2025-10-11 03:51:50.444282371 +0000 UTC m=+0.181719937 container init 6d9723c7dd0731ae08c47a91301e11af084167a61ef79393fa1f3e218ca8cc0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_germain, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 10 23:51:50 np0005480824 podman[281378]: 2025-10-11 03:51:50.456085668 +0000 UTC m=+0.193523174 container start 6d9723c7dd0731ae08c47a91301e11af084167a61ef79393fa1f3e218ca8cc0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_germain, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:51:50 np0005480824 podman[281378]: 2025-10-11 03:51:50.461731161 +0000 UTC m=+0.199168717 container attach 6d9723c7dd0731ae08c47a91301e11af084167a61ef79393fa1f3e218ca8cc0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_germain, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Oct 10 23:51:50 np0005480824 nova_compute[260089]: 2025-10-11 03:51:50.475 2 DEBUG nova.objects.instance [None req-01db80bc-4574-4c6d-9d9d-02e30ad15f82 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Lazy-loading 'flavor' on Instance uuid b11faa30-2b52-45e0-b5f2-dd05b5050493 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 10 23:51:50 np0005480824 nova_compute[260089]: 2025-10-11 03:51:50.511 2 DEBUG oslo_concurrency.lockutils [None req-01db80bc-4574-4c6d-9d9d-02e30ad15f82 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Lock "b11faa30-2b52-45e0-b5f2-dd05b5050493" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.540s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 23:51:50 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1211: 321 pgs: 321 active+clean; 228 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 99 KiB/s rd, 2.7 MiB/s wr, 101 op/s
Oct 10 23:51:51 np0005480824 gifted_germain[281397]: {
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:    "0": [
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:        {
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:            "devices": [
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:                "/dev/loop3"
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:            ],
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:            "lv_name": "ceph_lv0",
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:            "lv_size": "21470642176",
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0d82ce-20ea-470d-959e-f67202028a60,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:            "lv_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:            "name": "ceph_lv0",
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:            "tags": {
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:                "ceph.block_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:                "ceph.cluster_name": "ceph",
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:                "ceph.crush_device_class": "",
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:                "ceph.encrypted": "0",
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:                "ceph.osd_fsid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:                "ceph.osd_id": "0",
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:                "ceph.type": "block",
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:                "ceph.vdo": "0"
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:            },
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:            "type": "block",
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:            "vg_name": "ceph_vg0"
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:        }
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:    ],
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:    "1": [
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:        {
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:            "devices": [
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:                "/dev/loop4"
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:            ],
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:            "lv_name": "ceph_lv1",
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:            "lv_size": "21470642176",
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6875119e-c210-4ad1-aca9-6a8084a5ecc8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:            "lv_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:            "name": "ceph_lv1",
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:            "tags": {
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:                "ceph.block_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:                "ceph.cluster_name": "ceph",
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:                "ceph.crush_device_class": "",
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:                "ceph.encrypted": "0",
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:                "ceph.osd_fsid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:                "ceph.osd_id": "1",
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:                "ceph.type": "block",
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:                "ceph.vdo": "0"
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:            },
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:            "type": "block",
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:            "vg_name": "ceph_vg1"
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:        }
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:    ],
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:    "2": [
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:        {
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:            "devices": [
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:                "/dev/loop5"
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:            ],
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:            "lv_name": "ceph_lv2",
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:            "lv_size": "21470642176",
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e86945e8-6909-4584-9098-cee0dfe9add4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:            "lv_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:            "name": "ceph_lv2",
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:            "tags": {
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:                "ceph.block_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:                "ceph.cluster_name": "ceph",
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:                "ceph.crush_device_class": "",
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:                "ceph.encrypted": "0",
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:                "ceph.osd_fsid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:                "ceph.osd_id": "2",
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:                "ceph.type": "block",
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:                "ceph.vdo": "0"
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:            },
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:            "type": "block",
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:            "vg_name": "ceph_vg2"
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:        }
Oct 10 23:51:51 np0005480824 gifted_germain[281397]:    ]
Oct 10 23:51:51 np0005480824 gifted_germain[281397]: }
Oct 10 23:51:51 np0005480824 systemd[1]: libpod-6d9723c7dd0731ae08c47a91301e11af084167a61ef79393fa1f3e218ca8cc0a.scope: Deactivated successfully.
Oct 10 23:51:51 np0005480824 podman[281378]: 2025-10-11 03:51:51.301894966 +0000 UTC m=+1.039332432 container died 6d9723c7dd0731ae08c47a91301e11af084167a61ef79393fa1f3e218ca8cc0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_germain, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 10 23:51:51 np0005480824 systemd[1]: var-lib-containers-storage-overlay-3c888e25faf7f18561b41fe5bd18872461e1c157c6f81e560303b6059695c06d-merged.mount: Deactivated successfully.
Oct 10 23:51:51 np0005480824 podman[281378]: 2025-10-11 03:51:51.359643702 +0000 UTC m=+1.097081168 container remove 6d9723c7dd0731ae08c47a91301e11af084167a61ef79393fa1f3e218ca8cc0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_germain, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:51:51 np0005480824 systemd[1]: libpod-conmon-6d9723c7dd0731ae08c47a91301e11af084167a61ef79393fa1f3e218ca8cc0a.scope: Deactivated successfully.
Oct 10 23:51:51 np0005480824 nova_compute[260089]: 2025-10-11 03:51:51.438 2 DEBUG oslo_concurrency.lockutils [None req-1db3e94b-765e-4441-b519-c5f6ee7d5b39 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Acquiring lock "b11faa30-2b52-45e0-b5f2-dd05b5050493" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:51:51 np0005480824 nova_compute[260089]: 2025-10-11 03:51:51.439 2 DEBUG oslo_concurrency.lockutils [None req-1db3e94b-765e-4441-b519-c5f6ee7d5b39 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Lock "b11faa30-2b52-45e0-b5f2-dd05b5050493" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:51:51 np0005480824 nova_compute[260089]: 2025-10-11 03:51:51.439 2 DEBUG oslo_concurrency.lockutils [None req-1db3e94b-765e-4441-b519-c5f6ee7d5b39 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Acquiring lock "b11faa30-2b52-45e0-b5f2-dd05b5050493-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:51:51 np0005480824 nova_compute[260089]: 2025-10-11 03:51:51.440 2 DEBUG oslo_concurrency.lockutils [None req-1db3e94b-765e-4441-b519-c5f6ee7d5b39 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Lock "b11faa30-2b52-45e0-b5f2-dd05b5050493-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:51:51 np0005480824 nova_compute[260089]: 2025-10-11 03:51:51.440 2 DEBUG oslo_concurrency.lockutils [None req-1db3e94b-765e-4441-b519-c5f6ee7d5b39 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Lock "b11faa30-2b52-45e0-b5f2-dd05b5050493-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:51:51 np0005480824 nova_compute[260089]: 2025-10-11 03:51:51.441 2 INFO nova.compute.manager [None req-1db3e94b-765e-4441-b519-c5f6ee7d5b39 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Terminating instance#033[00m
Oct 10 23:51:51 np0005480824 nova_compute[260089]: 2025-10-11 03:51:51.443 2 DEBUG nova.compute.manager [None req-1db3e94b-765e-4441-b519-c5f6ee7d5b39 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 10 23:51:51 np0005480824 kernel: tap3c111b42-0d (unregistering): left promiscuous mode
Oct 10 23:51:51 np0005480824 NetworkManager[44969]: <info>  [1760154711.5053] device (tap3c111b42-0d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 10 23:51:51 np0005480824 nova_compute[260089]: 2025-10-11 03:51:51.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:51:51 np0005480824 ovn_controller[152667]: 2025-10-11T03:51:51Z|00099|binding|INFO|Releasing lport 3c111b42-0da1-4752-9b36-2df6a9486510 from this chassis (sb_readonly=0)
Oct 10 23:51:51 np0005480824 ovn_controller[152667]: 2025-10-11T03:51:51Z|00100|binding|INFO|Setting lport 3c111b42-0da1-4752-9b36-2df6a9486510 down in Southbound
Oct 10 23:51:51 np0005480824 ovn_controller[152667]: 2025-10-11T03:51:51Z|00101|binding|INFO|Removing iface tap3c111b42-0d ovn-installed in OVS
Oct 10 23:51:51 np0005480824 nova_compute[260089]: 2025-10-11 03:51:51.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:51:51 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:51.529 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:84:9a:72 10.100.0.9'], port_security=['fa:16:3e:84:9a:72 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'b11faa30-2b52-45e0-b5f2-dd05b5050493', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '69ce475b5af645b7b89607f7ecc196d5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4f5c97de-169f-4c73-b4fd-2b99fea347b8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.200'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f8dd8adb-2052-443b-8fa5-01e320e55d02, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], logical_port=3c111b42-0da1-4752-9b36-2df6a9486510) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 23:51:51 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:51.530 162245 INFO neutron.agent.ovn.metadata.agent [-] Port 3c111b42-0da1-4752-9b36-2df6a9486510 in datapath 53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0 unbound from our chassis#033[00m
Oct 10 23:51:51 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:51.531 162245 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 10 23:51:51 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:51.534 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[5764698b-ffb0-420e-be16-6f39a68e6fbe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:51:51 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:51.535 162245 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0 namespace which is not needed anymore#033[00m
Oct 10 23:51:51 np0005480824 nova_compute[260089]: 2025-10-11 03:51:51.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:51:51 np0005480824 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Deactivated successfully.
Oct 10 23:51:51 np0005480824 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Consumed 15.667s CPU time.
Oct 10 23:51:51 np0005480824 systemd-machined[215071]: Machine qemu-8-instance-00000008 terminated.
Oct 10 23:51:51 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e249 do_prune osdmap full prune enabled
Oct 10 23:51:51 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e250 e250: 3 total, 3 up, 3 in
Oct 10 23:51:51 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e250: 3 total, 3 up, 3 in
Oct 10 23:51:51 np0005480824 nova_compute[260089]: 2025-10-11 03:51:51.680 2 INFO nova.virt.libvirt.driver [-] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Instance destroyed successfully.#033[00m
Oct 10 23:51:51 np0005480824 nova_compute[260089]: 2025-10-11 03:51:51.683 2 DEBUG nova.objects.instance [None req-1db3e94b-765e-4441-b519-c5f6ee7d5b39 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Lazy-loading 'resources' on Instance uuid b11faa30-2b52-45e0-b5f2-dd05b5050493 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:51:51 np0005480824 neutron-haproxy-ovnmeta-53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0[277206]: [NOTICE]   (277211) : haproxy version is 2.8.14-c23fe91
Oct 10 23:51:51 np0005480824 neutron-haproxy-ovnmeta-53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0[277206]: [NOTICE]   (277211) : path to executable is /usr/sbin/haproxy
Oct 10 23:51:51 np0005480824 neutron-haproxy-ovnmeta-53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0[277206]: [WARNING]  (277211) : Exiting Master process...
Oct 10 23:51:51 np0005480824 neutron-haproxy-ovnmeta-53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0[277206]: [ALERT]    (277211) : Current worker (277213) exited with code 143 (Terminated)
Oct 10 23:51:51 np0005480824 neutron-haproxy-ovnmeta-53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0[277206]: [WARNING]  (277211) : All workers exited. Exiting... (0)
Oct 10 23:51:51 np0005480824 systemd[1]: libpod-d6130f0e932e3e796c70d8885498316dcadfa1dcf2de952907715ce71819451d.scope: Deactivated successfully.
Oct 10 23:51:51 np0005480824 podman[281511]: 2025-10-11 03:51:51.699225475 +0000 UTC m=+0.062677893 container died d6130f0e932e3e796c70d8885498316dcadfa1dcf2de952907715ce71819451d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 10 23:51:51 np0005480824 nova_compute[260089]: 2025-10-11 03:51:51.707 2 DEBUG nova.virt.libvirt.vif [None req-1db3e94b-765e-4441-b519-c5f6ee7d5b39 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T03:51:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-840258588',display_name='tempest-VolumesBackupsTest-instance-840258588',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-840258588',id=8,image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJtSOTVNRsdtWhisafwSlo870EnSG9pK9SQO/x/iRe7bsz603dHApUhtqM/qxiNKJaYNpJ6pOnwb0vEkRahc2fbOAUYyeOiooHGledRT7nCnxhw4o4XzozntA+vU4Zea9g==',key_name='tempest-keypair-1624285471',keypairs=<?>,launch_index=0,launched_at=2025-10-11T03:51:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='69ce475b5af645b7b89607f7ecc196d5',ramdisk_id='',reservation_id='r-n2u0milr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesBackupsTest-1570005285',owner_user_name='tempest-VolumesBackupsTest-1570005285-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T03:51:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='0dd21dcc2e2e4870bd3a6eb5146bc451',uuid=b11faa30-2b52-45e0-b5f2-dd05b5050493,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3c111b42-0da1-4752-9b36-2df6a9486510", "address": "fa:16:3e:84:9a:72", "network": {"id": "53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-198655629-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "69ce475b5af645b7b89607f7ecc196d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c111b42-0d", "ovs_interfaceid": "3c111b42-0da1-4752-9b36-2df6a9486510", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 10 23:51:51 np0005480824 nova_compute[260089]: 2025-10-11 03:51:51.708 2 DEBUG nova.network.os_vif_util [None req-1db3e94b-765e-4441-b519-c5f6ee7d5b39 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Converting VIF {"id": "3c111b42-0da1-4752-9b36-2df6a9486510", "address": "fa:16:3e:84:9a:72", "network": {"id": "53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-198655629-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "69ce475b5af645b7b89607f7ecc196d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c111b42-0d", "ovs_interfaceid": "3c111b42-0da1-4752-9b36-2df6a9486510", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 23:51:51 np0005480824 nova_compute[260089]: 2025-10-11 03:51:51.709 2 DEBUG nova.network.os_vif_util [None req-1db3e94b-765e-4441-b519-c5f6ee7d5b39 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:84:9a:72,bridge_name='br-int',has_traffic_filtering=True,id=3c111b42-0da1-4752-9b36-2df6a9486510,network=Network(53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c111b42-0d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 23:51:51 np0005480824 nova_compute[260089]: 2025-10-11 03:51:51.710 2 DEBUG os_vif [None req-1db3e94b-765e-4441-b519-c5f6ee7d5b39 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:84:9a:72,bridge_name='br-int',has_traffic_filtering=True,id=3c111b42-0da1-4752-9b36-2df6a9486510,network=Network(53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c111b42-0d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 10 23:51:51 np0005480824 nova_compute[260089]: 2025-10-11 03:51:51.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:51:51 np0005480824 nova_compute[260089]: 2025-10-11 03:51:51.712 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c111b42-0d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:51:51 np0005480824 nova_compute[260089]: 2025-10-11 03:51:51.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:51:51 np0005480824 nova_compute[260089]: 2025-10-11 03:51:51.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:51:51 np0005480824 nova_compute[260089]: 2025-10-11 03:51:51.719 2 INFO os_vif [None req-1db3e94b-765e-4441-b519-c5f6ee7d5b39 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:84:9a:72,bridge_name='br-int',has_traffic_filtering=True,id=3c111b42-0da1-4752-9b36-2df6a9486510,network=Network(53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c111b42-0d')#033[00m
Oct 10 23:51:51 np0005480824 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d6130f0e932e3e796c70d8885498316dcadfa1dcf2de952907715ce71819451d-userdata-shm.mount: Deactivated successfully.
Oct 10 23:51:51 np0005480824 systemd[1]: var-lib-containers-storage-overlay-7121b1bb49edad82ae55f53a91e24a72d90dfdc61139b1c966efd24ef6a7edf1-merged.mount: Deactivated successfully.
Oct 10 23:51:51 np0005480824 podman[281511]: 2025-10-11 03:51:51.753861218 +0000 UTC m=+0.117313606 container cleanup d6130f0e932e3e796c70d8885498316dcadfa1dcf2de952907715ce71819451d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 10 23:51:51 np0005480824 systemd[1]: libpod-conmon-d6130f0e932e3e796c70d8885498316dcadfa1dcf2de952907715ce71819451d.scope: Deactivated successfully.
Oct 10 23:51:51 np0005480824 podman[281598]: 2025-10-11 03:51:51.826423461 +0000 UTC m=+0.048197442 container remove d6130f0e932e3e796c70d8885498316dcadfa1dcf2de952907715ce71819451d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 10 23:51:51 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:51.836 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[08e254d3-0dba-4b41-b81b-42bba92caada]: (4, ('Sat Oct 11 03:51:51 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0 (d6130f0e932e3e796c70d8885498316dcadfa1dcf2de952907715ce71819451d)\nd6130f0e932e3e796c70d8885498316dcadfa1dcf2de952907715ce71819451d\nSat Oct 11 03:51:51 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0 (d6130f0e932e3e796c70d8885498316dcadfa1dcf2de952907715ce71819451d)\nd6130f0e932e3e796c70d8885498316dcadfa1dcf2de952907715ce71819451d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:51:51 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:51.838 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[9f210290-7986-4d50-aef6-f3be757f180d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:51:51 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:51.840 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap53e5ffdf-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:51:51 np0005480824 kernel: tap53e5ffdf-10: left promiscuous mode
Oct 10 23:51:51 np0005480824 nova_compute[260089]: 2025-10-11 03:51:51.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:51:51 np0005480824 nova_compute[260089]: 2025-10-11 03:51:51.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:51:51 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:51.871 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[de9252b0-eae1-48b4-a711-e9cdd01b0194]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:51:51 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:51.898 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[b0db25bc-042c-4847-a7f2-337db4b7acc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:51:51 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:51.900 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[bc59979f-6ef3-414b-8915-479e64e0efc4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:51:51 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:51.919 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[82f95031-fc6d-4ed7-a691-dd5121e595cb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 413778, 'reachable_time': 17011, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281628, 'error': None, 'target': 'ovnmeta-53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:51:51 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:51.923 162666 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-53e5ffdf-1a4b-4db5-b1e7-a9b6a7b01fd0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 10 23:51:51 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:51:51.923 162666 DEBUG oslo.privsep.daemon [-] privsep: reply[c70275b0-1637-4355-9944-1a99699c7f1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:51:51 np0005480824 systemd[1]: run-netns-ovnmeta\x2d53e5ffdf\x2d1a4b\x2d4db5\x2db1e7\x2da9b6a7b01fd0.mount: Deactivated successfully.
Oct 10 23:51:52 np0005480824 nova_compute[260089]: 2025-10-11 03:51:52.067 2 DEBUG nova.compute.manager [req-5701cfdd-29bb-4cb6-97dd-69b00bf6f156 req-dc888609-6992-49b6-b593-ffdf2ab6f91a 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Received event network-vif-unplugged-3c111b42-0da1-4752-9b36-2df6a9486510 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:51:52 np0005480824 nova_compute[260089]: 2025-10-11 03:51:52.067 2 DEBUG oslo_concurrency.lockutils [req-5701cfdd-29bb-4cb6-97dd-69b00bf6f156 req-dc888609-6992-49b6-b593-ffdf2ab6f91a 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "b11faa30-2b52-45e0-b5f2-dd05b5050493-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:51:52 np0005480824 nova_compute[260089]: 2025-10-11 03:51:52.067 2 DEBUG oslo_concurrency.lockutils [req-5701cfdd-29bb-4cb6-97dd-69b00bf6f156 req-dc888609-6992-49b6-b593-ffdf2ab6f91a 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "b11faa30-2b52-45e0-b5f2-dd05b5050493-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:51:52 np0005480824 nova_compute[260089]: 2025-10-11 03:51:52.068 2 DEBUG oslo_concurrency.lockutils [req-5701cfdd-29bb-4cb6-97dd-69b00bf6f156 req-dc888609-6992-49b6-b593-ffdf2ab6f91a 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "b11faa30-2b52-45e0-b5f2-dd05b5050493-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:51:52 np0005480824 nova_compute[260089]: 2025-10-11 03:51:52.068 2 DEBUG nova.compute.manager [req-5701cfdd-29bb-4cb6-97dd-69b00bf6f156 req-dc888609-6992-49b6-b593-ffdf2ab6f91a 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] No waiting events found dispatching network-vif-unplugged-3c111b42-0da1-4752-9b36-2df6a9486510 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 10 23:51:52 np0005480824 nova_compute[260089]: 2025-10-11 03:51:52.069 2 DEBUG nova.compute.manager [req-5701cfdd-29bb-4cb6-97dd-69b00bf6f156 req-dc888609-6992-49b6-b593-ffdf2ab6f91a 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Received event network-vif-unplugged-3c111b42-0da1-4752-9b36-2df6a9486510 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 10 23:51:52 np0005480824 podman[281656]: 2025-10-11 03:51:52.125303118 +0000 UTC m=+0.057328257 container create 48443f7ba2e62dae154743b68aed44abd18724392cb380748922d8950fcd6fc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_perlman, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 10 23:51:52 np0005480824 systemd[1]: Started libpod-conmon-48443f7ba2e62dae154743b68aed44abd18724392cb380748922d8950fcd6fc8.scope.
Oct 10 23:51:52 np0005480824 nova_compute[260089]: 2025-10-11 03:51:52.181 2 INFO nova.virt.libvirt.driver [None req-1db3e94b-765e-4441-b519-c5f6ee7d5b39 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Deleting instance files /var/lib/nova/instances/b11faa30-2b52-45e0-b5f2-dd05b5050493_del#033[00m
Oct 10 23:51:52 np0005480824 nova_compute[260089]: 2025-10-11 03:51:52.182 2 INFO nova.virt.libvirt.driver [None req-1db3e94b-765e-4441-b519-c5f6ee7d5b39 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Deletion of /var/lib/nova/instances/b11faa30-2b52-45e0-b5f2-dd05b5050493_del complete#033[00m
Oct 10 23:51:52 np0005480824 podman[281656]: 2025-10-11 03:51:52.097948216 +0000 UTC m=+0.029973445 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:51:52 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:51:52 np0005480824 podman[281656]: 2025-10-11 03:51:52.21013344 +0000 UTC m=+0.142158579 container init 48443f7ba2e62dae154743b68aed44abd18724392cb380748922d8950fcd6fc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:51:52 np0005480824 podman[281656]: 2025-10-11 03:51:52.221499617 +0000 UTC m=+0.153524746 container start 48443f7ba2e62dae154743b68aed44abd18724392cb380748922d8950fcd6fc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_perlman, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:51:52 np0005480824 podman[281656]: 2025-10-11 03:51:52.225077281 +0000 UTC m=+0.157102430 container attach 48443f7ba2e62dae154743b68aed44abd18724392cb380748922d8950fcd6fc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_perlman, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:51:52 np0005480824 eloquent_perlman[281673]: 167 167
Oct 10 23:51:52 np0005480824 systemd[1]: libpod-48443f7ba2e62dae154743b68aed44abd18724392cb380748922d8950fcd6fc8.scope: Deactivated successfully.
Oct 10 23:51:52 np0005480824 conmon[281673]: conmon 48443f7ba2e62dae1547 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-48443f7ba2e62dae154743b68aed44abd18724392cb380748922d8950fcd6fc8.scope/container/memory.events
Oct 10 23:51:52 np0005480824 podman[281656]: 2025-10-11 03:51:52.227796375 +0000 UTC m=+0.159821514 container died 48443f7ba2e62dae154743b68aed44abd18724392cb380748922d8950fcd6fc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 10 23:51:52 np0005480824 nova_compute[260089]: 2025-10-11 03:51:52.236 2 INFO nova.compute.manager [None req-1db3e94b-765e-4441-b519-c5f6ee7d5b39 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Took 0.79 seconds to destroy the instance on the hypervisor.#033[00m
Oct 10 23:51:52 np0005480824 nova_compute[260089]: 2025-10-11 03:51:52.239 2 DEBUG oslo.service.loopingcall [None req-1db3e94b-765e-4441-b519-c5f6ee7d5b39 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 10 23:51:52 np0005480824 nova_compute[260089]: 2025-10-11 03:51:52.239 2 DEBUG nova.compute.manager [-] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 10 23:51:52 np0005480824 nova_compute[260089]: 2025-10-11 03:51:52.239 2 DEBUG nova.network.neutron [-] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 10 23:51:52 np0005480824 systemd[1]: var-lib-containers-storage-overlay-f332ca5ec377f88bd2f177fe51765a15c3339cddf89e8080b0ed9bf4569efdb5-merged.mount: Deactivated successfully.
Oct 10 23:51:52 np0005480824 podman[281656]: 2025-10-11 03:51:52.262233393 +0000 UTC m=+0.194258522 container remove 48443f7ba2e62dae154743b68aed44abd18724392cb380748922d8950fcd6fc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_perlman, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 10 23:51:52 np0005480824 systemd[1]: libpod-conmon-48443f7ba2e62dae154743b68aed44abd18724392cb380748922d8950fcd6fc8.scope: Deactivated successfully.
Oct 10 23:51:52 np0005480824 podman[281697]: 2025-10-11 03:51:52.50187082 +0000 UTC m=+0.082876697 container create 93a739e238f58cff34c5bf6092cbe35b8dc18c8268254741b04a65881473ff1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:51:52 np0005480824 systemd[1]: Started libpod-conmon-93a739e238f58cff34c5bf6092cbe35b8dc18c8268254741b04a65881473ff1e.scope.
Oct 10 23:51:52 np0005480824 podman[281697]: 2025-10-11 03:51:52.467916543 +0000 UTC m=+0.048922500 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:51:52 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:51:52 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da9779119b3d88bdd14e292400dec5f0439bcdc70f339a57fddefe64cc833e2d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:51:52 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da9779119b3d88bdd14e292400dec5f0439bcdc70f339a57fddefe64cc833e2d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:51:52 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da9779119b3d88bdd14e292400dec5f0439bcdc70f339a57fddefe64cc833e2d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:51:52 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da9779119b3d88bdd14e292400dec5f0439bcdc70f339a57fddefe64cc833e2d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:51:52 np0005480824 podman[281697]: 2025-10-11 03:51:52.596292226 +0000 UTC m=+0.177298113 container init 93a739e238f58cff34c5bf6092cbe35b8dc18c8268254741b04a65881473ff1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 10 23:51:52 np0005480824 podman[281697]: 2025-10-11 03:51:52.610113941 +0000 UTC m=+0.191119828 container start 93a739e238f58cff34c5bf6092cbe35b8dc18c8268254741b04a65881473ff1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 10 23:51:52 np0005480824 podman[281697]: 2025-10-11 03:51:52.613931821 +0000 UTC m=+0.194937688 container attach 93a739e238f58cff34c5bf6092cbe35b8dc18c8268254741b04a65881473ff1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_mcclintock, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:51:52 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1213: 321 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 313 active+clean; 204 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 812 KiB/s rd, 2.0 MiB/s wr, 276 op/s
Oct 10 23:51:52 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e250 do_prune osdmap full prune enabled
Oct 10 23:51:52 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e251 e251: 3 total, 3 up, 3 in
Oct 10 23:51:52 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e251: 3 total, 3 up, 3 in
Oct 10 23:51:52 np0005480824 nova_compute[260089]: 2025-10-11 03:51:52.865 2 DEBUG nova.network.neutron [-] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:51:52 np0005480824 nova_compute[260089]: 2025-10-11 03:51:52.911 2 INFO nova.compute.manager [-] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Took 0.67 seconds to deallocate network for instance.#033[00m
Oct 10 23:51:52 np0005480824 nova_compute[260089]: 2025-10-11 03:51:52.957 2 DEBUG oslo_concurrency.lockutils [None req-1db3e94b-765e-4441-b519-c5f6ee7d5b39 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:51:52 np0005480824 nova_compute[260089]: 2025-10-11 03:51:52.958 2 DEBUG oslo_concurrency.lockutils [None req-1db3e94b-765e-4441-b519-c5f6ee7d5b39 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:51:53 np0005480824 nova_compute[260089]: 2025-10-11 03:51:53.070 2 DEBUG oslo_concurrency.processutils [None req-1db3e94b-765e-4441-b519-c5f6ee7d5b39 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:51:53 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:51:53 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2186584130' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:51:53 np0005480824 nova_compute[260089]: 2025-10-11 03:51:53.536 2 DEBUG oslo_concurrency.processutils [None req-1db3e94b-765e-4441-b519-c5f6ee7d5b39 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:51:53 np0005480824 nova_compute[260089]: 2025-10-11 03:51:53.546 2 DEBUG nova.compute.provider_tree [None req-1db3e94b-765e-4441-b519-c5f6ee7d5b39 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 23:51:53 np0005480824 nova_compute[260089]: 2025-10-11 03:51:53.609 2 DEBUG nova.scheduler.client.report [None req-1db3e94b-765e-4441-b519-c5f6ee7d5b39 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 23:51:53 np0005480824 nova_compute[260089]: 2025-10-11 03:51:53.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:51:53 np0005480824 nova_compute[260089]: 2025-10-11 03:51:53.689 2 DEBUG oslo_concurrency.lockutils [None req-1db3e94b-765e-4441-b519-c5f6ee7d5b39 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.731s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:51:53 np0005480824 practical_mcclintock[281714]: {
Oct 10 23:51:53 np0005480824 practical_mcclintock[281714]:    "1d0d82ce-20ea-470d-959e-f67202028a60": {
Oct 10 23:51:53 np0005480824 practical_mcclintock[281714]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:51:53 np0005480824 practical_mcclintock[281714]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 10 23:51:53 np0005480824 practical_mcclintock[281714]:        "osd_id": 0,
Oct 10 23:51:53 np0005480824 practical_mcclintock[281714]:        "osd_uuid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:51:53 np0005480824 practical_mcclintock[281714]:        "type": "bluestore"
Oct 10 23:51:53 np0005480824 practical_mcclintock[281714]:    },
Oct 10 23:51:53 np0005480824 practical_mcclintock[281714]:    "6875119e-c210-4ad1-aca9-6a8084a5ecc8": {
Oct 10 23:51:53 np0005480824 practical_mcclintock[281714]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:51:53 np0005480824 practical_mcclintock[281714]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 10 23:51:53 np0005480824 practical_mcclintock[281714]:        "osd_id": 1,
Oct 10 23:51:53 np0005480824 practical_mcclintock[281714]:        "osd_uuid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:51:53 np0005480824 practical_mcclintock[281714]:        "type": "bluestore"
Oct 10 23:51:53 np0005480824 practical_mcclintock[281714]:    },
Oct 10 23:51:53 np0005480824 practical_mcclintock[281714]:    "e86945e8-6909-4584-9098-cee0dfe9add4": {
Oct 10 23:51:53 np0005480824 practical_mcclintock[281714]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:51:53 np0005480824 practical_mcclintock[281714]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 10 23:51:53 np0005480824 practical_mcclintock[281714]:        "osd_id": 2,
Oct 10 23:51:53 np0005480824 practical_mcclintock[281714]:        "osd_uuid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:51:53 np0005480824 practical_mcclintock[281714]:        "type": "bluestore"
Oct 10 23:51:53 np0005480824 practical_mcclintock[281714]:    }
Oct 10 23:51:53 np0005480824 practical_mcclintock[281714]: }
Oct 10 23:51:53 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e251 do_prune osdmap full prune enabled
Oct 10 23:51:53 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e252 e252: 3 total, 3 up, 3 in
Oct 10 23:51:53 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e252: 3 total, 3 up, 3 in
Oct 10 23:51:53 np0005480824 systemd[1]: libpod-93a739e238f58cff34c5bf6092cbe35b8dc18c8268254741b04a65881473ff1e.scope: Deactivated successfully.
Oct 10 23:51:53 np0005480824 systemd[1]: libpod-93a739e238f58cff34c5bf6092cbe35b8dc18c8268254741b04a65881473ff1e.scope: Consumed 1.132s CPU time.
Oct 10 23:51:53 np0005480824 podman[281697]: 2025-10-11 03:51:53.771908978 +0000 UTC m=+1.352914875 container died 93a739e238f58cff34c5bf6092cbe35b8dc18c8268254741b04a65881473ff1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_mcclintock, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:51:53 np0005480824 systemd[1]: var-lib-containers-storage-overlay-da9779119b3d88bdd14e292400dec5f0439bcdc70f339a57fddefe64cc833e2d-merged.mount: Deactivated successfully.
Oct 10 23:51:53 np0005480824 nova_compute[260089]: 2025-10-11 03:51:53.813 2 INFO nova.scheduler.client.report [None req-1db3e94b-765e-4441-b519-c5f6ee7d5b39 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Deleted allocations for instance b11faa30-2b52-45e0-b5f2-dd05b5050493#033[00m
Oct 10 23:51:53 np0005480824 podman[281697]: 2025-10-11 03:51:53.846984351 +0000 UTC m=+1.427990248 container remove 93a739e238f58cff34c5bf6092cbe35b8dc18c8268254741b04a65881473ff1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_mcclintock, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:51:53 np0005480824 systemd[1]: libpod-conmon-93a739e238f58cff34c5bf6092cbe35b8dc18c8268254741b04a65881473ff1e.scope: Deactivated successfully.
Oct 10 23:51:53 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 10 23:51:53 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:51:53 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 10 23:51:53 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:51:53 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 0e6d9fe5-b1de-4df3-ac5c-b752255d6869 does not exist
Oct 10 23:51:53 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 4f1a8a1a-72e2-4f86-a9eb-3b5cf2d40e1d does not exist
Oct 10 23:51:54 np0005480824 podman[281783]: 2025-10-11 03:51:54.024751485 +0000 UTC m=+0.073185261 container health_status f2b19cad22d0fbdd185b264c3c8b6443d09a02319dc9fb0585dc69a0f24a758d (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_managed=true)
Oct 10 23:51:54 np0005480824 podman[281782]: 2025-10-11 03:51:54.060914163 +0000 UTC m=+0.115015441 container health_status 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:51:54 np0005480824 nova_compute[260089]: 2025-10-11 03:51:54.356 2 DEBUG nova.compute.manager [req-fba03f4a-8cc8-4f7d-8432-abc59a529303 req-0506af1f-6ee8-4000-b83b-435e2e67404b 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Received event network-vif-plugged-3c111b42-0da1-4752-9b36-2df6a9486510 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 23:51:54 np0005480824 nova_compute[260089]: 2025-10-11 03:51:54.356 2 DEBUG oslo_concurrency.lockutils [req-fba03f4a-8cc8-4f7d-8432-abc59a529303 req-0506af1f-6ee8-4000-b83b-435e2e67404b 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "b11faa30-2b52-45e0-b5f2-dd05b5050493-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 23:51:54 np0005480824 nova_compute[260089]: 2025-10-11 03:51:54.356 2 DEBUG oslo_concurrency.lockutils [req-fba03f4a-8cc8-4f7d-8432-abc59a529303 req-0506af1f-6ee8-4000-b83b-435e2e67404b 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "b11faa30-2b52-45e0-b5f2-dd05b5050493-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 23:51:54 np0005480824 nova_compute[260089]: 2025-10-11 03:51:54.356 2 DEBUG oslo_concurrency.lockutils [req-fba03f4a-8cc8-4f7d-8432-abc59a529303 req-0506af1f-6ee8-4000-b83b-435e2e67404b 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "b11faa30-2b52-45e0-b5f2-dd05b5050493-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 23:51:54 np0005480824 nova_compute[260089]: 2025-10-11 03:51:54.357 2 DEBUG nova.compute.manager [req-fba03f4a-8cc8-4f7d-8432-abc59a529303 req-0506af1f-6ee8-4000-b83b-435e2e67404b 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] No waiting events found dispatching network-vif-plugged-3c111b42-0da1-4752-9b36-2df6a9486510 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 10 23:51:54 np0005480824 nova_compute[260089]: 2025-10-11 03:51:54.357 2 WARNING nova.compute.manager [req-fba03f4a-8cc8-4f7d-8432-abc59a529303 req-0506af1f-6ee8-4000-b83b-435e2e67404b 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Received unexpected event network-vif-plugged-3c111b42-0da1-4752-9b36-2df6a9486510 for instance with vm_state deleted and task_state None.
Oct 10 23:51:54 np0005480824 nova_compute[260089]: 2025-10-11 03:51:54.357 2 DEBUG nova.compute.manager [req-fba03f4a-8cc8-4f7d-8432-abc59a529303 req-0506af1f-6ee8-4000-b83b-435e2e67404b 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Received event network-vif-deleted-3c111b42-0da1-4752-9b36-2df6a9486510 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 23:51:54 np0005480824 nova_compute[260089]: 2025-10-11 03:51:54.417 2 DEBUG oslo_concurrency.lockutils [None req-1db3e94b-765e-4441-b519-c5f6ee7d5b39 0dd21dcc2e2e4870bd3a6eb5146bc451 69ce475b5af645b7b89607f7ecc196d5 - - default default] Lock "b11faa30-2b52-45e0-b5f2-dd05b5050493" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.978s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 23:51:54 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:51:54 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e252 do_prune osdmap full prune enabled
Oct 10 23:51:54 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e253 e253: 3 total, 3 up, 3 in
Oct 10 23:51:54 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e253: 3 total, 3 up, 3 in
Oct 10 23:51:54 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1217: 321 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 313 active+clean; 204 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.6 MiB/s wr, 366 op/s
Oct 10 23:51:54 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:51:54 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:51:55 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:51:55 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1354302314' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:51:55 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:51:55 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1354302314' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:51:56 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1218: 321 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 313 active+clean; 204 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 861 KiB/s rd, 2.1 MiB/s wr, 293 op/s
Oct 10 23:51:56 np0005480824 nova_compute[260089]: 2025-10-11 03:51:56.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 23:51:56 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e253 do_prune osdmap full prune enabled
Oct 10 23:51:56 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e254 e254: 3 total, 3 up, 3 in
Oct 10 23:51:56 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e254: 3 total, 3 up, 3 in
Oct 10 23:51:57 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e254 do_prune osdmap full prune enabled
Oct 10 23:51:57 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e255 e255: 3 total, 3 up, 3 in
Oct 10 23:51:57 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e255: 3 total, 3 up, 3 in
Oct 10 23:51:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:51:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:51:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:51:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:51:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:51:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:51:58 np0005480824 nova_compute[260089]: 2025-10-11 03:51:58.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 23:51:58 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1221: 321 pgs: 321 active+clean; 167 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 84 KiB/s rd, 32 KiB/s wr, 117 op/s
Oct 10 23:51:58 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e255 do_prune osdmap full prune enabled
Oct 10 23:51:58 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e256 e256: 3 total, 3 up, 3 in
Oct 10 23:51:58 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e256: 3 total, 3 up, 3 in
Oct 10 23:51:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:51:59 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3745820687' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:51:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:51:59 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3745820687' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:51:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:51:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e256 do_prune osdmap full prune enabled
Oct 10 23:51:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e257 e257: 3 total, 3 up, 3 in
Oct 10 23:51:59 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e257: 3 total, 3 up, 3 in
Oct 10 23:52:00 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1224: 321 pgs: 321 active+clean; 167 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 103 KiB/s rd, 40 KiB/s wr, 145 op/s
Oct 10 23:52:01 np0005480824 nova_compute[260089]: 2025-10-11 03:52:01.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 23:52:02 np0005480824 podman[281872]: 2025-10-11 03:52:02.060665984 +0000 UTC m=+0.106034411 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true)
Oct 10 23:52:02 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1225: 321 pgs: 321 active+clean; 167 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 139 KiB/s rd, 35 KiB/s wr, 207 op/s
Oct 10 23:52:03 np0005480824 ovn_controller[152667]: 2025-10-11T03:52:03Z|00102|binding|INFO|Releasing lport fd35b05a-29b5-4478-aa1a-5883664f9c48 from this chassis (sb_readonly=0)
Oct 10 23:52:03 np0005480824 nova_compute[260089]: 2025-10-11 03:52:03.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 23:52:03 np0005480824 nova_compute[260089]: 2025-10-11 03:52:03.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 23:52:03 np0005480824 nova_compute[260089]: 2025-10-11 03:52:03.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 23:52:04 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:52:04 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e257 do_prune osdmap full prune enabled
Oct 10 23:52:04 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e258 e258: 3 total, 3 up, 3 in
Oct 10 23:52:04 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e258: 3 total, 3 up, 3 in
Oct 10 23:52:04 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1227: 321 pgs: 321 active+clean; 167 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 91 KiB/s rd, 8.7 KiB/s wr, 143 op/s
Oct 10 23:52:06 np0005480824 nova_compute[260089]: 2025-10-11 03:52:06.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 23:52:06 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1228: 321 pgs: 321 active+clean; 167 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 6.6 KiB/s wr, 108 op/s
Oct 10 23:52:06 np0005480824 nova_compute[260089]: 2025-10-11 03:52:06.678 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760154711.6773806, b11faa30-2b52-45e0-b5f2-dd05b5050493 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 10 23:52:06 np0005480824 nova_compute[260089]: 2025-10-11 03:52:06.679 2 INFO nova.compute.manager [-] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] VM Stopped (Lifecycle Event)
Oct 10 23:52:06 np0005480824 nova_compute[260089]: 2025-10-11 03:52:06.713 2 DEBUG nova.compute.manager [None req-704dfe09-f3f1-4d59-adcb-0f58156c0924 - - - - - -] [instance: b11faa30-2b52-45e0-b5f2-dd05b5050493] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 23:52:06 np0005480824 nova_compute[260089]: 2025-10-11 03:52:06.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 23:52:08 np0005480824 podman[281898]: 2025-10-11 03:52:08.030873594 +0000 UTC m=+0.084385782 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 10 23:52:08 np0005480824 nova_compute[260089]: 2025-10-11 03:52:08.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 23:52:08 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1229: 321 pgs: 321 active+clean; 167 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s rd, 7.1 KiB/s wr, 103 op/s
Oct 10 23:52:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:52:10 np0005480824 nova_compute[260089]: 2025-10-11 03:52:10.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 23:52:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:10.497 162245 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 23:52:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:10.497 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 23:52:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:10.498 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 23:52:10 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1230: 321 pgs: 321 active+clean; 167 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 6.4 KiB/s wr, 93 op/s
Oct 10 23:52:11 np0005480824 nova_compute[260089]: 2025-10-11 03:52:11.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 23:52:11 np0005480824 nova_compute[260089]: 2025-10-11 03:52:11.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 23:52:11 np0005480824 nova_compute[260089]: 2025-10-11 03:52:11.841 2 DEBUG oslo_concurrency.lockutils [None req-b18ac589-d167-488a-9308-29dc0842a9ac d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Acquiring lock "d22b35e9-badc-40d1-952e-60cdfd60decb" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 23:52:11 np0005480824 nova_compute[260089]: 2025-10-11 03:52:11.842 2 DEBUG oslo_concurrency.lockutils [None req-b18ac589-d167-488a-9308-29dc0842a9ac d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Lock "d22b35e9-badc-40d1-952e-60cdfd60decb" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 23:52:11 np0005480824 nova_compute[260089]: 2025-10-11 03:52:11.861 2 DEBUG nova.objects.instance [None req-b18ac589-d167-488a-9308-29dc0842a9ac d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Lazy-loading 'flavor' on Instance uuid d22b35e9-badc-40d1-952e-60cdfd60decb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 10 23:52:11 np0005480824 nova_compute[260089]: 2025-10-11 03:52:11.908 2 DEBUG oslo_concurrency.lockutils [None req-b18ac589-d167-488a-9308-29dc0842a9ac d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Lock "d22b35e9-badc-40d1-952e-60cdfd60decb" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.066s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 23:52:12 np0005480824 nova_compute[260089]: 2025-10-11 03:52:12.122 2 DEBUG oslo_concurrency.lockutils [None req-b18ac589-d167-488a-9308-29dc0842a9ac d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Acquiring lock "d22b35e9-badc-40d1-952e-60cdfd60decb" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 23:52:12 np0005480824 nova_compute[260089]: 2025-10-11 03:52:12.123 2 DEBUG oslo_concurrency.lockutils [None req-b18ac589-d167-488a-9308-29dc0842a9ac d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Lock "d22b35e9-badc-40d1-952e-60cdfd60decb" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 23:52:12 np0005480824 nova_compute[260089]: 2025-10-11 03:52:12.123 2 INFO nova.compute.manager [None req-b18ac589-d167-488a-9308-29dc0842a9ac d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Attaching volume 0144e423-c58d-4178-a560-ee8d2f9824b2 to /dev/vdb
Oct 10 23:52:12 np0005480824 nova_compute[260089]: 2025-10-11 03:52:12.260 2 DEBUG os_brick.utils [None req-b18ac589-d167-488a-9308-29dc0842a9ac d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 10 23:52:12 np0005480824 nova_compute[260089]: 2025-10-11 03:52:12.261 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 23:52:12 np0005480824 nova_compute[260089]: 2025-10-11 03:52:12.272 676 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 23:52:12 np0005480824 nova_compute[260089]: 2025-10-11 03:52:12.272 676 DEBUG oslo.privsep.daemon [-] privsep: reply[24d568c8-52df-4eb9-a04e-4a4357ccaffc]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 23:52:12 np0005480824 nova_compute[260089]: 2025-10-11 03:52:12.273 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 23:52:12 np0005480824 nova_compute[260089]: 2025-10-11 03:52:12.282 676 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 23:52:12 np0005480824 nova_compute[260089]: 2025-10-11 03:52:12.283 676 DEBUG oslo.privsep.daemon [-] privsep: reply[0f82bccf-2558-422f-9564-832f9b6aa7ae]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d5d671ddab5a', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 23:52:12 np0005480824 nova_compute[260089]: 2025-10-11 03:52:12.284 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 23:52:12 np0005480824 nova_compute[260089]: 2025-10-11 03:52:12.292 676 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 23:52:12 np0005480824 nova_compute[260089]: 2025-10-11 03:52:12.292 676 DEBUG oslo.privsep.daemon [-] privsep: reply[6f79c834-7c85-4caa-8e85-52a948b94e31]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 23:52:12 np0005480824 nova_compute[260089]: 2025-10-11 03:52:12.293 676 DEBUG oslo.privsep.daemon [-] privsep: reply[06fb6ba6-9e0b-4849-b123-e574fca0871d]: (4, 'fb3a2fb1-9efa-43f0-a057-bf422ac6b8d7') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 23:52:12 np0005480824 nova_compute[260089]: 2025-10-11 03:52:12.294 2 DEBUG oslo_concurrency.processutils [None req-b18ac589-d167-488a-9308-29dc0842a9ac d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 23:52:12 np0005480824 nova_compute[260089]: 2025-10-11 03:52:12.317 2 DEBUG oslo_concurrency.processutils [None req-b18ac589-d167-488a-9308-29dc0842a9ac d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] CMD "nvme version" returned: 0 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 23:52:12 np0005480824 nova_compute[260089]: 2025-10-11 03:52:12.320 2 DEBUG os_brick.initiator.connectors.lightos [None req-b18ac589-d167-488a-9308-29dc0842a9ac d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 10 23:52:12 np0005480824 nova_compute[260089]: 2025-10-11 03:52:12.321 2 DEBUG os_brick.initiator.connectors.lightos [None req-b18ac589-d167-488a-9308-29dc0842a9ac d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 10 23:52:12 np0005480824 nova_compute[260089]: 2025-10-11 03:52:12.321 2 DEBUG os_brick.initiator.connectors.lightos [None req-b18ac589-d167-488a-9308-29dc0842a9ac d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 10 23:52:12 np0005480824 nova_compute[260089]: 2025-10-11 03:52:12.321 2 DEBUG os_brick.utils [None req-b18ac589-d167-488a-9308-29dc0842a9ac d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] <== get_connector_properties: return (61ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d5d671ddab5a', 'do_local_attach': False, 'nvme_hostid': '83042a20-0f72-4c47-8453-e72ead378624', 'system uuid': 'fb3a2fb1-9efa-43f0-a057-bf422ac6b8d7', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 10 23:52:12 np0005480824 nova_compute[260089]: 2025-10-11 03:52:12.322 2 DEBUG nova.virt.block_device [None req-b18ac589-d167-488a-9308-29dc0842a9ac d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Updating existing volume attachment record: 06c76984-11b7-437d-859e-783b24f969ec _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 10 23:52:12 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1231: 321 pgs: 321 active+clean; 167 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s rd, 1.2 KiB/s wr, 7 op/s
Oct 10 23:52:12 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:52:12 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3946122846' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:52:13 np0005480824 nova_compute[260089]: 2025-10-11 03:52:13.080 2 DEBUG nova.objects.instance [None req-b18ac589-d167-488a-9308-29dc0842a9ac d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Lazy-loading 'flavor' on Instance uuid d22b35e9-badc-40d1-952e-60cdfd60decb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 10 23:52:13 np0005480824 nova_compute[260089]: 2025-10-11 03:52:13.115 2 DEBUG nova.virt.libvirt.driver [None req-b18ac589-d167-488a-9308-29dc0842a9ac d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Attempting to attach volume 0144e423-c58d-4178-a560-ee8d2f9824b2 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Oct 10 23:52:13 np0005480824 nova_compute[260089]: 2025-10-11 03:52:13.120 2 DEBUG nova.virt.libvirt.guest [None req-b18ac589-d167-488a-9308-29dc0842a9ac d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] attach device xml: <disk type="network" device="disk">
Oct 10 23:52:13 np0005480824 nova_compute[260089]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 10 23:52:13 np0005480824 nova_compute[260089]:  <source protocol="rbd" name="volumes/volume-0144e423-c58d-4178-a560-ee8d2f9824b2">
Oct 10 23:52:13 np0005480824 nova_compute[260089]:    <host name="192.168.122.100" port="6789"/>
Oct 10 23:52:13 np0005480824 nova_compute[260089]:  </source>
Oct 10 23:52:13 np0005480824 nova_compute[260089]:  <auth username="openstack">
Oct 10 23:52:13 np0005480824 nova_compute[260089]:    <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 10 23:52:13 np0005480824 nova_compute[260089]:  </auth>
Oct 10 23:52:13 np0005480824 nova_compute[260089]:  <target dev="vdb" bus="virtio"/>
Oct 10 23:52:13 np0005480824 nova_compute[260089]:  <serial>0144e423-c58d-4178-a560-ee8d2f9824b2</serial>
Oct 10 23:52:13 np0005480824 nova_compute[260089]: </disk>
Oct 10 23:52:13 np0005480824 nova_compute[260089]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct 10 23:52:13 np0005480824 nova_compute[260089]: 2025-10-11 03:52:13.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:52:13 np0005480824 nova_compute[260089]: 2025-10-11 03:52:13.253 2 DEBUG nova.virt.libvirt.driver [None req-b18ac589-d167-488a-9308-29dc0842a9ac d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 23:52:13 np0005480824 nova_compute[260089]: 2025-10-11 03:52:13.253 2 DEBUG nova.virt.libvirt.driver [None req-b18ac589-d167-488a-9308-29dc0842a9ac d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 23:52:13 np0005480824 nova_compute[260089]: 2025-10-11 03:52:13.253 2 DEBUG nova.virt.libvirt.driver [None req-b18ac589-d167-488a-9308-29dc0842a9ac d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 23:52:13 np0005480824 nova_compute[260089]: 2025-10-11 03:52:13.254 2 DEBUG nova.virt.libvirt.driver [None req-b18ac589-d167-488a-9308-29dc0842a9ac d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] No VIF found with MAC fa:16:3e:10:2b:86, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 10 23:52:13 np0005480824 nova_compute[260089]: 2025-10-11 03:52:13.450 2 DEBUG oslo_concurrency.lockutils [None req-b18ac589-d167-488a-9308-29dc0842a9ac d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Lock "d22b35e9-badc-40d1-952e-60cdfd60decb" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.328s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:52:13 np0005480824 nova_compute[260089]: 2025-10-11 03:52:13.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:52:14 np0005480824 nova_compute[260089]: 2025-10-11 03:52:14.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:52:14 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:52:14 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1232: 321 pgs: 321 active+clean; 167 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s rd, 1.2 KiB/s wr, 7 op/s
Oct 10 23:52:16 np0005480824 nova_compute[260089]: 2025-10-11 03:52:16.523 2 DEBUG oslo_concurrency.lockutils [None req-9a03ebe0-c1db-415e-b6c0-e8c293a3f450 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Acquiring lock "d22b35e9-badc-40d1-952e-60cdfd60decb" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:52:16 np0005480824 nova_compute[260089]: 2025-10-11 03:52:16.524 2 DEBUG oslo_concurrency.lockutils [None req-9a03ebe0-c1db-415e-b6c0-e8c293a3f450 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Lock "d22b35e9-badc-40d1-952e-60cdfd60decb" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:52:16 np0005480824 nova_compute[260089]: 2025-10-11 03:52:16.540 2 INFO nova.compute.manager [None req-9a03ebe0-c1db-415e-b6c0-e8c293a3f450 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Detaching volume 0144e423-c58d-4178-a560-ee8d2f9824b2#033[00m
Oct 10 23:52:16 np0005480824 nova_compute[260089]: 2025-10-11 03:52:16.665 2 INFO nova.virt.block_device [None req-9a03ebe0-c1db-415e-b6c0-e8c293a3f450 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Attempting to driver detach volume 0144e423-c58d-4178-a560-ee8d2f9824b2 from mountpoint /dev/vdb#033[00m
Oct 10 23:52:16 np0005480824 nova_compute[260089]: 2025-10-11 03:52:16.675 2 DEBUG nova.virt.libvirt.driver [None req-9a03ebe0-c1db-415e-b6c0-e8c293a3f450 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Attempting to detach device vdb from instance d22b35e9-badc-40d1-952e-60cdfd60decb from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct 10 23:52:16 np0005480824 nova_compute[260089]: 2025-10-11 03:52:16.676 2 DEBUG nova.virt.libvirt.guest [None req-9a03ebe0-c1db-415e-b6c0-e8c293a3f450 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] detach device xml: <disk type="network" device="disk">
Oct 10 23:52:16 np0005480824 nova_compute[260089]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 10 23:52:16 np0005480824 nova_compute[260089]:  <source protocol="rbd" name="volumes/volume-0144e423-c58d-4178-a560-ee8d2f9824b2">
Oct 10 23:52:16 np0005480824 nova_compute[260089]:    <host name="192.168.122.100" port="6789"/>
Oct 10 23:52:16 np0005480824 nova_compute[260089]:  </source>
Oct 10 23:52:16 np0005480824 nova_compute[260089]:  <target dev="vdb" bus="virtio"/>
Oct 10 23:52:16 np0005480824 nova_compute[260089]:  <serial>0144e423-c58d-4178-a560-ee8d2f9824b2</serial>
Oct 10 23:52:16 np0005480824 nova_compute[260089]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 10 23:52:16 np0005480824 nova_compute[260089]: </disk>
Oct 10 23:52:16 np0005480824 nova_compute[260089]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct 10 23:52:16 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1233: 321 pgs: 321 active+clean; 167 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 4.2 KiB/s rd, 1023 B/s wr, 6 op/s
Oct 10 23:52:16 np0005480824 nova_compute[260089]: 2025-10-11 03:52:16.688 2 INFO nova.virt.libvirt.driver [None req-9a03ebe0-c1db-415e-b6c0-e8c293a3f450 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Successfully detached device vdb from instance d22b35e9-badc-40d1-952e-60cdfd60decb from the persistent domain config.#033[00m
Oct 10 23:52:16 np0005480824 nova_compute[260089]: 2025-10-11 03:52:16.689 2 DEBUG nova.virt.libvirt.driver [None req-9a03ebe0-c1db-415e-b6c0-e8c293a3f450 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance d22b35e9-badc-40d1-952e-60cdfd60decb from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Oct 10 23:52:16 np0005480824 nova_compute[260089]: 2025-10-11 03:52:16.690 2 DEBUG nova.virt.libvirt.guest [None req-9a03ebe0-c1db-415e-b6c0-e8c293a3f450 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] detach device xml: <disk type="network" device="disk">
Oct 10 23:52:16 np0005480824 nova_compute[260089]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 10 23:52:16 np0005480824 nova_compute[260089]:  <source protocol="rbd" name="volumes/volume-0144e423-c58d-4178-a560-ee8d2f9824b2">
Oct 10 23:52:16 np0005480824 nova_compute[260089]:    <host name="192.168.122.100" port="6789"/>
Oct 10 23:52:16 np0005480824 nova_compute[260089]:  </source>
Oct 10 23:52:16 np0005480824 nova_compute[260089]:  <target dev="vdb" bus="virtio"/>
Oct 10 23:52:16 np0005480824 nova_compute[260089]:  <serial>0144e423-c58d-4178-a560-ee8d2f9824b2</serial>
Oct 10 23:52:16 np0005480824 nova_compute[260089]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 10 23:52:16 np0005480824 nova_compute[260089]: </disk>
Oct 10 23:52:16 np0005480824 nova_compute[260089]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct 10 23:52:16 np0005480824 nova_compute[260089]: 2025-10-11 03:52:16.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:52:16 np0005480824 nova_compute[260089]: 2025-10-11 03:52:16.820 2 DEBUG nova.virt.libvirt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Received event <DeviceRemovedEvent: 1760154736.8199735, d22b35e9-badc-40d1-952e-60cdfd60decb => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Oct 10 23:52:16 np0005480824 nova_compute[260089]: 2025-10-11 03:52:16.822 2 DEBUG nova.virt.libvirt.driver [None req-9a03ebe0-c1db-415e-b6c0-e8c293a3f450 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance d22b35e9-badc-40d1-952e-60cdfd60decb _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Oct 10 23:52:16 np0005480824 nova_compute[260089]: 2025-10-11 03:52:16.825 2 INFO nova.virt.libvirt.driver [None req-9a03ebe0-c1db-415e-b6c0-e8c293a3f450 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Successfully detached device vdb from instance d22b35e9-badc-40d1-952e-60cdfd60decb from the live domain config.#033[00m
Oct 10 23:52:17 np0005480824 nova_compute[260089]: 2025-10-11 03:52:17.009 2 DEBUG nova.objects.instance [None req-9a03ebe0-c1db-415e-b6c0-e8c293a3f450 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Lazy-loading 'flavor' on Instance uuid d22b35e9-badc-40d1-952e-60cdfd60decb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:52:17 np0005480824 nova_compute[260089]: 2025-10-11 03:52:17.049 2 DEBUG oslo_concurrency.lockutils [None req-9a03ebe0-c1db-415e-b6c0-e8c293a3f450 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Lock "d22b35e9-badc-40d1-952e-60cdfd60decb" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.525s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:52:17 np0005480824 nova_compute[260089]: 2025-10-11 03:52:17.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:52:18 np0005480824 nova_compute[260089]: 2025-10-11 03:52:18.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:52:18 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1234: 321 pgs: 321 active+clean; 169 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 194 KiB/s wr, 15 op/s
Oct 10 23:52:18 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e258 do_prune osdmap full prune enabled
Oct 10 23:52:18 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e259 e259: 3 total, 3 up, 3 in
Oct 10 23:52:18 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e259: 3 total, 3 up, 3 in
Oct 10 23:52:19 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:52:19 np0005480824 nova_compute[260089]: 2025-10-11 03:52:19.741 2 DEBUG nova.compute.manager [None req-67ade1a6-ee63-4cb7-9bd2-a5015197e668 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:52:19 np0005480824 nova_compute[260089]: 2025-10-11 03:52:19.791 2 INFO nova.compute.manager [None req-67ade1a6-ee63-4cb7-9bd2-a5015197e668 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] instance snapshotting#033[00m
Oct 10 23:52:19 np0005480824 nova_compute[260089]: 2025-10-11 03:52:19.970 2 INFO nova.virt.libvirt.driver [None req-67ade1a6-ee63-4cb7-9bd2-a5015197e668 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Beginning live snapshot process#033[00m
Oct 10 23:52:20 np0005480824 nova_compute[260089]: 2025-10-11 03:52:20.133 2 DEBUG nova.virt.libvirt.imagebackend [None req-67ade1a6-ee63-4cb7-9bd2-a5015197e668 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] No parent info for 7caca022-7dcc-40a9-8bd8-eb7d91b29390; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct 10 23:52:20 np0005480824 nova_compute[260089]: 2025-10-11 03:52:20.303 2 DEBUG nova.storage.rbd_utils [None req-67ade1a6-ee63-4cb7-9bd2-a5015197e668 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] creating snapshot(ed51767bcf734b87b37480577c3daec7) on rbd image(d22b35e9-badc-40d1-952e-60cdfd60decb_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct 10 23:52:20 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1236: 321 pgs: 321 active+clean; 169 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 232 KiB/s wr, 11 op/s
Oct 10 23:52:20 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e259 do_prune osdmap full prune enabled
Oct 10 23:52:20 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e260 e260: 3 total, 3 up, 3 in
Oct 10 23:52:20 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e260: 3 total, 3 up, 3 in
Oct 10 23:52:20 np0005480824 nova_compute[260089]: 2025-10-11 03:52:20.823 2 DEBUG nova.storage.rbd_utils [None req-67ade1a6-ee63-4cb7-9bd2-a5015197e668 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] cloning vms/d22b35e9-badc-40d1-952e-60cdfd60decb_disk@ed51767bcf734b87b37480577c3daec7 to images/bb54f500-8a3d-4161-bee0-566f2411c985 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct 10 23:52:20 np0005480824 nova_compute[260089]: 2025-10-11 03:52:20.965 2 DEBUG nova.storage.rbd_utils [None req-67ade1a6-ee63-4cb7-9bd2-a5015197e668 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] flattening images/bb54f500-8a3d-4161-bee0-566f2411c985 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct 10 23:52:21 np0005480824 nova_compute[260089]: 2025-10-11 03:52:21.401 2 DEBUG nova.storage.rbd_utils [None req-67ade1a6-ee63-4cb7-9bd2-a5015197e668 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] removing snapshot(ed51767bcf734b87b37480577c3daec7) on rbd image(d22b35e9-badc-40d1-952e-60cdfd60decb_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct 10 23:52:21 np0005480824 nova_compute[260089]: 2025-10-11 03:52:21.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:52:21 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e260 do_prune osdmap full prune enabled
Oct 10 23:52:21 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e261 e261: 3 total, 3 up, 3 in
Oct 10 23:52:21 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e261: 3 total, 3 up, 3 in
Oct 10 23:52:21 np0005480824 nova_compute[260089]: 2025-10-11 03:52:21.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:52:21 np0005480824 nova_compute[260089]: 2025-10-11 03:52:21.858 2 DEBUG nova.storage.rbd_utils [None req-67ade1a6-ee63-4cb7-9bd2-a5015197e668 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] creating snapshot(snap) on rbd image(bb54f500-8a3d-4161-bee0-566f2411c985) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct 10 23:52:22 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1239: 321 pgs: 321 active+clean; 248 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 7.9 MiB/s rd, 8.2 MiB/s wr, 186 op/s
Oct 10 23:52:22 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e261 do_prune osdmap full prune enabled
Oct 10 23:52:22 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e262 e262: 3 total, 3 up, 3 in
Oct 10 23:52:22 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e262: 3 total, 3 up, 3 in
Oct 10 23:52:23 np0005480824 nova_compute[260089]: 2025-10-11 03:52:23.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:52:24 np0005480824 nova_compute[260089]: 2025-10-11 03:52:24.171 2 INFO nova.virt.libvirt.driver [None req-67ade1a6-ee63-4cb7-9bd2-a5015197e668 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Snapshot image upload complete#033[00m
Oct 10 23:52:24 np0005480824 nova_compute[260089]: 2025-10-11 03:52:24.171 2 INFO nova.compute.manager [None req-67ade1a6-ee63-4cb7-9bd2-a5015197e668 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Took 4.38 seconds to snapshot the instance on the hypervisor.#033[00m
Oct 10 23:52:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:52:24 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1046544244' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:52:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:52:24 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1046544244' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:52:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:52:24 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1241: 321 pgs: 321 active+clean; 248 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 7.9 MiB/s rd, 7.9 MiB/s wr, 170 op/s
Oct 10 23:52:25 np0005480824 podman[282088]: 2025-10-11 03:52:25.060919028 +0000 UTC m=+0.098711578 container health_status f2b19cad22d0fbdd185b264c3c8b6443d09a02319dc9fb0585dc69a0f24a758d (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 10 23:52:25 np0005480824 podman[282087]: 2025-10-11 03:52:25.06098535 +0000 UTC m=+0.103525152 container health_status 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:52:25 np0005480824 nova_compute[260089]: 2025-10-11 03:52:25.808 2 DEBUG oslo_concurrency.lockutils [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Acquiring lock "92d928f8-a506-405f-8aa4-6d539d08e00f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:52:25 np0005480824 nova_compute[260089]: 2025-10-11 03:52:25.808 2 DEBUG oslo_concurrency.lockutils [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "92d928f8-a506-405f-8aa4-6d539d08e00f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:52:25 np0005480824 nova_compute[260089]: 2025-10-11 03:52:25.823 2 DEBUG nova.compute.manager [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 10 23:52:25 np0005480824 nova_compute[260089]: 2025-10-11 03:52:25.913 2 DEBUG oslo_concurrency.lockutils [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:52:25 np0005480824 nova_compute[260089]: 2025-10-11 03:52:25.914 2 DEBUG oslo_concurrency.lockutils [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:52:25 np0005480824 nova_compute[260089]: 2025-10-11 03:52:25.924 2 DEBUG nova.virt.hardware [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 10 23:52:25 np0005480824 nova_compute[260089]: 2025-10-11 03:52:25.924 2 INFO nova.compute.claims [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 10 23:52:26 np0005480824 nova_compute[260089]: 2025-10-11 03:52:26.039 2 DEBUG oslo_concurrency.processutils [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:52:26 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:52:26 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2316132483' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:52:26 np0005480824 nova_compute[260089]: 2025-10-11 03:52:26.523 2 DEBUG oslo_concurrency.processutils [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:52:26 np0005480824 nova_compute[260089]: 2025-10-11 03:52:26.533 2 DEBUG nova.compute.provider_tree [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 23:52:26 np0005480824 nova_compute[260089]: 2025-10-11 03:52:26.552 2 DEBUG nova.scheduler.client.report [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 23:52:26 np0005480824 nova_compute[260089]: 2025-10-11 03:52:26.577 2 DEBUG oslo_concurrency.lockutils [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.663s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:52:26 np0005480824 nova_compute[260089]: 2025-10-11 03:52:26.578 2 DEBUG nova.compute.manager [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 10 23:52:26 np0005480824 nova_compute[260089]: 2025-10-11 03:52:26.593 2 DEBUG oslo_concurrency.lockutils [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Acquiring lock "3b8741f5-afdc-4745-b74c-2578bc643be4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:52:26 np0005480824 nova_compute[260089]: 2025-10-11 03:52:26.593 2 DEBUG oslo_concurrency.lockutils [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Lock "3b8741f5-afdc-4745-b74c-2578bc643be4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:52:26 np0005480824 nova_compute[260089]: 2025-10-11 03:52:26.626 2 DEBUG nova.compute.manager [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 10 23:52:26 np0005480824 nova_compute[260089]: 2025-10-11 03:52:26.636 2 DEBUG nova.compute.manager [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 10 23:52:26 np0005480824 nova_compute[260089]: 2025-10-11 03:52:26.636 2 DEBUG nova.network.neutron [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 10 23:52:26 np0005480824 nova_compute[260089]: 2025-10-11 03:52:26.668 2 INFO nova.virt.libvirt.driver [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 10 23:52:26 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1242: 321 pgs: 321 active+clean; 248 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 7.9 MiB/s rd, 7.8 MiB/s wr, 168 op/s
Oct 10 23:52:26 np0005480824 nova_compute[260089]: 2025-10-11 03:52:26.695 2 DEBUG nova.compute.manager [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 10 23:52:26 np0005480824 nova_compute[260089]: 2025-10-11 03:52:26.721 2 DEBUG oslo_concurrency.lockutils [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:52:26 np0005480824 nova_compute[260089]: 2025-10-11 03:52:26.722 2 DEBUG oslo_concurrency.lockutils [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:52:26 np0005480824 nova_compute[260089]: 2025-10-11 03:52:26.731 2 DEBUG nova.virt.hardware [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 10 23:52:26 np0005480824 nova_compute[260089]: 2025-10-11 03:52:26.732 2 INFO nova.compute.claims [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 10 23:52:26 np0005480824 nova_compute[260089]: 2025-10-11 03:52:26.740 2 INFO nova.virt.block_device [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] Booting with volume a382789e-1a62-4951-a169-d3f8be45d9b9 at /dev/vda#033[00m
Oct 10 23:52:26 np0005480824 nova_compute[260089]: 2025-10-11 03:52:26.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:52:26 np0005480824 nova_compute[260089]: 2025-10-11 03:52:26.848 2 DEBUG nova.policy [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '38ebc503771e417aaf1f3aea0c835994', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '55d21391a321476eb133317b3402b0f0', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 10 23:52:26 np0005480824 nova_compute[260089]: 2025-10-11 03:52:26.894 2 DEBUG oslo_concurrency.processutils [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:52:26 np0005480824 nova_compute[260089]: 2025-10-11 03:52:26.926 2 DEBUG os_brick.utils [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct 10 23:52:26 np0005480824 nova_compute[260089]: 2025-10-11 03:52:26.928 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:52:26 np0005480824 nova_compute[260089]: 2025-10-11 03:52:26.951 676 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:52:26 np0005480824 nova_compute[260089]: 2025-10-11 03:52:26.951 676 DEBUG oslo.privsep.daemon [-] privsep: reply[bb8586e6-f68b-43de-90e8-40e3da06e0e1]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:52:26 np0005480824 nova_compute[260089]: 2025-10-11 03:52:26.952 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:52:26 np0005480824 nova_compute[260089]: 2025-10-11 03:52:26.966 676 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:52:26 np0005480824 nova_compute[260089]: 2025-10-11 03:52:26.967 676 DEBUG oslo.privsep.daemon [-] privsep: reply[b885b79e-7733-4307-8643-b74a7bf2dd19]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d5d671ddab5a', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:52:26 np0005480824 nova_compute[260089]: 2025-10-11 03:52:26.969 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:52:26 np0005480824 nova_compute[260089]: 2025-10-11 03:52:26.984 676 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:52:26 np0005480824 nova_compute[260089]: 2025-10-11 03:52:26.984 676 DEBUG oslo.privsep.daemon [-] privsep: reply[0d473743-4561-4d8e-a186-a4b169c9a800]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:52:26 np0005480824 nova_compute[260089]: 2025-10-11 03:52:26.987 676 DEBUG oslo.privsep.daemon [-] privsep: reply[cea4fcb9-97f9-4935-a800-e3c1aa4b9bec]: (4, 'fb3a2fb1-9efa-43f0-a057-bf422ac6b8d7') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:52:26 np0005480824 nova_compute[260089]: 2025-10-11 03:52:26.988 2 DEBUG oslo_concurrency.processutils [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:52:27 np0005480824 nova_compute[260089]: 2025-10-11 03:52:27.028 2 DEBUG oslo_concurrency.processutils [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] CMD "nvme version" returned: 0 in 0.040s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:52:27 np0005480824 nova_compute[260089]: 2025-10-11 03:52:27.034 2 DEBUG os_brick.initiator.connectors.lightos [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct 10 23:52:27 np0005480824 nova_compute[260089]: 2025-10-11 03:52:27.035 2 DEBUG os_brick.initiator.connectors.lightos [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct 10 23:52:27 np0005480824 nova_compute[260089]: 2025-10-11 03:52:27.036 2 DEBUG os_brick.initiator.connectors.lightos [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct 10 23:52:27 np0005480824 nova_compute[260089]: 2025-10-11 03:52:27.037 2 DEBUG os_brick.utils [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] <== get_connector_properties: return (109ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d5d671ddab5a', 'do_local_attach': False, 'nvme_hostid': '83042a20-0f72-4c47-8453-e72ead378624', 'system uuid': 'fb3a2fb1-9efa-43f0-a057-bf422ac6b8d7', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct 10 23:52:27 np0005480824 nova_compute[260089]: 2025-10-11 03:52:27.038 2 DEBUG nova.virt.block_device [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] Updating existing volume attachment record: 500e4fbb-8ee4-4cdf-855c-341830d7af9d _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct 10 23:52:27 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:52:27 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2140186565' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:52:27 np0005480824 nova_compute[260089]: 2025-10-11 03:52:27.347 2 DEBUG oslo_concurrency.processutils [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:52:27 np0005480824 nova_compute[260089]: 2025-10-11 03:52:27.357 2 DEBUG nova.compute.provider_tree [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 23:52:27 np0005480824 nova_compute[260089]: 2025-10-11 03:52:27.375 2 DEBUG nova.scheduler.client.report [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 23:52:27 np0005480824 nova_compute[260089]: 2025-10-11 03:52:27.403 2 DEBUG oslo_concurrency.lockutils [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:52:27 np0005480824 nova_compute[260089]: 2025-10-11 03:52:27.404 2 DEBUG nova.compute.manager [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 10 23:52:27 np0005480824 nova_compute[260089]: 2025-10-11 03:52:27.466 2 DEBUG nova.compute.manager [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 10 23:52:27 np0005480824 nova_compute[260089]: 2025-10-11 03:52:27.467 2 DEBUG nova.network.neutron [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 10 23:52:27 np0005480824 nova_compute[260089]: 2025-10-11 03:52:27.487 2 INFO nova.virt.libvirt.driver [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 10 23:52:27 np0005480824 nova_compute[260089]: 2025-10-11 03:52:27.505 2 DEBUG nova.compute.manager [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 10 23:52:27 np0005480824 nova_compute[260089]: 2025-10-11 03:52:27.521 2 DEBUG oslo_concurrency.lockutils [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Acquiring lock "7452e9a5-0e1b-4c0c-816b-57e0ea976747" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:52:27 np0005480824 nova_compute[260089]: 2025-10-11 03:52:27.524 2 DEBUG oslo_concurrency.lockutils [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Lock "7452e9a5-0e1b-4c0c-816b-57e0ea976747" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:52:27 np0005480824 nova_compute[260089]: 2025-10-11 03:52:27.527 2 DEBUG nova.network.neutron [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] Successfully created port: 1c319219-245c-424e-8e32-c111069f8e63 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 10 23:52:27 np0005480824 nova_compute[260089]: 2025-10-11 03:52:27.559 2 DEBUG nova.compute.manager [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 10 23:52:27 np0005480824 nova_compute[260089]: 2025-10-11 03:52:27.622 2 DEBUG nova.compute.manager [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 10 23:52:27 np0005480824 nova_compute[260089]: 2025-10-11 03:52:27.623 2 DEBUG nova.virt.libvirt.driver [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 10 23:52:27 np0005480824 nova_compute[260089]: 2025-10-11 03:52:27.624 2 INFO nova.virt.libvirt.driver [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Creating image(s)#033[00m
Oct 10 23:52:27 np0005480824 nova_compute[260089]: 2025-10-11 03:52:27.649 2 DEBUG nova.storage.rbd_utils [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] rbd image 3b8741f5-afdc-4745-b74c-2578bc643be4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:52:27 np0005480824 nova_compute[260089]: 2025-10-11 03:52:27.679 2 DEBUG nova.storage.rbd_utils [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] rbd image 3b8741f5-afdc-4745-b74c-2578bc643be4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:52:27 np0005480824 nova_compute[260089]: 2025-10-11 03:52:27.707 2 DEBUG nova.storage.rbd_utils [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] rbd image 3b8741f5-afdc-4745-b74c-2578bc643be4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:52:27 np0005480824 nova_compute[260089]: 2025-10-11 03:52:27.712 2 DEBUG oslo_concurrency.lockutils [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Acquiring lock "8e6d9f979dff0a620ee3b5b31ed9b68fcfce95ea" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:52:27 np0005480824 nova_compute[260089]: 2025-10-11 03:52:27.713 2 DEBUG oslo_concurrency.lockutils [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Lock "8e6d9f979dff0a620ee3b5b31ed9b68fcfce95ea" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:52:27 np0005480824 nova_compute[260089]: 2025-10-11 03:52:27.718 2 DEBUG nova.policy [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd6596329d9c842b78638fdbcf50b8ec8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '944395b4a11c4a9182fda518dc7bd2d8', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 10 23:52:27 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:52:27 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2870867712' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:52:27 np0005480824 nova_compute[260089]: 2025-10-11 03:52:27.734 2 DEBUG oslo_concurrency.lockutils [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:52:27 np0005480824 nova_compute[260089]: 2025-10-11 03:52:27.734 2 DEBUG oslo_concurrency.lockutils [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:52:27 np0005480824 nova_compute[260089]: 2025-10-11 03:52:27.742 2 DEBUG nova.virt.hardware [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 10 23:52:27 np0005480824 nova_compute[260089]: 2025-10-11 03:52:27.742 2 INFO nova.compute.claims [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 10 23:52:27 np0005480824 nova_compute[260089]: 2025-10-11 03:52:27.879 2 DEBUG oslo_concurrency.processutils [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:52:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:52:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:52:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:52:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:52:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:52:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:52:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Optimize plan auto_2025-10-11_03:52:27
Oct 10 23:52:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 23:52:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] do_upmap
Oct 10 23:52:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'backups', 'default.rgw.meta', 'default.rgw.control', '.mgr', 'volumes', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.log', 'images', 'vms']
Oct 10 23:52:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] prepared 0/10 changes
Oct 10 23:52:27 np0005480824 nova_compute[260089]: 2025-10-11 03:52:27.987 2 DEBUG nova.virt.libvirt.imagebackend [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Image locations are: [{'url': 'rbd://92cfe4d4-4917-5be1-9d00-73758793a62b/images/bb54f500-8a3d-4161-bee0-566f2411c985/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://92cfe4d4-4917-5be1-9d00-73758793a62b/images/bb54f500-8a3d-4161-bee0-566f2411c985/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Oct 10 23:52:28 np0005480824 nova_compute[260089]: 2025-10-11 03:52:28.059 2 DEBUG nova.virt.libvirt.imagebackend [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Selected location: {'url': 'rbd://92cfe4d4-4917-5be1-9d00-73758793a62b/images/bb54f500-8a3d-4161-bee0-566f2411c985/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Oct 10 23:52:28 np0005480824 nova_compute[260089]: 2025-10-11 03:52:28.060 2 DEBUG nova.storage.rbd_utils [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] cloning images/bb54f500-8a3d-4161-bee0-566f2411c985@snap to None/3b8741f5-afdc-4745-b74c-2578bc643be4_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct 10 23:52:28 np0005480824 nova_compute[260089]: 2025-10-11 03:52:28.148 2 DEBUG nova.compute.manager [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 10 23:52:28 np0005480824 nova_compute[260089]: 2025-10-11 03:52:28.150 2 DEBUG nova.virt.libvirt.driver [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 10 23:52:28 np0005480824 nova_compute[260089]: 2025-10-11 03:52:28.151 2 INFO nova.virt.libvirt.driver [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] Creating image(s)#033[00m
Oct 10 23:52:28 np0005480824 nova_compute[260089]: 2025-10-11 03:52:28.152 2 DEBUG nova.virt.libvirt.driver [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Oct 10 23:52:28 np0005480824 nova_compute[260089]: 2025-10-11 03:52:28.153 2 DEBUG nova.virt.libvirt.driver [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] Ensure instance console log exists: /var/lib/nova/instances/92d928f8-a506-405f-8aa4-6d539d08e00f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 10 23:52:28 np0005480824 nova_compute[260089]: 2025-10-11 03:52:28.154 2 DEBUG oslo_concurrency.lockutils [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:52:28 np0005480824 nova_compute[260089]: 2025-10-11 03:52:28.154 2 DEBUG oslo_concurrency.lockutils [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:52:28 np0005480824 nova_compute[260089]: 2025-10-11 03:52:28.155 2 DEBUG oslo_concurrency.lockutils [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:52:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 23:52:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:52:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 23:52:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:52:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:52:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:52:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:52:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:52:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:52:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:52:28 np0005480824 nova_compute[260089]: 2025-10-11 03:52:28.216 2 DEBUG oslo_concurrency.lockutils [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Lock "8e6d9f979dff0a620ee3b5b31ed9b68fcfce95ea" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.503s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:52:28 np0005480824 nova_compute[260089]: 2025-10-11 03:52:28.318 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:52:28 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:52:28 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3608637669' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:52:28 np0005480824 nova_compute[260089]: 2025-10-11 03:52:28.374 2 DEBUG nova.network.neutron [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] Successfully updated port: 1c319219-245c-424e-8e32-c111069f8e63 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 10 23:52:28 np0005480824 nova_compute[260089]: 2025-10-11 03:52:28.377 2 DEBUG oslo_concurrency.processutils [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:52:28 np0005480824 nova_compute[260089]: 2025-10-11 03:52:28.384 2 DEBUG nova.objects.instance [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Lazy-loading 'migration_context' on Instance uuid 3b8741f5-afdc-4745-b74c-2578bc643be4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:52:28 np0005480824 nova_compute[260089]: 2025-10-11 03:52:28.388 2 DEBUG nova.compute.provider_tree [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 23:52:28 np0005480824 nova_compute[260089]: 2025-10-11 03:52:28.391 2 DEBUG oslo_concurrency.lockutils [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Acquiring lock "refresh_cache-92d928f8-a506-405f-8aa4-6d539d08e00f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:52:28 np0005480824 nova_compute[260089]: 2025-10-11 03:52:28.391 2 DEBUG oslo_concurrency.lockutils [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Acquired lock "refresh_cache-92d928f8-a506-405f-8aa4-6d539d08e00f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:52:28 np0005480824 nova_compute[260089]: 2025-10-11 03:52:28.391 2 DEBUG nova.network.neutron [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 10 23:52:28 np0005480824 nova_compute[260089]: 2025-10-11 03:52:28.410 2 DEBUG nova.virt.libvirt.driver [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 10 23:52:28 np0005480824 nova_compute[260089]: 2025-10-11 03:52:28.411 2 DEBUG nova.virt.libvirt.driver [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Ensure instance console log exists: /var/lib/nova/instances/3b8741f5-afdc-4745-b74c-2578bc643be4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 10 23:52:28 np0005480824 nova_compute[260089]: 2025-10-11 03:52:28.411 2 DEBUG oslo_concurrency.lockutils [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:52:28 np0005480824 nova_compute[260089]: 2025-10-11 03:52:28.411 2 DEBUG oslo_concurrency.lockutils [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:52:28 np0005480824 nova_compute[260089]: 2025-10-11 03:52:28.412 2 DEBUG oslo_concurrency.lockutils [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:52:28 np0005480824 nova_compute[260089]: 2025-10-11 03:52:28.414 2 DEBUG nova.scheduler.client.report [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 23:52:28 np0005480824 nova_compute[260089]: 2025-10-11 03:52:28.427 2 DEBUG nova.compute.manager [req-0985e4c5-54af-4f01-a827-f7b09da4d916 req-8fa3cfb6-37f7-4d92-9579-4f3610618cde 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] Received event network-changed-1c319219-245c-424e-8e32-c111069f8e63 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:52:28 np0005480824 nova_compute[260089]: 2025-10-11 03:52:28.428 2 DEBUG nova.compute.manager [req-0985e4c5-54af-4f01-a827-f7b09da4d916 req-8fa3cfb6-37f7-4d92-9579-4f3610618cde 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] Refreshing instance network info cache due to event network-changed-1c319219-245c-424e-8e32-c111069f8e63. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 10 23:52:28 np0005480824 nova_compute[260089]: 2025-10-11 03:52:28.428 2 DEBUG oslo_concurrency.lockutils [req-0985e4c5-54af-4f01-a827-f7b09da4d916 req-8fa3cfb6-37f7-4d92-9579-4f3610618cde 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "refresh_cache-92d928f8-a506-405f-8aa4-6d539d08e00f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:52:28 np0005480824 nova_compute[260089]: 2025-10-11 03:52:28.439 2 DEBUG oslo_concurrency.lockutils [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:52:28 np0005480824 nova_compute[260089]: 2025-10-11 03:52:28.440 2 DEBUG nova.compute.manager [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 10 23:52:28 np0005480824 nova_compute[260089]: 2025-10-11 03:52:28.481 2 DEBUG nova.compute.manager [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 10 23:52:28 np0005480824 nova_compute[260089]: 2025-10-11 03:52:28.482 2 DEBUG nova.network.neutron [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 10 23:52:28 np0005480824 nova_compute[260089]: 2025-10-11 03:52:28.502 2 INFO nova.virt.libvirt.driver [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 10 23:52:28 np0005480824 nova_compute[260089]: 2025-10-11 03:52:28.528 2 DEBUG nova.compute.manager [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 10 23:52:28 np0005480824 nova_compute[260089]: 2025-10-11 03:52:28.630 2 DEBUG nova.compute.manager [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 10 23:52:28 np0005480824 nova_compute[260089]: 2025-10-11 03:52:28.632 2 DEBUG nova.virt.libvirt.driver [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 10 23:52:28 np0005480824 nova_compute[260089]: 2025-10-11 03:52:28.632 2 INFO nova.virt.libvirt.driver [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Creating image(s)#033[00m
Oct 10 23:52:28 np0005480824 nova_compute[260089]: 2025-10-11 03:52:28.654 2 DEBUG nova.storage.rbd_utils [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] rbd image 7452e9a5-0e1b-4c0c-816b-57e0ea976747_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:52:28 np0005480824 nova_compute[260089]: 2025-10-11 03:52:28.678 2 DEBUG nova.storage.rbd_utils [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] rbd image 7452e9a5-0e1b-4c0c-816b-57e0ea976747_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:52:28 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1243: 321 pgs: 321 active+clean; 248 MiB data, 390 MiB used, 60 GiB / 60 GiB avail; 6.0 MiB/s rd, 5.9 MiB/s wr, 169 op/s
Oct 10 23:52:28 np0005480824 nova_compute[260089]: 2025-10-11 03:52:28.702 2 DEBUG nova.storage.rbd_utils [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] rbd image 7452e9a5-0e1b-4c0c-816b-57e0ea976747_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:52:28 np0005480824 nova_compute[260089]: 2025-10-11 03:52:28.714 2 DEBUG oslo_concurrency.processutils [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cfffd1283a157d100c77a9cb8e3d536b83503a4e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:52:28 np0005480824 nova_compute[260089]: 2025-10-11 03:52:28.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:52:28 np0005480824 nova_compute[260089]: 2025-10-11 03:52:28.772 2 DEBUG nova.network.neutron [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 10 23:52:28 np0005480824 nova_compute[260089]: 2025-10-11 03:52:28.830 2 DEBUG nova.policy [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'cde6845b6b8d482b95a72e38b1db93d3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a65cd418eaad4366991b123d6535a576', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 10 23:52:28 np0005480824 nova_compute[260089]: 2025-10-11 03:52:28.833 2 DEBUG nova.network.neutron [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Successfully created port: 7ef1c20b-9548-4ca9-9c3e-fc006ba2f7e8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 10 23:52:28 np0005480824 nova_compute[260089]: 2025-10-11 03:52:28.836 2 DEBUG oslo_concurrency.processutils [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cfffd1283a157d100c77a9cb8e3d536b83503a4e --force-share --output=json" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:52:28 np0005480824 nova_compute[260089]: 2025-10-11 03:52:28.837 2 DEBUG oslo_concurrency.lockutils [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Acquiring lock "cfffd1283a157d100c77a9cb8e3d536b83503a4e" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:52:28 np0005480824 nova_compute[260089]: 2025-10-11 03:52:28.838 2 DEBUG oslo_concurrency.lockutils [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Lock "cfffd1283a157d100c77a9cb8e3d536b83503a4e" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:52:28 np0005480824 nova_compute[260089]: 2025-10-11 03:52:28.838 2 DEBUG oslo_concurrency.lockutils [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Lock "cfffd1283a157d100c77a9cb8e3d536b83503a4e" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:52:28 np0005480824 nova_compute[260089]: 2025-10-11 03:52:28.861 2 DEBUG nova.storage.rbd_utils [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] rbd image 7452e9a5-0e1b-4c0c-816b-57e0ea976747_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:52:28 np0005480824 nova_compute[260089]: 2025-10-11 03:52:28.865 2 DEBUG oslo_concurrency.processutils [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/cfffd1283a157d100c77a9cb8e3d536b83503a4e 7452e9a5-0e1b-4c0c-816b-57e0ea976747_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:52:29 np0005480824 nova_compute[260089]: 2025-10-11 03:52:29.134 2 DEBUG oslo_concurrency.processutils [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/cfffd1283a157d100c77a9cb8e3d536b83503a4e 7452e9a5-0e1b-4c0c-816b-57e0ea976747_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.268s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:52:29 np0005480824 nova_compute[260089]: 2025-10-11 03:52:29.214 2 DEBUG nova.storage.rbd_utils [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] resizing rbd image 7452e9a5-0e1b-4c0c-816b-57e0ea976747_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 10 23:52:29 np0005480824 nova_compute[260089]: 2025-10-11 03:52:29.326 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:52:29 np0005480824 nova_compute[260089]: 2025-10-11 03:52:29.333 2 DEBUG nova.objects.instance [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Lazy-loading 'migration_context' on Instance uuid 7452e9a5-0e1b-4c0c-816b-57e0ea976747 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:52:29 np0005480824 nova_compute[260089]: 2025-10-11 03:52:29.347 2 DEBUG nova.virt.libvirt.driver [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 10 23:52:29 np0005480824 nova_compute[260089]: 2025-10-11 03:52:29.348 2 DEBUG nova.virt.libvirt.driver [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Ensure instance console log exists: /var/lib/nova/instances/7452e9a5-0e1b-4c0c-816b-57e0ea976747/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 10 23:52:29 np0005480824 nova_compute[260089]: 2025-10-11 03:52:29.349 2 DEBUG oslo_concurrency.lockutils [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:52:29 np0005480824 nova_compute[260089]: 2025-10-11 03:52:29.349 2 DEBUG oslo_concurrency.lockutils [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:52:29 np0005480824 nova_compute[260089]: 2025-10-11 03:52:29.349 2 DEBUG oslo_concurrency.lockutils [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:52:29 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:52:29 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e262 do_prune osdmap full prune enabled
Oct 10 23:52:29 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e263 e263: 3 total, 3 up, 3 in
Oct 10 23:52:29 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e263: 3 total, 3 up, 3 in
Oct 10 23:52:30 np0005480824 nova_compute[260089]: 2025-10-11 03:52:30.297 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:52:30 np0005480824 nova_compute[260089]: 2025-10-11 03:52:30.366 2 DEBUG nova.network.neutron [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Successfully updated port: 7ef1c20b-9548-4ca9-9c3e-fc006ba2f7e8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 10 23:52:30 np0005480824 nova_compute[260089]: 2025-10-11 03:52:30.390 2 DEBUG oslo_concurrency.lockutils [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Acquiring lock "refresh_cache-3b8741f5-afdc-4745-b74c-2578bc643be4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:52:30 np0005480824 nova_compute[260089]: 2025-10-11 03:52:30.391 2 DEBUG oslo_concurrency.lockutils [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Acquired lock "refresh_cache-3b8741f5-afdc-4745-b74c-2578bc643be4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:52:30 np0005480824 nova_compute[260089]: 2025-10-11 03:52:30.391 2 DEBUG nova.network.neutron [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 10 23:52:30 np0005480824 nova_compute[260089]: 2025-10-11 03:52:30.633 2 DEBUG nova.network.neutron [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] Updating instance_info_cache with network_info: [{"id": "1c319219-245c-424e-8e32-c111069f8e63", "address": "fa:16:3e:00:71:3d", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c319219-24", "ovs_interfaceid": "1c319219-245c-424e-8e32-c111069f8e63", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:52:30 np0005480824 nova_compute[260089]: 2025-10-11 03:52:30.665 2 DEBUG oslo_concurrency.lockutils [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Releasing lock "refresh_cache-92d928f8-a506-405f-8aa4-6d539d08e00f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:52:30 np0005480824 nova_compute[260089]: 2025-10-11 03:52:30.667 2 DEBUG nova.compute.manager [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] Instance network_info: |[{"id": "1c319219-245c-424e-8e32-c111069f8e63", "address": "fa:16:3e:00:71:3d", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c319219-24", "ovs_interfaceid": "1c319219-245c-424e-8e32-c111069f8e63", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 10 23:52:30 np0005480824 nova_compute[260089]: 2025-10-11 03:52:30.669 2 DEBUG oslo_concurrency.lockutils [req-0985e4c5-54af-4f01-a827-f7b09da4d916 req-8fa3cfb6-37f7-4d92-9579-4f3610618cde 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquired lock "refresh_cache-92d928f8-a506-405f-8aa4-6d539d08e00f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:52:30 np0005480824 nova_compute[260089]: 2025-10-11 03:52:30.670 2 DEBUG nova.network.neutron [req-0985e4c5-54af-4f01-a827-f7b09da4d916 req-8fa3cfb6-37f7-4d92-9579-4f3610618cde 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] Refreshing network info cache for port 1c319219-245c-424e-8e32-c111069f8e63 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 10 23:52:30 np0005480824 nova_compute[260089]: 2025-10-11 03:52:30.678 2 DEBUG nova.virt.libvirt.driver [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] Start _get_guest_xml network_info=[{"id": "1c319219-245c-424e-8e32-c111069f8e63", "address": "fa:16:3e:00:71:3d", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c319219-24", "ovs_interfaceid": "1c319219-245c-424e-8e32-c111069f8e63", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'attachment_id': '500e4fbb-8ee4-4cdf-855c-341830d7af9d', 'mount_device': '/dev/vda', 'delete_on_termination': False, 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-a382789e-1a62-4951-a169-d3f8be45d9b9', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'a382789e-1a62-4951-a169-d3f8be45d9b9', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '92d928f8-a506-405f-8aa4-6d539d08e00f', 'attached_at': '', 'detached_at': '', 'volume_id': 'a382789e-1a62-4951-a169-d3f8be45d9b9', 'serial': 'a382789e-1a62-4951-a169-d3f8be45d9b9'}, 'device_type': 'disk', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 10 23:52:30 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1245: 321 pgs: 321 active+clean; 248 MiB data, 390 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 2.6 KiB/s wr, 41 op/s
Oct 10 23:52:30 np0005480824 nova_compute[260089]: 2025-10-11 03:52:30.687 2 WARNING nova.virt.libvirt.driver [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 10 23:52:30 np0005480824 nova_compute[260089]: 2025-10-11 03:52:30.695 2 DEBUG nova.virt.libvirt.host [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 10 23:52:30 np0005480824 nova_compute[260089]: 2025-10-11 03:52:30.696 2 DEBUG nova.virt.libvirt.host [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 10 23:52:30 np0005480824 nova_compute[260089]: 2025-10-11 03:52:30.703 2 DEBUG nova.virt.libvirt.host [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 10 23:52:30 np0005480824 nova_compute[260089]: 2025-10-11 03:52:30.703 2 DEBUG nova.virt.libvirt.host [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 10 23:52:30 np0005480824 nova_compute[260089]: 2025-10-11 03:52:30.704 2 DEBUG nova.virt.libvirt.driver [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 10 23:52:30 np0005480824 nova_compute[260089]: 2025-10-11 03:52:30.704 2 DEBUG nova.virt.hardware [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T03:44:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6707ecae-2ae2-4c2d-86dc-409bac38f6a5',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 10 23:52:30 np0005480824 nova_compute[260089]: 2025-10-11 03:52:30.705 2 DEBUG nova.virt.hardware [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 10 23:52:30 np0005480824 nova_compute[260089]: 2025-10-11 03:52:30.705 2 DEBUG nova.virt.hardware [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 10 23:52:30 np0005480824 nova_compute[260089]: 2025-10-11 03:52:30.705 2 DEBUG nova.virt.hardware [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 10 23:52:30 np0005480824 nova_compute[260089]: 2025-10-11 03:52:30.705 2 DEBUG nova.virt.hardware [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 10 23:52:30 np0005480824 nova_compute[260089]: 2025-10-11 03:52:30.706 2 DEBUG nova.virt.hardware [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 10 23:52:30 np0005480824 nova_compute[260089]: 2025-10-11 03:52:30.706 2 DEBUG nova.virt.hardware [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 10 23:52:30 np0005480824 nova_compute[260089]: 2025-10-11 03:52:30.706 2 DEBUG nova.virt.hardware [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 10 23:52:30 np0005480824 nova_compute[260089]: 2025-10-11 03:52:30.707 2 DEBUG nova.virt.hardware [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 10 23:52:30 np0005480824 nova_compute[260089]: 2025-10-11 03:52:30.707 2 DEBUG nova.virt.hardware [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 10 23:52:30 np0005480824 nova_compute[260089]: 2025-10-11 03:52:30.707 2 DEBUG nova.virt.hardware [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 10 23:52:30 np0005480824 nova_compute[260089]: 2025-10-11 03:52:30.738 2 DEBUG nova.storage.rbd_utils [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] rbd image 92d928f8-a506-405f-8aa4-6d539d08e00f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:52:30 np0005480824 nova_compute[260089]: 2025-10-11 03:52:30.743 2 DEBUG oslo_concurrency.processutils [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:52:31 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:52:31 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1735552534' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:52:31 np0005480824 nova_compute[260089]: 2025-10-11 03:52:31.187 2 DEBUG oslo_concurrency.processutils [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:52:31 np0005480824 nova_compute[260089]: 2025-10-11 03:52:31.296 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:52:31 np0005480824 nova_compute[260089]: 2025-10-11 03:52:31.297 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 10 23:52:31 np0005480824 nova_compute[260089]: 2025-10-11 03:52:31.314 2 DEBUG os_brick.encryptors [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Using volume encryption metadata '{'encryption_key_id': 'c742ecb9-d936-4776-8786-5945b2c44006', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-a382789e-1a62-4951-a169-d3f8be45d9b9', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'a382789e-1a62-4951-a169-d3f8be45d9b9', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '92d928f8-a506-405f-8aa4-6d539d08e00f', 'attached_at': '', 'detached_at': '', 'volume_id': 'a382789e-1a62-4951-a169-d3f8be45d9b9', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Oct 10 23:52:31 np0005480824 nova_compute[260089]: 2025-10-11 03:52:31.318 2 DEBUG barbicanclient.client [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163#033[00m
Oct 10 23:52:31 np0005480824 nova_compute[260089]: 2025-10-11 03:52:31.339 2 DEBUG barbicanclient.v1.secrets [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/c742ecb9-d936-4776-8786-5945b2c44006 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514#033[00m
Oct 10 23:52:31 np0005480824 nova_compute[260089]: 2025-10-11 03:52:31.340 2 INFO barbicanclient.base [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Calculated Secrets uuid ref: secrets/c742ecb9-d936-4776-8786-5945b2c44006#033[00m
Oct 10 23:52:31 np0005480824 nova_compute[260089]: 2025-10-11 03:52:31.370 2 DEBUG nova.network.neutron [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 10 23:52:31 np0005480824 nova_compute[260089]: 2025-10-11 03:52:31.376 2 DEBUG barbicanclient.client [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 10 23:52:31 np0005480824 nova_compute[260089]: 2025-10-11 03:52:31.377 2 INFO barbicanclient.base [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Calculated Secrets uuid ref: secrets/c742ecb9-d936-4776-8786-5945b2c44006#033[00m
Oct 10 23:52:31 np0005480824 nova_compute[260089]: 2025-10-11 03:52:31.385 2 DEBUG nova.network.neutron [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Successfully created port: 5af98ddd-2cff-4fe8-abcf-414110faa17d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 10 23:52:31 np0005480824 nova_compute[260089]: 2025-10-11 03:52:31.399 2 DEBUG barbicanclient.client [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 10 23:52:31 np0005480824 nova_compute[260089]: 2025-10-11 03:52:31.400 2 INFO barbicanclient.base [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Calculated Secrets uuid ref: secrets/c742ecb9-d936-4776-8786-5945b2c44006#033[00m
Oct 10 23:52:31 np0005480824 nova_compute[260089]: 2025-10-11 03:52:31.434 2 DEBUG barbicanclient.client [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 10 23:52:31 np0005480824 nova_compute[260089]: 2025-10-11 03:52:31.435 2 INFO barbicanclient.base [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Calculated Secrets uuid ref: secrets/c742ecb9-d936-4776-8786-5945b2c44006#033[00m
Oct 10 23:52:31 np0005480824 nova_compute[260089]: 2025-10-11 03:52:31.481 2 DEBUG barbicanclient.client [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 10 23:52:31 np0005480824 nova_compute[260089]: 2025-10-11 03:52:31.482 2 INFO barbicanclient.base [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Calculated Secrets uuid ref: secrets/c742ecb9-d936-4776-8786-5945b2c44006#033[00m
Oct 10 23:52:31 np0005480824 nova_compute[260089]: 2025-10-11 03:52:31.526 2 DEBUG barbicanclient.client [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 10 23:52:31 np0005480824 nova_compute[260089]: 2025-10-11 03:52:31.528 2 INFO barbicanclient.base [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Calculated Secrets uuid ref: secrets/c742ecb9-d936-4776-8786-5945b2c44006#033[00m
Oct 10 23:52:31 np0005480824 nova_compute[260089]: 2025-10-11 03:52:31.578 2 DEBUG barbicanclient.client [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 10 23:52:31 np0005480824 nova_compute[260089]: 2025-10-11 03:52:31.579 2 INFO barbicanclient.base [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Calculated Secrets uuid ref: secrets/c742ecb9-d936-4776-8786-5945b2c44006#033[00m
Oct 10 23:52:31 np0005480824 nova_compute[260089]: 2025-10-11 03:52:31.599 2 DEBUG barbicanclient.client [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 10 23:52:31 np0005480824 nova_compute[260089]: 2025-10-11 03:52:31.600 2 INFO barbicanclient.base [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Calculated Secrets uuid ref: secrets/c742ecb9-d936-4776-8786-5945b2c44006#033[00m
Oct 10 23:52:31 np0005480824 nova_compute[260089]: 2025-10-11 03:52:31.631 2 DEBUG barbicanclient.client [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 10 23:52:31 np0005480824 nova_compute[260089]: 2025-10-11 03:52:31.632 2 INFO barbicanclient.base [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Calculated Secrets uuid ref: secrets/c742ecb9-d936-4776-8786-5945b2c44006#033[00m
Oct 10 23:52:31 np0005480824 nova_compute[260089]: 2025-10-11 03:52:31.681 2 DEBUG barbicanclient.client [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 10 23:52:31 np0005480824 nova_compute[260089]: 2025-10-11 03:52:31.682 2 INFO barbicanclient.base [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Calculated Secrets uuid ref: secrets/c742ecb9-d936-4776-8786-5945b2c44006#033[00m
Oct 10 23:52:31 np0005480824 nova_compute[260089]: 2025-10-11 03:52:31.709 2 DEBUG barbicanclient.client [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 10 23:52:31 np0005480824 nova_compute[260089]: 2025-10-11 03:52:31.710 2 INFO barbicanclient.base [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Calculated Secrets uuid ref: secrets/c742ecb9-d936-4776-8786-5945b2c44006#033[00m
Oct 10 23:52:31 np0005480824 nova_compute[260089]: 2025-10-11 03:52:31.746 2 DEBUG barbicanclient.client [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 10 23:52:31 np0005480824 nova_compute[260089]: 2025-10-11 03:52:31.748 2 INFO barbicanclient.base [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Calculated Secrets uuid ref: secrets/c742ecb9-d936-4776-8786-5945b2c44006#033[00m
Oct 10 23:52:31 np0005480824 nova_compute[260089]: 2025-10-11 03:52:31.776 2 DEBUG barbicanclient.client [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 10 23:52:31 np0005480824 nova_compute[260089]: 2025-10-11 03:52:31.777 2 INFO barbicanclient.base [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Calculated Secrets uuid ref: secrets/c742ecb9-d936-4776-8786-5945b2c44006#033[00m
Oct 10 23:52:31 np0005480824 nova_compute[260089]: 2025-10-11 03:52:31.784 2 DEBUG nova.compute.manager [req-dbd04d76-4466-4723-a69e-f1b2de3f5efe req-f80e8cf9-14d8-42d6-a8ce-886d66ded44a 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Received event network-changed-7ef1c20b-9548-4ca9-9c3e-fc006ba2f7e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:52:31 np0005480824 nova_compute[260089]: 2025-10-11 03:52:31.785 2 DEBUG nova.compute.manager [req-dbd04d76-4466-4723-a69e-f1b2de3f5efe req-f80e8cf9-14d8-42d6-a8ce-886d66ded44a 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Refreshing instance network info cache due to event network-changed-7ef1c20b-9548-4ca9-9c3e-fc006ba2f7e8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 10 23:52:31 np0005480824 nova_compute[260089]: 2025-10-11 03:52:31.785 2 DEBUG oslo_concurrency.lockutils [req-dbd04d76-4466-4723-a69e-f1b2de3f5efe req-f80e8cf9-14d8-42d6-a8ce-886d66ded44a 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "refresh_cache-3b8741f5-afdc-4745-b74c-2578bc643be4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:52:31 np0005480824 nova_compute[260089]: 2025-10-11 03:52:31.803 2 DEBUG barbicanclient.client [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 10 23:52:31 np0005480824 nova_compute[260089]: 2025-10-11 03:52:31.804 2 INFO barbicanclient.base [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Calculated Secrets uuid ref: secrets/c742ecb9-d936-4776-8786-5945b2c44006#033[00m
Oct 10 23:52:31 np0005480824 nova_compute[260089]: 2025-10-11 03:52:31.824 2 DEBUG barbicanclient.client [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 10 23:52:31 np0005480824 nova_compute[260089]: 2025-10-11 03:52:31.825 2 INFO barbicanclient.base [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Calculated Secrets uuid ref: secrets/c742ecb9-d936-4776-8786-5945b2c44006#033[00m
Oct 10 23:52:31 np0005480824 nova_compute[260089]: 2025-10-11 03:52:31.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:52:31 np0005480824 nova_compute[260089]: 2025-10-11 03:52:31.852 2 DEBUG barbicanclient.client [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 10 23:52:31 np0005480824 nova_compute[260089]: 2025-10-11 03:52:31.853 2 DEBUG nova.virt.libvirt.host [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Secret XML: <secret ephemeral="no" private="no">
Oct 10 23:52:31 np0005480824 nova_compute[260089]:  <usage type="volume">
Oct 10 23:52:31 np0005480824 nova_compute[260089]:    <volume>a382789e-1a62-4951-a169-d3f8be45d9b9</volume>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:  </usage>
Oct 10 23:52:31 np0005480824 nova_compute[260089]: </secret>
Oct 10 23:52:31 np0005480824 nova_compute[260089]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131#033[00m
Oct 10 23:52:31 np0005480824 nova_compute[260089]: 2025-10-11 03:52:31.902 2 DEBUG nova.virt.libvirt.vif [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T03:52:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-32240042',display_name='tempest-TestVolumeBootPattern-server-32240042',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-32240042',id=11,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='55d21391a321476eb133317b3402b0f0',ramdisk_id='',reservation_id='r-4r50w6ew',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-739984652',owner_user_name='tempest-TestVolumeBootPattern-739984652-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T03:52:26Z,user_data=None,user_id='38ebc50377
1e417aaf1f3aea0c835994',uuid=92d928f8-a506-405f-8aa4-6d539d08e00f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1c319219-245c-424e-8e32-c111069f8e63", "address": "fa:16:3e:00:71:3d", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c319219-24", "ovs_interfaceid": "1c319219-245c-424e-8e32-c111069f8e63", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 10 23:52:31 np0005480824 nova_compute[260089]: 2025-10-11 03:52:31.904 2 DEBUG nova.network.os_vif_util [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Converting VIF {"id": "1c319219-245c-424e-8e32-c111069f8e63", "address": "fa:16:3e:00:71:3d", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c319219-24", "ovs_interfaceid": "1c319219-245c-424e-8e32-c111069f8e63", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 23:52:31 np0005480824 nova_compute[260089]: 2025-10-11 03:52:31.906 2 DEBUG nova.network.os_vif_util [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:71:3d,bridge_name='br-int',has_traffic_filtering=True,id=1c319219-245c-424e-8e32-c111069f8e63,network=Network(359720eb-a957-4bcd-b9b2-3cf7dad947e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c319219-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 23:52:31 np0005480824 nova_compute[260089]: 2025-10-11 03:52:31.909 2 DEBUG nova.objects.instance [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 92d928f8-a506-405f-8aa4-6d539d08e00f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:52:31 np0005480824 nova_compute[260089]: 2025-10-11 03:52:31.931 2 DEBUG nova.virt.libvirt.driver [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] End _get_guest_xml xml=<domain type="kvm">
Oct 10 23:52:31 np0005480824 nova_compute[260089]:  <uuid>92d928f8-a506-405f-8aa4-6d539d08e00f</uuid>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:  <name>instance-0000000b</name>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:  <memory>131072</memory>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:  <vcpu>1</vcpu>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:  <metadata>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 10 23:52:31 np0005480824 nova_compute[260089]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:      <nova:name>tempest-TestVolumeBootPattern-server-32240042</nova:name>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:      <nova:creationTime>2025-10-11 03:52:30</nova:creationTime>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:      <nova:flavor name="m1.nano">
Oct 10 23:52:31 np0005480824 nova_compute[260089]:        <nova:memory>128</nova:memory>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:        <nova:disk>1</nova:disk>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:        <nova:swap>0</nova:swap>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:        <nova:ephemeral>0</nova:ephemeral>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:        <nova:vcpus>1</nova:vcpus>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:      </nova:flavor>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:      <nova:owner>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:        <nova:user uuid="38ebc503771e417aaf1f3aea0c835994">tempest-TestVolumeBootPattern-739984652-project-member</nova:user>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:        <nova:project uuid="55d21391a321476eb133317b3402b0f0">tempest-TestVolumeBootPattern-739984652</nova:project>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:      </nova:owner>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:      <nova:ports>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:        <nova:port uuid="1c319219-245c-424e-8e32-c111069f8e63">
Oct 10 23:52:31 np0005480824 nova_compute[260089]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:        </nova:port>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:      </nova:ports>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:    </nova:instance>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:  </metadata>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:  <sysinfo type="smbios">
Oct 10 23:52:31 np0005480824 nova_compute[260089]:    <system>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:      <entry name="manufacturer">RDO</entry>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:      <entry name="product">OpenStack Compute</entry>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:      <entry name="serial">92d928f8-a506-405f-8aa4-6d539d08e00f</entry>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:      <entry name="uuid">92d928f8-a506-405f-8aa4-6d539d08e00f</entry>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:      <entry name="family">Virtual Machine</entry>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:    </system>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:  </sysinfo>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:  <os>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:    <boot dev="hd"/>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:    <smbios mode="sysinfo"/>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:  </os>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:  <features>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:    <acpi/>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:    <apic/>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:    <vmcoreinfo/>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:  </features>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:  <clock offset="utc">
Oct 10 23:52:31 np0005480824 nova_compute[260089]:    <timer name="pit" tickpolicy="delay"/>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:    <timer name="hpet" present="no"/>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:  </clock>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:  <cpu mode="host-model" match="exact">
Oct 10 23:52:31 np0005480824 nova_compute[260089]:    <topology sockets="1" cores="1" threads="1"/>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:  </cpu>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:  <devices>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:    <disk type="network" device="cdrom">
Oct 10 23:52:31 np0005480824 nova_compute[260089]:      <driver type="raw" cache="none"/>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:      <source protocol="rbd" name="vms/92d928f8-a506-405f-8aa4-6d539d08e00f_disk.config">
Oct 10 23:52:31 np0005480824 nova_compute[260089]:        <host name="192.168.122.100" port="6789"/>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:      </source>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:      <auth username="openstack">
Oct 10 23:52:31 np0005480824 nova_compute[260089]:        <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:      </auth>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:      <target dev="sda" bus="sata"/>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:    </disk>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:    <disk type="network" device="disk">
Oct 10 23:52:31 np0005480824 nova_compute[260089]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:      <source protocol="rbd" name="volumes/volume-a382789e-1a62-4951-a169-d3f8be45d9b9">
Oct 10 23:52:31 np0005480824 nova_compute[260089]:        <host name="192.168.122.100" port="6789"/>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:      </source>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:      <auth username="openstack">
Oct 10 23:52:31 np0005480824 nova_compute[260089]:        <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:      </auth>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:      <target dev="vda" bus="virtio"/>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:      <serial>a382789e-1a62-4951-a169-d3f8be45d9b9</serial>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:      <encryption format="luks">
Oct 10 23:52:31 np0005480824 nova_compute[260089]:        <secret type="passphrase" uuid="464b8b74-e533-4d77-91a5-c37e71a60e26"/>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:      </encryption>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:    </disk>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:    <interface type="ethernet">
Oct 10 23:52:31 np0005480824 nova_compute[260089]:      <mac address="fa:16:3e:00:71:3d"/>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:      <model type="virtio"/>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:      <driver name="vhost" rx_queue_size="512"/>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:      <mtu size="1442"/>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:      <target dev="tap1c319219-24"/>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:    </interface>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:    <serial type="pty">
Oct 10 23:52:31 np0005480824 nova_compute[260089]:      <log file="/var/lib/nova/instances/92d928f8-a506-405f-8aa4-6d539d08e00f/console.log" append="off"/>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:    </serial>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:    <video>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:      <model type="virtio"/>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:    </video>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:    <input type="tablet" bus="usb"/>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:    <rng model="virtio">
Oct 10 23:52:31 np0005480824 nova_compute[260089]:      <backend model="random">/dev/urandom</backend>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:    </rng>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root"/>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:    <controller type="usb" index="0"/>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:    <memballoon model="virtio">
Oct 10 23:52:31 np0005480824 nova_compute[260089]:      <stats period="10"/>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:    </memballoon>
Oct 10 23:52:31 np0005480824 nova_compute[260089]:  </devices>
Oct 10 23:52:31 np0005480824 nova_compute[260089]: </domain>
Oct 10 23:52:31 np0005480824 nova_compute[260089]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 10 23:52:31 np0005480824 nova_compute[260089]: 2025-10-11 03:52:31.932 2 DEBUG nova.compute.manager [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] Preparing to wait for external event network-vif-plugged-1c319219-245c-424e-8e32-c111069f8e63 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 10 23:52:31 np0005480824 nova_compute[260089]: 2025-10-11 03:52:31.933 2 DEBUG oslo_concurrency.lockutils [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Acquiring lock "92d928f8-a506-405f-8aa4-6d539d08e00f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:52:31 np0005480824 nova_compute[260089]: 2025-10-11 03:52:31.933 2 DEBUG oslo_concurrency.lockutils [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "92d928f8-a506-405f-8aa4-6d539d08e00f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:52:31 np0005480824 nova_compute[260089]: 2025-10-11 03:52:31.933 2 DEBUG oslo_concurrency.lockutils [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "92d928f8-a506-405f-8aa4-6d539d08e00f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:52:31 np0005480824 nova_compute[260089]: 2025-10-11 03:52:31.934 2 DEBUG nova.virt.libvirt.vif [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T03:52:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-32240042',display_name='tempest-TestVolumeBootPattern-server-32240042',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-32240042',id=11,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='55d21391a321476eb133317b3402b0f0',ramdisk_id='',reservation_id='r-4r50w6ew',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-739984652',owner_user_name='tempest-TestVolumeBootPattern-739984652-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T03:52:26Z,user_data=None,user_id='38ebc503771e417aaf1f3aea0c835994',uuid=92d928f8-a506-405f-8aa4-6d539d08e00f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1c319219-245c-424e-8e32-c111069f8e63", "address": "fa:16:3e:00:71:3d", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c319219-24", "ovs_interfaceid": "1c319219-245c-424e-8e32-c111069f8e63", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 10 23:52:31 np0005480824 nova_compute[260089]: 2025-10-11 03:52:31.934 2 DEBUG nova.network.os_vif_util [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Converting VIF {"id": "1c319219-245c-424e-8e32-c111069f8e63", "address": "fa:16:3e:00:71:3d", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c319219-24", "ovs_interfaceid": "1c319219-245c-424e-8e32-c111069f8e63", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 23:52:31 np0005480824 nova_compute[260089]: 2025-10-11 03:52:31.935 2 DEBUG nova.network.os_vif_util [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:71:3d,bridge_name='br-int',has_traffic_filtering=True,id=1c319219-245c-424e-8e32-c111069f8e63,network=Network(359720eb-a957-4bcd-b9b2-3cf7dad947e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c319219-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 23:52:31 np0005480824 nova_compute[260089]: 2025-10-11 03:52:31.935 2 DEBUG os_vif [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:71:3d,bridge_name='br-int',has_traffic_filtering=True,id=1c319219-245c-424e-8e32-c111069f8e63,network=Network(359720eb-a957-4bcd-b9b2-3cf7dad947e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c319219-24') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 10 23:52:31 np0005480824 nova_compute[260089]: 2025-10-11 03:52:31.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:52:31 np0005480824 nova_compute[260089]: 2025-10-11 03:52:31.936 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:52:31 np0005480824 nova_compute[260089]: 2025-10-11 03:52:31.936 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 10 23:52:31 np0005480824 nova_compute[260089]: 2025-10-11 03:52:31.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:52:31 np0005480824 nova_compute[260089]: 2025-10-11 03:52:31.940 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1c319219-24, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:52:31 np0005480824 nova_compute[260089]: 2025-10-11 03:52:31.941 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1c319219-24, col_values=(('external_ids', {'iface-id': '1c319219-245c-424e-8e32-c111069f8e63', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:00:71:3d', 'vm-uuid': '92d928f8-a506-405f-8aa4-6d539d08e00f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:52:31 np0005480824 nova_compute[260089]: 2025-10-11 03:52:31.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:52:31 np0005480824 NetworkManager[44969]: <info>  [1760154751.9442] manager: (tap1c319219-24): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/67)
Oct 10 23:52:31 np0005480824 nova_compute[260089]: 2025-10-11 03:52:31.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 10 23:52:31 np0005480824 nova_compute[260089]: 2025-10-11 03:52:31.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:52:31 np0005480824 nova_compute[260089]: 2025-10-11 03:52:31.952 2 INFO os_vif [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:71:3d,bridge_name='br-int',has_traffic_filtering=True,id=1c319219-245c-424e-8e32-c111069f8e63,network=Network(359720eb-a957-4bcd-b9b2-3cf7dad947e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c319219-24')#033[00m
Oct 10 23:52:32 np0005480824 nova_compute[260089]: 2025-10-11 03:52:32.013 2 DEBUG nova.virt.libvirt.driver [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 23:52:32 np0005480824 nova_compute[260089]: 2025-10-11 03:52:32.014 2 DEBUG nova.virt.libvirt.driver [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 23:52:32 np0005480824 nova_compute[260089]: 2025-10-11 03:52:32.014 2 DEBUG nova.virt.libvirt.driver [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] No VIF found with MAC fa:16:3e:00:71:3d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 10 23:52:32 np0005480824 nova_compute[260089]: 2025-10-11 03:52:32.014 2 INFO nova.virt.libvirt.driver [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] Using config drive#033[00m
Oct 10 23:52:32 np0005480824 nova_compute[260089]: 2025-10-11 03:52:32.038 2 DEBUG nova.storage.rbd_utils [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] rbd image 92d928f8-a506-405f-8aa4-6d539d08e00f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:52:32 np0005480824 nova_compute[260089]: 2025-10-11 03:52:32.294 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:52:32 np0005480824 nova_compute[260089]: 2025-10-11 03:52:32.295 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:52:32 np0005480824 nova_compute[260089]: 2025-10-11 03:52:32.296 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 10 23:52:32 np0005480824 nova_compute[260089]: 2025-10-11 03:52:32.296 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 10 23:52:32 np0005480824 nova_compute[260089]: 2025-10-11 03:52:32.333 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct 10 23:52:32 np0005480824 nova_compute[260089]: 2025-10-11 03:52:32.333 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct 10 23:52:32 np0005480824 nova_compute[260089]: 2025-10-11 03:52:32.334 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct 10 23:52:32 np0005480824 nova_compute[260089]: 2025-10-11 03:52:32.553 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "refresh_cache-d22b35e9-badc-40d1-952e-60cdfd60decb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:52:32 np0005480824 nova_compute[260089]: 2025-10-11 03:52:32.553 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquired lock "refresh_cache-d22b35e9-badc-40d1-952e-60cdfd60decb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:52:32 np0005480824 nova_compute[260089]: 2025-10-11 03:52:32.553 2 DEBUG nova.network.neutron [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 10 23:52:32 np0005480824 nova_compute[260089]: 2025-10-11 03:52:32.554 2 DEBUG nova.objects.instance [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d22b35e9-badc-40d1-952e-60cdfd60decb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:52:32 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1246: 321 pgs: 321 active+clean; 295 MiB data, 411 MiB used, 60 GiB / 60 GiB avail; 73 KiB/s rd, 2.2 MiB/s wr, 99 op/s
Oct 10 23:52:32 np0005480824 nova_compute[260089]: 2025-10-11 03:52:32.804 2 INFO nova.virt.libvirt.driver [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] Creating config drive at /var/lib/nova/instances/92d928f8-a506-405f-8aa4-6d539d08e00f/disk.config#033[00m
Oct 10 23:52:32 np0005480824 nova_compute[260089]: 2025-10-11 03:52:32.808 2 DEBUG oslo_concurrency.processutils [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/92d928f8-a506-405f-8aa4-6d539d08e00f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5x905wrq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:52:32 np0005480824 nova_compute[260089]: 2025-10-11 03:52:32.952 2 DEBUG oslo_concurrency.processutils [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/92d928f8-a506-405f-8aa4-6d539d08e00f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5x905wrq" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:52:32 np0005480824 nova_compute[260089]: 2025-10-11 03:52:32.976 2 DEBUG nova.storage.rbd_utils [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] rbd image 92d928f8-a506-405f-8aa4-6d539d08e00f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:52:32 np0005480824 nova_compute[260089]: 2025-10-11 03:52:32.980 2 DEBUG oslo_concurrency.processutils [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/92d928f8-a506-405f-8aa4-6d539d08e00f/disk.config 92d928f8-a506-405f-8aa4-6d539d08e00f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:52:33 np0005480824 podman[282605]: 2025-10-11 03:52:33.025286808 +0000 UTC m=+0.080034460 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 10 23:52:33 np0005480824 nova_compute[260089]: 2025-10-11 03:52:33.111 2 DEBUG oslo_concurrency.processutils [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/92d928f8-a506-405f-8aa4-6d539d08e00f/disk.config 92d928f8-a506-405f-8aa4-6d539d08e00f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:52:33 np0005480824 nova_compute[260089]: 2025-10-11 03:52:33.112 2 INFO nova.virt.libvirt.driver [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] Deleting local config drive /var/lib/nova/instances/92d928f8-a506-405f-8aa4-6d539d08e00f/disk.config because it was imported into RBD.#033[00m
Oct 10 23:52:33 np0005480824 kernel: tap1c319219-24: entered promiscuous mode
Oct 10 23:52:33 np0005480824 NetworkManager[44969]: <info>  [1760154753.1596] manager: (tap1c319219-24): new Tun device (/org/freedesktop/NetworkManager/Devices/68)
Oct 10 23:52:33 np0005480824 ovn_controller[152667]: 2025-10-11T03:52:33Z|00103|binding|INFO|Claiming lport 1c319219-245c-424e-8e32-c111069f8e63 for this chassis.
Oct 10 23:52:33 np0005480824 ovn_controller[152667]: 2025-10-11T03:52:33Z|00104|binding|INFO|1c319219-245c-424e-8e32-c111069f8e63: Claiming fa:16:3e:00:71:3d 10.100.0.9
Oct 10 23:52:33 np0005480824 nova_compute[260089]: 2025-10-11 03:52:33.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:52:33 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:33.167 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:71:3d 10.100.0.9'], port_security=['fa:16:3e:00:71:3d 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '92d928f8-a506-405f-8aa4-6d539d08e00f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-359720eb-a957-4bcd-b9b2-3cf7dad947e4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '55d21391a321476eb133317b3402b0f0', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2fbe6632-cce1-48fb-95c1-bed1096fc071', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d98e64fb-092d-4777-b741-426f3e849bc3, chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], logical_port=1c319219-245c-424e-8e32-c111069f8e63) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 23:52:33 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:33.168 162245 INFO neutron.agent.ovn.metadata.agent [-] Port 1c319219-245c-424e-8e32-c111069f8e63 in datapath 359720eb-a957-4bcd-b9b2-3cf7dad947e4 bound to our chassis#033[00m
Oct 10 23:52:33 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:33.170 162245 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 359720eb-a957-4bcd-b9b2-3cf7dad947e4#033[00m
Oct 10 23:52:33 np0005480824 ovn_controller[152667]: 2025-10-11T03:52:33Z|00105|binding|INFO|Setting lport 1c319219-245c-424e-8e32-c111069f8e63 ovn-installed in OVS
Oct 10 23:52:33 np0005480824 ovn_controller[152667]: 2025-10-11T03:52:33Z|00106|binding|INFO|Setting lport 1c319219-245c-424e-8e32-c111069f8e63 up in Southbound
Oct 10 23:52:33 np0005480824 nova_compute[260089]: 2025-10-11 03:52:33.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:52:33 np0005480824 nova_compute[260089]: 2025-10-11 03:52:33.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:52:33 np0005480824 systemd-udevd[282679]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 23:52:33 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:33.189 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[1f32bcd7-fdf2-42d4-9c58-aca5d71ea3a8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:52:33 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:33.189 162245 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap359720eb-a1 in ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 10 23:52:33 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:33.192 267859 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap359720eb-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 10 23:52:33 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:33.193 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[7e0c65b4-fafe-411a-8cfe-c050cbdb6f2d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:52:33 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:33.193 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[8cf54d75-0cea-4e51-bfb5-4a6fd35b4a07]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:52:33 np0005480824 NetworkManager[44969]: <info>  [1760154753.2053] device (tap1c319219-24): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 10 23:52:33 np0005480824 NetworkManager[44969]: <info>  [1760154753.2068] device (tap1c319219-24): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 10 23:52:33 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:33.214 162666 DEBUG oslo.privsep.daemon [-] privsep: reply[192cf367-d7de-4651-a206-c50415eadf3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:52:33 np0005480824 systemd-machined[215071]: New machine qemu-11-instance-0000000b.
Oct 10 23:52:33 np0005480824 nova_compute[260089]: 2025-10-11 03:52:33.221 2 DEBUG nova.network.neutron [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Updating instance_info_cache with network_info: [{"id": "7ef1c20b-9548-4ca9-9c3e-fc006ba2f7e8", "address": "fa:16:3e:0d:51:d8", "network": {"id": "f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e", "bridge": "br-int", "label": "tempest-TestStampPattern-337427362-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "944395b4a11c4a9182fda518dc7bd2d8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7ef1c20b-95", "ovs_interfaceid": "7ef1c20b-9548-4ca9-9c3e-fc006ba2f7e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:52:33 np0005480824 nova_compute[260089]: 2025-10-11 03:52:33.225 2 DEBUG nova.network.neutron [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Successfully updated port: 5af98ddd-2cff-4fe8-abcf-414110faa17d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 10 23:52:33 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:33.234 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[9235dbd9-6cfd-4ba0-9bb2-625258fd6904]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:52:33 np0005480824 systemd[1]: Started Virtual Machine qemu-11-instance-0000000b.
Oct 10 23:52:33 np0005480824 nova_compute[260089]: 2025-10-11 03:52:33.245 2 DEBUG oslo_concurrency.lockutils [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Acquiring lock "refresh_cache-7452e9a5-0e1b-4c0c-816b-57e0ea976747" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:52:33 np0005480824 nova_compute[260089]: 2025-10-11 03:52:33.246 2 DEBUG oslo_concurrency.lockutils [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Acquired lock "refresh_cache-7452e9a5-0e1b-4c0c-816b-57e0ea976747" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:52:33 np0005480824 nova_compute[260089]: 2025-10-11 03:52:33.246 2 DEBUG nova.network.neutron [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 10 23:52:33 np0005480824 nova_compute[260089]: 2025-10-11 03:52:33.249 2 DEBUG oslo_concurrency.lockutils [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Releasing lock "refresh_cache-3b8741f5-afdc-4745-b74c-2578bc643be4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:52:33 np0005480824 nova_compute[260089]: 2025-10-11 03:52:33.250 2 DEBUG nova.compute.manager [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Instance network_info: |[{"id": "7ef1c20b-9548-4ca9-9c3e-fc006ba2f7e8", "address": "fa:16:3e:0d:51:d8", "network": {"id": "f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e", "bridge": "br-int", "label": "tempest-TestStampPattern-337427362-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "944395b4a11c4a9182fda518dc7bd2d8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7ef1c20b-95", "ovs_interfaceid": "7ef1c20b-9548-4ca9-9c3e-fc006ba2f7e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 10 23:52:33 np0005480824 nova_compute[260089]: 2025-10-11 03:52:33.251 2 DEBUG oslo_concurrency.lockutils [req-dbd04d76-4466-4723-a69e-f1b2de3f5efe req-f80e8cf9-14d8-42d6-a8ce-886d66ded44a 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquired lock "refresh_cache-3b8741f5-afdc-4745-b74c-2578bc643be4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:52:33 np0005480824 nova_compute[260089]: 2025-10-11 03:52:33.251 2 DEBUG nova.network.neutron [req-dbd04d76-4466-4723-a69e-f1b2de3f5efe req-f80e8cf9-14d8-42d6-a8ce-886d66ded44a 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Refreshing network info cache for port 7ef1c20b-9548-4ca9-9c3e-fc006ba2f7e8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 10 23:52:33 np0005480824 nova_compute[260089]: 2025-10-11 03:52:33.256 2 DEBUG nova.virt.libvirt.driver [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Start _get_guest_xml network_info=[{"id": "7ef1c20b-9548-4ca9-9c3e-fc006ba2f7e8", "address": "fa:16:3e:0d:51:d8", "network": {"id": "f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e", "bridge": "br-int", "label": "tempest-TestStampPattern-337427362-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "944395b4a11c4a9182fda518dc7bd2d8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7ef1c20b-95", "ovs_interfaceid": "7ef1c20b-9548-4ca9-9c3e-fc006ba2f7e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-10-11T03:52:19Z,direct_url=<?>,disk_format='raw',id=bb54f500-8a3d-4161-bee0-566f2411c985,min_disk=1,min_ram=0,name='tempest-TestStampPatternsnapshot-1983681222',owner='944395b4a11c4a9182fda518dc7bd2d8',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-11T03:52:23Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'device_type': 'disk', 'image_id': 'bb54f500-8a3d-4161-bee0-566f2411c985'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 10 23:52:33 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:33.262 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[225e25d1-8def-4b24-9c29-e989a1615f9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:52:33 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:33.268 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[d5f6f368-9455-4982-b00d-440763dc1793]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:52:33 np0005480824 NetworkManager[44969]: <info>  [1760154753.2697] manager: (tap359720eb-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/69)
Oct 10 23:52:33 np0005480824 nova_compute[260089]: 2025-10-11 03:52:33.284 2 WARNING nova.virt.libvirt.driver [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 10 23:52:33 np0005480824 nova_compute[260089]: 2025-10-11 03:52:33.300 2 DEBUG nova.virt.libvirt.host [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 10 23:52:33 np0005480824 nova_compute[260089]: 2025-10-11 03:52:33.301 2 DEBUG nova.virt.libvirt.host [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 10 23:52:33 np0005480824 nova_compute[260089]: 2025-10-11 03:52:33.303 2 DEBUG nova.compute.manager [req-8f6a045b-c153-49cb-ba3e-92ed9af53b86 req-a9f6bd1b-cdcd-42d8-a6d5-ec68ea5a2ba2 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Received event network-changed-5af98ddd-2cff-4fe8-abcf-414110faa17d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:52:33 np0005480824 nova_compute[260089]: 2025-10-11 03:52:33.304 2 DEBUG nova.compute.manager [req-8f6a045b-c153-49cb-ba3e-92ed9af53b86 req-a9f6bd1b-cdcd-42d8-a6d5-ec68ea5a2ba2 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Refreshing instance network info cache due to event network-changed-5af98ddd-2cff-4fe8-abcf-414110faa17d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 10 23:52:33 np0005480824 nova_compute[260089]: 2025-10-11 03:52:33.304 2 DEBUG oslo_concurrency.lockutils [req-8f6a045b-c153-49cb-ba3e-92ed9af53b86 req-a9f6bd1b-cdcd-42d8-a6d5-ec68ea5a2ba2 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "refresh_cache-7452e9a5-0e1b-4c0c-816b-57e0ea976747" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:52:33 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:33.304 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[9ccf7db8-a7aa-4bc7-97a1-e4ceccb95b50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:52:33 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:33.308 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[491251b5-ba5d-4ed3-94eb-dccee1376e51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:52:33 np0005480824 nova_compute[260089]: 2025-10-11 03:52:33.311 2 DEBUG nova.virt.libvirt.host [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 10 23:52:33 np0005480824 nova_compute[260089]: 2025-10-11 03:52:33.312 2 DEBUG nova.virt.libvirt.host [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 10 23:52:33 np0005480824 nova_compute[260089]: 2025-10-11 03:52:33.312 2 DEBUG nova.virt.libvirt.driver [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 10 23:52:33 np0005480824 nova_compute[260089]: 2025-10-11 03:52:33.313 2 DEBUG nova.virt.hardware [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T03:44:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6707ecae-2ae2-4c2d-86dc-409bac38f6a5',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-10-11T03:52:19Z,direct_url=<?>,disk_format='raw',id=bb54f500-8a3d-4161-bee0-566f2411c985,min_disk=1,min_ram=0,name='tempest-TestStampPatternsnapshot-1983681222',owner='944395b4a11c4a9182fda518dc7bd2d8',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-11T03:52:23Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 10 23:52:33 np0005480824 nova_compute[260089]: 2025-10-11 03:52:33.313 2 DEBUG nova.virt.hardware [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 10 23:52:33 np0005480824 nova_compute[260089]: 2025-10-11 03:52:33.313 2 DEBUG nova.virt.hardware [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 10 23:52:33 np0005480824 nova_compute[260089]: 2025-10-11 03:52:33.314 2 DEBUG nova.virt.hardware [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 10 23:52:33 np0005480824 nova_compute[260089]: 2025-10-11 03:52:33.314 2 DEBUG nova.virt.hardware [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 10 23:52:33 np0005480824 nova_compute[260089]: 2025-10-11 03:52:33.314 2 DEBUG nova.virt.hardware [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 10 23:52:33 np0005480824 nova_compute[260089]: 2025-10-11 03:52:33.314 2 DEBUG nova.virt.hardware [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 10 23:52:33 np0005480824 nova_compute[260089]: 2025-10-11 03:52:33.314 2 DEBUG nova.virt.hardware [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 10 23:52:33 np0005480824 nova_compute[260089]: 2025-10-11 03:52:33.315 2 DEBUG nova.virt.hardware [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 10 23:52:33 np0005480824 nova_compute[260089]: 2025-10-11 03:52:33.315 2 DEBUG nova.virt.hardware [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 10 23:52:33 np0005480824 nova_compute[260089]: 2025-10-11 03:52:33.315 2 DEBUG nova.virt.hardware [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 10 23:52:33 np0005480824 nova_compute[260089]: 2025-10-11 03:52:33.319 2 DEBUG oslo_concurrency.processutils [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:52:33 np0005480824 NetworkManager[44969]: <info>  [1760154753.3394] device (tap359720eb-a0): carrier: link connected
Oct 10 23:52:33 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:33.343 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[3fc3ab64-bb08-4892-8c6b-d6e9117e2f61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:52:33 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:33.374 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[b3dad0c8-131e-44f5-90c4-178fa9f97ffe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap359720eb-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:22:90:b3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 41], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 422100, 'reachable_time': 21933, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282715, 'error': None, 'target': 'ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:52:33 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:33.408 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[4dc65de9-f33e-480f-b373-34f480a1ccba]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe22:90b3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 422100, 'tstamp': 422100}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282716, 'error': None, 'target': 'ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:52:33 np0005480824 nova_compute[260089]: 2025-10-11 03:52:33.432 2 DEBUG nova.network.neutron [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 10 23:52:33 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:33.433 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[4eececfb-5e3a-4538-ab58-2037d1d1dca1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap359720eb-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:22:90:b3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 41], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 422100, 'reachable_time': 21933, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 282717, 'error': None, 'target': 'ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:52:33 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:33.475 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[e183a8db-e970-4294-b23e-657b04c45799]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:52:33 np0005480824 nova_compute[260089]: 2025-10-11 03:52:33.523 2 DEBUG nova.network.neutron [req-0985e4c5-54af-4f01-a827-f7b09da4d916 req-8fa3cfb6-37f7-4d92-9579-4f3610618cde 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] Updated VIF entry in instance network info cache for port 1c319219-245c-424e-8e32-c111069f8e63. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 10 23:52:33 np0005480824 nova_compute[260089]: 2025-10-11 03:52:33.524 2 DEBUG nova.network.neutron [req-0985e4c5-54af-4f01-a827-f7b09da4d916 req-8fa3cfb6-37f7-4d92-9579-4f3610618cde 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] Updating instance_info_cache with network_info: [{"id": "1c319219-245c-424e-8e32-c111069f8e63", "address": "fa:16:3e:00:71:3d", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c319219-24", "ovs_interfaceid": "1c319219-245c-424e-8e32-c111069f8e63", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:52:33 np0005480824 nova_compute[260089]: 2025-10-11 03:52:33.537 2 DEBUG oslo_concurrency.lockutils [req-0985e4c5-54af-4f01-a827-f7b09da4d916 req-8fa3cfb6-37f7-4d92-9579-4f3610618cde 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Releasing lock "refresh_cache-92d928f8-a506-405f-8aa4-6d539d08e00f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:52:33 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:33.554 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[aa1eb3e4-f898-41d8-97d0-06463cf75df6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:52:33 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:33.555 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap359720eb-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:52:33 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:33.556 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 10 23:52:33 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:33.556 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap359720eb-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:52:33 np0005480824 NetworkManager[44969]: <info>  [1760154753.6104] manager: (tap359720eb-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/70)
Oct 10 23:52:33 np0005480824 kernel: tap359720eb-a0: entered promiscuous mode
Oct 10 23:52:33 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:33.615 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap359720eb-a0, col_values=(('external_ids', {'iface-id': '039c7668-0b85-4466-9c66-62531405028d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:52:33 np0005480824 ovn_controller[152667]: 2025-10-11T03:52:33Z|00107|binding|INFO|Releasing lport 039c7668-0b85-4466-9c66-62531405028d from this chassis (sb_readonly=0)
Oct 10 23:52:33 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:33.618 162245 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/359720eb-a957-4bcd-b9b2-3cf7dad947e4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/359720eb-a957-4bcd-b9b2-3cf7dad947e4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 10 23:52:33 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:33.619 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[bf9a725a-b435-4cff-8b98-3302231a4b81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:52:33 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:33.620 162245 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 10 23:52:33 np0005480824 ovn_metadata_agent[162240]: global
Oct 10 23:52:33 np0005480824 ovn_metadata_agent[162240]:    log         /dev/log local0 debug
Oct 10 23:52:33 np0005480824 ovn_metadata_agent[162240]:    log-tag     haproxy-metadata-proxy-359720eb-a957-4bcd-b9b2-3cf7dad947e4
Oct 10 23:52:33 np0005480824 ovn_metadata_agent[162240]:    user        root
Oct 10 23:52:33 np0005480824 ovn_metadata_agent[162240]:    group       root
Oct 10 23:52:33 np0005480824 ovn_metadata_agent[162240]:    maxconn     1024
Oct 10 23:52:33 np0005480824 ovn_metadata_agent[162240]:    pidfile     /var/lib/neutron/external/pids/359720eb-a957-4bcd-b9b2-3cf7dad947e4.pid.haproxy
Oct 10 23:52:33 np0005480824 ovn_metadata_agent[162240]:    daemon
Oct 10 23:52:33 np0005480824 ovn_metadata_agent[162240]: 
Oct 10 23:52:33 np0005480824 ovn_metadata_agent[162240]: defaults
Oct 10 23:52:33 np0005480824 ovn_metadata_agent[162240]:    log global
Oct 10 23:52:33 np0005480824 ovn_metadata_agent[162240]:    mode http
Oct 10 23:52:33 np0005480824 ovn_metadata_agent[162240]:    option httplog
Oct 10 23:52:33 np0005480824 ovn_metadata_agent[162240]:    option dontlognull
Oct 10 23:52:33 np0005480824 ovn_metadata_agent[162240]:    option http-server-close
Oct 10 23:52:33 np0005480824 ovn_metadata_agent[162240]:    option forwardfor
Oct 10 23:52:33 np0005480824 ovn_metadata_agent[162240]:    retries                 3
Oct 10 23:52:33 np0005480824 ovn_metadata_agent[162240]:    timeout http-request    30s
Oct 10 23:52:33 np0005480824 ovn_metadata_agent[162240]:    timeout connect         30s
Oct 10 23:52:33 np0005480824 ovn_metadata_agent[162240]:    timeout client          32s
Oct 10 23:52:33 np0005480824 ovn_metadata_agent[162240]:    timeout server          32s
Oct 10 23:52:33 np0005480824 ovn_metadata_agent[162240]:    timeout http-keep-alive 30s
Oct 10 23:52:33 np0005480824 ovn_metadata_agent[162240]: 
Oct 10 23:52:33 np0005480824 ovn_metadata_agent[162240]: 
Oct 10 23:52:33 np0005480824 ovn_metadata_agent[162240]: listen listener
Oct 10 23:52:33 np0005480824 ovn_metadata_agent[162240]:    bind 169.254.169.254:80
Oct 10 23:52:33 np0005480824 ovn_metadata_agent[162240]:    server metadata /var/lib/neutron/metadata_proxy
Oct 10 23:52:33 np0005480824 ovn_metadata_agent[162240]:    http-request add-header X-OVN-Network-ID 359720eb-a957-4bcd-b9b2-3cf7dad947e4
Oct 10 23:52:33 np0005480824 ovn_metadata_agent[162240]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 10 23:52:33 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:33.620 162245 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4', 'env', 'PROCESS_TAG=haproxy-359720eb-a957-4bcd-b9b2-3cf7dad947e4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/359720eb-a957-4bcd-b9b2-3cf7dad947e4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 10 23:52:33 np0005480824 nova_compute[260089]: 2025-10-11 03:52:33.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:52:33 np0005480824 nova_compute[260089]: 2025-10-11 03:52:33.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:52:33 np0005480824 nova_compute[260089]: 2025-10-11 03:52:33.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:52:33 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:52:33 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1726561294' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:52:33 np0005480824 nova_compute[260089]: 2025-10-11 03:52:33.770 2 DEBUG oslo_concurrency.processutils [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:52:33 np0005480824 nova_compute[260089]: 2025-10-11 03:52:33.793 2 DEBUG nova.storage.rbd_utils [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] rbd image 3b8741f5-afdc-4745-b74c-2578bc643be4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:52:33 np0005480824 nova_compute[260089]: 2025-10-11 03:52:33.798 2 DEBUG oslo_concurrency.processutils [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:52:33 np0005480824 nova_compute[260089]: 2025-10-11 03:52:33.892 2 DEBUG nova.compute.manager [req-09c63eac-5c9f-4a24-8a71-b6faf35c42c9 req-1db56af1-7851-495d-a504-15e176d71a74 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] Received event network-vif-plugged-1c319219-245c-424e-8e32-c111069f8e63 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:52:33 np0005480824 nova_compute[260089]: 2025-10-11 03:52:33.892 2 DEBUG oslo_concurrency.lockutils [req-09c63eac-5c9f-4a24-8a71-b6faf35c42c9 req-1db56af1-7851-495d-a504-15e176d71a74 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "92d928f8-a506-405f-8aa4-6d539d08e00f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:52:33 np0005480824 nova_compute[260089]: 2025-10-11 03:52:33.893 2 DEBUG oslo_concurrency.lockutils [req-09c63eac-5c9f-4a24-8a71-b6faf35c42c9 req-1db56af1-7851-495d-a504-15e176d71a74 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "92d928f8-a506-405f-8aa4-6d539d08e00f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:52:33 np0005480824 nova_compute[260089]: 2025-10-11 03:52:33.893 2 DEBUG oslo_concurrency.lockutils [req-09c63eac-5c9f-4a24-8a71-b6faf35c42c9 req-1db56af1-7851-495d-a504-15e176d71a74 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "92d928f8-a506-405f-8aa4-6d539d08e00f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:52:33 np0005480824 nova_compute[260089]: 2025-10-11 03:52:33.893 2 DEBUG nova.compute.manager [req-09c63eac-5c9f-4a24-8a71-b6faf35c42c9 req-1db56af1-7851-495d-a504-15e176d71a74 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] Processing event network-vif-plugged-1c319219-245c-424e-8e32-c111069f8e63 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 10 23:52:33 np0005480824 nova_compute[260089]: 2025-10-11 03:52:33.894 2 DEBUG nova.compute.manager [req-09c63eac-5c9f-4a24-8a71-b6faf35c42c9 req-1db56af1-7851-495d-a504-15e176d71a74 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] Received event network-vif-plugged-1c319219-245c-424e-8e32-c111069f8e63 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:52:33 np0005480824 nova_compute[260089]: 2025-10-11 03:52:33.894 2 DEBUG oslo_concurrency.lockutils [req-09c63eac-5c9f-4a24-8a71-b6faf35c42c9 req-1db56af1-7851-495d-a504-15e176d71a74 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "92d928f8-a506-405f-8aa4-6d539d08e00f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:52:33 np0005480824 nova_compute[260089]: 2025-10-11 03:52:33.894 2 DEBUG oslo_concurrency.lockutils [req-09c63eac-5c9f-4a24-8a71-b6faf35c42c9 req-1db56af1-7851-495d-a504-15e176d71a74 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "92d928f8-a506-405f-8aa4-6d539d08e00f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:52:33 np0005480824 nova_compute[260089]: 2025-10-11 03:52:33.894 2 DEBUG oslo_concurrency.lockutils [req-09c63eac-5c9f-4a24-8a71-b6faf35c42c9 req-1db56af1-7851-495d-a504-15e176d71a74 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "92d928f8-a506-405f-8aa4-6d539d08e00f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:52:33 np0005480824 nova_compute[260089]: 2025-10-11 03:52:33.895 2 DEBUG nova.compute.manager [req-09c63eac-5c9f-4a24-8a71-b6faf35c42c9 req-1db56af1-7851-495d-a504-15e176d71a74 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] No waiting events found dispatching network-vif-plugged-1c319219-245c-424e-8e32-c111069f8e63 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 10 23:52:33 np0005480824 nova_compute[260089]: 2025-10-11 03:52:33.895 2 WARNING nova.compute.manager [req-09c63eac-5c9f-4a24-8a71-b6faf35c42c9 req-1db56af1-7851-495d-a504-15e176d71a74 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] Received unexpected event network-vif-plugged-1c319219-245c-424e-8e32-c111069f8e63 for instance with vm_state building and task_state spawning.#033[00m
Oct 10 23:52:34 np0005480824 podman[282842]: 2025-10-11 03:52:34.034223316 +0000 UTC m=+0.052131074 container create 5f90cba0f1257ac0fe74aed240e6bf041bfb4d061eab89acf04d4df810b50f2a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 10 23:52:34 np0005480824 systemd[1]: Started libpod-conmon-5f90cba0f1257ac0fe74aed240e6bf041bfb4d061eab89acf04d4df810b50f2a.scope.
Oct 10 23:52:34 np0005480824 podman[282842]: 2025-10-11 03:52:34.006042605 +0000 UTC m=+0.023950383 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 10 23:52:34 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:52:34 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce4319a45e67a2cb62520a85072b5ee9e4be15e30356f9f493ff24848eb66829/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 10 23:52:34 np0005480824 podman[282842]: 2025-10-11 03:52:34.135698768 +0000 UTC m=+0.153606546 container init 5f90cba0f1257ac0fe74aed240e6bf041bfb4d061eab89acf04d4df810b50f2a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009)
Oct 10 23:52:34 np0005480824 podman[282842]: 2025-10-11 03:52:34.142762734 +0000 UTC m=+0.160670502 container start 5f90cba0f1257ac0fe74aed240e6bf041bfb4d061eab89acf04d4df810b50f2a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 10 23:52:34 np0005480824 neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4[282857]: [NOTICE]   (282861) : New worker (282863) forked
Oct 10 23:52:34 np0005480824 neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4[282857]: [NOTICE]   (282861) : Loading success.
Oct 10 23:52:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:52:34 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/375774335' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:52:34 np0005480824 nova_compute[260089]: 2025-10-11 03:52:34.256 2 DEBUG oslo_concurrency.processutils [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:52:34 np0005480824 nova_compute[260089]: 2025-10-11 03:52:34.259 2 DEBUG nova.virt.libvirt.vif [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T03:52:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-2046188635',display_name='tempest-TestStampPattern-server-2046188635',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-2046188635',id=12,image_ref='bb54f500-8a3d-4161-bee0-566f2411c985',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAimCn5RB/FvLKTbWbetTfaBYWY7YsxfCSNDCqy+0n9wsCRn+L8WUumxgKvSs5fbSkxaZ0JLw7ssb691wNMVrABVHOJ2APu3cO2oHOABFF8LDk8lk3BSAJi4zZFoYj4Rjw==',key_name='tempest-TestStampPattern-1826930411',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='944395b4a11c4a9182fda518dc7bd2d8',ramdisk_id='',reservation_id='r-6e64hiaf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='d22b35e9-badc-40d1-952e-60cdfd60decb',image_min_disk='1',image_min_ram='0',image_owner_id='944395b4a11c4a9182fda518dc7bd2d8',image_owner_project_name='tempest-TestStampPattern-358096571',image_owner_user_name='tempest-TestStampPattern-358096571-project-member',image_user_id='d6596329d9c842b78638fdbcf50b8ec8',network_allocated='True',owner_project_name='tempest-TestStampPattern-358096571',owner_user_name='tempest-TestStampPattern-358096571-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T03:52:27Z,user_data=None,user_id='d6596329d9c842b78638fdbcf50b8ec8',uuid=3b8741f5-afdc-4745-b74c-2578bc643be4,vcpu_mode
l=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7ef1c20b-9548-4ca9-9c3e-fc006ba2f7e8", "address": "fa:16:3e:0d:51:d8", "network": {"id": "f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e", "bridge": "br-int", "label": "tempest-TestStampPattern-337427362-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "944395b4a11c4a9182fda518dc7bd2d8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7ef1c20b-95", "ovs_interfaceid": "7ef1c20b-9548-4ca9-9c3e-fc006ba2f7e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 10 23:52:34 np0005480824 nova_compute[260089]: 2025-10-11 03:52:34.259 2 DEBUG nova.network.os_vif_util [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Converting VIF {"id": "7ef1c20b-9548-4ca9-9c3e-fc006ba2f7e8", "address": "fa:16:3e:0d:51:d8", "network": {"id": "f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e", "bridge": "br-int", "label": "tempest-TestStampPattern-337427362-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "944395b4a11c4a9182fda518dc7bd2d8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7ef1c20b-95", "ovs_interfaceid": "7ef1c20b-9548-4ca9-9c3e-fc006ba2f7e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 23:52:34 np0005480824 nova_compute[260089]: 2025-10-11 03:52:34.260 2 DEBUG nova.network.os_vif_util [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0d:51:d8,bridge_name='br-int',has_traffic_filtering=True,id=7ef1c20b-9548-4ca9-9c3e-fc006ba2f7e8,network=Network(f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7ef1c20b-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 23:52:34 np0005480824 nova_compute[260089]: 2025-10-11 03:52:34.262 2 DEBUG nova.objects.instance [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3b8741f5-afdc-4745-b74c-2578bc643be4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:52:34 np0005480824 nova_compute[260089]: 2025-10-11 03:52:34.276 2 DEBUG nova.virt.libvirt.driver [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] End _get_guest_xml xml=<domain type="kvm">
Oct 10 23:52:34 np0005480824 nova_compute[260089]:  <uuid>3b8741f5-afdc-4745-b74c-2578bc643be4</uuid>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:  <name>instance-0000000c</name>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:  <memory>131072</memory>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:  <vcpu>1</vcpu>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:  <metadata>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 10 23:52:34 np0005480824 nova_compute[260089]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:      <nova:name>tempest-TestStampPattern-server-2046188635</nova:name>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:      <nova:creationTime>2025-10-11 03:52:33</nova:creationTime>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:      <nova:flavor name="m1.nano">
Oct 10 23:52:34 np0005480824 nova_compute[260089]:        <nova:memory>128</nova:memory>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:        <nova:disk>1</nova:disk>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:        <nova:swap>0</nova:swap>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:        <nova:ephemeral>0</nova:ephemeral>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:        <nova:vcpus>1</nova:vcpus>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:      </nova:flavor>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:      <nova:owner>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:        <nova:user uuid="d6596329d9c842b78638fdbcf50b8ec8">tempest-TestStampPattern-358096571-project-member</nova:user>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:        <nova:project uuid="944395b4a11c4a9182fda518dc7bd2d8">tempest-TestStampPattern-358096571</nova:project>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:      </nova:owner>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:      <nova:root type="image" uuid="bb54f500-8a3d-4161-bee0-566f2411c985"/>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:      <nova:ports>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:        <nova:port uuid="7ef1c20b-9548-4ca9-9c3e-fc006ba2f7e8">
Oct 10 23:52:34 np0005480824 nova_compute[260089]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:        </nova:port>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:      </nova:ports>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:    </nova:instance>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:  </metadata>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:  <sysinfo type="smbios">
Oct 10 23:52:34 np0005480824 nova_compute[260089]:    <system>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:      <entry name="manufacturer">RDO</entry>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:      <entry name="product">OpenStack Compute</entry>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:      <entry name="serial">3b8741f5-afdc-4745-b74c-2578bc643be4</entry>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:      <entry name="uuid">3b8741f5-afdc-4745-b74c-2578bc643be4</entry>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:      <entry name="family">Virtual Machine</entry>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:    </system>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:  </sysinfo>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:  <os>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:    <boot dev="hd"/>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:    <smbios mode="sysinfo"/>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:  </os>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:  <features>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:    <acpi/>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:    <apic/>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:    <vmcoreinfo/>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:  </features>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:  <clock offset="utc">
Oct 10 23:52:34 np0005480824 nova_compute[260089]:    <timer name="pit" tickpolicy="delay"/>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:    <timer name="hpet" present="no"/>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:  </clock>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:  <cpu mode="host-model" match="exact">
Oct 10 23:52:34 np0005480824 nova_compute[260089]:    <topology sockets="1" cores="1" threads="1"/>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:  </cpu>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:  <devices>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:    <disk type="network" device="disk">
Oct 10 23:52:34 np0005480824 nova_compute[260089]:      <driver type="raw" cache="none"/>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:      <source protocol="rbd" name="vms/3b8741f5-afdc-4745-b74c-2578bc643be4_disk">
Oct 10 23:52:34 np0005480824 nova_compute[260089]:        <host name="192.168.122.100" port="6789"/>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:      </source>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:      <auth username="openstack">
Oct 10 23:52:34 np0005480824 nova_compute[260089]:        <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:      </auth>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:      <target dev="vda" bus="virtio"/>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:    </disk>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:    <disk type="network" device="cdrom">
Oct 10 23:52:34 np0005480824 nova_compute[260089]:      <driver type="raw" cache="none"/>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:      <source protocol="rbd" name="vms/3b8741f5-afdc-4745-b74c-2578bc643be4_disk.config">
Oct 10 23:52:34 np0005480824 nova_compute[260089]:        <host name="192.168.122.100" port="6789"/>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:      </source>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:      <auth username="openstack">
Oct 10 23:52:34 np0005480824 nova_compute[260089]:        <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:      </auth>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:      <target dev="sda" bus="sata"/>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:    </disk>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:    <interface type="ethernet">
Oct 10 23:52:34 np0005480824 nova_compute[260089]:      <mac address="fa:16:3e:0d:51:d8"/>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:      <model type="virtio"/>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:      <driver name="vhost" rx_queue_size="512"/>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:      <mtu size="1442"/>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:      <target dev="tap7ef1c20b-95"/>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:    </interface>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:    <serial type="pty">
Oct 10 23:52:34 np0005480824 nova_compute[260089]:      <log file="/var/lib/nova/instances/3b8741f5-afdc-4745-b74c-2578bc643be4/console.log" append="off"/>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:    </serial>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:    <video>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:      <model type="virtio"/>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:    </video>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:    <input type="tablet" bus="usb"/>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:    <input type="keyboard" bus="usb"/>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:    <rng model="virtio">
Oct 10 23:52:34 np0005480824 nova_compute[260089]:      <backend model="random">/dev/urandom</backend>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:    </rng>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root"/>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:    <controller type="usb" index="0"/>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:    <memballoon model="virtio">
Oct 10 23:52:34 np0005480824 nova_compute[260089]:      <stats period="10"/>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:    </memballoon>
Oct 10 23:52:34 np0005480824 nova_compute[260089]:  </devices>
Oct 10 23:52:34 np0005480824 nova_compute[260089]: </domain>
Oct 10 23:52:34 np0005480824 nova_compute[260089]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 10 23:52:34 np0005480824 nova_compute[260089]: 2025-10-11 03:52:34.283 2 DEBUG nova.compute.manager [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Preparing to wait for external event network-vif-plugged-7ef1c20b-9548-4ca9-9c3e-fc006ba2f7e8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 10 23:52:34 np0005480824 nova_compute[260089]: 2025-10-11 03:52:34.283 2 DEBUG oslo_concurrency.lockutils [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Acquiring lock "3b8741f5-afdc-4745-b74c-2578bc643be4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:52:34 np0005480824 nova_compute[260089]: 2025-10-11 03:52:34.284 2 DEBUG oslo_concurrency.lockutils [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Lock "3b8741f5-afdc-4745-b74c-2578bc643be4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:52:34 np0005480824 nova_compute[260089]: 2025-10-11 03:52:34.284 2 DEBUG oslo_concurrency.lockutils [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Lock "3b8741f5-afdc-4745-b74c-2578bc643be4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:52:34 np0005480824 nova_compute[260089]: 2025-10-11 03:52:34.285 2 DEBUG nova.virt.libvirt.vif [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T03:52:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-2046188635',display_name='tempest-TestStampPattern-server-2046188635',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-2046188635',id=12,image_ref='bb54f500-8a3d-4161-bee0-566f2411c985',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAimCn5RB/FvLKTbWbetTfaBYWY7YsxfCSNDCqy+0n9wsCRn+L8WUumxgKvSs5fbSkxaZ0JLw7ssb691wNMVrABVHOJ2APu3cO2oHOABFF8LDk8lk3BSAJi4zZFoYj4Rjw==',key_name='tempest-TestStampPattern-1826930411',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='944395b4a11c4a9182fda518dc7bd2d8',ramdisk_id='',reservation_id='r-6e64hiaf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='d22b35e9-badc-40d1-952e-60cdfd60decb',image_min_disk='1',image_min_ram='0',image_owner_id='944395b4a11c4a9182fda518dc7bd2d8',image_owner_project_name='tempest-TestStampPattern-358096571',image_owner_user_name='tempest-TestStampPattern-358096571-project-member',image_user_id='d6596329d9c842b78638fdbcf50b8ec8',network_allocated='True',owner_project_name='tempest-TestStampPattern-358096571',owner_user_name='tempest-TestStampPattern-358096571-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T03:52:27Z,user_data=None,user_id='d6596329d9c842b78638fdbcf50b8ec8',uuid=3b8741f5-afdc-4745-b74c-2578bc643be4
,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7ef1c20b-9548-4ca9-9c3e-fc006ba2f7e8", "address": "fa:16:3e:0d:51:d8", "network": {"id": "f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e", "bridge": "br-int", "label": "tempest-TestStampPattern-337427362-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "944395b4a11c4a9182fda518dc7bd2d8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7ef1c20b-95", "ovs_interfaceid": "7ef1c20b-9548-4ca9-9c3e-fc006ba2f7e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 10 23:52:34 np0005480824 nova_compute[260089]: 2025-10-11 03:52:34.285 2 DEBUG nova.network.os_vif_util [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Converting VIF {"id": "7ef1c20b-9548-4ca9-9c3e-fc006ba2f7e8", "address": "fa:16:3e:0d:51:d8", "network": {"id": "f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e", "bridge": "br-int", "label": "tempest-TestStampPattern-337427362-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "944395b4a11c4a9182fda518dc7bd2d8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7ef1c20b-95", "ovs_interfaceid": "7ef1c20b-9548-4ca9-9c3e-fc006ba2f7e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 23:52:34 np0005480824 nova_compute[260089]: 2025-10-11 03:52:34.286 2 DEBUG nova.network.os_vif_util [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0d:51:d8,bridge_name='br-int',has_traffic_filtering=True,id=7ef1c20b-9548-4ca9-9c3e-fc006ba2f7e8,network=Network(f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7ef1c20b-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 23:52:34 np0005480824 nova_compute[260089]: 2025-10-11 03:52:34.286 2 DEBUG os_vif [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0d:51:d8,bridge_name='br-int',has_traffic_filtering=True,id=7ef1c20b-9548-4ca9-9c3e-fc006ba2f7e8,network=Network(f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7ef1c20b-95') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 10 23:52:34 np0005480824 nova_compute[260089]: 2025-10-11 03:52:34.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:52:34 np0005480824 nova_compute[260089]: 2025-10-11 03:52:34.288 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:52:34 np0005480824 nova_compute[260089]: 2025-10-11 03:52:34.288 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 10 23:52:34 np0005480824 nova_compute[260089]: 2025-10-11 03:52:34.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:52:34 np0005480824 nova_compute[260089]: 2025-10-11 03:52:34.291 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7ef1c20b-95, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:52:34 np0005480824 nova_compute[260089]: 2025-10-11 03:52:34.291 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7ef1c20b-95, col_values=(('external_ids', {'iface-id': '7ef1c20b-9548-4ca9-9c3e-fc006ba2f7e8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0d:51:d8', 'vm-uuid': '3b8741f5-afdc-4745-b74c-2578bc643be4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:52:34 np0005480824 nova_compute[260089]: 2025-10-11 03:52:34.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:52:34 np0005480824 NetworkManager[44969]: <info>  [1760154754.2944] manager: (tap7ef1c20b-95): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/71)
Oct 10 23:52:34 np0005480824 nova_compute[260089]: 2025-10-11 03:52:34.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 10 23:52:34 np0005480824 nova_compute[260089]: 2025-10-11 03:52:34.300 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:52:34 np0005480824 nova_compute[260089]: 2025-10-11 03:52:34.301 2 INFO os_vif [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0d:51:d8,bridge_name='br-int',has_traffic_filtering=True,id=7ef1c20b-9548-4ca9-9c3e-fc006ba2f7e8,network=Network(f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7ef1c20b-95')#033[00m
Oct 10 23:52:34 np0005480824 nova_compute[260089]: 2025-10-11 03:52:34.345 2 DEBUG nova.virt.libvirt.driver [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 23:52:34 np0005480824 nova_compute[260089]: 2025-10-11 03:52:34.346 2 DEBUG nova.virt.libvirt.driver [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 23:52:34 np0005480824 nova_compute[260089]: 2025-10-11 03:52:34.346 2 DEBUG nova.virt.libvirt.driver [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] No VIF found with MAC fa:16:3e:0d:51:d8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 10 23:52:34 np0005480824 nova_compute[260089]: 2025-10-11 03:52:34.347 2 INFO nova.virt.libvirt.driver [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Using config drive#033[00m
Oct 10 23:52:34 np0005480824 nova_compute[260089]: 2025-10-11 03:52:34.366 2 DEBUG nova.storage.rbd_utils [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] rbd image 3b8741f5-afdc-4745-b74c-2578bc643be4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:52:34 np0005480824 nova_compute[260089]: 2025-10-11 03:52:34.651 2 DEBUG nova.network.neutron [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Updating instance_info_cache with network_info: [{"id": "a6d0ac82-b500-4962-8bfd-d36ef3ba2948", "address": "fa:16:3e:10:2b:86", "network": {"id": "f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e", "bridge": "br-int", "label": "tempest-TestStampPattern-337427362-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "944395b4a11c4a9182fda518dc7bd2d8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6d0ac82-b5", "ovs_interfaceid": "a6d0ac82-b500-4962-8bfd-d36ef3ba2948", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:52:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:52:34 np0005480824 nova_compute[260089]: 2025-10-11 03:52:34.670 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Releasing lock "refresh_cache-d22b35e9-badc-40d1-952e-60cdfd60decb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:52:34 np0005480824 nova_compute[260089]: 2025-10-11 03:52:34.671 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 10 23:52:34 np0005480824 nova_compute[260089]: 2025-10-11 03:52:34.672 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:52:34 np0005480824 nova_compute[260089]: 2025-10-11 03:52:34.673 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:52:34 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1247: 321 pgs: 321 active+clean; 295 MiB data, 411 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 2.1 MiB/s wr, 98 op/s
Oct 10 23:52:34 np0005480824 nova_compute[260089]: 2025-10-11 03:52:34.697 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:52:34 np0005480824 nova_compute[260089]: 2025-10-11 03:52:34.698 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:52:34 np0005480824 nova_compute[260089]: 2025-10-11 03:52:34.698 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:52:34 np0005480824 nova_compute[260089]: 2025-10-11 03:52:34.699 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 10 23:52:34 np0005480824 nova_compute[260089]: 2025-10-11 03:52:34.699 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:52:35 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:52:35 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1885994321' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:52:35 np0005480824 nova_compute[260089]: 2025-10-11 03:52:35.117 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:52:35 np0005480824 nova_compute[260089]: 2025-10-11 03:52:35.210 2 DEBUG nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 10 23:52:35 np0005480824 nova_compute[260089]: 2025-10-11 03:52:35.211 2 DEBUG nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 10 23:52:35 np0005480824 nova_compute[260089]: 2025-10-11 03:52:35.220 2 DEBUG nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 10 23:52:35 np0005480824 nova_compute[260089]: 2025-10-11 03:52:35.221 2 DEBUG nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 10 23:52:35 np0005480824 nova_compute[260089]: 2025-10-11 03:52:35.227 2 DEBUG nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 10 23:52:35 np0005480824 nova_compute[260089]: 2025-10-11 03:52:35.227 2 DEBUG nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 10 23:52:35 np0005480824 nova_compute[260089]: 2025-10-11 03:52:35.390 2 INFO nova.virt.libvirt.driver [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Creating config drive at /var/lib/nova/instances/3b8741f5-afdc-4745-b74c-2578bc643be4/disk.config#033[00m
Oct 10 23:52:35 np0005480824 nova_compute[260089]: 2025-10-11 03:52:35.396 2 DEBUG oslo_concurrency.processutils [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3b8741f5-afdc-4745-b74c-2578bc643be4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjdsina9q execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:52:35 np0005480824 nova_compute[260089]: 2025-10-11 03:52:35.434 2 DEBUG nova.network.neutron [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Updating instance_info_cache with network_info: [{"id": "5af98ddd-2cff-4fe8-abcf-414110faa17d", "address": "fa:16:3e:79:18:58", "network": {"id": "1ac3beb3-eeb0-47be-b56e-672742cfe517", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-1691573125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a65cd418eaad4366991b123d6535a576", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5af98ddd-2c", "ovs_interfaceid": "5af98ddd-2cff-4fe8-abcf-414110faa17d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:52:35 np0005480824 nova_compute[260089]: 2025-10-11 03:52:35.468 2 DEBUG oslo_concurrency.lockutils [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Releasing lock "refresh_cache-7452e9a5-0e1b-4c0c-816b-57e0ea976747" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:52:35 np0005480824 nova_compute[260089]: 2025-10-11 03:52:35.469 2 DEBUG nova.compute.manager [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Instance network_info: |[{"id": "5af98ddd-2cff-4fe8-abcf-414110faa17d", "address": "fa:16:3e:79:18:58", "network": {"id": "1ac3beb3-eeb0-47be-b56e-672742cfe517", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-1691573125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a65cd418eaad4366991b123d6535a576", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5af98ddd-2c", "ovs_interfaceid": "5af98ddd-2cff-4fe8-abcf-414110faa17d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 10 23:52:35 np0005480824 nova_compute[260089]: 2025-10-11 03:52:35.471 2 DEBUG oslo_concurrency.lockutils [req-8f6a045b-c153-49cb-ba3e-92ed9af53b86 req-a9f6bd1b-cdcd-42d8-a6d5-ec68ea5a2ba2 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquired lock "refresh_cache-7452e9a5-0e1b-4c0c-816b-57e0ea976747" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:52:35 np0005480824 nova_compute[260089]: 2025-10-11 03:52:35.472 2 DEBUG nova.network.neutron [req-8f6a045b-c153-49cb-ba3e-92ed9af53b86 req-a9f6bd1b-cdcd-42d8-a6d5-ec68ea5a2ba2 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Refreshing network info cache for port 5af98ddd-2cff-4fe8-abcf-414110faa17d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 10 23:52:35 np0005480824 nova_compute[260089]: 2025-10-11 03:52:35.478 2 DEBUG nova.virt.libvirt.driver [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Start _get_guest_xml network_info=[{"id": "5af98ddd-2cff-4fe8-abcf-414110faa17d", "address": "fa:16:3e:79:18:58", "network": {"id": "1ac3beb3-eeb0-47be-b56e-672742cfe517", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-1691573125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a65cd418eaad4366991b123d6535a576", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5af98ddd-2c", "ovs_interfaceid": "5af98ddd-2cff-4fe8-abcf-414110faa17d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T03:44:59Z,direct_url=<?>,disk_format='qcow2',id=7caca022-7dcc-40a9-8bd8-eb7d91b29390,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a9b71164a3274fcfb966194e51cb4849',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T03:45:02Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'device_type': 'disk', 'image_id': '7caca022-7dcc-40a9-8bd8-eb7d91b29390'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 10 23:52:35 np0005480824 nova_compute[260089]: 2025-10-11 03:52:35.489 2 WARNING nova.virt.libvirt.driver [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 10 23:52:35 np0005480824 nova_compute[260089]: 2025-10-11 03:52:35.504 2 DEBUG nova.virt.libvirt.host [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 10 23:52:35 np0005480824 nova_compute[260089]: 2025-10-11 03:52:35.505 2 DEBUG nova.virt.libvirt.host [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 10 23:52:35 np0005480824 nova_compute[260089]: 2025-10-11 03:52:35.511 2 DEBUG nova.virt.libvirt.host [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 10 23:52:35 np0005480824 nova_compute[260089]: 2025-10-11 03:52:35.512 2 DEBUG nova.virt.libvirt.host [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 10 23:52:35 np0005480824 nova_compute[260089]: 2025-10-11 03:52:35.513 2 DEBUG nova.virt.libvirt.driver [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 10 23:52:35 np0005480824 nova_compute[260089]: 2025-10-11 03:52:35.513 2 DEBUG nova.virt.hardware [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T03:44:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6707ecae-2ae2-4c2d-86dc-409bac38f6a5',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T03:44:59Z,direct_url=<?>,disk_format='qcow2',id=7caca022-7dcc-40a9-8bd8-eb7d91b29390,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a9b71164a3274fcfb966194e51cb4849',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T03:45:02Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 10 23:52:35 np0005480824 nova_compute[260089]: 2025-10-11 03:52:35.514 2 DEBUG nova.virt.hardware [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 10 23:52:35 np0005480824 nova_compute[260089]: 2025-10-11 03:52:35.515 2 DEBUG nova.virt.hardware [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 10 23:52:35 np0005480824 nova_compute[260089]: 2025-10-11 03:52:35.515 2 DEBUG nova.virt.hardware [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 10 23:52:35 np0005480824 nova_compute[260089]: 2025-10-11 03:52:35.516 2 DEBUG nova.virt.hardware [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 10 23:52:35 np0005480824 nova_compute[260089]: 2025-10-11 03:52:35.516 2 DEBUG nova.virt.hardware [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 10 23:52:35 np0005480824 nova_compute[260089]: 2025-10-11 03:52:35.517 2 DEBUG nova.virt.hardware [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 10 23:52:35 np0005480824 nova_compute[260089]: 2025-10-11 03:52:35.517 2 DEBUG nova.virt.hardware [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 10 23:52:35 np0005480824 nova_compute[260089]: 2025-10-11 03:52:35.518 2 DEBUG nova.virt.hardware [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 10 23:52:35 np0005480824 nova_compute[260089]: 2025-10-11 03:52:35.518 2 DEBUG nova.virt.hardware [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 10 23:52:35 np0005480824 nova_compute[260089]: 2025-10-11 03:52:35.518 2 DEBUG nova.virt.hardware [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 10 23:52:35 np0005480824 nova_compute[260089]: 2025-10-11 03:52:35.524 2 DEBUG oslo_concurrency.processutils [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:52:35 np0005480824 nova_compute[260089]: 2025-10-11 03:52:35.563 2 DEBUG oslo_concurrency.processutils [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3b8741f5-afdc-4745-b74c-2578bc643be4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjdsina9q" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:52:35 np0005480824 nova_compute[260089]: 2025-10-11 03:52:35.605 2 DEBUG nova.storage.rbd_utils [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] rbd image 3b8741f5-afdc-4745-b74c-2578bc643be4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:52:35 np0005480824 nova_compute[260089]: 2025-10-11 03:52:35.611 2 DEBUG oslo_concurrency.processutils [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3b8741f5-afdc-4745-b74c-2578bc643be4/disk.config 3b8741f5-afdc-4745-b74c-2578bc643be4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:52:35 np0005480824 nova_compute[260089]: 2025-10-11 03:52:35.799 2 WARNING nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 10 23:52:35 np0005480824 nova_compute[260089]: 2025-10-11 03:52:35.803 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4291MB free_disk=59.92184066772461GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 10 23:52:35 np0005480824 nova_compute[260089]: 2025-10-11 03:52:35.803 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:52:35 np0005480824 nova_compute[260089]: 2025-10-11 03:52:35.804 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:52:35 np0005480824 nova_compute[260089]: 2025-10-11 03:52:35.852 2 DEBUG oslo_concurrency.processutils [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3b8741f5-afdc-4745-b74c-2578bc643be4/disk.config 3b8741f5-afdc-4745-b74c-2578bc643be4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.241s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:52:35 np0005480824 nova_compute[260089]: 2025-10-11 03:52:35.854 2 INFO nova.virt.libvirt.driver [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Deleting local config drive /var/lib/nova/instances/3b8741f5-afdc-4745-b74c-2578bc643be4/disk.config because it was imported into RBD.#033[00m
Oct 10 23:52:35 np0005480824 nova_compute[260089]: 2025-10-11 03:52:35.896 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Instance d22b35e9-badc-40d1-952e-60cdfd60decb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 10 23:52:35 np0005480824 nova_compute[260089]: 2025-10-11 03:52:35.897 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Instance 92d928f8-a506-405f-8aa4-6d539d08e00f actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 10 23:52:35 np0005480824 nova_compute[260089]: 2025-10-11 03:52:35.897 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Instance 3b8741f5-afdc-4745-b74c-2578bc643be4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 10 23:52:35 np0005480824 nova_compute[260089]: 2025-10-11 03:52:35.897 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Instance 7452e9a5-0e1b-4c0c-816b-57e0ea976747 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 10 23:52:35 np0005480824 nova_compute[260089]: 2025-10-11 03:52:35.898 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 10 23:52:35 np0005480824 nova_compute[260089]: 2025-10-11 03:52:35.898 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 10 23:52:35 np0005480824 kernel: tap7ef1c20b-95: entered promiscuous mode
Oct 10 23:52:35 np0005480824 NetworkManager[44969]: <info>  [1760154755.9258] manager: (tap7ef1c20b-95): new Tun device (/org/freedesktop/NetworkManager/Devices/72)
Oct 10 23:52:35 np0005480824 systemd-udevd[282699]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 23:52:35 np0005480824 ovn_controller[152667]: 2025-10-11T03:52:35Z|00108|binding|INFO|Claiming lport 7ef1c20b-9548-4ca9-9c3e-fc006ba2f7e8 for this chassis.
Oct 10 23:52:35 np0005480824 ovn_controller[152667]: 2025-10-11T03:52:35Z|00109|binding|INFO|7ef1c20b-9548-4ca9-9c3e-fc006ba2f7e8: Claiming fa:16:3e:0d:51:d8 10.100.0.5
Oct 10 23:52:35 np0005480824 nova_compute[260089]: 2025-10-11 03:52:35.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:52:35 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:35.937 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0d:51:d8 10.100.0.5'], port_security=['fa:16:3e:0d:51:d8 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '3b8741f5-afdc-4745-b74c-2578bc643be4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '944395b4a11c4a9182fda518dc7bd2d8', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e77eea50-c642-4f6c-8fc0-1335adf52ced', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9edb3820-196e-493d-adad-15b8aa8d51aa, chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], logical_port=7ef1c20b-9548-4ca9-9c3e-fc006ba2f7e8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 23:52:35 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:35.939 162245 INFO neutron.agent.ovn.metadata.agent [-] Port 7ef1c20b-9548-4ca9-9c3e-fc006ba2f7e8 in datapath f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e bound to our chassis#033[00m
Oct 10 23:52:35 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:35.943 162245 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e#033[00m
Oct 10 23:52:35 np0005480824 NetworkManager[44969]: <info>  [1760154755.9488] device (tap7ef1c20b-95): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 10 23:52:35 np0005480824 NetworkManager[44969]: <info>  [1760154755.9506] device (tap7ef1c20b-95): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 10 23:52:35 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:35.974 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[e2284598-6349-466e-9838-eb7c92af6420]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:52:35 np0005480824 systemd-machined[215071]: New machine qemu-12-instance-0000000c.
Oct 10 23:52:35 np0005480824 nova_compute[260089]: 2025-10-11 03:52:35.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:52:35 np0005480824 ovn_controller[152667]: 2025-10-11T03:52:35Z|00110|binding|INFO|Setting lport 7ef1c20b-9548-4ca9-9c3e-fc006ba2f7e8 ovn-installed in OVS
Oct 10 23:52:35 np0005480824 ovn_controller[152667]: 2025-10-11T03:52:35Z|00111|binding|INFO|Setting lport 7ef1c20b-9548-4ca9-9c3e-fc006ba2f7e8 up in Southbound
Oct 10 23:52:35 np0005480824 nova_compute[260089]: 2025-10-11 03:52:35.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:52:35 np0005480824 nova_compute[260089]: 2025-10-11 03:52:35.993 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:52:35 np0005480824 systemd[1]: Started Virtual Machine qemu-12-instance-0000000c.
Oct 10 23:52:36 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:36.027 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[da7ff21d-2c92-4092-b42f-d409034347ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:52:36 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:36.031 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[15a1022f-be7e-4cb8-b0f5-63e024618718]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:52:36 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:52:36 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2207726882' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:52:36 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:36.075 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[3fd9ae5d-8636-48a0-b03e-9187dd2588f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.083 2 DEBUG oslo_concurrency.processutils [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:52:36 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:36.114 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[cba8866f-8e2d-4745-b2c0-dc9babf4001d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf0e7e6a7-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:23:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 38], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 416214, 'reachable_time': 37142, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283008, 'error': None, 'target': 'ovnmeta-f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.116 2 DEBUG nova.storage.rbd_utils [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] rbd image 7452e9a5-0e1b-4c0c-816b-57e0ea976747_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.121 2 DEBUG oslo_concurrency.processutils [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:52:36 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:36.140 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[a1206c5f-dba4-49b3-8c00-f371596ac004]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf0e7e6a7-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 416226, 'tstamp': 416226}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283028, 'error': None, 'target': 'ovnmeta-f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf0e7e6a7-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 416228, 'tstamp': 416228}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283028, 'error': None, 'target': 'ovnmeta-f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:52:36 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:36.143 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf0e7e6a7-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:52:36 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:36.148 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf0e7e6a7-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:52:36 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:36.148 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 10 23:52:36 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:36.149 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf0e7e6a7-10, col_values=(('external_ids', {'iface-id': 'fd35b05a-29b5-4478-aa1a-5883664f9c48'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:52:36 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:36.149 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:52:36 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:52:36 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1880400051' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.455 2 DEBUG nova.network.neutron [req-dbd04d76-4466-4723-a69e-f1b2de3f5efe req-f80e8cf9-14d8-42d6-a8ce-886d66ded44a 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Updated VIF entry in instance network info cache for port 7ef1c20b-9548-4ca9-9c3e-fc006ba2f7e8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.456 2 DEBUG nova.network.neutron [req-dbd04d76-4466-4723-a69e-f1b2de3f5efe req-f80e8cf9-14d8-42d6-a8ce-886d66ded44a 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Updating instance_info_cache with network_info: [{"id": "7ef1c20b-9548-4ca9-9c3e-fc006ba2f7e8", "address": "fa:16:3e:0d:51:d8", "network": {"id": "f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e", "bridge": "br-int", "label": "tempest-TestStampPattern-337427362-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "944395b4a11c4a9182fda518dc7bd2d8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7ef1c20b-95", "ovs_interfaceid": "7ef1c20b-9548-4ca9-9c3e-fc006ba2f7e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.467 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.476 2 DEBUG nova.compute.provider_tree [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.480 2 DEBUG oslo_concurrency.lockutils [req-dbd04d76-4466-4723-a69e-f1b2de3f5efe req-f80e8cf9-14d8-42d6-a8ce-886d66ded44a 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Releasing lock "refresh_cache-3b8741f5-afdc-4745-b74c-2578bc643be4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.493 2 DEBUG nova.scheduler.client.report [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.530 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.531 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.727s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.539 2 DEBUG nova.compute.manager [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.540 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760154756.5390134, 92d928f8-a506-405f-8aa4-6d539d08e00f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.541 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] VM Started (Lifecycle Event)#033[00m
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.546 2 DEBUG nova.virt.libvirt.driver [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.550 2 INFO nova.virt.libvirt.driver [-] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] Instance spawned successfully.#033[00m
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.551 2 DEBUG nova.virt.libvirt.driver [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 10 23:52:36 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:52:36 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3281727436' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.573 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.587 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.592 2 DEBUG oslo_concurrency.processutils [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.593 2 DEBUG nova.virt.libvirt.vif [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T03:52:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesExtendAttachedTest-instance-2122000193',display_name='tempest-VolumesExtendAttachedTest-instance-2122000193',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesextendattachedtest-instance-2122000193',id=13,image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAB43gKek6h5eWr8uy3dGQ4wGOfOJNWCIFn83OQ1V9D+dUeP1elAFzU/6cuwBFhnCFRlGKa19y6oD8NsYmuKvToMTw3i+pr/atntuAIFJNEtBIzMWZe8V5JBAXH4tBd+aA==',key_name='tempest-keypair-1899177926',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a65cd418eaad4366991b123d6535a576',ramdisk_id='',reservation_id='r-61y4vi8c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesExtendAttachedTest-1964542468',owner_user_name='tempest-VolumesExtendAttachedTest-1964542468-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T03:52:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='cde6845b6b8d482b95a72e38b1db93d3',uuid=7452e9a5-0e1b-4c0c-816b-57e0ea976747,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5af98ddd-2cff-4fe8-abcf-414110faa17d", "address": "fa:16:3e:79:18:58", "network": {"id": "1ac3beb3-eeb0-47be-b56e-672742cfe517", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-1691573125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a65cd418eaad4366991b123d6535a576", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5af98ddd-2c", "ovs_interfaceid": "5af98ddd-2cff-4fe8-abcf-414110faa17d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.593 2 DEBUG nova.network.os_vif_util [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Converting VIF {"id": "5af98ddd-2cff-4fe8-abcf-414110faa17d", "address": "fa:16:3e:79:18:58", "network": {"id": "1ac3beb3-eeb0-47be-b56e-672742cfe517", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-1691573125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a65cd418eaad4366991b123d6535a576", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5af98ddd-2c", "ovs_interfaceid": "5af98ddd-2cff-4fe8-abcf-414110faa17d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.594 2 DEBUG nova.network.os_vif_util [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:79:18:58,bridge_name='br-int',has_traffic_filtering=True,id=5af98ddd-2cff-4fe8-abcf-414110faa17d,network=Network(1ac3beb3-eeb0-47be-b56e-672742cfe517),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5af98ddd-2c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.596 2 DEBUG nova.objects.instance [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7452e9a5-0e1b-4c0c-816b-57e0ea976747 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.598 2 DEBUG nova.virt.libvirt.driver [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.598 2 DEBUG nova.virt.libvirt.driver [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.599 2 DEBUG nova.virt.libvirt.driver [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.599 2 DEBUG nova.virt.libvirt.driver [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.600 2 DEBUG nova.virt.libvirt.driver [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.600 2 DEBUG nova.virt.libvirt.driver [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.608 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.608 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760154756.5405686, 92d928f8-a506-405f-8aa4-6d539d08e00f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.609 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] VM Paused (Lifecycle Event)#033[00m
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.640 2 DEBUG nova.virt.libvirt.driver [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] End _get_guest_xml xml=<domain type="kvm">
Oct 10 23:52:36 np0005480824 nova_compute[260089]:  <uuid>7452e9a5-0e1b-4c0c-816b-57e0ea976747</uuid>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:  <name>instance-0000000d</name>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:  <memory>131072</memory>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:  <vcpu>1</vcpu>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:  <metadata>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 10 23:52:36 np0005480824 nova_compute[260089]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:      <nova:name>tempest-VolumesExtendAttachedTest-instance-2122000193</nova:name>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:      <nova:creationTime>2025-10-11 03:52:35</nova:creationTime>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:      <nova:flavor name="m1.nano">
Oct 10 23:52:36 np0005480824 nova_compute[260089]:        <nova:memory>128</nova:memory>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:        <nova:disk>1</nova:disk>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:        <nova:swap>0</nova:swap>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:        <nova:ephemeral>0</nova:ephemeral>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:        <nova:vcpus>1</nova:vcpus>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:      </nova:flavor>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:      <nova:owner>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:        <nova:user uuid="cde6845b6b8d482b95a72e38b1db93d3">tempest-VolumesExtendAttachedTest-1964542468-project-member</nova:user>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:        <nova:project uuid="a65cd418eaad4366991b123d6535a576">tempest-VolumesExtendAttachedTest-1964542468</nova:project>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:      </nova:owner>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:      <nova:root type="image" uuid="7caca022-7dcc-40a9-8bd8-eb7d91b29390"/>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:      <nova:ports>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:        <nova:port uuid="5af98ddd-2cff-4fe8-abcf-414110faa17d">
Oct 10 23:52:36 np0005480824 nova_compute[260089]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:        </nova:port>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:      </nova:ports>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:    </nova:instance>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:  </metadata>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:  <sysinfo type="smbios">
Oct 10 23:52:36 np0005480824 nova_compute[260089]:    <system>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:      <entry name="manufacturer">RDO</entry>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:      <entry name="product">OpenStack Compute</entry>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:      <entry name="serial">7452e9a5-0e1b-4c0c-816b-57e0ea976747</entry>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:      <entry name="uuid">7452e9a5-0e1b-4c0c-816b-57e0ea976747</entry>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:      <entry name="family">Virtual Machine</entry>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:    </system>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:  </sysinfo>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:  <os>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:    <boot dev="hd"/>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:    <smbios mode="sysinfo"/>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:  </os>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:  <features>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:    <acpi/>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:    <apic/>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:    <vmcoreinfo/>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:  </features>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:  <clock offset="utc">
Oct 10 23:52:36 np0005480824 nova_compute[260089]:    <timer name="pit" tickpolicy="delay"/>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:    <timer name="hpet" present="no"/>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:  </clock>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:  <cpu mode="host-model" match="exact">
Oct 10 23:52:36 np0005480824 nova_compute[260089]:    <topology sockets="1" cores="1" threads="1"/>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:  </cpu>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:  <devices>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:    <disk type="network" device="disk">
Oct 10 23:52:36 np0005480824 nova_compute[260089]:      <driver type="raw" cache="none"/>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:      <source protocol="rbd" name="vms/7452e9a5-0e1b-4c0c-816b-57e0ea976747_disk">
Oct 10 23:52:36 np0005480824 nova_compute[260089]:        <host name="192.168.122.100" port="6789"/>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:      </source>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:      <auth username="openstack">
Oct 10 23:52:36 np0005480824 nova_compute[260089]:        <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:      </auth>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:      <target dev="vda" bus="virtio"/>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:    </disk>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:    <disk type="network" device="cdrom">
Oct 10 23:52:36 np0005480824 nova_compute[260089]:      <driver type="raw" cache="none"/>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:      <source protocol="rbd" name="vms/7452e9a5-0e1b-4c0c-816b-57e0ea976747_disk.config">
Oct 10 23:52:36 np0005480824 nova_compute[260089]:        <host name="192.168.122.100" port="6789"/>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:      </source>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:      <auth username="openstack">
Oct 10 23:52:36 np0005480824 nova_compute[260089]:        <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:      </auth>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:      <target dev="sda" bus="sata"/>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:    </disk>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:    <interface type="ethernet">
Oct 10 23:52:36 np0005480824 nova_compute[260089]:      <mac address="fa:16:3e:79:18:58"/>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:      <model type="virtio"/>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:      <driver name="vhost" rx_queue_size="512"/>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:      <mtu size="1442"/>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:      <target dev="tap5af98ddd-2c"/>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:    </interface>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:    <serial type="pty">
Oct 10 23:52:36 np0005480824 nova_compute[260089]:      <log file="/var/lib/nova/instances/7452e9a5-0e1b-4c0c-816b-57e0ea976747/console.log" append="off"/>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:    </serial>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:    <video>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:      <model type="virtio"/>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:    </video>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:    <input type="tablet" bus="usb"/>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:    <rng model="virtio">
Oct 10 23:52:36 np0005480824 nova_compute[260089]:      <backend model="random">/dev/urandom</backend>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:    </rng>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root"/>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:    <controller type="usb" index="0"/>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:    <memballoon model="virtio">
Oct 10 23:52:36 np0005480824 nova_compute[260089]:      <stats period="10"/>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:    </memballoon>
Oct 10 23:52:36 np0005480824 nova_compute[260089]:  </devices>
Oct 10 23:52:36 np0005480824 nova_compute[260089]: </domain>
Oct 10 23:52:36 np0005480824 nova_compute[260089]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.641 2 DEBUG nova.compute.manager [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Preparing to wait for external event network-vif-plugged-5af98ddd-2cff-4fe8-abcf-414110faa17d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.642 2 DEBUG oslo_concurrency.lockutils [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Acquiring lock "7452e9a5-0e1b-4c0c-816b-57e0ea976747-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.642 2 DEBUG oslo_concurrency.lockutils [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Lock "7452e9a5-0e1b-4c0c-816b-57e0ea976747-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.642 2 DEBUG oslo_concurrency.lockutils [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Lock "7452e9a5-0e1b-4c0c-816b-57e0ea976747-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.643 2 DEBUG nova.virt.libvirt.vif [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T03:52:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesExtendAttachedTest-instance-2122000193',display_name='tempest-VolumesExtendAttachedTest-instance-2122000193',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesextendattachedtest-instance-2122000193',id=13,image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAB43gKek6h5eWr8uy3dGQ4wGOfOJNWCIFn83OQ1V9D+dUeP1elAFzU/6cuwBFhnCFRlGKa19y6oD8NsYmuKvToMTw3i+pr/atntuAIFJNEtBIzMWZe8V5JBAXH4tBd+aA==',key_name='tempest-keypair-1899177926',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a65cd418eaad4366991b123d6535a576',ramdisk_id='',reservation_id='r-61y4vi8c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesExtendAttachedTest-1964542468',owner_user_name='tempest-VolumesExtendAttachedTest-1964542468-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T03:52:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='cde6845b6b8d482b95a72e38b1db93d3',uuid=7452e9a5-0e1b-4c0c-816b-57e0ea976747,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5af98ddd-2cff-4fe8-abcf-414110faa17d", "address": "fa:16:3e:79:18:58", "network": {"id": "1ac3beb3-eeb0-47be-b56e-672742cfe517", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-1691573125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a65cd418eaad4366991b123d6535a576", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5af98ddd-2c", "ovs_interfaceid": "5af98ddd-2cff-4fe8-abcf-414110faa17d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.643 2 DEBUG nova.network.os_vif_util [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Converting VIF {"id": "5af98ddd-2cff-4fe8-abcf-414110faa17d", "address": "fa:16:3e:79:18:58", "network": {"id": "1ac3beb3-eeb0-47be-b56e-672742cfe517", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-1691573125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a65cd418eaad4366991b123d6535a576", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5af98ddd-2c", "ovs_interfaceid": "5af98ddd-2cff-4fe8-abcf-414110faa17d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.644 2 DEBUG nova.network.os_vif_util [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:79:18:58,bridge_name='br-int',has_traffic_filtering=True,id=5af98ddd-2cff-4fe8-abcf-414110faa17d,network=Network(1ac3beb3-eeb0-47be-b56e-672742cfe517),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5af98ddd-2c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.645 2 DEBUG os_vif [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:79:18:58,bridge_name='br-int',has_traffic_filtering=True,id=5af98ddd-2cff-4fe8-abcf-414110faa17d,network=Network(1ac3beb3-eeb0-47be-b56e-672742cfe517),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5af98ddd-2c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.646 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.647 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.648 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.652 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5af98ddd-2c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.653 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5af98ddd-2c, col_values=(('external_ids', {'iface-id': '5af98ddd-2cff-4fe8-abcf-414110faa17d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:79:18:58', 'vm-uuid': '7452e9a5-0e1b-4c0c-816b-57e0ea976747'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.654 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760154756.5494094, 92d928f8-a506-405f-8aa4-6d539d08e00f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:52:36 np0005480824 NetworkManager[44969]: <info>  [1760154756.6559] manager: (tap5af98ddd-2c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/73)
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.655 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] VM Resumed (Lifecycle Event)#033[00m
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.663 2 INFO os_vif [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:79:18:58,bridge_name='br-int',has_traffic_filtering=True,id=5af98ddd-2cff-4fe8-abcf-414110faa17d,network=Network(1ac3beb3-eeb0-47be-b56e-672742cfe517),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5af98ddd-2c')#033[00m
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.678 2 DEBUG nova.compute.manager [req-4fd9e1e2-c187-4b4a-bedc-21fe8fdf54a0 req-ef5ed91c-4f5a-4e9c-8195-e6a3a47d2fe1 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Received event network-vif-plugged-7ef1c20b-9548-4ca9-9c3e-fc006ba2f7e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.679 2 DEBUG oslo_concurrency.lockutils [req-4fd9e1e2-c187-4b4a-bedc-21fe8fdf54a0 req-ef5ed91c-4f5a-4e9c-8195-e6a3a47d2fe1 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "3b8741f5-afdc-4745-b74c-2578bc643be4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.679 2 DEBUG oslo_concurrency.lockutils [req-4fd9e1e2-c187-4b4a-bedc-21fe8fdf54a0 req-ef5ed91c-4f5a-4e9c-8195-e6a3a47d2fe1 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "3b8741f5-afdc-4745-b74c-2578bc643be4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.679 2 DEBUG oslo_concurrency.lockutils [req-4fd9e1e2-c187-4b4a-bedc-21fe8fdf54a0 req-ef5ed91c-4f5a-4e9c-8195-e6a3a47d2fe1 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "3b8741f5-afdc-4745-b74c-2578bc643be4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.680 2 DEBUG nova.compute.manager [req-4fd9e1e2-c187-4b4a-bedc-21fe8fdf54a0 req-ef5ed91c-4f5a-4e9c-8195-e6a3a47d2fe1 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Processing event network-vif-plugged-7ef1c20b-9548-4ca9-9c3e-fc006ba2f7e8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.681 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.686 2 INFO nova.compute.manager [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] Took 8.54 seconds to spawn the instance on the hypervisor.#033[00m
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.686 2 DEBUG nova.compute.manager [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.689 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 10 23:52:36 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1248: 321 pgs: 321 active+clean; 295 MiB data, 411 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 2.1 MiB/s wr, 98 op/s
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.724 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.761 2 INFO nova.compute.manager [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] Took 10.89 seconds to build instance.#033[00m
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.770 2 DEBUG nova.virt.libvirt.driver [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.771 2 DEBUG nova.virt.libvirt.driver [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.771 2 DEBUG nova.virt.libvirt.driver [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] No VIF found with MAC fa:16:3e:79:18:58, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.771 2 INFO nova.virt.libvirt.driver [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Using config drive#033[00m
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.792 2 DEBUG nova.storage.rbd_utils [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] rbd image 7452e9a5-0e1b-4c0c-816b-57e0ea976747_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:52:36 np0005480824 nova_compute[260089]: 2025-10-11 03:52:36.799 2 DEBUG oslo_concurrency.lockutils [None req-c6e48ecb-4d41-43ee-be80-69582286d202 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "92d928f8-a506-405f-8aa4-6d539d08e00f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.991s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:52:37 np0005480824 nova_compute[260089]: 2025-10-11 03:52:37.086 2 INFO nova.virt.libvirt.driver [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Creating config drive at /var/lib/nova/instances/7452e9a5-0e1b-4c0c-816b-57e0ea976747/disk.config#033[00m
Oct 10 23:52:37 np0005480824 nova_compute[260089]: 2025-10-11 03:52:37.092 2 DEBUG oslo_concurrency.processutils [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7452e9a5-0e1b-4c0c-816b-57e0ea976747/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj2dh9jhx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:52:37 np0005480824 nova_compute[260089]: 2025-10-11 03:52:37.225 2 DEBUG oslo_concurrency.processutils [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7452e9a5-0e1b-4c0c-816b-57e0ea976747/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj2dh9jhx" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:52:37 np0005480824 nova_compute[260089]: 2025-10-11 03:52:37.249 2 DEBUG nova.storage.rbd_utils [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] rbd image 7452e9a5-0e1b-4c0c-816b-57e0ea976747_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:52:37 np0005480824 nova_compute[260089]: 2025-10-11 03:52:37.252 2 DEBUG oslo_concurrency.processutils [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7452e9a5-0e1b-4c0c-816b-57e0ea976747/disk.config 7452e9a5-0e1b-4c0c-816b-57e0ea976747_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:52:37 np0005480824 nova_compute[260089]: 2025-10-11 03:52:37.273 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760154757.2284558, 3b8741f5-afdc-4745-b74c-2578bc643be4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:52:37 np0005480824 nova_compute[260089]: 2025-10-11 03:52:37.273 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] VM Started (Lifecycle Event)#033[00m
Oct 10 23:52:37 np0005480824 nova_compute[260089]: 2025-10-11 03:52:37.277 2 DEBUG nova.network.neutron [req-8f6a045b-c153-49cb-ba3e-92ed9af53b86 req-a9f6bd1b-cdcd-42d8-a6d5-ec68ea5a2ba2 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Updated VIF entry in instance network info cache for port 5af98ddd-2cff-4fe8-abcf-414110faa17d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 10 23:52:37 np0005480824 nova_compute[260089]: 2025-10-11 03:52:37.277 2 DEBUG nova.network.neutron [req-8f6a045b-c153-49cb-ba3e-92ed9af53b86 req-a9f6bd1b-cdcd-42d8-a6d5-ec68ea5a2ba2 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Updating instance_info_cache with network_info: [{"id": "5af98ddd-2cff-4fe8-abcf-414110faa17d", "address": "fa:16:3e:79:18:58", "network": {"id": "1ac3beb3-eeb0-47be-b56e-672742cfe517", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-1691573125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a65cd418eaad4366991b123d6535a576", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5af98ddd-2c", "ovs_interfaceid": "5af98ddd-2cff-4fe8-abcf-414110faa17d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:52:37 np0005480824 nova_compute[260089]: 2025-10-11 03:52:37.279 2 DEBUG nova.compute.manager [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 10 23:52:37 np0005480824 nova_compute[260089]: 2025-10-11 03:52:37.282 2 DEBUG nova.virt.libvirt.driver [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 10 23:52:37 np0005480824 nova_compute[260089]: 2025-10-11 03:52:37.289 2 INFO nova.virt.libvirt.driver [-] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Instance spawned successfully.#033[00m
Oct 10 23:52:37 np0005480824 nova_compute[260089]: 2025-10-11 03:52:37.289 2 INFO nova.compute.manager [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Took 9.67 seconds to spawn the instance on the hypervisor.#033[00m
Oct 10 23:52:37 np0005480824 nova_compute[260089]: 2025-10-11 03:52:37.290 2 DEBUG nova.compute.manager [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:52:37 np0005480824 nova_compute[260089]: 2025-10-11 03:52:37.297 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:52:37 np0005480824 nova_compute[260089]: 2025-10-11 03:52:37.298 2 DEBUG oslo_concurrency.lockutils [req-8f6a045b-c153-49cb-ba3e-92ed9af53b86 req-a9f6bd1b-cdcd-42d8-a6d5-ec68ea5a2ba2 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Releasing lock "refresh_cache-7452e9a5-0e1b-4c0c-816b-57e0ea976747" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:52:37 np0005480824 nova_compute[260089]: 2025-10-11 03:52:37.300 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 10 23:52:37 np0005480824 nova_compute[260089]: 2025-10-11 03:52:37.327 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 10 23:52:37 np0005480824 nova_compute[260089]: 2025-10-11 03:52:37.328 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760154757.2285802, 3b8741f5-afdc-4745-b74c-2578bc643be4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:52:37 np0005480824 nova_compute[260089]: 2025-10-11 03:52:37.328 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] VM Paused (Lifecycle Event)#033[00m
Oct 10 23:52:37 np0005480824 nova_compute[260089]: 2025-10-11 03:52:37.347 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:52:37 np0005480824 nova_compute[260089]: 2025-10-11 03:52:37.357 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760154757.2821891, 3b8741f5-afdc-4745-b74c-2578bc643be4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:52:37 np0005480824 nova_compute[260089]: 2025-10-11 03:52:37.357 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] VM Resumed (Lifecycle Event)#033[00m
Oct 10 23:52:37 np0005480824 nova_compute[260089]: 2025-10-11 03:52:37.359 2 INFO nova.compute.manager [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Took 10.67 seconds to build instance.#033[00m
Oct 10 23:52:37 np0005480824 nova_compute[260089]: 2025-10-11 03:52:37.377 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:52:37 np0005480824 nova_compute[260089]: 2025-10-11 03:52:37.378 2 DEBUG oslo_concurrency.lockutils [None req-f172ad88-6ef4-4caa-9d4b-fcaf2b1e3448 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Lock "3b8741f5-afdc-4745-b74c-2578bc643be4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.785s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:52:37 np0005480824 nova_compute[260089]: 2025-10-11 03:52:37.381 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 10 23:52:37 np0005480824 nova_compute[260089]: 2025-10-11 03:52:37.429 2 DEBUG oslo_concurrency.processutils [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7452e9a5-0e1b-4c0c-816b-57e0ea976747/disk.config 7452e9a5-0e1b-4c0c-816b-57e0ea976747_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.177s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:52:37 np0005480824 nova_compute[260089]: 2025-10-11 03:52:37.430 2 INFO nova.virt.libvirt.driver [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Deleting local config drive /var/lib/nova/instances/7452e9a5-0e1b-4c0c-816b-57e0ea976747/disk.config because it was imported into RBD.#033[00m
Oct 10 23:52:37 np0005480824 kernel: tap5af98ddd-2c: entered promiscuous mode
Oct 10 23:52:37 np0005480824 NetworkManager[44969]: <info>  [1760154757.4994] manager: (tap5af98ddd-2c): new Tun device (/org/freedesktop/NetworkManager/Devices/74)
Oct 10 23:52:37 np0005480824 systemd-udevd[283117]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 23:52:37 np0005480824 nova_compute[260089]: 2025-10-11 03:52:37.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:52:37 np0005480824 ovn_controller[152667]: 2025-10-11T03:52:37Z|00112|binding|INFO|Claiming lport 5af98ddd-2cff-4fe8-abcf-414110faa17d for this chassis.
Oct 10 23:52:37 np0005480824 ovn_controller[152667]: 2025-10-11T03:52:37Z|00113|binding|INFO|5af98ddd-2cff-4fe8-abcf-414110faa17d: Claiming fa:16:3e:79:18:58 10.100.0.6
Oct 10 23:52:37 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:37.514 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:79:18:58 10.100.0.6'], port_security=['fa:16:3e:79:18:58 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '7452e9a5-0e1b-4c0c-816b-57e0ea976747', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ac3beb3-eeb0-47be-b56e-672742cfe517', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a65cd418eaad4366991b123d6535a576', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1583d4a5-79bd-48da-8c70-83dbe437f172', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c58f84d5-6196-4ce5-aee9-a8bfac4d946a, chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], logical_port=5af98ddd-2cff-4fe8-abcf-414110faa17d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 23:52:37 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:37.516 162245 INFO neutron.agent.ovn.metadata.agent [-] Port 5af98ddd-2cff-4fe8-abcf-414110faa17d in datapath 1ac3beb3-eeb0-47be-b56e-672742cfe517 bound to our chassis#033[00m
Oct 10 23:52:37 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:37.518 162245 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1ac3beb3-eeb0-47be-b56e-672742cfe517#033[00m
Oct 10 23:52:37 np0005480824 ovn_controller[152667]: 2025-10-11T03:52:37Z|00114|binding|INFO|Setting lport 5af98ddd-2cff-4fe8-abcf-414110faa17d ovn-installed in OVS
Oct 10 23:52:37 np0005480824 ovn_controller[152667]: 2025-10-11T03:52:37Z|00115|binding|INFO|Setting lport 5af98ddd-2cff-4fe8-abcf-414110faa17d up in Southbound
Oct 10 23:52:37 np0005480824 nova_compute[260089]: 2025-10-11 03:52:37.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:52:37 np0005480824 nova_compute[260089]: 2025-10-11 03:52:37.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:52:37 np0005480824 NetworkManager[44969]: <info>  [1760154757.5349] device (tap5af98ddd-2c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 10 23:52:37 np0005480824 NetworkManager[44969]: <info>  [1760154757.5362] device (tap5af98ddd-2c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 10 23:52:37 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:37.539 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[79492e5e-ab93-4bf5-8e52-23d26d3b269b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:52:37 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:37.540 162245 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1ac3beb3-e1 in ovnmeta-1ac3beb3-eeb0-47be-b56e-672742cfe517 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 10 23:52:37 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:37.542 267859 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1ac3beb3-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 10 23:52:37 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:37.542 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[7cdd3973-0474-480e-82a5-fb000e169f5e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:52:37 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:37.543 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[d5c0d456-45c6-41e3-867f-9ff5795ff943]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:52:37 np0005480824 systemd-machined[215071]: New machine qemu-13-instance-0000000d.
Oct 10 23:52:37 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:37.557 162666 DEBUG oslo.privsep.daemon [-] privsep: reply[ab167ca0-72a5-4368-a53f-180522e21cbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:52:37 np0005480824 systemd[1]: Started Virtual Machine qemu-13-instance-0000000d.
Oct 10 23:52:37 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:37.590 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[8f2a917a-6165-44d5-83a2-ce70f477988f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:52:37 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:37.657 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[00566785-48dc-4f31-983e-7563908e0603]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:52:37 np0005480824 NetworkManager[44969]: <info>  [1760154757.6688] manager: (tap1ac3beb3-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/75)
Oct 10 23:52:37 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:37.666 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[1f695e5c-07db-4366-9c17-e39608e78a1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:52:37 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:37.719 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[3bb33770-66f8-41ab-831c-cda29631d0d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:52:37 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:37.723 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[b54549af-d352-4562-9bd5-30338c54e215]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:52:37 np0005480824 NetworkManager[44969]: <info>  [1760154757.7535] device (tap1ac3beb3-e0): carrier: link connected
Oct 10 23:52:37 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:37.763 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[095bdd65-c82c-4926-b94c-cc5054725489]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:52:37 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:37.792 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[11b221e9-6eba-4aa3-8ae8-7454868e29b8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1ac3beb3-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:83:19:e5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 44], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 422541, 'reachable_time': 24221, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283221, 'error': None, 'target': 'ovnmeta-1ac3beb3-eeb0-47be-b56e-672742cfe517', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:52:37 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:37.814 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[c70aaddf-74de-4fb9-95c7-b401803cc6f6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe83:19e5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 422541, 'tstamp': 422541}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283222, 'error': None, 'target': 'ovnmeta-1ac3beb3-eeb0-47be-b56e-672742cfe517', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:52:37 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:37.835 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[0113e922-6311-4b0c-aea6-92df4a8cd875]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1ac3beb3-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:83:19:e5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 44], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 422541, 'reachable_time': 24221, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 283223, 'error': None, 'target': 'ovnmeta-1ac3beb3-eeb0-47be-b56e-672742cfe517', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:52:37 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:37.875 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[e36abad6-1954-45a9-9d20-3f8f184e157d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:52:37 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:37.961 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[ffcf6c9b-ee30-46dc-87df-c5eb0450f038]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:52:37 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:37.964 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1ac3beb3-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:52:37 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:37.964 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 10 23:52:37 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:37.965 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1ac3beb3-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:52:37 np0005480824 kernel: tap1ac3beb3-e0: entered promiscuous mode
Oct 10 23:52:37 np0005480824 NetworkManager[44969]: <info>  [1760154757.9694] manager: (tap1ac3beb3-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/76)
Oct 10 23:52:37 np0005480824 nova_compute[260089]: 2025-10-11 03:52:37.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:52:37 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:37.976 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1ac3beb3-e0, col_values=(('external_ids', {'iface-id': '1548f162-a251-4e9c-8d53-f666bf452295'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:52:37 np0005480824 ovn_controller[152667]: 2025-10-11T03:52:37Z|00116|binding|INFO|Releasing lport 1548f162-a251-4e9c-8d53-f666bf452295 from this chassis (sb_readonly=0)
Oct 10 23:52:37 np0005480824 nova_compute[260089]: 2025-10-11 03:52:37.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:52:37 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:37.981 162245 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1ac3beb3-eeb0-47be-b56e-672742cfe517.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1ac3beb3-eeb0-47be-b56e-672742cfe517.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 10 23:52:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 23:52:37 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:37.983 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[8b0ba969-afbc-45d5-861a-7c8e68529697]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:52:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:52:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 23:52:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:52:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001107559358110308 of space, bias 1.0, pg target 0.3322678074330924 quantized to 32 (current 32)
Oct 10 23:52:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:52:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0003839606938204076 of space, bias 1.0, pg target 0.11518820814612228 quantized to 32 (current 32)
Oct 10 23:52:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:52:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 23:52:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:52:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014247498087191508 of space, bias 1.0, pg target 0.42742494261574526 quantized to 32 (current 32)
Oct 10 23:52:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:52:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 23:52:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:52:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:52:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:52:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 10 23:52:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:52:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 23:52:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:52:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:52:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:52:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 10 23:52:37 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:37.985 162245 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 10 23:52:37 np0005480824 ovn_metadata_agent[162240]: global
Oct 10 23:52:37 np0005480824 ovn_metadata_agent[162240]:    log         /dev/log local0 debug
Oct 10 23:52:37 np0005480824 ovn_metadata_agent[162240]:    log-tag     haproxy-metadata-proxy-1ac3beb3-eeb0-47be-b56e-672742cfe517
Oct 10 23:52:37 np0005480824 ovn_metadata_agent[162240]:    user        root
Oct 10 23:52:37 np0005480824 ovn_metadata_agent[162240]:    group       root
Oct 10 23:52:37 np0005480824 ovn_metadata_agent[162240]:    maxconn     1024
Oct 10 23:52:37 np0005480824 ovn_metadata_agent[162240]:    pidfile     /var/lib/neutron/external/pids/1ac3beb3-eeb0-47be-b56e-672742cfe517.pid.haproxy
Oct 10 23:52:37 np0005480824 ovn_metadata_agent[162240]:    daemon
Oct 10 23:52:37 np0005480824 ovn_metadata_agent[162240]: 
Oct 10 23:52:37 np0005480824 ovn_metadata_agent[162240]: defaults
Oct 10 23:52:37 np0005480824 ovn_metadata_agent[162240]:    log global
Oct 10 23:52:37 np0005480824 ovn_metadata_agent[162240]:    mode http
Oct 10 23:52:37 np0005480824 ovn_metadata_agent[162240]:    option httplog
Oct 10 23:52:37 np0005480824 ovn_metadata_agent[162240]:    option dontlognull
Oct 10 23:52:37 np0005480824 ovn_metadata_agent[162240]:    option http-server-close
Oct 10 23:52:37 np0005480824 ovn_metadata_agent[162240]:    option forwardfor
Oct 10 23:52:37 np0005480824 ovn_metadata_agent[162240]:    retries                 3
Oct 10 23:52:37 np0005480824 ovn_metadata_agent[162240]:    timeout http-request    30s
Oct 10 23:52:37 np0005480824 ovn_metadata_agent[162240]:    timeout connect         30s
Oct 10 23:52:37 np0005480824 ovn_metadata_agent[162240]:    timeout client          32s
Oct 10 23:52:37 np0005480824 ovn_metadata_agent[162240]:    timeout server          32s
Oct 10 23:52:37 np0005480824 ovn_metadata_agent[162240]:    timeout http-keep-alive 30s
Oct 10 23:52:37 np0005480824 ovn_metadata_agent[162240]: 
Oct 10 23:52:37 np0005480824 ovn_metadata_agent[162240]: 
Oct 10 23:52:37 np0005480824 ovn_metadata_agent[162240]: listen listener
Oct 10 23:52:37 np0005480824 ovn_metadata_agent[162240]:    bind 169.254.169.254:80
Oct 10 23:52:37 np0005480824 ovn_metadata_agent[162240]:    server metadata /var/lib/neutron/metadata_proxy
Oct 10 23:52:37 np0005480824 ovn_metadata_agent[162240]:    http-request add-header X-OVN-Network-ID 1ac3beb3-eeb0-47be-b56e-672742cfe517
Oct 10 23:52:37 np0005480824 ovn_metadata_agent[162240]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 10 23:52:37 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:37.990 162245 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1ac3beb3-eeb0-47be-b56e-672742cfe517', 'env', 'PROCESS_TAG=haproxy-1ac3beb3-eeb0-47be-b56e-672742cfe517', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1ac3beb3-eeb0-47be-b56e-672742cfe517.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 10 23:52:38 np0005480824 nova_compute[260089]: 2025-10-11 03:52:38.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:52:38 np0005480824 podman[283297]: 2025-10-11 03:52:38.418819628 +0000 UTC m=+0.068337444 container create de26c471c591cb0b453e82fd06fc9c662dcb17cb6776047fcef6d6eee51b7236 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1ac3beb3-eeb0-47be-b56e-672742cfe517, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_managed=true)
Oct 10 23:52:38 np0005480824 systemd[1]: Started libpod-conmon-de26c471c591cb0b453e82fd06fc9c662dcb17cb6776047fcef6d6eee51b7236.scope.
Oct 10 23:52:38 np0005480824 podman[283297]: 2025-10-11 03:52:38.382568598 +0000 UTC m=+0.032086444 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 10 23:52:38 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:52:38 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19d7f1903ba0ce5add9fa11a7df2be13ae865e2c01ee379d5dbbb93a783fe31e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 10 23:52:38 np0005480824 nova_compute[260089]: 2025-10-11 03:52:38.498 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760154758.4976413, 7452e9a5-0e1b-4c0c-816b-57e0ea976747 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:52:38 np0005480824 nova_compute[260089]: 2025-10-11 03:52:38.500 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] VM Started (Lifecycle Event)#033[00m
Oct 10 23:52:38 np0005480824 podman[283297]: 2025-10-11 03:52:38.511900054 +0000 UTC m=+0.161417870 container init de26c471c591cb0b453e82fd06fc9c662dcb17cb6776047fcef6d6eee51b7236 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1ac3beb3-eeb0-47be-b56e-672742cfe517, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:52:38 np0005480824 podman[283297]: 2025-10-11 03:52:38.518470519 +0000 UTC m=+0.167988325 container start de26c471c591cb0b453e82fd06fc9c662dcb17cb6776047fcef6d6eee51b7236 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1ac3beb3-eeb0-47be-b56e-672742cfe517, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 10 23:52:38 np0005480824 nova_compute[260089]: 2025-10-11 03:52:38.526 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:52:38 np0005480824 nova_compute[260089]: 2025-10-11 03:52:38.531 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760154758.4977515, 7452e9a5-0e1b-4c0c-816b-57e0ea976747 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:52:38 np0005480824 nova_compute[260089]: 2025-10-11 03:52:38.532 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] VM Paused (Lifecycle Event)#033[00m
Oct 10 23:52:38 np0005480824 neutron-haproxy-ovnmeta-1ac3beb3-eeb0-47be-b56e-672742cfe517[283313]: [NOTICE]   (283334) : New worker (283336) forked
Oct 10 23:52:38 np0005480824 neutron-haproxy-ovnmeta-1ac3beb3-eeb0-47be-b56e-672742cfe517[283313]: [NOTICE]   (283334) : Loading success.
Oct 10 23:52:38 np0005480824 podman[283310]: 2025-10-11 03:52:38.545415841 +0000 UTC m=+0.082247982 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Oct 10 23:52:38 np0005480824 nova_compute[260089]: 2025-10-11 03:52:38.556 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:52:38 np0005480824 nova_compute[260089]: 2025-10-11 03:52:38.559 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 10 23:52:38 np0005480824 nova_compute[260089]: 2025-10-11 03:52:38.578 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 10 23:52:38 np0005480824 nova_compute[260089]: 2025-10-11 03:52:38.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:52:38 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1249: 321 pgs: 321 active+clean; 295 MiB data, 412 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 2.2 MiB/s wr, 92 op/s
Oct 10 23:52:38 np0005480824 nova_compute[260089]: 2025-10-11 03:52:38.789 2 DEBUG nova.compute.manager [req-3876fb22-281f-4151-a5d2-42f83f149460 req-0e5a21c9-7db2-4564-b9db-9041c41e5090 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Received event network-vif-plugged-7ef1c20b-9548-4ca9-9c3e-fc006ba2f7e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:52:38 np0005480824 nova_compute[260089]: 2025-10-11 03:52:38.790 2 DEBUG oslo_concurrency.lockutils [req-3876fb22-281f-4151-a5d2-42f83f149460 req-0e5a21c9-7db2-4564-b9db-9041c41e5090 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "3b8741f5-afdc-4745-b74c-2578bc643be4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:52:38 np0005480824 nova_compute[260089]: 2025-10-11 03:52:38.792 2 DEBUG oslo_concurrency.lockutils [req-3876fb22-281f-4151-a5d2-42f83f149460 req-0e5a21c9-7db2-4564-b9db-9041c41e5090 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "3b8741f5-afdc-4745-b74c-2578bc643be4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:52:38 np0005480824 nova_compute[260089]: 2025-10-11 03:52:38.792 2 DEBUG oslo_concurrency.lockutils [req-3876fb22-281f-4151-a5d2-42f83f149460 req-0e5a21c9-7db2-4564-b9db-9041c41e5090 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "3b8741f5-afdc-4745-b74c-2578bc643be4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:52:38 np0005480824 nova_compute[260089]: 2025-10-11 03:52:38.793 2 DEBUG nova.compute.manager [req-3876fb22-281f-4151-a5d2-42f83f149460 req-0e5a21c9-7db2-4564-b9db-9041c41e5090 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] No waiting events found dispatching network-vif-plugged-7ef1c20b-9548-4ca9-9c3e-fc006ba2f7e8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 10 23:52:38 np0005480824 nova_compute[260089]: 2025-10-11 03:52:38.793 2 WARNING nova.compute.manager [req-3876fb22-281f-4151-a5d2-42f83f149460 req-0e5a21c9-7db2-4564-b9db-9041c41e5090 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Received unexpected event network-vif-plugged-7ef1c20b-9548-4ca9-9c3e-fc006ba2f7e8 for instance with vm_state active and task_state None.#033[00m
Oct 10 23:52:38 np0005480824 nova_compute[260089]: 2025-10-11 03:52:38.794 2 DEBUG nova.compute.manager [req-3876fb22-281f-4151-a5d2-42f83f149460 req-0e5a21c9-7db2-4564-b9db-9041c41e5090 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Received event network-vif-plugged-5af98ddd-2cff-4fe8-abcf-414110faa17d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:52:38 np0005480824 nova_compute[260089]: 2025-10-11 03:52:38.795 2 DEBUG oslo_concurrency.lockutils [req-3876fb22-281f-4151-a5d2-42f83f149460 req-0e5a21c9-7db2-4564-b9db-9041c41e5090 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "7452e9a5-0e1b-4c0c-816b-57e0ea976747-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:52:38 np0005480824 nova_compute[260089]: 2025-10-11 03:52:38.795 2 DEBUG oslo_concurrency.lockutils [req-3876fb22-281f-4151-a5d2-42f83f149460 req-0e5a21c9-7db2-4564-b9db-9041c41e5090 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "7452e9a5-0e1b-4c0c-816b-57e0ea976747-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:52:38 np0005480824 nova_compute[260089]: 2025-10-11 03:52:38.796 2 DEBUG oslo_concurrency.lockutils [req-3876fb22-281f-4151-a5d2-42f83f149460 req-0e5a21c9-7db2-4564-b9db-9041c41e5090 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "7452e9a5-0e1b-4c0c-816b-57e0ea976747-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:52:38 np0005480824 nova_compute[260089]: 2025-10-11 03:52:38.797 2 DEBUG nova.compute.manager [req-3876fb22-281f-4151-a5d2-42f83f149460 req-0e5a21c9-7db2-4564-b9db-9041c41e5090 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Processing event network-vif-plugged-5af98ddd-2cff-4fe8-abcf-414110faa17d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 10 23:52:38 np0005480824 nova_compute[260089]: 2025-10-11 03:52:38.797 2 DEBUG nova.compute.manager [req-3876fb22-281f-4151-a5d2-42f83f149460 req-0e5a21c9-7db2-4564-b9db-9041c41e5090 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Received event network-vif-plugged-5af98ddd-2cff-4fe8-abcf-414110faa17d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:52:38 np0005480824 nova_compute[260089]: 2025-10-11 03:52:38.798 2 DEBUG oslo_concurrency.lockutils [req-3876fb22-281f-4151-a5d2-42f83f149460 req-0e5a21c9-7db2-4564-b9db-9041c41e5090 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "7452e9a5-0e1b-4c0c-816b-57e0ea976747-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:52:38 np0005480824 nova_compute[260089]: 2025-10-11 03:52:38.799 2 DEBUG oslo_concurrency.lockutils [req-3876fb22-281f-4151-a5d2-42f83f149460 req-0e5a21c9-7db2-4564-b9db-9041c41e5090 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "7452e9a5-0e1b-4c0c-816b-57e0ea976747-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:52:38 np0005480824 nova_compute[260089]: 2025-10-11 03:52:38.799 2 DEBUG oslo_concurrency.lockutils [req-3876fb22-281f-4151-a5d2-42f83f149460 req-0e5a21c9-7db2-4564-b9db-9041c41e5090 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "7452e9a5-0e1b-4c0c-816b-57e0ea976747-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:52:38 np0005480824 nova_compute[260089]: 2025-10-11 03:52:38.800 2 DEBUG nova.compute.manager [req-3876fb22-281f-4151-a5d2-42f83f149460 req-0e5a21c9-7db2-4564-b9db-9041c41e5090 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] No waiting events found dispatching network-vif-plugged-5af98ddd-2cff-4fe8-abcf-414110faa17d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 10 23:52:38 np0005480824 nova_compute[260089]: 2025-10-11 03:52:38.801 2 WARNING nova.compute.manager [req-3876fb22-281f-4151-a5d2-42f83f149460 req-0e5a21c9-7db2-4564-b9db-9041c41e5090 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Received unexpected event network-vif-plugged-5af98ddd-2cff-4fe8-abcf-414110faa17d for instance with vm_state building and task_state spawning.#033[00m
Oct 10 23:52:38 np0005480824 nova_compute[260089]: 2025-10-11 03:52:38.803 2 DEBUG nova.compute.manager [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 10 23:52:38 np0005480824 nova_compute[260089]: 2025-10-11 03:52:38.810 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760154758.808899, 7452e9a5-0e1b-4c0c-816b-57e0ea976747 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:52:38 np0005480824 nova_compute[260089]: 2025-10-11 03:52:38.811 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] VM Resumed (Lifecycle Event)#033[00m
Oct 10 23:52:38 np0005480824 nova_compute[260089]: 2025-10-11 03:52:38.814 2 DEBUG nova.virt.libvirt.driver [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 10 23:52:38 np0005480824 nova_compute[260089]: 2025-10-11 03:52:38.819 2 INFO nova.virt.libvirt.driver [-] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Instance spawned successfully.#033[00m
Oct 10 23:52:38 np0005480824 nova_compute[260089]: 2025-10-11 03:52:38.820 2 DEBUG nova.virt.libvirt.driver [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 10 23:52:38 np0005480824 nova_compute[260089]: 2025-10-11 03:52:38.838 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:52:38 np0005480824 nova_compute[260089]: 2025-10-11 03:52:38.853 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 10 23:52:38 np0005480824 nova_compute[260089]: 2025-10-11 03:52:38.860 2 DEBUG nova.virt.libvirt.driver [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:52:38 np0005480824 nova_compute[260089]: 2025-10-11 03:52:38.861 2 DEBUG nova.virt.libvirt.driver [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:52:38 np0005480824 nova_compute[260089]: 2025-10-11 03:52:38.862 2 DEBUG nova.virt.libvirt.driver [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:52:38 np0005480824 nova_compute[260089]: 2025-10-11 03:52:38.863 2 DEBUG nova.virt.libvirt.driver [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:52:38 np0005480824 nova_compute[260089]: 2025-10-11 03:52:38.864 2 DEBUG nova.virt.libvirt.driver [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:52:38 np0005480824 nova_compute[260089]: 2025-10-11 03:52:38.865 2 DEBUG nova.virt.libvirt.driver [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:52:38 np0005480824 nova_compute[260089]: 2025-10-11 03:52:38.875 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 10 23:52:38 np0005480824 nova_compute[260089]: 2025-10-11 03:52:38.930 2 INFO nova.compute.manager [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Took 10.30 seconds to spawn the instance on the hypervisor.#033[00m
Oct 10 23:52:38 np0005480824 nova_compute[260089]: 2025-10-11 03:52:38.931 2 DEBUG nova.compute.manager [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:52:38 np0005480824 nova_compute[260089]: 2025-10-11 03:52:38.995 2 INFO nova.compute.manager [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Took 11.38 seconds to build instance.#033[00m
Oct 10 23:52:39 np0005480824 nova_compute[260089]: 2025-10-11 03:52:39.016 2 DEBUG oslo_concurrency.lockutils [None req-27b6f863-2e16-490d-82be-43632fe72257 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Lock "7452e9a5-0e1b-4c0c-816b-57e0ea976747" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.491s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:52:39 np0005480824 nova_compute[260089]: 2025-10-11 03:52:39.157 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:52:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:52:40 np0005480824 nova_compute[260089]: 2025-10-11 03:52:40.101 2 DEBUG oslo_concurrency.lockutils [None req-6cd9dce5-c47d-43cf-9780-dba8552790f7 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Acquiring lock "92d928f8-a506-405f-8aa4-6d539d08e00f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:52:40 np0005480824 nova_compute[260089]: 2025-10-11 03:52:40.102 2 DEBUG oslo_concurrency.lockutils [None req-6cd9dce5-c47d-43cf-9780-dba8552790f7 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "92d928f8-a506-405f-8aa4-6d539d08e00f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:52:40 np0005480824 nova_compute[260089]: 2025-10-11 03:52:40.103 2 DEBUG oslo_concurrency.lockutils [None req-6cd9dce5-c47d-43cf-9780-dba8552790f7 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Acquiring lock "92d928f8-a506-405f-8aa4-6d539d08e00f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:52:40 np0005480824 nova_compute[260089]: 2025-10-11 03:52:40.103 2 DEBUG oslo_concurrency.lockutils [None req-6cd9dce5-c47d-43cf-9780-dba8552790f7 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "92d928f8-a506-405f-8aa4-6d539d08e00f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:52:40 np0005480824 nova_compute[260089]: 2025-10-11 03:52:40.103 2 DEBUG oslo_concurrency.lockutils [None req-6cd9dce5-c47d-43cf-9780-dba8552790f7 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "92d928f8-a506-405f-8aa4-6d539d08e00f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:52:40 np0005480824 nova_compute[260089]: 2025-10-11 03:52:40.104 2 INFO nova.compute.manager [None req-6cd9dce5-c47d-43cf-9780-dba8552790f7 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] Terminating instance#033[00m
Oct 10 23:52:40 np0005480824 nova_compute[260089]: 2025-10-11 03:52:40.105 2 DEBUG nova.compute.manager [None req-6cd9dce5-c47d-43cf-9780-dba8552790f7 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 10 23:52:40 np0005480824 kernel: tap1c319219-24 (unregistering): left promiscuous mode
Oct 10 23:52:40 np0005480824 NetworkManager[44969]: <info>  [1760154760.1562] device (tap1c319219-24): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 10 23:52:40 np0005480824 ovn_controller[152667]: 2025-10-11T03:52:40Z|00117|binding|INFO|Releasing lport 1c319219-245c-424e-8e32-c111069f8e63 from this chassis (sb_readonly=0)
Oct 10 23:52:40 np0005480824 ovn_controller[152667]: 2025-10-11T03:52:40Z|00118|binding|INFO|Setting lport 1c319219-245c-424e-8e32-c111069f8e63 down in Southbound
Oct 10 23:52:40 np0005480824 ovn_controller[152667]: 2025-10-11T03:52:40Z|00119|binding|INFO|Removing iface tap1c319219-24 ovn-installed in OVS
Oct 10 23:52:40 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:40.184 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:71:3d 10.100.0.9'], port_security=['fa:16:3e:00:71:3d 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '92d928f8-a506-405f-8aa4-6d539d08e00f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-359720eb-a957-4bcd-b9b2-3cf7dad947e4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '55d21391a321476eb133317b3402b0f0', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2fbe6632-cce1-48fb-95c1-bed1096fc071', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d98e64fb-092d-4777-b741-426f3e849bc3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], logical_port=1c319219-245c-424e-8e32-c111069f8e63) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 23:52:40 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:40.186 162245 INFO neutron.agent.ovn.metadata.agent [-] Port 1c319219-245c-424e-8e32-c111069f8e63 in datapath 359720eb-a957-4bcd-b9b2-3cf7dad947e4 unbound from our chassis#033[00m
Oct 10 23:52:40 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:40.188 162245 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 359720eb-a957-4bcd-b9b2-3cf7dad947e4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 10 23:52:40 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:40.190 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[006cbe86-0943-4363-9be1-07fb6a440421]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:52:40 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:40.192 162245 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4 namespace which is not needed anymore#033[00m
Oct 10 23:52:40 np0005480824 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Oct 10 23:52:40 np0005480824 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Consumed 3.682s CPU time.
Oct 10 23:52:40 np0005480824 systemd-machined[215071]: Machine qemu-11-instance-0000000b terminated.
Oct 10 23:52:40 np0005480824 nova_compute[260089]: 2025-10-11 03:52:40.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:52:40 np0005480824 nova_compute[260089]: 2025-10-11 03:52:40.342 2 INFO nova.virt.libvirt.driver [-] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] Instance destroyed successfully.#033[00m
Oct 10 23:52:40 np0005480824 nova_compute[260089]: 2025-10-11 03:52:40.343 2 DEBUG nova.objects.instance [None req-6cd9dce5-c47d-43cf-9780-dba8552790f7 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lazy-loading 'resources' on Instance uuid 92d928f8-a506-405f-8aa4-6d539d08e00f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:52:40 np0005480824 neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4[282857]: [NOTICE]   (282861) : haproxy version is 2.8.14-c23fe91
Oct 10 23:52:40 np0005480824 neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4[282857]: [NOTICE]   (282861) : path to executable is /usr/sbin/haproxy
Oct 10 23:52:40 np0005480824 neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4[282857]: [WARNING]  (282861) : Exiting Master process...
Oct 10 23:52:40 np0005480824 neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4[282857]: [WARNING]  (282861) : Exiting Master process...
Oct 10 23:52:40 np0005480824 neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4[282857]: [ALERT]    (282861) : Current worker (282863) exited with code 143 (Terminated)
Oct 10 23:52:40 np0005480824 neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4[282857]: [WARNING]  (282861) : All workers exited. Exiting... (0)
Oct 10 23:52:40 np0005480824 systemd[1]: libpod-5f90cba0f1257ac0fe74aed240e6bf041bfb4d061eab89acf04d4df810b50f2a.scope: Deactivated successfully.
Oct 10 23:52:40 np0005480824 nova_compute[260089]: 2025-10-11 03:52:40.362 2 DEBUG nova.virt.libvirt.vif [None req-6cd9dce5-c47d-43cf-9780-dba8552790f7 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T03:52:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-32240042',display_name='tempest-TestVolumeBootPattern-server-32240042',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-32240042',id=11,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T03:52:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='55d21391a321476eb133317b3402b0f0',ramdisk_id='',reservation_id='r-4r50w6ew',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestVolumeBootPattern-739984652',owner_user_name='tempest-TestVolumeBootPattern-739984652-project-member'},t
ags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T03:52:36Z,user_data=None,user_id='38ebc503771e417aaf1f3aea0c835994',uuid=92d928f8-a506-405f-8aa4-6d539d08e00f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1c319219-245c-424e-8e32-c111069f8e63", "address": "fa:16:3e:00:71:3d", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c319219-24", "ovs_interfaceid": "1c319219-245c-424e-8e32-c111069f8e63", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 10 23:52:40 np0005480824 nova_compute[260089]: 2025-10-11 03:52:40.363 2 DEBUG nova.network.os_vif_util [None req-6cd9dce5-c47d-43cf-9780-dba8552790f7 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Converting VIF {"id": "1c319219-245c-424e-8e32-c111069f8e63", "address": "fa:16:3e:00:71:3d", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c319219-24", "ovs_interfaceid": "1c319219-245c-424e-8e32-c111069f8e63", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 23:52:40 np0005480824 nova_compute[260089]: 2025-10-11 03:52:40.363 2 DEBUG nova.network.os_vif_util [None req-6cd9dce5-c47d-43cf-9780-dba8552790f7 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:71:3d,bridge_name='br-int',has_traffic_filtering=True,id=1c319219-245c-424e-8e32-c111069f8e63,network=Network(359720eb-a957-4bcd-b9b2-3cf7dad947e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c319219-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 23:52:40 np0005480824 nova_compute[260089]: 2025-10-11 03:52:40.364 2 DEBUG os_vif [None req-6cd9dce5-c47d-43cf-9780-dba8552790f7 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:71:3d,bridge_name='br-int',has_traffic_filtering=True,id=1c319219-245c-424e-8e32-c111069f8e63,network=Network(359720eb-a957-4bcd-b9b2-3cf7dad947e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c319219-24') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 10 23:52:40 np0005480824 nova_compute[260089]: 2025-10-11 03:52:40.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:52:40 np0005480824 nova_compute[260089]: 2025-10-11 03:52:40.366 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1c319219-24, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:52:40 np0005480824 podman[283367]: 2025-10-11 03:52:40.368713549 +0000 UTC m=+0.069082473 container died 5f90cba0f1257ac0fe74aed240e6bf041bfb4d061eab89acf04d4df810b50f2a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 10 23:52:40 np0005480824 nova_compute[260089]: 2025-10-11 03:52:40.369 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:52:40 np0005480824 nova_compute[260089]: 2025-10-11 03:52:40.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 10 23:52:40 np0005480824 nova_compute[260089]: 2025-10-11 03:52:40.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:52:40 np0005480824 nova_compute[260089]: 2025-10-11 03:52:40.372 2 INFO os_vif [None req-6cd9dce5-c47d-43cf-9780-dba8552790f7 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:71:3d,bridge_name='br-int',has_traffic_filtering=True,id=1c319219-245c-424e-8e32-c111069f8e63,network=Network(359720eb-a957-4bcd-b9b2-3cf7dad947e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c319219-24')#033[00m
Oct 10 23:52:40 np0005480824 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5f90cba0f1257ac0fe74aed240e6bf041bfb4d061eab89acf04d4df810b50f2a-userdata-shm.mount: Deactivated successfully.
Oct 10 23:52:40 np0005480824 systemd[1]: var-lib-containers-storage-overlay-ce4319a45e67a2cb62520a85072b5ee9e4be15e30356f9f493ff24848eb66829-merged.mount: Deactivated successfully.
Oct 10 23:52:40 np0005480824 podman[283367]: 2025-10-11 03:52:40.413313546 +0000 UTC m=+0.113682470 container cleanup 5f90cba0f1257ac0fe74aed240e6bf041bfb4d061eab89acf04d4df810b50f2a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 10 23:52:40 np0005480824 systemd[1]: libpod-conmon-5f90cba0f1257ac0fe74aed240e6bf041bfb4d061eab89acf04d4df810b50f2a.scope: Deactivated successfully.
Oct 10 23:52:40 np0005480824 nova_compute[260089]: 2025-10-11 03:52:40.486 2 DEBUG nova.compute.manager [req-7b6e142f-249f-41b4-b28a-5e0e412df392 req-0c2adc86-0979-4c4f-9c15-0b7d355ffdaa 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] Received event network-vif-unplugged-1c319219-245c-424e-8e32-c111069f8e63 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:52:40 np0005480824 nova_compute[260089]: 2025-10-11 03:52:40.486 2 DEBUG oslo_concurrency.lockutils [req-7b6e142f-249f-41b4-b28a-5e0e412df392 req-0c2adc86-0979-4c4f-9c15-0b7d355ffdaa 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "92d928f8-a506-405f-8aa4-6d539d08e00f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:52:40 np0005480824 nova_compute[260089]: 2025-10-11 03:52:40.487 2 DEBUG oslo_concurrency.lockutils [req-7b6e142f-249f-41b4-b28a-5e0e412df392 req-0c2adc86-0979-4c4f-9c15-0b7d355ffdaa 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "92d928f8-a506-405f-8aa4-6d539d08e00f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:52:40 np0005480824 nova_compute[260089]: 2025-10-11 03:52:40.487 2 DEBUG oslo_concurrency.lockutils [req-7b6e142f-249f-41b4-b28a-5e0e412df392 req-0c2adc86-0979-4c4f-9c15-0b7d355ffdaa 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "92d928f8-a506-405f-8aa4-6d539d08e00f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:52:40 np0005480824 nova_compute[260089]: 2025-10-11 03:52:40.487 2 DEBUG nova.compute.manager [req-7b6e142f-249f-41b4-b28a-5e0e412df392 req-0c2adc86-0979-4c4f-9c15-0b7d355ffdaa 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] No waiting events found dispatching network-vif-unplugged-1c319219-245c-424e-8e32-c111069f8e63 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 10 23:52:40 np0005480824 nova_compute[260089]: 2025-10-11 03:52:40.487 2 DEBUG nova.compute.manager [req-7b6e142f-249f-41b4-b28a-5e0e412df392 req-0c2adc86-0979-4c4f-9c15-0b7d355ffdaa 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] Received event network-vif-unplugged-1c319219-245c-424e-8e32-c111069f8e63 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 10 23:52:40 np0005480824 podman[283426]: 2025-10-11 03:52:40.490705003 +0000 UTC m=+0.052226307 container remove 5f90cba0f1257ac0fe74aed240e6bf041bfb4d061eab89acf04d4df810b50f2a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 10 23:52:40 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:40.499 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[583ddcb7-5e79-49ef-bcb8-e13d04af4d0a]: (4, ('Sat Oct 11 03:52:40 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4 (5f90cba0f1257ac0fe74aed240e6bf041bfb4d061eab89acf04d4df810b50f2a)\n5f90cba0f1257ac0fe74aed240e6bf041bfb4d061eab89acf04d4df810b50f2a\nSat Oct 11 03:52:40 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4 (5f90cba0f1257ac0fe74aed240e6bf041bfb4d061eab89acf04d4df810b50f2a)\n5f90cba0f1257ac0fe74aed240e6bf041bfb4d061eab89acf04d4df810b50f2a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:52:40 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:40.502 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[f4e101bd-3ce3-41e0-92f9-758cf57cdb99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:52:40 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:40.503 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap359720eb-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:52:40 np0005480824 nova_compute[260089]: 2025-10-11 03:52:40.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:52:40 np0005480824 kernel: tap359720eb-a0: left promiscuous mode
Oct 10 23:52:40 np0005480824 nova_compute[260089]: 2025-10-11 03:52:40.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:52:40 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:40.513 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[c2e17805-3b89-4a10-b5ee-f732bf2409fe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:52:40 np0005480824 nova_compute[260089]: 2025-10-11 03:52:40.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:52:40 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:40.543 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[7b7d7100-cfb2-4168-b5e0-46287a45789c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:52:40 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:40.545 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[e46505b0-3421-4085-a455-ef2c32237ba0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:52:40 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:40.568 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[5620d775-c08f-461a-82ab-432982859b22]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 422091, 'reachable_time': 15643, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283441, 'error': None, 'target': 'ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:52:40 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:40.570 162666 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 10 23:52:40 np0005480824 systemd[1]: run-netns-ovnmeta\x2d359720eb\x2da957\x2d4bcd\x2db9b2\x2d3cf7dad947e4.mount: Deactivated successfully.
Oct 10 23:52:40 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:40.571 162666 DEBUG oslo.privsep.daemon [-] privsep: reply[058defb3-dc39-407c-8ce7-d1ce9855c9c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:52:40 np0005480824 nova_compute[260089]: 2025-10-11 03:52:40.602 2 INFO nova.virt.libvirt.driver [None req-6cd9dce5-c47d-43cf-9780-dba8552790f7 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] Deleting instance files /var/lib/nova/instances/92d928f8-a506-405f-8aa4-6d539d08e00f_del#033[00m
Oct 10 23:52:40 np0005480824 nova_compute[260089]: 2025-10-11 03:52:40.603 2 INFO nova.virt.libvirt.driver [None req-6cd9dce5-c47d-43cf-9780-dba8552790f7 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] Deletion of /var/lib/nova/instances/92d928f8-a506-405f-8aa4-6d539d08e00f_del complete#033[00m
Oct 10 23:52:40 np0005480824 nova_compute[260089]: 2025-10-11 03:52:40.655 2 INFO nova.compute.manager [None req-6cd9dce5-c47d-43cf-9780-dba8552790f7 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] Took 0.55 seconds to destroy the instance on the hypervisor.#033[00m
Oct 10 23:52:40 np0005480824 nova_compute[260089]: 2025-10-11 03:52:40.656 2 DEBUG oslo.service.loopingcall [None req-6cd9dce5-c47d-43cf-9780-dba8552790f7 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 10 23:52:40 np0005480824 nova_compute[260089]: 2025-10-11 03:52:40.656 2 DEBUG nova.compute.manager [-] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 10 23:52:40 np0005480824 nova_compute[260089]: 2025-10-11 03:52:40.657 2 DEBUG nova.network.neutron [-] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 10 23:52:40 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1250: 321 pgs: 321 active+clean; 295 MiB data, 412 MiB used, 60 GiB / 60 GiB avail; 84 KiB/s rd, 2.0 MiB/s wr, 84 op/s
Oct 10 23:52:41 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:41.102 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:30:f4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'fe:89:7c:57:3f:71'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 23:52:41 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:41.104 162245 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 10 23:52:41 np0005480824 nova_compute[260089]: 2025-10-11 03:52:41.126 2 DEBUG nova.network.neutron [-] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:52:41 np0005480824 nova_compute[260089]: 2025-10-11 03:52:41.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:52:41 np0005480824 nova_compute[260089]: 2025-10-11 03:52:41.152 2 INFO nova.compute.manager [-] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] Took 0.50 seconds to deallocate network for instance.#033[00m
Oct 10 23:52:41 np0005480824 nova_compute[260089]: 2025-10-11 03:52:41.203 2 DEBUG nova.compute.manager [req-0c16bfb3-2ab2-483f-8205-44493bbaa817 req-e5a10aa3-532a-4804-81e1-7c7f9ceffe72 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] Received event network-vif-deleted-1c319219-245c-424e-8e32-c111069f8e63 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:52:41 np0005480824 nova_compute[260089]: 2025-10-11 03:52:41.320 2 INFO nova.compute.manager [None req-6cd9dce5-c47d-43cf-9780-dba8552790f7 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] Took 0.17 seconds to detach 1 volumes for instance.#033[00m
Oct 10 23:52:41 np0005480824 nova_compute[260089]: 2025-10-11 03:52:41.363 2 DEBUG oslo_concurrency.lockutils [None req-6cd9dce5-c47d-43cf-9780-dba8552790f7 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:52:41 np0005480824 nova_compute[260089]: 2025-10-11 03:52:41.364 2 DEBUG oslo_concurrency.lockutils [None req-6cd9dce5-c47d-43cf-9780-dba8552790f7 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:52:41 np0005480824 nova_compute[260089]: 2025-10-11 03:52:41.462 2 DEBUG oslo_concurrency.processutils [None req-6cd9dce5-c47d-43cf-9780-dba8552790f7 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:52:41 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:52:41 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1352292324' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:52:41 np0005480824 nova_compute[260089]: 2025-10-11 03:52:41.952 2 DEBUG oslo_concurrency.processutils [None req-6cd9dce5-c47d-43cf-9780-dba8552790f7 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:52:41 np0005480824 nova_compute[260089]: 2025-10-11 03:52:41.959 2 DEBUG nova.compute.provider_tree [None req-6cd9dce5-c47d-43cf-9780-dba8552790f7 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 23:52:41 np0005480824 nova_compute[260089]: 2025-10-11 03:52:41.973 2 DEBUG nova.scheduler.client.report [None req-6cd9dce5-c47d-43cf-9780-dba8552790f7 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 23:52:41 np0005480824 nova_compute[260089]: 2025-10-11 03:52:41.996 2 DEBUG oslo_concurrency.lockutils [None req-6cd9dce5-c47d-43cf-9780-dba8552790f7 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.632s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:52:42 np0005480824 nova_compute[260089]: 2025-10-11 03:52:42.015 2 INFO nova.scheduler.client.report [None req-6cd9dce5-c47d-43cf-9780-dba8552790f7 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Deleted allocations for instance 92d928f8-a506-405f-8aa4-6d539d08e00f#033[00m
Oct 10 23:52:42 np0005480824 nova_compute[260089]: 2025-10-11 03:52:42.086 2 DEBUG oslo_concurrency.lockutils [None req-6cd9dce5-c47d-43cf-9780-dba8552790f7 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "92d928f8-a506-405f-8aa4-6d539d08e00f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.983s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:52:42 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1251: 321 pgs: 321 active+clean; 295 MiB data, 412 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 227 op/s
Oct 10 23:52:42 np0005480824 nova_compute[260089]: 2025-10-11 03:52:42.725 2 DEBUG nova.compute.manager [req-db0d61e7-8f06-41a7-86e8-af7de8d660f0 req-314bdf96-859c-4354-b094-f1bafe04c561 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] Received event network-vif-plugged-1c319219-245c-424e-8e32-c111069f8e63 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:52:42 np0005480824 nova_compute[260089]: 2025-10-11 03:52:42.726 2 DEBUG oslo_concurrency.lockutils [req-db0d61e7-8f06-41a7-86e8-af7de8d660f0 req-314bdf96-859c-4354-b094-f1bafe04c561 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "92d928f8-a506-405f-8aa4-6d539d08e00f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:52:42 np0005480824 nova_compute[260089]: 2025-10-11 03:52:42.727 2 DEBUG oslo_concurrency.lockutils [req-db0d61e7-8f06-41a7-86e8-af7de8d660f0 req-314bdf96-859c-4354-b094-f1bafe04c561 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "92d928f8-a506-405f-8aa4-6d539d08e00f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:52:42 np0005480824 nova_compute[260089]: 2025-10-11 03:52:42.727 2 DEBUG oslo_concurrency.lockutils [req-db0d61e7-8f06-41a7-86e8-af7de8d660f0 req-314bdf96-859c-4354-b094-f1bafe04c561 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "92d928f8-a506-405f-8aa4-6d539d08e00f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:52:42 np0005480824 nova_compute[260089]: 2025-10-11 03:52:42.728 2 DEBUG nova.compute.manager [req-db0d61e7-8f06-41a7-86e8-af7de8d660f0 req-314bdf96-859c-4354-b094-f1bafe04c561 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] No waiting events found dispatching network-vif-plugged-1c319219-245c-424e-8e32-c111069f8e63 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 10 23:52:42 np0005480824 nova_compute[260089]: 2025-10-11 03:52:42.728 2 WARNING nova.compute.manager [req-db0d61e7-8f06-41a7-86e8-af7de8d660f0 req-314bdf96-859c-4354-b094-f1bafe04c561 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] Received unexpected event network-vif-plugged-1c319219-245c-424e-8e32-c111069f8e63 for instance with vm_state deleted and task_state None.#033[00m
Oct 10 23:52:42 np0005480824 nova_compute[260089]: 2025-10-11 03:52:42.729 2 DEBUG nova.compute.manager [req-db0d61e7-8f06-41a7-86e8-af7de8d660f0 req-314bdf96-859c-4354-b094-f1bafe04c561 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Received event network-changed-7ef1c20b-9548-4ca9-9c3e-fc006ba2f7e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:52:42 np0005480824 nova_compute[260089]: 2025-10-11 03:52:42.729 2 DEBUG nova.compute.manager [req-db0d61e7-8f06-41a7-86e8-af7de8d660f0 req-314bdf96-859c-4354-b094-f1bafe04c561 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Refreshing instance network info cache due to event network-changed-7ef1c20b-9548-4ca9-9c3e-fc006ba2f7e8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 10 23:52:42 np0005480824 nova_compute[260089]: 2025-10-11 03:52:42.730 2 DEBUG oslo_concurrency.lockutils [req-db0d61e7-8f06-41a7-86e8-af7de8d660f0 req-314bdf96-859c-4354-b094-f1bafe04c561 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "refresh_cache-3b8741f5-afdc-4745-b74c-2578bc643be4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:52:42 np0005480824 nova_compute[260089]: 2025-10-11 03:52:42.730 2 DEBUG oslo_concurrency.lockutils [req-db0d61e7-8f06-41a7-86e8-af7de8d660f0 req-314bdf96-859c-4354-b094-f1bafe04c561 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquired lock "refresh_cache-3b8741f5-afdc-4745-b74c-2578bc643be4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:52:42 np0005480824 nova_compute[260089]: 2025-10-11 03:52:42.731 2 DEBUG nova.network.neutron [req-db0d61e7-8f06-41a7-86e8-af7de8d660f0 req-314bdf96-859c-4354-b094-f1bafe04c561 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Refreshing network info cache for port 7ef1c20b-9548-4ca9-9c3e-fc006ba2f7e8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 10 23:52:43 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:52:43.107 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=14b06507-d00b-4e27-a47d-46a5c2644635, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:52:43 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:52:43 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4260308791' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:52:43 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:52:43 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4260308791' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:52:43 np0005480824 nova_compute[260089]: 2025-10-11 03:52:43.656 2 DEBUG nova.network.neutron [req-db0d61e7-8f06-41a7-86e8-af7de8d660f0 req-314bdf96-859c-4354-b094-f1bafe04c561 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Updated VIF entry in instance network info cache for port 7ef1c20b-9548-4ca9-9c3e-fc006ba2f7e8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 10 23:52:43 np0005480824 nova_compute[260089]: 2025-10-11 03:52:43.657 2 DEBUG nova.network.neutron [req-db0d61e7-8f06-41a7-86e8-af7de8d660f0 req-314bdf96-859c-4354-b094-f1bafe04c561 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Updating instance_info_cache with network_info: [{"id": "7ef1c20b-9548-4ca9-9c3e-fc006ba2f7e8", "address": "fa:16:3e:0d:51:d8", "network": {"id": "f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e", "bridge": "br-int", "label": "tempest-TestStampPattern-337427362-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "944395b4a11c4a9182fda518dc7bd2d8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7ef1c20b-95", "ovs_interfaceid": "7ef1c20b-9548-4ca9-9c3e-fc006ba2f7e8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:52:43 np0005480824 nova_compute[260089]: 2025-10-11 03:52:43.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:52:43 np0005480824 nova_compute[260089]: 2025-10-11 03:52:43.676 2 DEBUG oslo_concurrency.lockutils [req-db0d61e7-8f06-41a7-86e8-af7de8d660f0 req-314bdf96-859c-4354-b094-f1bafe04c561 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Releasing lock "refresh_cache-3b8741f5-afdc-4745-b74c-2578bc643be4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:52:43 np0005480824 nova_compute[260089]: 2025-10-11 03:52:43.676 2 DEBUG nova.compute.manager [req-db0d61e7-8f06-41a7-86e8-af7de8d660f0 req-314bdf96-859c-4354-b094-f1bafe04c561 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Received event network-changed-5af98ddd-2cff-4fe8-abcf-414110faa17d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:52:43 np0005480824 nova_compute[260089]: 2025-10-11 03:52:43.676 2 DEBUG nova.compute.manager [req-db0d61e7-8f06-41a7-86e8-af7de8d660f0 req-314bdf96-859c-4354-b094-f1bafe04c561 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Refreshing instance network info cache due to event network-changed-5af98ddd-2cff-4fe8-abcf-414110faa17d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 10 23:52:43 np0005480824 nova_compute[260089]: 2025-10-11 03:52:43.677 2 DEBUG oslo_concurrency.lockutils [req-db0d61e7-8f06-41a7-86e8-af7de8d660f0 req-314bdf96-859c-4354-b094-f1bafe04c561 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "refresh_cache-7452e9a5-0e1b-4c0c-816b-57e0ea976747" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:52:43 np0005480824 nova_compute[260089]: 2025-10-11 03:52:43.677 2 DEBUG oslo_concurrency.lockutils [req-db0d61e7-8f06-41a7-86e8-af7de8d660f0 req-314bdf96-859c-4354-b094-f1bafe04c561 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquired lock "refresh_cache-7452e9a5-0e1b-4c0c-816b-57e0ea976747" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:52:43 np0005480824 nova_compute[260089]: 2025-10-11 03:52:43.677 2 DEBUG nova.network.neutron [req-db0d61e7-8f06-41a7-86e8-af7de8d660f0 req-314bdf96-859c-4354-b094-f1bafe04c561 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Refreshing network info cache for port 5af98ddd-2cff-4fe8-abcf-414110faa17d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 10 23:52:44 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:52:44 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1252: 321 pgs: 321 active+clean; 295 MiB data, 412 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 40 KiB/s wr, 173 op/s
Oct 10 23:52:45 np0005480824 nova_compute[260089]: 2025-10-11 03:52:45.393 2 DEBUG nova.network.neutron [req-db0d61e7-8f06-41a7-86e8-af7de8d660f0 req-314bdf96-859c-4354-b094-f1bafe04c561 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Updated VIF entry in instance network info cache for port 5af98ddd-2cff-4fe8-abcf-414110faa17d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 10 23:52:45 np0005480824 nova_compute[260089]: 2025-10-11 03:52:45.394 2 DEBUG nova.network.neutron [req-db0d61e7-8f06-41a7-86e8-af7de8d660f0 req-314bdf96-859c-4354-b094-f1bafe04c561 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Updating instance_info_cache with network_info: [{"id": "5af98ddd-2cff-4fe8-abcf-414110faa17d", "address": "fa:16:3e:79:18:58", "network": {"id": "1ac3beb3-eeb0-47be-b56e-672742cfe517", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-1691573125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a65cd418eaad4366991b123d6535a576", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5af98ddd-2c", "ovs_interfaceid": "5af98ddd-2cff-4fe8-abcf-414110faa17d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:52:45 np0005480824 nova_compute[260089]: 2025-10-11 03:52:45.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:52:45 np0005480824 nova_compute[260089]: 2025-10-11 03:52:45.431 2 DEBUG oslo_concurrency.lockutils [req-db0d61e7-8f06-41a7-86e8-af7de8d660f0 req-314bdf96-859c-4354-b094-f1bafe04c561 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Releasing lock "refresh_cache-7452e9a5-0e1b-4c0c-816b-57e0ea976747" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:52:46 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1253: 321 pgs: 321 active+clean; 295 MiB data, 412 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 40 KiB/s wr, 173 op/s
Oct 10 23:52:48 np0005480824 nova_compute[260089]: 2025-10-11 03:52:48.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:52:48 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1254: 321 pgs: 321 active+clean; 295 MiB data, 411 MiB used, 60 GiB / 60 GiB avail; 5.6 MiB/s rd, 41 KiB/s wr, 195 op/s
Oct 10 23:52:49 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:52:50 np0005480824 ovn_controller[152667]: 2025-10-11T03:52:50Z|00016|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.11 does not match offer 10.100.0.5
Oct 10 23:52:50 np0005480824 ovn_controller[152667]: 2025-10-11T03:52:50Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:0d:51:d8 10.100.0.5
Oct 10 23:52:50 np0005480824 nova_compute[260089]: 2025-10-11 03:52:50.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:52:50 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1255: 321 pgs: 321 active+clean; 295 MiB data, 411 MiB used, 60 GiB / 60 GiB avail; 5.6 MiB/s rd, 16 KiB/s wr, 172 op/s
Oct 10 23:52:51 np0005480824 ovn_controller[152667]: 2025-10-11T03:52:51Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:79:18:58 10.100.0.6
Oct 10 23:52:51 np0005480824 ovn_controller[152667]: 2025-10-11T03:52:51Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:79:18:58 10.100.0.6
Oct 10 23:52:52 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e263 do_prune osdmap full prune enabled
Oct 10 23:52:52 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e264 e264: 3 total, 3 up, 3 in
Oct 10 23:52:52 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e264: 3 total, 3 up, 3 in
Oct 10 23:52:52 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1257: 321 pgs: 321 active+clean; 388 MiB data, 481 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 5.3 MiB/s wr, 216 op/s
Oct 10 23:52:53 np0005480824 nova_compute[260089]: 2025-10-11 03:52:53.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:52:54 np0005480824 nova_compute[260089]: 2025-10-11 03:52:54.599 2 DEBUG oslo_concurrency.lockutils [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Acquiring lock "0f4ead16-8af5-427a-9543-772b0c23733d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:52:54 np0005480824 nova_compute[260089]: 2025-10-11 03:52:54.603 2 DEBUG oslo_concurrency.lockutils [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "0f4ead16-8af5-427a-9543-772b0c23733d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:52:54 np0005480824 nova_compute[260089]: 2025-10-11 03:52:54.626 2 DEBUG nova.compute.manager [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 10 23:52:54 np0005480824 ovn_controller[152667]: 2025-10-11T03:52:54Z|00020|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.11 does not match offer 10.100.0.5
Oct 10 23:52:54 np0005480824 ovn_controller[152667]: 2025-10-11T03:52:54Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:0d:51:d8 10.100.0.5
Oct 10 23:52:54 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:52:54 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1258: 321 pgs: 321 active+clean; 388 MiB data, 481 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 5.3 MiB/s wr, 216 op/s
Oct 10 23:52:54 np0005480824 nova_compute[260089]: 2025-10-11 03:52:54.720 2 DEBUG oslo_concurrency.lockutils [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:52:54 np0005480824 nova_compute[260089]: 2025-10-11 03:52:54.721 2 DEBUG oslo_concurrency.lockutils [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:52:54 np0005480824 nova_compute[260089]: 2025-10-11 03:52:54.733 2 DEBUG nova.virt.hardware [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 10 23:52:54 np0005480824 nova_compute[260089]: 2025-10-11 03:52:54.735 2 INFO nova.compute.claims [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 10 23:52:54 np0005480824 nova_compute[260089]: 2025-10-11 03:52:54.882 2 DEBUG oslo_concurrency.processutils [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:52:55 np0005480824 podman[283656]: 2025-10-11 03:52:55.247761633 +0000 UTC m=+0.096539008 container exec a848fe58749db588a5a4b8471e0c9916b9e4a1ccc899f04343e6491a43c45c05 (image=quay.io/ceph/ceph:v18, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mon-compute-0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 10 23:52:55 np0005480824 nova_compute[260089]: 2025-10-11 03:52:55.340 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760154760.337231, 92d928f8-a506-405f-8aa4-6d539d08e00f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:52:55 np0005480824 nova_compute[260089]: 2025-10-11 03:52:55.342 2 INFO nova.compute.manager [-] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] VM Stopped (Lifecycle Event)#033[00m
Oct 10 23:52:55 np0005480824 podman[283656]: 2025-10-11 03:52:55.346278186 +0000 UTC m=+0.195055521 container exec_died a848fe58749db588a5a4b8471e0c9916b9e4a1ccc899f04343e6491a43c45c05 (image=quay.io/ceph/ceph:v18, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mon-compute-0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:52:55 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:52:55 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3831965720' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:52:55 np0005480824 nova_compute[260089]: 2025-10-11 03:52:55.364 2 DEBUG nova.compute.manager [None req-0d770d8a-2887-4b50-ae9f-01c3106cab63 - - - - - -] [instance: 92d928f8-a506-405f-8aa4-6d539d08e00f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:52:55 np0005480824 nova_compute[260089]: 2025-10-11 03:52:55.380 2 DEBUG oslo_concurrency.processutils [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:52:55 np0005480824 ovn_controller[152667]: 2025-10-11T03:52:55Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0d:51:d8 10.100.0.5
Oct 10 23:52:55 np0005480824 ovn_controller[152667]: 2025-10-11T03:52:55Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0d:51:d8 10.100.0.5
Oct 10 23:52:55 np0005480824 nova_compute[260089]: 2025-10-11 03:52:55.393 2 DEBUG nova.compute.provider_tree [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 23:52:55 np0005480824 nova_compute[260089]: 2025-10-11 03:52:55.412 2 DEBUG nova.scheduler.client.report [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 23:52:55 np0005480824 nova_compute[260089]: 2025-10-11 03:52:55.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:52:55 np0005480824 nova_compute[260089]: 2025-10-11 03:52:55.440 2 DEBUG oslo_concurrency.lockutils [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:52:55 np0005480824 nova_compute[260089]: 2025-10-11 03:52:55.441 2 DEBUG nova.compute.manager [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 10 23:52:55 np0005480824 nova_compute[260089]: 2025-10-11 03:52:55.488 2 DEBUG nova.compute.manager [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 10 23:52:55 np0005480824 nova_compute[260089]: 2025-10-11 03:52:55.489 2 DEBUG nova.network.neutron [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 10 23:52:55 np0005480824 nova_compute[260089]: 2025-10-11 03:52:55.511 2 INFO nova.virt.libvirt.driver [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 10 23:52:55 np0005480824 podman[283694]: 2025-10-11 03:52:55.541704074 +0000 UTC m=+0.099838284 container health_status f2b19cad22d0fbdd185b264c3c8b6443d09a02319dc9fb0585dc69a0f24a758d (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 10 23:52:55 np0005480824 nova_compute[260089]: 2025-10-11 03:52:55.549 2 DEBUG nova.compute.manager [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 10 23:52:55 np0005480824 podman[283693]: 2025-10-11 03:52:55.5521772 +0000 UTC m=+0.110355741 container health_status 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd)
Oct 10 23:52:55 np0005480824 nova_compute[260089]: 2025-10-11 03:52:55.595 2 INFO nova.virt.block_device [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] Booting with volume snapshot 7103ce88-301c-499a-89c6-df6240b8d344 at /dev/vda#033[00m
Oct 10 23:52:55 np0005480824 nova_compute[260089]: 2025-10-11 03:52:55.986 2 DEBUG nova.policy [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '38ebc503771e417aaf1f3aea0c835994', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '55d21391a321476eb133317b3402b0f0', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 10 23:52:56 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 10 23:52:56 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:52:56 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 10 23:52:56 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:52:56 np0005480824 nova_compute[260089]: 2025-10-11 03:52:56.543 2 DEBUG nova.network.neutron [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] Successfully created port: d24beac6-fe81-4cb4-b500-c4446f3106b3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 10 23:52:56 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1259: 321 pgs: 321 active+clean; 388 MiB data, 481 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 5.3 MiB/s wr, 216 op/s
Oct 10 23:52:57 np0005480824 nova_compute[260089]: 2025-10-11 03:52:57.200 2 DEBUG oslo_concurrency.lockutils [None req-583c72b3-ed1c-4c7d-b40f-fc980ffabf66 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Acquiring lock "7452e9a5-0e1b-4c0c-816b-57e0ea976747" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:52:57 np0005480824 nova_compute[260089]: 2025-10-11 03:52:57.200 2 DEBUG oslo_concurrency.lockutils [None req-583c72b3-ed1c-4c7d-b40f-fc980ffabf66 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Lock "7452e9a5-0e1b-4c0c-816b-57e0ea976747" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:52:57 np0005480824 nova_compute[260089]: 2025-10-11 03:52:57.219 2 DEBUG nova.objects.instance [None req-583c72b3-ed1c-4c7d-b40f-fc980ffabf66 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Lazy-loading 'flavor' on Instance uuid 7452e9a5-0e1b-4c0c-816b-57e0ea976747 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:52:57 np0005480824 nova_compute[260089]: 2025-10-11 03:52:57.237 2 INFO nova.virt.libvirt.driver [None req-583c72b3-ed1c-4c7d-b40f-fc980ffabf66 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Ignoring supplied device name: /dev/vdb#033[00m
Oct 10 23:52:57 np0005480824 nova_compute[260089]: 2025-10-11 03:52:57.254 2 DEBUG oslo_concurrency.lockutils [None req-583c72b3-ed1c-4c7d-b40f-fc980ffabf66 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Lock "7452e9a5-0e1b-4c0c-816b-57e0ea976747" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.054s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:52:57 np0005480824 nova_compute[260089]: 2025-10-11 03:52:57.406 2 DEBUG nova.network.neutron [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] Successfully updated port: d24beac6-fe81-4cb4-b500-c4446f3106b3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 10 23:52:57 np0005480824 nova_compute[260089]: 2025-10-11 03:52:57.424 2 DEBUG oslo_concurrency.lockutils [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Acquiring lock "refresh_cache-0f4ead16-8af5-427a-9543-772b0c23733d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:52:57 np0005480824 nova_compute[260089]: 2025-10-11 03:52:57.424 2 DEBUG oslo_concurrency.lockutils [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Acquired lock "refresh_cache-0f4ead16-8af5-427a-9543-772b0c23733d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:52:57 np0005480824 nova_compute[260089]: 2025-10-11 03:52:57.425 2 DEBUG nova.network.neutron [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 10 23:52:57 np0005480824 nova_compute[260089]: 2025-10-11 03:52:57.426 2 DEBUG oslo_concurrency.lockutils [None req-583c72b3-ed1c-4c7d-b40f-fc980ffabf66 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Acquiring lock "7452e9a5-0e1b-4c0c-816b-57e0ea976747" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:52:57 np0005480824 nova_compute[260089]: 2025-10-11 03:52:57.426 2 DEBUG oslo_concurrency.lockutils [None req-583c72b3-ed1c-4c7d-b40f-fc980ffabf66 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Lock "7452e9a5-0e1b-4c0c-816b-57e0ea976747" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:52:57 np0005480824 nova_compute[260089]: 2025-10-11 03:52:57.426 2 INFO nova.compute.manager [None req-583c72b3-ed1c-4c7d-b40f-fc980ffabf66 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Attaching volume a1b7939b-e611-4ad8-827a-3e86d5e9be68 to /dev/vdb#033[00m
Oct 10 23:52:57 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:52:57 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:52:57 np0005480824 nova_compute[260089]: 2025-10-11 03:52:57.496 2 DEBUG nova.compute.manager [req-40096e50-8810-474b-b0f7-b480b385aaa7 req-b81162f2-ae5a-4827-860d-1f35dc0a17f2 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] Received event network-changed-d24beac6-fe81-4cb4-b500-c4446f3106b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:52:57 np0005480824 nova_compute[260089]: 2025-10-11 03:52:57.497 2 DEBUG nova.compute.manager [req-40096e50-8810-474b-b0f7-b480b385aaa7 req-b81162f2-ae5a-4827-860d-1f35dc0a17f2 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] Refreshing instance network info cache due to event network-changed-d24beac6-fe81-4cb4-b500-c4446f3106b3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 10 23:52:57 np0005480824 nova_compute[260089]: 2025-10-11 03:52:57.497 2 DEBUG oslo_concurrency.lockutils [req-40096e50-8810-474b-b0f7-b480b385aaa7 req-b81162f2-ae5a-4827-860d-1f35dc0a17f2 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "refresh_cache-0f4ead16-8af5-427a-9543-772b0c23733d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:52:57 np0005480824 nova_compute[260089]: 2025-10-11 03:52:57.540 2 DEBUG os_brick.utils [None req-583c72b3-ed1c-4c7d-b40f-fc980ffabf66 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct 10 23:52:57 np0005480824 nova_compute[260089]: 2025-10-11 03:52:57.542 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:52:57 np0005480824 nova_compute[260089]: 2025-10-11 03:52:57.555 2 DEBUG nova.network.neutron [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 10 23:52:57 np0005480824 nova_compute[260089]: 2025-10-11 03:52:57.553 676 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:52:57 np0005480824 nova_compute[260089]: 2025-10-11 03:52:57.554 676 DEBUG oslo.privsep.daemon [-] privsep: reply[1e2000e8-2fc7-47ce-a386-c97909fba2e1]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:52:57 np0005480824 nova_compute[260089]: 2025-10-11 03:52:57.561 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:52:57 np0005480824 nova_compute[260089]: 2025-10-11 03:52:57.571 676 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:52:57 np0005480824 nova_compute[260089]: 2025-10-11 03:52:57.571 676 DEBUG oslo.privsep.daemon [-] privsep: reply[d66a8fb5-8b92-43db-9b11-2680e72e32da]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d5d671ddab5a', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:52:57 np0005480824 nova_compute[260089]: 2025-10-11 03:52:57.573 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:52:57 np0005480824 nova_compute[260089]: 2025-10-11 03:52:57.583 676 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:52:57 np0005480824 nova_compute[260089]: 2025-10-11 03:52:57.584 676 DEBUG oslo.privsep.daemon [-] privsep: reply[2de08b24-da43-40eb-8bcb-5c6b26b22ded]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:52:57 np0005480824 nova_compute[260089]: 2025-10-11 03:52:57.585 676 DEBUG oslo.privsep.daemon [-] privsep: reply[97f245b5-3bd5-4705-8fdc-69776123a224]: (4, 'fb3a2fb1-9efa-43f0-a057-bf422ac6b8d7') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:52:57 np0005480824 nova_compute[260089]: 2025-10-11 03:52:57.585 2 DEBUG oslo_concurrency.processutils [None req-583c72b3-ed1c-4c7d-b40f-fc980ffabf66 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:52:57 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:52:57 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:52:57 np0005480824 nova_compute[260089]: 2025-10-11 03:52:57.616 2 DEBUG oslo_concurrency.processutils [None req-583c72b3-ed1c-4c7d-b40f-fc980ffabf66 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] CMD "nvme version" returned: 0 in 0.030s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:52:57 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 10 23:52:57 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:52:57 np0005480824 nova_compute[260089]: 2025-10-11 03:52:57.619 2 DEBUG os_brick.initiator.connectors.lightos [None req-583c72b3-ed1c-4c7d-b40f-fc980ffabf66 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct 10 23:52:57 np0005480824 nova_compute[260089]: 2025-10-11 03:52:57.619 2 DEBUG os_brick.initiator.connectors.lightos [None req-583c72b3-ed1c-4c7d-b40f-fc980ffabf66 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct 10 23:52:57 np0005480824 nova_compute[260089]: 2025-10-11 03:52:57.619 2 DEBUG os_brick.initiator.connectors.lightos [None req-583c72b3-ed1c-4c7d-b40f-fc980ffabf66 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct 10 23:52:57 np0005480824 nova_compute[260089]: 2025-10-11 03:52:57.620 2 DEBUG os_brick.utils [None req-583c72b3-ed1c-4c7d-b40f-fc980ffabf66 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] <== get_connector_properties: return (79ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d5d671ddab5a', 'do_local_attach': False, 'nvme_hostid': '83042a20-0f72-4c47-8453-e72ead378624', 'system uuid': 'fb3a2fb1-9efa-43f0-a057-bf422ac6b8d7', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct 10 23:52:57 np0005480824 nova_compute[260089]: 2025-10-11 03:52:57.620 2 DEBUG nova.virt.block_device [None req-583c72b3-ed1c-4c7d-b40f-fc980ffabf66 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Updating existing volume attachment record: 9a12f1fc-0055-414a-b479-a6899cd8bfc0 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct 10 23:52:57 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 10 23:52:57 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:52:57 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev fa081517-b1b4-403f-9524-8d29e3c22651 does not exist
Oct 10 23:52:57 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 51f8167f-8ee1-47ec-9eaf-ddb2b37562bf does not exist
Oct 10 23:52:57 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev e6dcc39c-d844-4451-97e9-757e40a2f42b does not exist
Oct 10 23:52:57 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 10 23:52:57 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 23:52:57 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 10 23:52:57 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:52:57 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:52:57 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:52:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:52:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:52:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:52:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:52:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:52:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:52:58 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:52:58 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1294839960' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:52:58 np0005480824 nova_compute[260089]: 2025-10-11 03:52:58.275 2 DEBUG nova.objects.instance [None req-583c72b3-ed1c-4c7d-b40f-fc980ffabf66 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Lazy-loading 'flavor' on Instance uuid 7452e9a5-0e1b-4c0c-816b-57e0ea976747 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:52:58 np0005480824 nova_compute[260089]: 2025-10-11 03:52:58.307 2 DEBUG nova.virt.libvirt.driver [None req-583c72b3-ed1c-4c7d-b40f-fc980ffabf66 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Attempting to attach volume a1b7939b-e611-4ad8-827a-3e86d5e9be68 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Oct 10 23:52:58 np0005480824 nova_compute[260089]: 2025-10-11 03:52:58.312 2 DEBUG nova.virt.libvirt.guest [None req-583c72b3-ed1c-4c7d-b40f-fc980ffabf66 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] attach device xml: <disk type="network" device="disk">
Oct 10 23:52:58 np0005480824 nova_compute[260089]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 10 23:52:58 np0005480824 nova_compute[260089]:  <source protocol="rbd" name="volumes/volume-a1b7939b-e611-4ad8-827a-3e86d5e9be68">
Oct 10 23:52:58 np0005480824 nova_compute[260089]:    <host name="192.168.122.100" port="6789"/>
Oct 10 23:52:58 np0005480824 nova_compute[260089]:  </source>
Oct 10 23:52:58 np0005480824 nova_compute[260089]:  <auth username="openstack">
Oct 10 23:52:58 np0005480824 nova_compute[260089]:    <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 10 23:52:58 np0005480824 nova_compute[260089]:  </auth>
Oct 10 23:52:58 np0005480824 nova_compute[260089]:  <target dev="vdb" bus="virtio"/>
Oct 10 23:52:58 np0005480824 nova_compute[260089]:  <serial>a1b7939b-e611-4ad8-827a-3e86d5e9be68</serial>
Oct 10 23:52:58 np0005480824 nova_compute[260089]: </disk>
Oct 10 23:52:58 np0005480824 nova_compute[260089]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct 10 23:52:58 np0005480824 nova_compute[260089]: 2025-10-11 03:52:58.463 2 DEBUG nova.virt.libvirt.driver [None req-583c72b3-ed1c-4c7d-b40f-fc980ffabf66 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 23:52:58 np0005480824 nova_compute[260089]: 2025-10-11 03:52:58.464 2 DEBUG nova.virt.libvirt.driver [None req-583c72b3-ed1c-4c7d-b40f-fc980ffabf66 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 23:52:58 np0005480824 nova_compute[260089]: 2025-10-11 03:52:58.464 2 DEBUG nova.virt.libvirt.driver [None req-583c72b3-ed1c-4c7d-b40f-fc980ffabf66 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 23:52:58 np0005480824 nova_compute[260089]: 2025-10-11 03:52:58.465 2 DEBUG nova.virt.libvirt.driver [None req-583c72b3-ed1c-4c7d-b40f-fc980ffabf66 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] No VIF found with MAC fa:16:3e:79:18:58, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 10 23:52:58 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:52:58 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:52:58 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:52:58 np0005480824 nova_compute[260089]: 2025-10-11 03:52:58.514 2 DEBUG nova.network.neutron [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] Updating instance_info_cache with network_info: [{"id": "d24beac6-fe81-4cb4-b500-c4446f3106b3", "address": "fa:16:3e:26:e2:d3", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd24beac6-fe", "ovs_interfaceid": "d24beac6-fe81-4cb4-b500-c4446f3106b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 23:52:58 np0005480824 nova_compute[260089]: 2025-10-11 03:52:58.535 2 DEBUG oslo_concurrency.lockutils [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Releasing lock "refresh_cache-0f4ead16-8af5-427a-9543-772b0c23733d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 10 23:52:58 np0005480824 nova_compute[260089]: 2025-10-11 03:52:58.536 2 DEBUG nova.compute.manager [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] Instance network_info: |[{"id": "d24beac6-fe81-4cb4-b500-c4446f3106b3", "address": "fa:16:3e:26:e2:d3", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd24beac6-fe", "ovs_interfaceid": "d24beac6-fe81-4cb4-b500-c4446f3106b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 10 23:52:58 np0005480824 nova_compute[260089]: 2025-10-11 03:52:58.536 2 DEBUG oslo_concurrency.lockutils [req-40096e50-8810-474b-b0f7-b480b385aaa7 req-b81162f2-ae5a-4827-860d-1f35dc0a17f2 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquired lock "refresh_cache-0f4ead16-8af5-427a-9543-772b0c23733d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 10 23:52:58 np0005480824 nova_compute[260089]: 2025-10-11 03:52:58.537 2 DEBUG nova.network.neutron [req-40096e50-8810-474b-b0f7-b480b385aaa7 req-b81162f2-ae5a-4827-860d-1f35dc0a17f2 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] Refreshing network info cache for port d24beac6-fe81-4cb4-b500-c4446f3106b3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 10 23:52:58 np0005480824 podman[284154]: 2025-10-11 03:52:58.559366064 +0000 UTC m=+0.094734566 container create 7fe3a392c888a9df933b8f46e5647a2152fac71226b3ae6878324810b208a3f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_johnson, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 10 23:52:58 np0005480824 podman[284154]: 2025-10-11 03:52:58.527449654 +0000 UTC m=+0.062818206 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:52:58 np0005480824 systemd[1]: Started libpod-conmon-7fe3a392c888a9df933b8f46e5647a2152fac71226b3ae6878324810b208a3f6.scope.
Oct 10 23:52:58 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:52:58 np0005480824 nova_compute[260089]: 2025-10-11 03:52:58.655 2 DEBUG oslo_concurrency.lockutils [None req-583c72b3-ed1c-4c7d-b40f-fc980ffabf66 cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Lock "7452e9a5-0e1b-4c0c-816b-57e0ea976747" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.229s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 23:52:58 np0005480824 podman[284154]: 2025-10-11 03:52:58.677049466 +0000 UTC m=+0.212417958 container init 7fe3a392c888a9df933b8f46e5647a2152fac71226b3ae6878324810b208a3f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_johnson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:52:58 np0005480824 nova_compute[260089]: 2025-10-11 03:52:58.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 23:52:58 np0005480824 podman[284154]: 2025-10-11 03:52:58.684912681 +0000 UTC m=+0.220281153 container start 7fe3a392c888a9df933b8f46e5647a2152fac71226b3ae6878324810b208a3f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_johnson, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 10 23:52:58 np0005480824 podman[284154]: 2025-10-11 03:52:58.688563417 +0000 UTC m=+0.223931919 container attach 7fe3a392c888a9df933b8f46e5647a2152fac71226b3ae6878324810b208a3f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_johnson, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:52:58 np0005480824 frosty_johnson[284171]: 167 167
Oct 10 23:52:58 np0005480824 systemd[1]: libpod-7fe3a392c888a9df933b8f46e5647a2152fac71226b3ae6878324810b208a3f6.scope: Deactivated successfully.
Oct 10 23:52:58 np0005480824 podman[284154]: 2025-10-11 03:52:58.695963641 +0000 UTC m=+0.231332103 container died 7fe3a392c888a9df933b8f46e5647a2152fac71226b3ae6878324810b208a3f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_johnson, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 10 23:52:58 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1260: 321 pgs: 321 active+clean; 391 MiB data, 482 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 5.3 MiB/s wr, 206 op/s
Oct 10 23:52:58 np0005480824 systemd[1]: var-lib-containers-storage-overlay-e7ee18d4d4a8b9350c5642766fbd03b23283fa17d8f51f2a80b372b1ed61be82-merged.mount: Deactivated successfully.
Oct 10 23:52:58 np0005480824 podman[284154]: 2025-10-11 03:52:58.740673561 +0000 UTC m=+0.276042033 container remove 7fe3a392c888a9df933b8f46e5647a2152fac71226b3ae6878324810b208a3f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_johnson, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 10 23:52:58 np0005480824 systemd[1]: libpod-conmon-7fe3a392c888a9df933b8f46e5647a2152fac71226b3ae6878324810b208a3f6.scope: Deactivated successfully.
Oct 10 23:52:58 np0005480824 podman[284194]: 2025-10-11 03:52:58.948354986 +0000 UTC m=+0.061543316 container create f1c438eead89d8050b5cc8af5e76fedd29f9113a1380797344800b4878200c06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_kare, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 10 23:52:59 np0005480824 systemd[1]: Started libpod-conmon-f1c438eead89d8050b5cc8af5e76fedd29f9113a1380797344800b4878200c06.scope.
Oct 10 23:52:59 np0005480824 podman[284194]: 2025-10-11 03:52:58.922010287 +0000 UTC m=+0.035198627 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:52:59 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:52:59 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcdedb5a80ff56650ea3220cac316b48479236087c439e6d84d56a45140450a2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:52:59 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcdedb5a80ff56650ea3220cac316b48479236087c439e6d84d56a45140450a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:52:59 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcdedb5a80ff56650ea3220cac316b48479236087c439e6d84d56a45140450a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:52:59 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcdedb5a80ff56650ea3220cac316b48479236087c439e6d84d56a45140450a2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:52:59 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcdedb5a80ff56650ea3220cac316b48479236087c439e6d84d56a45140450a2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 23:52:59 np0005480824 podman[284194]: 2025-10-11 03:52:59.056141617 +0000 UTC m=+0.169329947 container init f1c438eead89d8050b5cc8af5e76fedd29f9113a1380797344800b4878200c06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_kare, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:52:59 np0005480824 podman[284194]: 2025-10-11 03:52:59.074249142 +0000 UTC m=+0.187437442 container start f1c438eead89d8050b5cc8af5e76fedd29f9113a1380797344800b4878200c06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_kare, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:52:59 np0005480824 podman[284194]: 2025-10-11 03:52:59.082066375 +0000 UTC m=+0.195254675 container attach f1c438eead89d8050b5cc8af5e76fedd29f9113a1380797344800b4878200c06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_kare, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Oct 10 23:52:59 np0005480824 nova_compute[260089]: 2025-10-11 03:52:59.549 2 DEBUG nova.network.neutron [req-40096e50-8810-474b-b0f7-b480b385aaa7 req-b81162f2-ae5a-4827-860d-1f35dc0a17f2 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] Updated VIF entry in instance network info cache for port d24beac6-fe81-4cb4-b500-c4446f3106b3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 10 23:52:59 np0005480824 nova_compute[260089]: 2025-10-11 03:52:59.552 2 DEBUG nova.network.neutron [req-40096e50-8810-474b-b0f7-b480b385aaa7 req-b81162f2-ae5a-4827-860d-1f35dc0a17f2 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] Updating instance_info_cache with network_info: [{"id": "d24beac6-fe81-4cb4-b500-c4446f3106b3", "address": "fa:16:3e:26:e2:d3", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd24beac6-fe", "ovs_interfaceid": "d24beac6-fe81-4cb4-b500-c4446f3106b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 23:52:59 np0005480824 nova_compute[260089]: 2025-10-11 03:52:59.566 2 DEBUG oslo_concurrency.lockutils [req-40096e50-8810-474b-b0f7-b480b385aaa7 req-b81162f2-ae5a-4827-860d-1f35dc0a17f2 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Releasing lock "refresh_cache-0f4ead16-8af5-427a-9543-772b0c23733d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 10 23:52:59 np0005480824 nova_compute[260089]: 2025-10-11 03:52:59.600 2 DEBUG os_brick.utils [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 10 23:52:59 np0005480824 nova_compute[260089]: 2025-10-11 03:52:59.603 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 23:52:59 np0005480824 nova_compute[260089]: 2025-10-11 03:52:59.619 676 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 23:52:59 np0005480824 nova_compute[260089]: 2025-10-11 03:52:59.619 676 DEBUG oslo.privsep.daemon [-] privsep: reply[1cc9940a-df5b-44cd-8ef8-87d0ccfdf209]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 23:52:59 np0005480824 nova_compute[260089]: 2025-10-11 03:52:59.621 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 23:52:59 np0005480824 nova_compute[260089]: 2025-10-11 03:52:59.646 676 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.025s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 23:52:59 np0005480824 nova_compute[260089]: 2025-10-11 03:52:59.646 676 DEBUG oslo.privsep.daemon [-] privsep: reply[19f7746e-f43d-4e5c-b85c-e5d2e2afe8ba]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d5d671ddab5a', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 23:52:59 np0005480824 nova_compute[260089]: 2025-10-11 03:52:59.648 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 23:52:59 np0005480824 nova_compute[260089]: 2025-10-11 03:52:59.663 676 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 23:52:59 np0005480824 nova_compute[260089]: 2025-10-11 03:52:59.663 676 DEBUG oslo.privsep.daemon [-] privsep: reply[f2ebf676-2b85-4b06-b769-2e2283bcc872]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 23:52:59 np0005480824 nova_compute[260089]: 2025-10-11 03:52:59.666 676 DEBUG oslo.privsep.daemon [-] privsep: reply[fba346ca-8a95-49bf-8d1c-d8ffda23f241]: (4, 'fb3a2fb1-9efa-43f0-a057-bf422ac6b8d7') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 23:52:59 np0005480824 nova_compute[260089]: 2025-10-11 03:52:59.666 2 DEBUG oslo_concurrency.processutils [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 23:52:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:52:59 np0005480824 nova_compute[260089]: 2025-10-11 03:52:59.712 2 DEBUG oslo_concurrency.processutils [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] CMD "nvme version" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 23:52:59 np0005480824 nova_compute[260089]: 2025-10-11 03:52:59.717 2 DEBUG os_brick.initiator.connectors.lightos [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 10 23:52:59 np0005480824 nova_compute[260089]: 2025-10-11 03:52:59.718 2 DEBUG os_brick.initiator.connectors.lightos [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 10 23:52:59 np0005480824 nova_compute[260089]: 2025-10-11 03:52:59.719 2 DEBUG os_brick.initiator.connectors.lightos [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 10 23:52:59 np0005480824 nova_compute[260089]: 2025-10-11 03:52:59.720 2 DEBUG os_brick.utils [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] <== get_connector_properties: return (118ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d5d671ddab5a', 'do_local_attach': False, 'nvme_hostid': '83042a20-0f72-4c47-8453-e72ead378624', 'system uuid': 'fb3a2fb1-9efa-43f0-a057-bf422ac6b8d7', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 10 23:52:59 np0005480824 nova_compute[260089]: 2025-10-11 03:52:59.721 2 DEBUG nova.virt.block_device [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] Updating existing volume attachment record: 82c06897-f4ae-46d5-a224-ddff11455bc1 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 10 23:53:00 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:53:00 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3090677432' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:53:00 np0005480824 sweet_kare[284211]: --> passed data devices: 0 physical, 3 LVM
Oct 10 23:53:00 np0005480824 sweet_kare[284211]: --> relative data size: 1.0
Oct 10 23:53:00 np0005480824 sweet_kare[284211]: --> All data devices are unavailable
Oct 10 23:53:00 np0005480824 systemd[1]: libpod-f1c438eead89d8050b5cc8af5e76fedd29f9113a1380797344800b4878200c06.scope: Deactivated successfully.
Oct 10 23:53:00 np0005480824 systemd[1]: libpod-f1c438eead89d8050b5cc8af5e76fedd29f9113a1380797344800b4878200c06.scope: Consumed 1.250s CPU time.
Oct 10 23:53:00 np0005480824 podman[284194]: 2025-10-11 03:53:00.399520177 +0000 UTC m=+1.512708507 container died f1c438eead89d8050b5cc8af5e76fedd29f9113a1380797344800b4878200c06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_kare, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:53:00 np0005480824 nova_compute[260089]: 2025-10-11 03:53:00.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 23:53:00 np0005480824 systemd[1]: var-lib-containers-storage-overlay-dcdedb5a80ff56650ea3220cac316b48479236087c439e6d84d56a45140450a2-merged.mount: Deactivated successfully.
Oct 10 23:53:00 np0005480824 nova_compute[260089]: 2025-10-11 03:53:00.440 2 DEBUG nova.compute.manager [req-779b9698-b28a-4806-bc04-6be6b9356fb9 req-f8f600ef-6335-4cfb-a795-a75f6f059663 e164ff95c6c84a77b0287b454f7aa48c 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Received event volume-extended-a1b7939b-e611-4ad8-827a-3e86d5e9be68 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 23:53:00 np0005480824 nova_compute[260089]: 2025-10-11 03:53:00.458 2 DEBUG nova.compute.manager [req-779b9698-b28a-4806-bc04-6be6b9356fb9 req-f8f600ef-6335-4cfb-a795-a75f6f059663 e164ff95c6c84a77b0287b454f7aa48c 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Handling volume-extended event for volume a1b7939b-e611-4ad8-827a-3e86d5e9be68 extend_volume /usr/lib/python3.9/site-packages/nova/compute/manager.py:10896
Oct 10 23:53:00 np0005480824 podman[284194]: 2025-10-11 03:53:00.458736657 +0000 UTC m=+1.571924947 container remove f1c438eead89d8050b5cc8af5e76fedd29f9113a1380797344800b4878200c06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_kare, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Oct 10 23:53:00 np0005480824 systemd[1]: libpod-conmon-f1c438eead89d8050b5cc8af5e76fedd29f9113a1380797344800b4878200c06.scope: Deactivated successfully.
Oct 10 23:53:00 np0005480824 nova_compute[260089]: 2025-10-11 03:53:00.486 2 INFO nova.compute.manager [req-779b9698-b28a-4806-bc04-6be6b9356fb9 req-f8f600ef-6335-4cfb-a795-a75f6f059663 e164ff95c6c84a77b0287b454f7aa48c 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Cinder extended volume a1b7939b-e611-4ad8-827a-3e86d5e9be68; extending it to detect new size
Oct 10 23:53:00 np0005480824 nova_compute[260089]: 2025-10-11 03:53:00.645 2 DEBUG nova.virt.libvirt.driver [req-779b9698-b28a-4806-bc04-6be6b9356fb9 req-f8f600ef-6335-4cfb-a795-a75f6f059663 e164ff95c6c84a77b0287b454f7aa48c 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Resizing target device vdb to 2147483648 _resize_attached_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2756
Oct 10 23:53:00 np0005480824 nova_compute[260089]: 2025-10-11 03:53:00.674 2 DEBUG nova.compute.manager [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 10 23:53:00 np0005480824 nova_compute[260089]: 2025-10-11 03:53:00.677 2 DEBUG nova.virt.libvirt.driver [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 10 23:53:00 np0005480824 nova_compute[260089]: 2025-10-11 03:53:00.678 2 INFO nova.virt.libvirt.driver [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] Creating image(s)
Oct 10 23:53:00 np0005480824 nova_compute[260089]: 2025-10-11 03:53:00.679 2 DEBUG nova.virt.libvirt.driver [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Oct 10 23:53:00 np0005480824 nova_compute[260089]: 2025-10-11 03:53:00.679 2 DEBUG nova.virt.libvirt.driver [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] Ensure instance console log exists: /var/lib/nova/instances/0f4ead16-8af5-427a-9543-772b0c23733d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 10 23:53:00 np0005480824 nova_compute[260089]: 2025-10-11 03:53:00.680 2 DEBUG oslo_concurrency.lockutils [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 23:53:00 np0005480824 nova_compute[260089]: 2025-10-11 03:53:00.681 2 DEBUG oslo_concurrency.lockutils [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 23:53:00 np0005480824 nova_compute[260089]: 2025-10-11 03:53:00.681 2 DEBUG oslo_concurrency.lockutils [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 23:53:00 np0005480824 nova_compute[260089]: 2025-10-11 03:53:00.686 2 DEBUG nova.virt.libvirt.driver [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] Start _get_guest_xml network_info=[{"id": "d24beac6-fe81-4cb4-b500-c4446f3106b3", "address": "fa:16:3e:26:e2:d3", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd24beac6-fe", "ovs_interfaceid": "d24beac6-fe81-4cb4-b500-c4446f3106b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'attachment_id': '82c06897-f4ae-46d5-a224-ddff11455bc1', 'mount_device': '/dev/vda', 'delete_on_termination': True, 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-433a45af-bba3-48ac-ab26-868daf44aba6', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '433a45af-bba3-48ac-ab26-868daf44aba6', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '0f4ead16-8af5-427a-9543-772b0c23733d', 'attached_at': '', 'detached_at': '', 'volume_id': '433a45af-bba3-48ac-ab26-868daf44aba6', 'serial': '433a45af-bba3-48ac-ab26-868daf44aba6'}, 'device_type': 'disk', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 10 23:53:00 np0005480824 nova_compute[260089]: 2025-10-11 03:53:00.694 2 WARNING nova.virt.libvirt.driver [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 10 23:53:00 np0005480824 nova_compute[260089]: 2025-10-11 03:53:00.700 2 DEBUG nova.virt.libvirt.host [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 10 23:53:00 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1261: 321 pgs: 321 active+clean; 391 MiB data, 482 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 5.3 MiB/s wr, 206 op/s
Oct 10 23:53:00 np0005480824 nova_compute[260089]: 2025-10-11 03:53:00.702 2 DEBUG nova.virt.libvirt.host [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 10 23:53:00 np0005480824 nova_compute[260089]: 2025-10-11 03:53:00.706 2 DEBUG nova.virt.libvirt.host [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 10 23:53:00 np0005480824 nova_compute[260089]: 2025-10-11 03:53:00.707 2 DEBUG nova.virt.libvirt.host [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 10 23:53:00 np0005480824 nova_compute[260089]: 2025-10-11 03:53:00.707 2 DEBUG nova.virt.libvirt.driver [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 10 23:53:00 np0005480824 nova_compute[260089]: 2025-10-11 03:53:00.708 2 DEBUG nova.virt.hardware [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T03:44:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6707ecae-2ae2-4c2d-86dc-409bac38f6a5',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 10 23:53:00 np0005480824 nova_compute[260089]: 2025-10-11 03:53:00.708 2 DEBUG nova.virt.hardware [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 10 23:53:00 np0005480824 nova_compute[260089]: 2025-10-11 03:53:00.709 2 DEBUG nova.virt.hardware [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 10 23:53:00 np0005480824 nova_compute[260089]: 2025-10-11 03:53:00.709 2 DEBUG nova.virt.hardware [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 10 23:53:00 np0005480824 nova_compute[260089]: 2025-10-11 03:53:00.709 2 DEBUG nova.virt.hardware [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 10 23:53:00 np0005480824 nova_compute[260089]: 2025-10-11 03:53:00.709 2 DEBUG nova.virt.hardware [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 10 23:53:00 np0005480824 nova_compute[260089]: 2025-10-11 03:53:00.710 2 DEBUG nova.virt.hardware [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 10 23:53:00 np0005480824 nova_compute[260089]: 2025-10-11 03:53:00.710 2 DEBUG nova.virt.hardware [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 10 23:53:00 np0005480824 nova_compute[260089]: 2025-10-11 03:53:00.711 2 DEBUG nova.virt.hardware [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 10 23:53:00 np0005480824 nova_compute[260089]: 2025-10-11 03:53:00.711 2 DEBUG nova.virt.hardware [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 10 23:53:00 np0005480824 nova_compute[260089]: 2025-10-11 03:53:00.711 2 DEBUG nova.virt.hardware [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 10 23:53:00 np0005480824 nova_compute[260089]: 2025-10-11 03:53:00.748 2 DEBUG nova.storage.rbd_utils [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] rbd image 0f4ead16-8af5-427a-9543-772b0c23733d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:53:00 np0005480824 nova_compute[260089]: 2025-10-11 03:53:00.753 2 DEBUG oslo_concurrency.processutils [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:53:01 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:53:01 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4162555081' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:53:01 np0005480824 nova_compute[260089]: 2025-10-11 03:53:01.204 2 DEBUG oslo_concurrency.processutils [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:53:01 np0005480824 nova_compute[260089]: 2025-10-11 03:53:01.233 2 DEBUG nova.virt.libvirt.vif [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T03:52:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-885663709',display_name='tempest-TestVolumeBootPattern-server-885663709',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-885663709',id=14,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='55d21391a321476eb133317b3402b0f0',ramdisk_id='',reservation_id='r-84ihe9eh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-739984652',owner_user_name='tempest-TestVolumeBootPattern-739984652-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,up
dated_at=2025-10-11T03:52:55Z,user_data=None,user_id='38ebc503771e417aaf1f3aea0c835994',uuid=0f4ead16-8af5-427a-9543-772b0c23733d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d24beac6-fe81-4cb4-b500-c4446f3106b3", "address": "fa:16:3e:26:e2:d3", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd24beac6-fe", "ovs_interfaceid": "d24beac6-fe81-4cb4-b500-c4446f3106b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 10 23:53:01 np0005480824 nova_compute[260089]: 2025-10-11 03:53:01.234 2 DEBUG nova.network.os_vif_util [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Converting VIF {"id": "d24beac6-fe81-4cb4-b500-c4446f3106b3", "address": "fa:16:3e:26:e2:d3", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd24beac6-fe", "ovs_interfaceid": "d24beac6-fe81-4cb4-b500-c4446f3106b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 23:53:01 np0005480824 nova_compute[260089]: 2025-10-11 03:53:01.235 2 DEBUG nova.network.os_vif_util [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:e2:d3,bridge_name='br-int',has_traffic_filtering=True,id=d24beac6-fe81-4cb4-b500-c4446f3106b3,network=Network(359720eb-a957-4bcd-b9b2-3cf7dad947e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd24beac6-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 23:53:01 np0005480824 nova_compute[260089]: 2025-10-11 03:53:01.236 2 DEBUG nova.objects.instance [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0f4ead16-8af5-427a-9543-772b0c23733d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:53:01 np0005480824 nova_compute[260089]: 2025-10-11 03:53:01.252 2 DEBUG nova.virt.libvirt.driver [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] End _get_guest_xml xml=<domain type="kvm">
Oct 10 23:53:01 np0005480824 nova_compute[260089]:  <uuid>0f4ead16-8af5-427a-9543-772b0c23733d</uuid>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:  <name>instance-0000000e</name>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:  <memory>131072</memory>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:  <vcpu>1</vcpu>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:  <metadata>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 10 23:53:01 np0005480824 nova_compute[260089]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:      <nova:name>tempest-TestVolumeBootPattern-server-885663709</nova:name>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:      <nova:creationTime>2025-10-11 03:53:00</nova:creationTime>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:      <nova:flavor name="m1.nano">
Oct 10 23:53:01 np0005480824 nova_compute[260089]:        <nova:memory>128</nova:memory>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:        <nova:disk>1</nova:disk>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:        <nova:swap>0</nova:swap>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:        <nova:ephemeral>0</nova:ephemeral>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:        <nova:vcpus>1</nova:vcpus>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:      </nova:flavor>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:      <nova:owner>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:        <nova:user uuid="38ebc503771e417aaf1f3aea0c835994">tempest-TestVolumeBootPattern-739984652-project-member</nova:user>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:        <nova:project uuid="55d21391a321476eb133317b3402b0f0">tempest-TestVolumeBootPattern-739984652</nova:project>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:      </nova:owner>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:      <nova:ports>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:        <nova:port uuid="d24beac6-fe81-4cb4-b500-c4446f3106b3">
Oct 10 23:53:01 np0005480824 nova_compute[260089]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:        </nova:port>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:      </nova:ports>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:    </nova:instance>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:  </metadata>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:  <sysinfo type="smbios">
Oct 10 23:53:01 np0005480824 nova_compute[260089]:    <system>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:      <entry name="manufacturer">RDO</entry>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:      <entry name="product">OpenStack Compute</entry>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:      <entry name="serial">0f4ead16-8af5-427a-9543-772b0c23733d</entry>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:      <entry name="uuid">0f4ead16-8af5-427a-9543-772b0c23733d</entry>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:      <entry name="family">Virtual Machine</entry>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:    </system>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:  </sysinfo>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:  <os>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:    <boot dev="hd"/>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:    <smbios mode="sysinfo"/>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:  </os>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:  <features>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:    <acpi/>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:    <apic/>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:    <vmcoreinfo/>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:  </features>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:  <clock offset="utc">
Oct 10 23:53:01 np0005480824 nova_compute[260089]:    <timer name="pit" tickpolicy="delay"/>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:    <timer name="hpet" present="no"/>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:  </clock>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:  <cpu mode="host-model" match="exact">
Oct 10 23:53:01 np0005480824 nova_compute[260089]:    <topology sockets="1" cores="1" threads="1"/>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:  </cpu>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:  <devices>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:    <disk type="network" device="cdrom">
Oct 10 23:53:01 np0005480824 nova_compute[260089]:      <driver type="raw" cache="none"/>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:      <source protocol="rbd" name="vms/0f4ead16-8af5-427a-9543-772b0c23733d_disk.config">
Oct 10 23:53:01 np0005480824 nova_compute[260089]:        <host name="192.168.122.100" port="6789"/>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:      </source>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:      <auth username="openstack">
Oct 10 23:53:01 np0005480824 nova_compute[260089]:        <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:      </auth>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:      <target dev="sda" bus="sata"/>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:    </disk>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:    <disk type="network" device="disk">
Oct 10 23:53:01 np0005480824 nova_compute[260089]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:      <source protocol="rbd" name="volumes/volume-433a45af-bba3-48ac-ab26-868daf44aba6">
Oct 10 23:53:01 np0005480824 nova_compute[260089]:        <host name="192.168.122.100" port="6789"/>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:      </source>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:      <auth username="openstack">
Oct 10 23:53:01 np0005480824 nova_compute[260089]:        <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:      </auth>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:      <target dev="vda" bus="virtio"/>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:      <serial>433a45af-bba3-48ac-ab26-868daf44aba6</serial>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:    </disk>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:    <interface type="ethernet">
Oct 10 23:53:01 np0005480824 nova_compute[260089]:      <mac address="fa:16:3e:26:e2:d3"/>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:      <model type="virtio"/>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:      <driver name="vhost" rx_queue_size="512"/>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:      <mtu size="1442"/>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:      <target dev="tapd24beac6-fe"/>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:    </interface>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:    <serial type="pty">
Oct 10 23:53:01 np0005480824 nova_compute[260089]:      <log file="/var/lib/nova/instances/0f4ead16-8af5-427a-9543-772b0c23733d/console.log" append="off"/>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:    </serial>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:    <video>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:      <model type="virtio"/>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:    </video>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:    <input type="tablet" bus="usb"/>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:    <rng model="virtio">
Oct 10 23:53:01 np0005480824 nova_compute[260089]:      <backend model="random">/dev/urandom</backend>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:    </rng>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root"/>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:    <controller type="usb" index="0"/>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:    <memballoon model="virtio">
Oct 10 23:53:01 np0005480824 nova_compute[260089]:      <stats period="10"/>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:    </memballoon>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:  </devices>
Oct 10 23:53:01 np0005480824 nova_compute[260089]: </domain>
Oct 10 23:53:01 np0005480824 nova_compute[260089]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 10 23:53:01 np0005480824 nova_compute[260089]: 2025-10-11 03:53:01.252 2 DEBUG nova.compute.manager [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] Preparing to wait for external event network-vif-plugged-d24beac6-fe81-4cb4-b500-c4446f3106b3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 10 23:53:01 np0005480824 nova_compute[260089]: 2025-10-11 03:53:01.253 2 DEBUG oslo_concurrency.lockutils [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Acquiring lock "0f4ead16-8af5-427a-9543-772b0c23733d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:53:01 np0005480824 nova_compute[260089]: 2025-10-11 03:53:01.253 2 DEBUG oslo_concurrency.lockutils [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "0f4ead16-8af5-427a-9543-772b0c23733d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:53:01 np0005480824 nova_compute[260089]: 2025-10-11 03:53:01.253 2 DEBUG oslo_concurrency.lockutils [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "0f4ead16-8af5-427a-9543-772b0c23733d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:53:01 np0005480824 nova_compute[260089]: 2025-10-11 03:53:01.254 2 DEBUG nova.virt.libvirt.vif [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T03:52:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-885663709',display_name='tempest-TestVolumeBootPattern-server-885663709',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-885663709',id=14,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='55d21391a321476eb133317b3402b0f0',ramdisk_id='',reservation_id='r-84ihe9eh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-739984652',owner_user_name='tempest-TestVolumeBootPattern-739984652-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T03:52:55Z,user_data=None,user_id='38ebc503771e417aaf1f3aea0c835994',uuid=0f4ead16-8af5-427a-9543-772b0c23733d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d24beac6-fe81-4cb4-b500-c4446f3106b3", "address": "fa:16:3e:26:e2:d3", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd24beac6-fe", "ovs_interfaceid": "d24beac6-fe81-4cb4-b500-c4446f3106b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 10 23:53:01 np0005480824 nova_compute[260089]: 2025-10-11 03:53:01.254 2 DEBUG nova.network.os_vif_util [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Converting VIF {"id": "d24beac6-fe81-4cb4-b500-c4446f3106b3", "address": "fa:16:3e:26:e2:d3", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd24beac6-fe", "ovs_interfaceid": "d24beac6-fe81-4cb4-b500-c4446f3106b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 23:53:01 np0005480824 nova_compute[260089]: 2025-10-11 03:53:01.255 2 DEBUG nova.network.os_vif_util [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:e2:d3,bridge_name='br-int',has_traffic_filtering=True,id=d24beac6-fe81-4cb4-b500-c4446f3106b3,network=Network(359720eb-a957-4bcd-b9b2-3cf7dad947e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd24beac6-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 23:53:01 np0005480824 nova_compute[260089]: 2025-10-11 03:53:01.255 2 DEBUG os_vif [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:e2:d3,bridge_name='br-int',has_traffic_filtering=True,id=d24beac6-fe81-4cb4-b500-c4446f3106b3,network=Network(359720eb-a957-4bcd-b9b2-3cf7dad947e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd24beac6-fe') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 10 23:53:01 np0005480824 nova_compute[260089]: 2025-10-11 03:53:01.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:53:01 np0005480824 nova_compute[260089]: 2025-10-11 03:53:01.257 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:53:01 np0005480824 nova_compute[260089]: 2025-10-11 03:53:01.257 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 10 23:53:01 np0005480824 nova_compute[260089]: 2025-10-11 03:53:01.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:53:01 np0005480824 nova_compute[260089]: 2025-10-11 03:53:01.263 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd24beac6-fe, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:53:01 np0005480824 nova_compute[260089]: 2025-10-11 03:53:01.264 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd24beac6-fe, col_values=(('external_ids', {'iface-id': 'd24beac6-fe81-4cb4-b500-c4446f3106b3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:26:e2:d3', 'vm-uuid': '0f4ead16-8af5-427a-9543-772b0c23733d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:53:01 np0005480824 nova_compute[260089]: 2025-10-11 03:53:01.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:53:01 np0005480824 NetworkManager[44969]: <info>  [1760154781.2675] manager: (tapd24beac6-fe): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/77)
Oct 10 23:53:01 np0005480824 nova_compute[260089]: 2025-10-11 03:53:01.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 10 23:53:01 np0005480824 nova_compute[260089]: 2025-10-11 03:53:01.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:53:01 np0005480824 nova_compute[260089]: 2025-10-11 03:53:01.279 2 INFO os_vif [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:e2:d3,bridge_name='br-int',has_traffic_filtering=True,id=d24beac6-fe81-4cb4-b500-c4446f3106b3,network=Network(359720eb-a957-4bcd-b9b2-3cf7dad947e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd24beac6-fe')#033[00m
Oct 10 23:53:01 np0005480824 nova_compute[260089]: 2025-10-11 03:53:01.333 2 DEBUG nova.virt.libvirt.driver [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 23:53:01 np0005480824 nova_compute[260089]: 2025-10-11 03:53:01.333 2 DEBUG nova.virt.libvirt.driver [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 23:53:01 np0005480824 nova_compute[260089]: 2025-10-11 03:53:01.333 2 DEBUG nova.virt.libvirt.driver [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] No VIF found with MAC fa:16:3e:26:e2:d3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 10 23:53:01 np0005480824 nova_compute[260089]: 2025-10-11 03:53:01.334 2 INFO nova.virt.libvirt.driver [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] Using config drive#033[00m
Oct 10 23:53:01 np0005480824 podman[284441]: 2025-10-11 03:53:01.341445351 +0000 UTC m=+0.054349577 container create 18abeb6b9e45a4c5014b4bfffc8a0d8065b74ebdd55dc4e55d470d650905fa2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_jones, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 10 23:53:01 np0005480824 nova_compute[260089]: 2025-10-11 03:53:01.357 2 DEBUG nova.storage.rbd_utils [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] rbd image 0f4ead16-8af5-427a-9543-772b0c23733d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:53:01 np0005480824 systemd[1]: Started libpod-conmon-18abeb6b9e45a4c5014b4bfffc8a0d8065b74ebdd55dc4e55d470d650905fa2e.scope.
Oct 10 23:53:01 np0005480824 podman[284441]: 2025-10-11 03:53:01.315869641 +0000 UTC m=+0.028773887 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:53:01 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:53:01 np0005480824 podman[284441]: 2025-10-11 03:53:01.464231224 +0000 UTC m=+0.177135490 container init 18abeb6b9e45a4c5014b4bfffc8a0d8065b74ebdd55dc4e55d470d650905fa2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_jones, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:53:01 np0005480824 podman[284441]: 2025-10-11 03:53:01.477015154 +0000 UTC m=+0.189919380 container start 18abeb6b9e45a4c5014b4bfffc8a0d8065b74ebdd55dc4e55d470d650905fa2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:53:01 np0005480824 podman[284441]: 2025-10-11 03:53:01.484726565 +0000 UTC m=+0.197630801 container attach 18abeb6b9e45a4c5014b4bfffc8a0d8065b74ebdd55dc4e55d470d650905fa2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_jones, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:53:01 np0005480824 sharp_jones[284478]: 167 167
Oct 10 23:53:01 np0005480824 systemd[1]: libpod-18abeb6b9e45a4c5014b4bfffc8a0d8065b74ebdd55dc4e55d470d650905fa2e.scope: Deactivated successfully.
Oct 10 23:53:01 np0005480824 podman[284441]: 2025-10-11 03:53:01.486699602 +0000 UTC m=+0.199603858 container died 18abeb6b9e45a4c5014b4bfffc8a0d8065b74ebdd55dc4e55d470d650905fa2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_jones, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 10 23:53:01 np0005480824 systemd[1]: var-lib-containers-storage-overlay-906b0f0d619ce509a47ca59d983504997bb29517a3477d46f26380cad3ff37c4-merged.mount: Deactivated successfully.
Oct 10 23:53:01 np0005480824 podman[284441]: 2025-10-11 03:53:01.548865511 +0000 UTC m=+0.261769747 container remove 18abeb6b9e45a4c5014b4bfffc8a0d8065b74ebdd55dc4e55d470d650905fa2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_jones, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:53:01 np0005480824 systemd[1]: libpod-conmon-18abeb6b9e45a4c5014b4bfffc8a0d8065b74ebdd55dc4e55d470d650905fa2e.scope: Deactivated successfully.
Oct 10 23:53:01 np0005480824 nova_compute[260089]: 2025-10-11 03:53:01.649 2 INFO nova.virt.libvirt.driver [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] Creating config drive at /var/lib/nova/instances/0f4ead16-8af5-427a-9543-772b0c23733d/disk.config#033[00m
Oct 10 23:53:01 np0005480824 nova_compute[260089]: 2025-10-11 03:53:01.659 2 DEBUG oslo_concurrency.processutils [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0f4ead16-8af5-427a-9543-772b0c23733d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsu2wyfa5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:53:01 np0005480824 nova_compute[260089]: 2025-10-11 03:53:01.721 2 DEBUG oslo_concurrency.lockutils [None req-53ee810c-0d8e-4156-afa7-efbc3fe8287c cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Acquiring lock "7452e9a5-0e1b-4c0c-816b-57e0ea976747" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:53:01 np0005480824 nova_compute[260089]: 2025-10-11 03:53:01.722 2 DEBUG oslo_concurrency.lockutils [None req-53ee810c-0d8e-4156-afa7-efbc3fe8287c cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Lock "7452e9a5-0e1b-4c0c-816b-57e0ea976747" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:53:01 np0005480824 nova_compute[260089]: 2025-10-11 03:53:01.739 2 INFO nova.compute.manager [None req-53ee810c-0d8e-4156-afa7-efbc3fe8287c cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Detaching volume a1b7939b-e611-4ad8-827a-3e86d5e9be68#033[00m
Oct 10 23:53:01 np0005480824 nova_compute[260089]: 2025-10-11 03:53:01.815 2 DEBUG oslo_concurrency.processutils [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0f4ead16-8af5-427a-9543-772b0c23733d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsu2wyfa5" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:53:01 np0005480824 podman[284504]: 2025-10-11 03:53:01.825390274 +0000 UTC m=+0.069508383 container create 4607069c9e27a75b781ddedf2ebbbeb0cb8bcdfd1dc6bfe2951d231142f016fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_nightingale, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:53:01 np0005480824 nova_compute[260089]: 2025-10-11 03:53:01.856 2 DEBUG nova.storage.rbd_utils [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] rbd image 0f4ead16-8af5-427a-9543-772b0c23733d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:53:01 np0005480824 nova_compute[260089]: 2025-10-11 03:53:01.864 2 DEBUG oslo_concurrency.processutils [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0f4ead16-8af5-427a-9543-772b0c23733d/disk.config 0f4ead16-8af5-427a-9543-772b0c23733d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:53:01 np0005480824 podman[284504]: 2025-10-11 03:53:01.797680513 +0000 UTC m=+0.041798662 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:53:01 np0005480824 systemd[1]: Started libpod-conmon-4607069c9e27a75b781ddedf2ebbbeb0cb8bcdfd1dc6bfe2951d231142f016fb.scope.
Oct 10 23:53:01 np0005480824 nova_compute[260089]: 2025-10-11 03:53:01.903 2 INFO nova.virt.block_device [None req-53ee810c-0d8e-4156-afa7-efbc3fe8287c cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Attempting to driver detach volume a1b7939b-e611-4ad8-827a-3e86d5e9be68 from mountpoint /dev/vdb#033[00m
Oct 10 23:53:01 np0005480824 nova_compute[260089]: 2025-10-11 03:53:01.920 2 DEBUG nova.virt.libvirt.driver [None req-53ee810c-0d8e-4156-afa7-efbc3fe8287c cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Attempting to detach device vdb from instance 7452e9a5-0e1b-4c0c-816b-57e0ea976747 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct 10 23:53:01 np0005480824 nova_compute[260089]: 2025-10-11 03:53:01.921 2 DEBUG nova.virt.libvirt.guest [None req-53ee810c-0d8e-4156-afa7-efbc3fe8287c cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] detach device xml: <disk type="network" device="disk">
Oct 10 23:53:01 np0005480824 nova_compute[260089]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:  <source protocol="rbd" name="volumes/volume-a1b7939b-e611-4ad8-827a-3e86d5e9be68">
Oct 10 23:53:01 np0005480824 nova_compute[260089]:    <host name="192.168.122.100" port="6789"/>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:  </source>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:  <target dev="vdb" bus="virtio"/>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:  <serial>a1b7939b-e611-4ad8-827a-3e86d5e9be68</serial>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 10 23:53:01 np0005480824 nova_compute[260089]: </disk>
Oct 10 23:53:01 np0005480824 nova_compute[260089]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct 10 23:53:01 np0005480824 nova_compute[260089]: 2025-10-11 03:53:01.930 2 INFO nova.virt.libvirt.driver [None req-53ee810c-0d8e-4156-afa7-efbc3fe8287c cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Successfully detached device vdb from instance 7452e9a5-0e1b-4c0c-816b-57e0ea976747 from the persistent domain config.#033[00m
Oct 10 23:53:01 np0005480824 nova_compute[260089]: 2025-10-11 03:53:01.931 2 DEBUG nova.virt.libvirt.driver [None req-53ee810c-0d8e-4156-afa7-efbc3fe8287c cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 7452e9a5-0e1b-4c0c-816b-57e0ea976747 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Oct 10 23:53:01 np0005480824 nova_compute[260089]: 2025-10-11 03:53:01.931 2 DEBUG nova.virt.libvirt.guest [None req-53ee810c-0d8e-4156-afa7-efbc3fe8287c cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] detach device xml: <disk type="network" device="disk">
Oct 10 23:53:01 np0005480824 nova_compute[260089]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:  <source protocol="rbd" name="volumes/volume-a1b7939b-e611-4ad8-827a-3e86d5e9be68">
Oct 10 23:53:01 np0005480824 nova_compute[260089]:    <host name="192.168.122.100" port="6789"/>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:  </source>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:  <target dev="vdb" bus="virtio"/>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:  <serial>a1b7939b-e611-4ad8-827a-3e86d5e9be68</serial>
Oct 10 23:53:01 np0005480824 nova_compute[260089]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 10 23:53:01 np0005480824 nova_compute[260089]: </disk>
Oct 10 23:53:01 np0005480824 nova_compute[260089]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct 10 23:53:01 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:53:01 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d417aa86c08e29038017ea2ec574eafbdad92b55730bf6bbc6cf0a54c53c837/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:53:01 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d417aa86c08e29038017ea2ec574eafbdad92b55730bf6bbc6cf0a54c53c837/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:53:01 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d417aa86c08e29038017ea2ec574eafbdad92b55730bf6bbc6cf0a54c53c837/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:53:01 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d417aa86c08e29038017ea2ec574eafbdad92b55730bf6bbc6cf0a54c53c837/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:53:01 np0005480824 podman[284504]: 2025-10-11 03:53:01.965470973 +0000 UTC m=+0.209589092 container init 4607069c9e27a75b781ddedf2ebbbeb0cb8bcdfd1dc6bfe2951d231142f016fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_nightingale, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 10 23:53:01 np0005480824 podman[284504]: 2025-10-11 03:53:01.97812119 +0000 UTC m=+0.222239289 container start 4607069c9e27a75b781ddedf2ebbbeb0cb8bcdfd1dc6bfe2951d231142f016fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:53:01 np0005480824 podman[284504]: 2025-10-11 03:53:01.986464816 +0000 UTC m=+0.230582945 container attach 4607069c9e27a75b781ddedf2ebbbeb0cb8bcdfd1dc6bfe2951d231142f016fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 10 23:53:02 np0005480824 nova_compute[260089]: 2025-10-11 03:53:02.042 2 DEBUG nova.virt.libvirt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Received event <DeviceRemovedEvent: 1760154782.0403998, 7452e9a5-0e1b-4c0c-816b-57e0ea976747 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Oct 10 23:53:02 np0005480824 nova_compute[260089]: 2025-10-11 03:53:02.043 2 DEBUG nova.virt.libvirt.driver [None req-53ee810c-0d8e-4156-afa7-efbc3fe8287c cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 7452e9a5-0e1b-4c0c-816b-57e0ea976747 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Oct 10 23:53:02 np0005480824 nova_compute[260089]: 2025-10-11 03:53:02.045 2 INFO nova.virt.libvirt.driver [None req-53ee810c-0d8e-4156-afa7-efbc3fe8287c cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Successfully detached device vdb from instance 7452e9a5-0e1b-4c0c-816b-57e0ea976747 from the live domain config.#033[00m
Oct 10 23:53:02 np0005480824 nova_compute[260089]: 2025-10-11 03:53:02.086 2 DEBUG oslo_concurrency.processutils [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0f4ead16-8af5-427a-9543-772b0c23733d/disk.config 0f4ead16-8af5-427a-9543-772b0c23733d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.222s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:53:02 np0005480824 nova_compute[260089]: 2025-10-11 03:53:02.087 2 INFO nova.virt.libvirt.driver [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] Deleting local config drive /var/lib/nova/instances/0f4ead16-8af5-427a-9543-772b0c23733d/disk.config because it was imported into RBD.#033[00m
Oct 10 23:53:02 np0005480824 kernel: tapd24beac6-fe: entered promiscuous mode
Oct 10 23:53:02 np0005480824 NetworkManager[44969]: <info>  [1760154782.1520] manager: (tapd24beac6-fe): new Tun device (/org/freedesktop/NetworkManager/Devices/78)
Oct 10 23:53:02 np0005480824 ovn_controller[152667]: 2025-10-11T03:53:02Z|00120|binding|INFO|Claiming lport d24beac6-fe81-4cb4-b500-c4446f3106b3 for this chassis.
Oct 10 23:53:02 np0005480824 ovn_controller[152667]: 2025-10-11T03:53:02Z|00121|binding|INFO|d24beac6-fe81-4cb4-b500-c4446f3106b3: Claiming fa:16:3e:26:e2:d3 10.100.0.9
Oct 10 23:53:02 np0005480824 nova_compute[260089]: 2025-10-11 03:53:02.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:53:02 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:02.165 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:e2:d3 10.100.0.9'], port_security=['fa:16:3e:26:e2:d3 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '0f4ead16-8af5-427a-9543-772b0c23733d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-359720eb-a957-4bcd-b9b2-3cf7dad947e4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '55d21391a321476eb133317b3402b0f0', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2fbe6632-cce1-48fb-95c1-bed1096fc071', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d98e64fb-092d-4777-b741-426f3e849bc3, chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], logical_port=d24beac6-fe81-4cb4-b500-c4446f3106b3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 23:53:02 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:02.167 162245 INFO neutron.agent.ovn.metadata.agent [-] Port d24beac6-fe81-4cb4-b500-c4446f3106b3 in datapath 359720eb-a957-4bcd-b9b2-3cf7dad947e4 bound to our chassis#033[00m
Oct 10 23:53:02 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:02.174 162245 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 359720eb-a957-4bcd-b9b2-3cf7dad947e4#033[00m
Oct 10 23:53:02 np0005480824 nova_compute[260089]: 2025-10-11 03:53:02.184 2 DEBUG nova.objects.instance [None req-53ee810c-0d8e-4156-afa7-efbc3fe8287c cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Lazy-loading 'flavor' on Instance uuid 7452e9a5-0e1b-4c0c-816b-57e0ea976747 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:53:02 np0005480824 ovn_controller[152667]: 2025-10-11T03:53:02Z|00122|binding|INFO|Setting lport d24beac6-fe81-4cb4-b500-c4446f3106b3 ovn-installed in OVS
Oct 10 23:53:02 np0005480824 ovn_controller[152667]: 2025-10-11T03:53:02Z|00123|binding|INFO|Setting lport d24beac6-fe81-4cb4-b500-c4446f3106b3 up in Southbound
Oct 10 23:53:02 np0005480824 nova_compute[260089]: 2025-10-11 03:53:02.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:53:02 np0005480824 nova_compute[260089]: 2025-10-11 03:53:02.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:53:02 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:02.190 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[3c32ba15-9f0d-48f5-8d00-0b211994320f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:53:02 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:02.193 162245 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap359720eb-a1 in ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 10 23:53:02 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:02.197 267859 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap359720eb-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 10 23:53:02 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:02.198 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[7b934343-7237-4a9d-80ae-27074faf73d2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:53:02 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:02.201 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[8957164b-3d86-4ea9-96d1-24164af517af]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:53:02 np0005480824 systemd-udevd[284577]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 23:53:02 np0005480824 systemd-machined[215071]: New machine qemu-14-instance-0000000e.
Oct 10 23:53:02 np0005480824 systemd[1]: Started Virtual Machine qemu-14-instance-0000000e.
Oct 10 23:53:02 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:02.223 162666 DEBUG oslo.privsep.daemon [-] privsep: reply[d412813f-97fe-407a-b712-f33a2bac973b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:53:02 np0005480824 nova_compute[260089]: 2025-10-11 03:53:02.226 2 DEBUG oslo_concurrency.lockutils [None req-53ee810c-0d8e-4156-afa7-efbc3fe8287c cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Lock "7452e9a5-0e1b-4c0c-816b-57e0ea976747" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.504s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:53:02 np0005480824 NetworkManager[44969]: <info>  [1760154782.2315] device (tapd24beac6-fe): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 10 23:53:02 np0005480824 NetworkManager[44969]: <info>  [1760154782.2325] device (tapd24beac6-fe): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 10 23:53:02 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:02.257 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[132a4423-5ae0-48af-9cdb-b0a12326e0d5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:53:02 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:02.291 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[cfcffded-7d25-4530-9c02-301e7585c2d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:53:02 np0005480824 NetworkManager[44969]: <info>  [1760154782.2998] manager: (tap359720eb-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/79)
Oct 10 23:53:02 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:02.301 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[f891d1ff-dbd1-49cf-bc1d-9dba1c17aad3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:53:02 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:02.353 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[a85c7e2e-7bc3-4324-a5ae-cba756a7bc1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:53:02 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:02.357 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[4c13c675-8dc1-4e48-b9d3-7e2ebf8d66f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:53:02 np0005480824 nova_compute[260089]: 2025-10-11 03:53:02.359 2 DEBUG nova.compute.manager [req-66d47039-da7e-4e5a-94d2-c2d033b13472 req-c09cfcf2-9910-4d85-9586-95e2978c26bd 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] Received event network-vif-plugged-d24beac6-fe81-4cb4-b500-c4446f3106b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:53:02 np0005480824 nova_compute[260089]: 2025-10-11 03:53:02.360 2 DEBUG oslo_concurrency.lockutils [req-66d47039-da7e-4e5a-94d2-c2d033b13472 req-c09cfcf2-9910-4d85-9586-95e2978c26bd 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "0f4ead16-8af5-427a-9543-772b0c23733d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:53:02 np0005480824 nova_compute[260089]: 2025-10-11 03:53:02.361 2 DEBUG oslo_concurrency.lockutils [req-66d47039-da7e-4e5a-94d2-c2d033b13472 req-c09cfcf2-9910-4d85-9586-95e2978c26bd 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "0f4ead16-8af5-427a-9543-772b0c23733d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:53:02 np0005480824 nova_compute[260089]: 2025-10-11 03:53:02.361 2 DEBUG oslo_concurrency.lockutils [req-66d47039-da7e-4e5a-94d2-c2d033b13472 req-c09cfcf2-9910-4d85-9586-95e2978c26bd 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "0f4ead16-8af5-427a-9543-772b0c23733d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:53:02 np0005480824 nova_compute[260089]: 2025-10-11 03:53:02.361 2 DEBUG nova.compute.manager [req-66d47039-da7e-4e5a-94d2-c2d033b13472 req-c09cfcf2-9910-4d85-9586-95e2978c26bd 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] Processing event network-vif-plugged-d24beac6-fe81-4cb4-b500-c4446f3106b3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 10 23:53:02 np0005480824 NetworkManager[44969]: <info>  [1760154782.3831] device (tap359720eb-a0): carrier: link connected
Oct 10 23:53:02 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:02.393 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[ce431e0b-d6f6-4823-86df-4935825e2b98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:53:02 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:02.417 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[6729dab2-c5ae-41c3-b7c0-6ac16cad20a9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap359720eb-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:22:90:b3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 47], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 425004, 'reachable_time': 15358, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284609, 'error': None, 'target': 'ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:53:02 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:02.436 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[269577ae-cb4d-4d6e-91f0-d834ddd643e6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe22:90b3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 425004, 'tstamp': 425004}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284610, 'error': None, 'target': 'ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:53:02 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:02.458 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[556e5b91-e65e-49ae-aa68-99b5cec0902c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap359720eb-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:22:90:b3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 47], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 425004, 'reachable_time': 15358, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 284611, 'error': None, 'target': 'ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:53:02 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:02.493 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[78f83f4f-2a5c-41e7-9eb0-01028b050cd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:53:02 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:02.557 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[6645ec77-89ea-479a-96b6-55f7b70ca994]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:53:02 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:02.559 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap359720eb-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:53:02 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:02.559 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 10 23:53:02 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:02.560 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap359720eb-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:53:02 np0005480824 nova_compute[260089]: 2025-10-11 03:53:02.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:53:02 np0005480824 kernel: tap359720eb-a0: entered promiscuous mode
Oct 10 23:53:02 np0005480824 NetworkManager[44969]: <info>  [1760154782.5628] manager: (tap359720eb-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/80)
Oct 10 23:53:02 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:02.566 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap359720eb-a0, col_values=(('external_ids', {'iface-id': '039c7668-0b85-4466-9c66-62531405028d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:53:02 np0005480824 nova_compute[260089]: 2025-10-11 03:53:02.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:53:02 np0005480824 ovn_controller[152667]: 2025-10-11T03:53:02Z|00124|binding|INFO|Releasing lport 039c7668-0b85-4466-9c66-62531405028d from this chassis (sb_readonly=0)
Oct 10 23:53:02 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:02.570 162245 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/359720eb-a957-4bcd-b9b2-3cf7dad947e4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/359720eb-a957-4bcd-b9b2-3cf7dad947e4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 10 23:53:02 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:02.571 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[5c8ecf1d-bcd4-4cc3-a7cd-890eb36a2ddd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:53:02 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:02.572 162245 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 10 23:53:02 np0005480824 ovn_metadata_agent[162240]: global
Oct 10 23:53:02 np0005480824 ovn_metadata_agent[162240]:    log         /dev/log local0 debug
Oct 10 23:53:02 np0005480824 ovn_metadata_agent[162240]:    log-tag     haproxy-metadata-proxy-359720eb-a957-4bcd-b9b2-3cf7dad947e4
Oct 10 23:53:02 np0005480824 ovn_metadata_agent[162240]:    user        root
Oct 10 23:53:02 np0005480824 ovn_metadata_agent[162240]:    group       root
Oct 10 23:53:02 np0005480824 ovn_metadata_agent[162240]:    maxconn     1024
Oct 10 23:53:02 np0005480824 ovn_metadata_agent[162240]:    pidfile     /var/lib/neutron/external/pids/359720eb-a957-4bcd-b9b2-3cf7dad947e4.pid.haproxy
Oct 10 23:53:02 np0005480824 ovn_metadata_agent[162240]:    daemon
Oct 10 23:53:02 np0005480824 ovn_metadata_agent[162240]: 
Oct 10 23:53:02 np0005480824 ovn_metadata_agent[162240]: defaults
Oct 10 23:53:02 np0005480824 ovn_metadata_agent[162240]:    log global
Oct 10 23:53:02 np0005480824 ovn_metadata_agent[162240]:    mode http
Oct 10 23:53:02 np0005480824 ovn_metadata_agent[162240]:    option httplog
Oct 10 23:53:02 np0005480824 ovn_metadata_agent[162240]:    option dontlognull
Oct 10 23:53:02 np0005480824 ovn_metadata_agent[162240]:    option http-server-close
Oct 10 23:53:02 np0005480824 ovn_metadata_agent[162240]:    option forwardfor
Oct 10 23:53:02 np0005480824 ovn_metadata_agent[162240]:    retries                 3
Oct 10 23:53:02 np0005480824 ovn_metadata_agent[162240]:    timeout http-request    30s
Oct 10 23:53:02 np0005480824 ovn_metadata_agent[162240]:    timeout connect         30s
Oct 10 23:53:02 np0005480824 ovn_metadata_agent[162240]:    timeout client          32s
Oct 10 23:53:02 np0005480824 ovn_metadata_agent[162240]:    timeout server          32s
Oct 10 23:53:02 np0005480824 ovn_metadata_agent[162240]:    timeout http-keep-alive 30s
Oct 10 23:53:02 np0005480824 ovn_metadata_agent[162240]: 
Oct 10 23:53:02 np0005480824 ovn_metadata_agent[162240]: 
Oct 10 23:53:02 np0005480824 ovn_metadata_agent[162240]: listen listener
Oct 10 23:53:02 np0005480824 ovn_metadata_agent[162240]:    bind 169.254.169.254:80
Oct 10 23:53:02 np0005480824 ovn_metadata_agent[162240]:    server metadata /var/lib/neutron/metadata_proxy
Oct 10 23:53:02 np0005480824 ovn_metadata_agent[162240]:    http-request add-header X-OVN-Network-ID 359720eb-a957-4bcd-b9b2-3cf7dad947e4
Oct 10 23:53:02 np0005480824 ovn_metadata_agent[162240]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 10 23:53:02 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:02.574 162245 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4', 'env', 'PROCESS_TAG=haproxy-359720eb-a957-4bcd-b9b2-3cf7dad947e4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/359720eb-a957-4bcd-b9b2-3cf7dad947e4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 10 23:53:02 np0005480824 nova_compute[260089]: 2025-10-11 03:53:02.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:53:02 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1262: 321 pgs: 321 active+clean; 392 MiB data, 482 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 3.1 MiB/s wr, 123 op/s
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]: {
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:    "0": [
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:        {
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:            "devices": [
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:                "/dev/loop3"
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:            ],
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:            "lv_name": "ceph_lv0",
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:            "lv_size": "21470642176",
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0d82ce-20ea-470d-959e-f67202028a60,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:            "lv_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:            "name": "ceph_lv0",
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:            "tags": {
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:                "ceph.block_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:                "ceph.cluster_name": "ceph",
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:                "ceph.crush_device_class": "",
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:                "ceph.encrypted": "0",
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:                "ceph.osd_fsid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:                "ceph.osd_id": "0",
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:                "ceph.type": "block",
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:                "ceph.vdo": "0"
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:            },
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:            "type": "block",
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:            "vg_name": "ceph_vg0"
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:        }
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:    ],
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:    "1": [
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:        {
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:            "devices": [
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:                "/dev/loop4"
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:            ],
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:            "lv_name": "ceph_lv1",
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:            "lv_size": "21470642176",
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6875119e-c210-4ad1-aca9-6a8084a5ecc8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:            "lv_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:            "name": "ceph_lv1",
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:            "tags": {
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:                "ceph.block_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:                "ceph.cluster_name": "ceph",
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:                "ceph.crush_device_class": "",
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:                "ceph.encrypted": "0",
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:                "ceph.osd_fsid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:                "ceph.osd_id": "1",
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:                "ceph.type": "block",
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:                "ceph.vdo": "0"
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:            },
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:            "type": "block",
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:            "vg_name": "ceph_vg1"
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:        }
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:    ],
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:    "2": [
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:        {
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:            "devices": [
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:                "/dev/loop5"
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:            ],
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:            "lv_name": "ceph_lv2",
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:            "lv_size": "21470642176",
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e86945e8-6909-4584-9098-cee0dfe9add4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:            "lv_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:            "name": "ceph_lv2",
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:            "tags": {
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:                "ceph.block_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:                "ceph.cluster_name": "ceph",
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:                "ceph.crush_device_class": "",
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:                "ceph.encrypted": "0",
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:                "ceph.osd_fsid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:                "ceph.osd_id": "2",
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:                "ceph.type": "block",
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:                "ceph.vdo": "0"
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:            },
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:            "type": "block",
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:            "vg_name": "ceph_vg2"
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:        }
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]:    ]
Oct 10 23:53:02 np0005480824 priceless_nightingale[284539]: }
Oct 10 23:53:02 np0005480824 systemd[1]: libpod-4607069c9e27a75b781ddedf2ebbbeb0cb8bcdfd1dc6bfe2951d231142f016fb.scope: Deactivated successfully.
Oct 10 23:53:02 np0005480824 podman[284504]: 2025-10-11 03:53:02.845820132 +0000 UTC m=+1.089938231 container died 4607069c9e27a75b781ddedf2ebbbeb0cb8bcdfd1dc6bfe2951d231142f016fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_nightingale, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 10 23:53:02 np0005480824 systemd[1]: var-lib-containers-storage-overlay-7d417aa86c08e29038017ea2ec574eafbdad92b55730bf6bbc6cf0a54c53c837-merged.mount: Deactivated successfully.
Oct 10 23:53:02 np0005480824 podman[284504]: 2025-10-11 03:53:02.906948756 +0000 UTC m=+1.151066845 container remove 4607069c9e27a75b781ddedf2ebbbeb0cb8bcdfd1dc6bfe2951d231142f016fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:53:02 np0005480824 systemd[1]: libpod-conmon-4607069c9e27a75b781ddedf2ebbbeb0cb8bcdfd1dc6bfe2951d231142f016fb.scope: Deactivated successfully.
Oct 10 23:53:02 np0005480824 podman[284658]: 2025-10-11 03:53:02.968499712 +0000 UTC m=+0.059727073 container create a41a772cab82d77256f159f2214cc5986986a8ed8f4af16d20bd49d9024f0b42 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 10 23:53:03 np0005480824 systemd[1]: Started libpod-conmon-a41a772cab82d77256f159f2214cc5986986a8ed8f4af16d20bd49d9024f0b42.scope.
Oct 10 23:53:03 np0005480824 nova_compute[260089]: 2025-10-11 03:53:03.031 2 DEBUG oslo_concurrency.lockutils [None req-9aed83fc-b01f-49ca-bd61-2841e103133c cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Acquiring lock "7452e9a5-0e1b-4c0c-816b-57e0ea976747" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 23:53:03 np0005480824 podman[284658]: 2025-10-11 03:53:02.939931991 +0000 UTC m=+0.031159332 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 10 23:53:03 np0005480824 nova_compute[260089]: 2025-10-11 03:53:03.032 2 DEBUG oslo_concurrency.lockutils [None req-9aed83fc-b01f-49ca-bd61-2841e103133c cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Lock "7452e9a5-0e1b-4c0c-816b-57e0ea976747" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 23:53:03 np0005480824 nova_compute[260089]: 2025-10-11 03:53:03.032 2 DEBUG oslo_concurrency.lockutils [None req-9aed83fc-b01f-49ca-bd61-2841e103133c cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Acquiring lock "7452e9a5-0e1b-4c0c-816b-57e0ea976747-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 23:53:03 np0005480824 nova_compute[260089]: 2025-10-11 03:53:03.032 2 DEBUG oslo_concurrency.lockutils [None req-9aed83fc-b01f-49ca-bd61-2841e103133c cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Lock "7452e9a5-0e1b-4c0c-816b-57e0ea976747-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 23:53:03 np0005480824 nova_compute[260089]: 2025-10-11 03:53:03.032 2 DEBUG oslo_concurrency.lockutils [None req-9aed83fc-b01f-49ca-bd61-2841e103133c cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Lock "7452e9a5-0e1b-4c0c-816b-57e0ea976747-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 23:53:03 np0005480824 nova_compute[260089]: 2025-10-11 03:53:03.034 2 INFO nova.compute.manager [None req-9aed83fc-b01f-49ca-bd61-2841e103133c cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Terminating instance
Oct 10 23:53:03 np0005480824 nova_compute[260089]: 2025-10-11 03:53:03.035 2 DEBUG nova.compute.manager [None req-9aed83fc-b01f-49ca-bd61-2841e103133c cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 10 23:53:03 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:53:03 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d3a61827ebc1ca38ceee25acec05cd4c3afe8399c17c5b441a6af05da656c40/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 10 23:53:03 np0005480824 podman[284658]: 2025-10-11 03:53:03.072402991 +0000 UTC m=+0.163630352 container init a41a772cab82d77256f159f2214cc5986986a8ed8f4af16d20bd49d9024f0b42 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, io.buildah.version=1.41.3)
Oct 10 23:53:03 np0005480824 podman[284658]: 2025-10-11 03:53:03.080820979 +0000 UTC m=+0.172048310 container start a41a772cab82d77256f159f2214cc5986986a8ed8f4af16d20bd49d9024f0b42 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 10 23:53:03 np0005480824 kernel: tap5af98ddd-2c (unregistering): left promiscuous mode
Oct 10 23:53:03 np0005480824 NetworkManager[44969]: <info>  [1760154783.0980] device (tap5af98ddd-2c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 10 23:53:03 np0005480824 neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4[284698]: [NOTICE]   (284751) : New worker (284761) forked
Oct 10 23:53:03 np0005480824 neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4[284698]: [NOTICE]   (284751) : Loading success.
Oct 10 23:53:03 np0005480824 nova_compute[260089]: 2025-10-11 03:53:03.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 23:53:03 np0005480824 ovn_controller[152667]: 2025-10-11T03:53:03Z|00125|binding|INFO|Releasing lport 5af98ddd-2cff-4fe8-abcf-414110faa17d from this chassis (sb_readonly=0)
Oct 10 23:53:03 np0005480824 ovn_controller[152667]: 2025-10-11T03:53:03Z|00126|binding|INFO|Setting lport 5af98ddd-2cff-4fe8-abcf-414110faa17d down in Southbound
Oct 10 23:53:03 np0005480824 ovn_controller[152667]: 2025-10-11T03:53:03Z|00127|binding|INFO|Removing iface tap5af98ddd-2c ovn-installed in OVS
Oct 10 23:53:03 np0005480824 nova_compute[260089]: 2025-10-11 03:53:03.166 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 23:53:03 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:03.172 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:79:18:58 10.100.0.6'], port_security=['fa:16:3e:79:18:58 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '7452e9a5-0e1b-4c0c-816b-57e0ea976747', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ac3beb3-eeb0-47be-b56e-672742cfe517', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a65cd418eaad4366991b123d6535a576', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1583d4a5-79bd-48da-8c70-83dbe437f172', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.219'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c58f84d5-6196-4ce5-aee9-a8bfac4d946a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], logical_port=5af98ddd-2cff-4fe8-abcf-414110faa17d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 23:53:03 np0005480824 nova_compute[260089]: 2025-10-11 03:53:03.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 23:53:03 np0005480824 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Oct 10 23:53:03 np0005480824 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000d.scope: Consumed 13.854s CPU time.
Oct 10 23:53:03 np0005480824 systemd-machined[215071]: Machine qemu-13-instance-0000000d terminated.
Oct 10 23:53:03 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:03.223 162245 INFO neutron.agent.ovn.metadata.agent [-] Port 5af98ddd-2cff-4fe8-abcf-414110faa17d in datapath 1ac3beb3-eeb0-47be-b56e-672742cfe517 unbound from our chassis
Oct 10 23:53:03 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:03.225 162245 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1ac3beb3-eeb0-47be-b56e-672742cfe517, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 10 23:53:03 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:03.226 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[e8d41613-3834-43b1-859d-e5f82d5705e8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 23:53:03 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:03.227 162245 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1ac3beb3-eeb0-47be-b56e-672742cfe517 namespace which is not needed anymore
Oct 10 23:53:03 np0005480824 podman[284713]: 2025-10-11 03:53:03.242145047 +0000 UTC m=+0.174188191 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 10 23:53:03 np0005480824 NetworkManager[44969]: <info>  [1760154783.2549] manager: (tap5af98ddd-2c): new Tun device (/org/freedesktop/NetworkManager/Devices/81)
Oct 10 23:53:03 np0005480824 nova_compute[260089]: 2025-10-11 03:53:03.278 2 INFO nova.virt.libvirt.driver [-] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Instance destroyed successfully.
Oct 10 23:53:03 np0005480824 nova_compute[260089]: 2025-10-11 03:53:03.278 2 DEBUG nova.objects.instance [None req-9aed83fc-b01f-49ca-bd61-2841e103133c cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Lazy-loading 'resources' on Instance uuid 7452e9a5-0e1b-4c0c-816b-57e0ea976747 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 10 23:53:03 np0005480824 nova_compute[260089]: 2025-10-11 03:53:03.294 2 DEBUG nova.virt.libvirt.vif [None req-9aed83fc-b01f-49ca-bd61-2841e103133c cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T03:52:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesExtendAttachedTest-instance-2122000193',display_name='tempest-VolumesExtendAttachedTest-instance-2122000193',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesextendattachedtest-instance-2122000193',id=13,image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAB43gKek6h5eWr8uy3dGQ4wGOfOJNWCIFn83OQ1V9D+dUeP1elAFzU/6cuwBFhnCFRlGKa19y6oD8NsYmuKvToMTw3i+pr/atntuAIFJNEtBIzMWZe8V5JBAXH4tBd+aA==',key_name='tempest-keypair-1899177926',keypairs=<?>,launch_index=0,launched_at=2025-10-11T03:52:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a65cd418eaad4366991b123d6535a576',ramdisk_id='',reservation_id='r-61y4vi8c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesExtendAttachedTest-1964542468',owner_user_name='tempest-VolumesExtendAttachedTest-1964542468-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T03:52:38Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='cde6845b6b8d482b95a72e38b1db93d3',uuid=7452e9a5-0e1b-4c0c-816b-57e0ea976747,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5af98ddd-2cff-4fe8-abcf-414110faa17d", "address": "fa:16:3e:79:18:58", "network": {"id": "1ac3beb3-eeb0-47be-b56e-672742cfe517", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-1691573125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a65cd418eaad4366991b123d6535a576", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5af98ddd-2c", "ovs_interfaceid": "5af98ddd-2cff-4fe8-abcf-414110faa17d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 10 23:53:03 np0005480824 nova_compute[260089]: 2025-10-11 03:53:03.295 2 DEBUG nova.network.os_vif_util [None req-9aed83fc-b01f-49ca-bd61-2841e103133c cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Converting VIF {"id": "5af98ddd-2cff-4fe8-abcf-414110faa17d", "address": "fa:16:3e:79:18:58", "network": {"id": "1ac3beb3-eeb0-47be-b56e-672742cfe517", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-1691573125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a65cd418eaad4366991b123d6535a576", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5af98ddd-2c", "ovs_interfaceid": "5af98ddd-2cff-4fe8-abcf-414110faa17d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 23:53:03 np0005480824 nova_compute[260089]: 2025-10-11 03:53:03.296 2 DEBUG nova.network.os_vif_util [None req-9aed83fc-b01f-49ca-bd61-2841e103133c cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:79:18:58,bridge_name='br-int',has_traffic_filtering=True,id=5af98ddd-2cff-4fe8-abcf-414110faa17d,network=Network(1ac3beb3-eeb0-47be-b56e-672742cfe517),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5af98ddd-2c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 10 23:53:03 np0005480824 nova_compute[260089]: 2025-10-11 03:53:03.296 2 DEBUG os_vif [None req-9aed83fc-b01f-49ca-bd61-2841e103133c cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:79:18:58,bridge_name='br-int',has_traffic_filtering=True,id=5af98ddd-2cff-4fe8-abcf-414110faa17d,network=Network(1ac3beb3-eeb0-47be-b56e-672742cfe517),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5af98ddd-2c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 10 23:53:03 np0005480824 nova_compute[260089]: 2025-10-11 03:53:03.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 23:53:03 np0005480824 nova_compute[260089]: 2025-10-11 03:53:03.299 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5af98ddd-2c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 23:53:03 np0005480824 nova_compute[260089]: 2025-10-11 03:53:03.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 23:53:03 np0005480824 nova_compute[260089]: 2025-10-11 03:53:03.306 2 INFO os_vif [None req-9aed83fc-b01f-49ca-bd61-2841e103133c cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:79:18:58,bridge_name='br-int',has_traffic_filtering=True,id=5af98ddd-2cff-4fe8-abcf-414110faa17d,network=Network(1ac3beb3-eeb0-47be-b56e-672742cfe517),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5af98ddd-2c')
Oct 10 23:53:03 np0005480824 neutron-haproxy-ovnmeta-1ac3beb3-eeb0-47be-b56e-672742cfe517[283313]: [NOTICE]   (283334) : haproxy version is 2.8.14-c23fe91
Oct 10 23:53:03 np0005480824 neutron-haproxy-ovnmeta-1ac3beb3-eeb0-47be-b56e-672742cfe517[283313]: [NOTICE]   (283334) : path to executable is /usr/sbin/haproxy
Oct 10 23:53:03 np0005480824 neutron-haproxy-ovnmeta-1ac3beb3-eeb0-47be-b56e-672742cfe517[283313]: [WARNING]  (283334) : Exiting Master process...
Oct 10 23:53:03 np0005480824 neutron-haproxy-ovnmeta-1ac3beb3-eeb0-47be-b56e-672742cfe517[283313]: [ALERT]    (283334) : Current worker (283336) exited with code 143 (Terminated)
Oct 10 23:53:03 np0005480824 neutron-haproxy-ovnmeta-1ac3beb3-eeb0-47be-b56e-672742cfe517[283313]: [WARNING]  (283334) : All workers exited. Exiting... (0)
Oct 10 23:53:03 np0005480824 systemd[1]: libpod-de26c471c591cb0b453e82fd06fc9c662dcb17cb6776047fcef6d6eee51b7236.scope: Deactivated successfully.
Oct 10 23:53:03 np0005480824 conmon[283313]: conmon de26c471c591cb0b453e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-de26c471c591cb0b453e82fd06fc9c662dcb17cb6776047fcef6d6eee51b7236.scope/container/memory.events
Oct 10 23:53:03 np0005480824 podman[284902]: 2025-10-11 03:53:03.400441773 +0000 UTC m=+0.046478953 container died de26c471c591cb0b453e82fd06fc9c662dcb17cb6776047fcef6d6eee51b7236 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1ac3beb3-eeb0-47be-b56e-672742cfe517, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 10 23:53:03 np0005480824 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-de26c471c591cb0b453e82fd06fc9c662dcb17cb6776047fcef6d6eee51b7236-userdata-shm.mount: Deactivated successfully.
Oct 10 23:53:03 np0005480824 systemd[1]: var-lib-containers-storage-overlay-19d7f1903ba0ce5add9fa11a7df2be13ae865e2c01ee379d5dbbb93a783fe31e-merged.mount: Deactivated successfully.
Oct 10 23:53:03 np0005480824 podman[284902]: 2025-10-11 03:53:03.453890448 +0000 UTC m=+0.099927628 container cleanup de26c471c591cb0b453e82fd06fc9c662dcb17cb6776047fcef6d6eee51b7236 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1ac3beb3-eeb0-47be-b56e-672742cfe517, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 10 23:53:03 np0005480824 systemd[1]: libpod-conmon-de26c471c591cb0b453e82fd06fc9c662dcb17cb6776047fcef6d6eee51b7236.scope: Deactivated successfully.
Oct 10 23:53:03 np0005480824 nova_compute[260089]: 2025-10-11 03:53:03.465 2 DEBUG nova.compute.manager [req-6b976850-3f21-42f6-9935-ed6ea38b6b76 req-c1963c46-b62d-467d-a146-dc8bba8f4246 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Received event network-vif-unplugged-5af98ddd-2cff-4fe8-abcf-414110faa17d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:53:03 np0005480824 nova_compute[260089]: 2025-10-11 03:53:03.466 2 DEBUG oslo_concurrency.lockutils [req-6b976850-3f21-42f6-9935-ed6ea38b6b76 req-c1963c46-b62d-467d-a146-dc8bba8f4246 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "7452e9a5-0e1b-4c0c-816b-57e0ea976747-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:53:03 np0005480824 nova_compute[260089]: 2025-10-11 03:53:03.466 2 DEBUG oslo_concurrency.lockutils [req-6b976850-3f21-42f6-9935-ed6ea38b6b76 req-c1963c46-b62d-467d-a146-dc8bba8f4246 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "7452e9a5-0e1b-4c0c-816b-57e0ea976747-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:53:03 np0005480824 nova_compute[260089]: 2025-10-11 03:53:03.467 2 DEBUG oslo_concurrency.lockutils [req-6b976850-3f21-42f6-9935-ed6ea38b6b76 req-c1963c46-b62d-467d-a146-dc8bba8f4246 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "7452e9a5-0e1b-4c0c-816b-57e0ea976747-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:53:03 np0005480824 nova_compute[260089]: 2025-10-11 03:53:03.467 2 DEBUG nova.compute.manager [req-6b976850-3f21-42f6-9935-ed6ea38b6b76 req-c1963c46-b62d-467d-a146-dc8bba8f4246 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] No waiting events found dispatching network-vif-unplugged-5af98ddd-2cff-4fe8-abcf-414110faa17d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 10 23:53:03 np0005480824 nova_compute[260089]: 2025-10-11 03:53:03.467 2 DEBUG nova.compute.manager [req-6b976850-3f21-42f6-9935-ed6ea38b6b76 req-c1963c46-b62d-467d-a146-dc8bba8f4246 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Received event network-vif-unplugged-5af98ddd-2cff-4fe8-abcf-414110faa17d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 10 23:53:03 np0005480824 podman[284945]: 2025-10-11 03:53:03.56091462 +0000 UTC m=+0.071178722 container remove de26c471c591cb0b453e82fd06fc9c662dcb17cb6776047fcef6d6eee51b7236 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1ac3beb3-eeb0-47be-b56e-672742cfe517, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:53:03 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:03.568 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[949d0e8e-e14a-4a8f-aa92-160d9cbf2319]: (4, ('Sat Oct 11 03:53:03 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-1ac3beb3-eeb0-47be-b56e-672742cfe517 (de26c471c591cb0b453e82fd06fc9c662dcb17cb6776047fcef6d6eee51b7236)\nde26c471c591cb0b453e82fd06fc9c662dcb17cb6776047fcef6d6eee51b7236\nSat Oct 11 03:53:03 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-1ac3beb3-eeb0-47be-b56e-672742cfe517 (de26c471c591cb0b453e82fd06fc9c662dcb17cb6776047fcef6d6eee51b7236)\nde26c471c591cb0b453e82fd06fc9c662dcb17cb6776047fcef6d6eee51b7236\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:53:03 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:03.569 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[2a4337e9-71d2-4ba2-a08d-38e713e79324]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:53:03 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:03.571 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1ac3beb3-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:53:03 np0005480824 kernel: tap1ac3beb3-e0: left promiscuous mode
Oct 10 23:53:03 np0005480824 nova_compute[260089]: 2025-10-11 03:53:03.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:53:03 np0005480824 nova_compute[260089]: 2025-10-11 03:53:03.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:53:03 np0005480824 nova_compute[260089]: 2025-10-11 03:53:03.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:53:03 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:03.594 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[2df7e22c-93dd-4b3b-812a-0a9071a64e55]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:53:03 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:03.623 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[dd4455b3-ec03-4c6a-8bde-682532256e2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:53:03 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:03.625 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[ed562746-aebf-4175-85c3-27f907b2671c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:53:03 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:03.646 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[b57363cd-1f20-479f-8011-1e3a20d390c3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 422531, 'reachable_time': 33604, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284987, 'error': None, 'target': 'ovnmeta-1ac3beb3-eeb0-47be-b56e-672742cfe517', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:53:03 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:03.648 162666 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1ac3beb3-eeb0-47be-b56e-672742cfe517 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 10 23:53:03 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:03.648 162666 DEBUG oslo.privsep.daemon [-] privsep: reply[8b41d212-fc24-4fb4-8cbd-779bc3d947d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:53:03 np0005480824 nova_compute[260089]: 2025-10-11 03:53:03.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:53:03 np0005480824 podman[284991]: 2025-10-11 03:53:03.725575367 +0000 UTC m=+0.046562324 container create 6db35d50035f0eafbdc839f02c1ec8c400d1dfab75214d5a35ea82d38925b6fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_pike, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:53:03 np0005480824 systemd[1]: Started libpod-conmon-6db35d50035f0eafbdc839f02c1ec8c400d1dfab75214d5a35ea82d38925b6fa.scope.
Oct 10 23:53:03 np0005480824 podman[284991]: 2025-10-11 03:53:03.702500244 +0000 UTC m=+0.023487231 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:53:03 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:53:03 np0005480824 nova_compute[260089]: 2025-10-11 03:53:03.810 2 INFO nova.virt.libvirt.driver [None req-9aed83fc-b01f-49ca-bd61-2841e103133c cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Deleting instance files /var/lib/nova/instances/7452e9a5-0e1b-4c0c-816b-57e0ea976747_del#033[00m
Oct 10 23:53:03 np0005480824 nova_compute[260089]: 2025-10-11 03:53:03.811 2 INFO nova.virt.libvirt.driver [None req-9aed83fc-b01f-49ca-bd61-2841e103133c cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Deletion of /var/lib/nova/instances/7452e9a5-0e1b-4c0c-816b-57e0ea976747_del complete#033[00m
Oct 10 23:53:03 np0005480824 podman[284991]: 2025-10-11 03:53:03.827463069 +0000 UTC m=+0.148450026 container init 6db35d50035f0eafbdc839f02c1ec8c400d1dfab75214d5a35ea82d38925b6fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_pike, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:53:03 np0005480824 nova_compute[260089]: 2025-10-11 03:53:03.836 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760154783.8362308, 0f4ead16-8af5-427a-9543-772b0c23733d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:53:03 np0005480824 nova_compute[260089]: 2025-10-11 03:53:03.836 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] VM Started (Lifecycle Event)#033[00m
Oct 10 23:53:03 np0005480824 nova_compute[260089]: 2025-10-11 03:53:03.839 2 DEBUG nova.compute.manager [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 10 23:53:03 np0005480824 podman[284991]: 2025-10-11 03:53:03.8411425 +0000 UTC m=+0.162129457 container start 6db35d50035f0eafbdc839f02c1ec8c400d1dfab75214d5a35ea82d38925b6fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_pike, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Oct 10 23:53:03 np0005480824 nova_compute[260089]: 2025-10-11 03:53:03.844 2 DEBUG nova.virt.libvirt.driver [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 10 23:53:03 np0005480824 podman[284991]: 2025-10-11 03:53:03.845524843 +0000 UTC m=+0.166511810 container attach 6db35d50035f0eafbdc839f02c1ec8c400d1dfab75214d5a35ea82d38925b6fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_pike, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 10 23:53:03 np0005480824 gallant_pike[285007]: 167 167
Oct 10 23:53:03 np0005480824 systemd[1]: libpod-6db35d50035f0eafbdc839f02c1ec8c400d1dfab75214d5a35ea82d38925b6fa.scope: Deactivated successfully.
Oct 10 23:53:03 np0005480824 podman[284991]: 2025-10-11 03:53:03.848688717 +0000 UTC m=+0.169675684 container died 6db35d50035f0eafbdc839f02c1ec8c400d1dfab75214d5a35ea82d38925b6fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_pike, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 10 23:53:03 np0005480824 nova_compute[260089]: 2025-10-11 03:53:03.850 2 INFO nova.virt.libvirt.driver [-] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] Instance spawned successfully.#033[00m
Oct 10 23:53:03 np0005480824 nova_compute[260089]: 2025-10-11 03:53:03.850 2 DEBUG nova.virt.libvirt.driver [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 10 23:53:03 np0005480824 nova_compute[260089]: 2025-10-11 03:53:03.875 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:53:03 np0005480824 systemd[1]: run-netns-ovnmeta\x2d1ac3beb3\x2deeb0\x2d47be\x2db56e\x2d672742cfe517.mount: Deactivated successfully.
Oct 10 23:53:03 np0005480824 nova_compute[260089]: 2025-10-11 03:53:03.884 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 10 23:53:03 np0005480824 podman[284991]: 2025-10-11 03:53:03.886882604 +0000 UTC m=+0.207869561 container remove 6db35d50035f0eafbdc839f02c1ec8c400d1dfab75214d5a35ea82d38925b6fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_pike, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 10 23:53:03 np0005480824 nova_compute[260089]: 2025-10-11 03:53:03.888 2 INFO nova.compute.manager [None req-9aed83fc-b01f-49ca-bd61-2841e103133c cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Took 0.85 seconds to destroy the instance on the hypervisor.#033[00m
Oct 10 23:53:03 np0005480824 nova_compute[260089]: 2025-10-11 03:53:03.889 2 DEBUG oslo.service.loopingcall [None req-9aed83fc-b01f-49ca-bd61-2841e103133c cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 10 23:53:03 np0005480824 nova_compute[260089]: 2025-10-11 03:53:03.889 2 DEBUG nova.compute.manager [-] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 10 23:53:03 np0005480824 nova_compute[260089]: 2025-10-11 03:53:03.889 2 DEBUG nova.network.neutron [-] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 10 23:53:03 np0005480824 nova_compute[260089]: 2025-10-11 03:53:03.896 2 DEBUG nova.virt.libvirt.driver [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:53:03 np0005480824 nova_compute[260089]: 2025-10-11 03:53:03.897 2 DEBUG nova.virt.libvirt.driver [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:53:03 np0005480824 systemd[1]: libpod-conmon-6db35d50035f0eafbdc839f02c1ec8c400d1dfab75214d5a35ea82d38925b6fa.scope: Deactivated successfully.
Oct 10 23:53:03 np0005480824 nova_compute[260089]: 2025-10-11 03:53:03.897 2 DEBUG nova.virt.libvirt.driver [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:53:03 np0005480824 nova_compute[260089]: 2025-10-11 03:53:03.897 2 DEBUG nova.virt.libvirt.driver [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:53:03 np0005480824 nova_compute[260089]: 2025-10-11 03:53:03.898 2 DEBUG nova.virt.libvirt.driver [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:53:03 np0005480824 nova_compute[260089]: 2025-10-11 03:53:03.899 2 DEBUG nova.virt.libvirt.driver [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:53:03 np0005480824 nova_compute[260089]: 2025-10-11 03:53:03.903 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 10 23:53:03 np0005480824 nova_compute[260089]: 2025-10-11 03:53:03.904 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760154783.8388085, 0f4ead16-8af5-427a-9543-772b0c23733d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 10 23:53:03 np0005480824 nova_compute[260089]: 2025-10-11 03:53:03.904 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] VM Paused (Lifecycle Event)
Oct 10 23:53:03 np0005480824 nova_compute[260089]: 2025-10-11 03:53:03.931 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 23:53:03 np0005480824 nova_compute[260089]: 2025-10-11 03:53:03.935 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760154783.8409557, 0f4ead16-8af5-427a-9543-772b0c23733d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 10 23:53:03 np0005480824 nova_compute[260089]: 2025-10-11 03:53:03.935 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] VM Resumed (Lifecycle Event)
Oct 10 23:53:03 np0005480824 nova_compute[260089]: 2025-10-11 03:53:03.957 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 23:53:03 np0005480824 nova_compute[260089]: 2025-10-11 03:53:03.961 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 10 23:53:03 np0005480824 nova_compute[260089]: 2025-10-11 03:53:03.966 2 INFO nova.compute.manager [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] Took 3.29 seconds to spawn the instance on the hypervisor.
Oct 10 23:53:03 np0005480824 nova_compute[260089]: 2025-10-11 03:53:03.966 2 DEBUG nova.compute.manager [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 23:53:03 np0005480824 nova_compute[260089]: 2025-10-11 03:53:03.979 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 10 23:53:04 np0005480824 nova_compute[260089]: 2025-10-11 03:53:04.019 2 INFO nova.compute.manager [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] Took 9.34 seconds to build instance.
Oct 10 23:53:04 np0005480824 nova_compute[260089]: 2025-10-11 03:53:04.034 2 DEBUG oslo_concurrency.lockutils [None req-aea1a14e-cfca-449d-a750-a37b2e82ff94 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "0f4ead16-8af5-427a-9543-772b0c23733d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.432s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 23:53:04 np0005480824 podman[285030]: 2025-10-11 03:53:04.086309526 +0000 UTC m=+0.053269912 container create 7d4ce28845e62b97816ad954f8bf6f43369230347595004a0239140d857a7f98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_cerf, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 10 23:53:04 np0005480824 systemd[1]: Started libpod-conmon-7d4ce28845e62b97816ad954f8bf6f43369230347595004a0239140d857a7f98.scope.
Oct 10 23:53:04 np0005480824 podman[285030]: 2025-10-11 03:53:04.064827671 +0000 UTC m=+0.031788087 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:53:04 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:53:04 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adff192c9176388756e8d6ca7ba57301e1c13958e87ac74d590aa5c2574eba88/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:53:04 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adff192c9176388756e8d6ca7ba57301e1c13958e87ac74d590aa5c2574eba88/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:53:04 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adff192c9176388756e8d6ca7ba57301e1c13958e87ac74d590aa5c2574eba88/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:53:04 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adff192c9176388756e8d6ca7ba57301e1c13958e87ac74d590aa5c2574eba88/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:53:04 np0005480824 podman[285030]: 2025-10-11 03:53:04.209089288 +0000 UTC m=+0.176049694 container init 7d4ce28845e62b97816ad954f8bf6f43369230347595004a0239140d857a7f98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_cerf, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 10 23:53:04 np0005480824 podman[285030]: 2025-10-11 03:53:04.217985998 +0000 UTC m=+0.184946394 container start 7d4ce28845e62b97816ad954f8bf6f43369230347595004a0239140d857a7f98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_cerf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 10 23:53:04 np0005480824 podman[285030]: 2025-10-11 03:53:04.22149358 +0000 UTC m=+0.188454016 container attach 7d4ce28845e62b97816ad954f8bf6f43369230347595004a0239140d857a7f98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_cerf, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 10 23:53:04 np0005480824 nova_compute[260089]: 2025-10-11 03:53:04.484 2 DEBUG nova.compute.manager [req-8dd15036-50e8-4350-8d00-6adf00a4e87a req-229b3fc6-5162-4512-b6e2-dcf30ed33b67 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] Received event network-vif-plugged-d24beac6-fe81-4cb4-b500-c4446f3106b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 23:53:04 np0005480824 nova_compute[260089]: 2025-10-11 03:53:04.485 2 DEBUG oslo_concurrency.lockutils [req-8dd15036-50e8-4350-8d00-6adf00a4e87a req-229b3fc6-5162-4512-b6e2-dcf30ed33b67 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "0f4ead16-8af5-427a-9543-772b0c23733d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 23:53:04 np0005480824 nova_compute[260089]: 2025-10-11 03:53:04.486 2 DEBUG oslo_concurrency.lockutils [req-8dd15036-50e8-4350-8d00-6adf00a4e87a req-229b3fc6-5162-4512-b6e2-dcf30ed33b67 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "0f4ead16-8af5-427a-9543-772b0c23733d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 23:53:04 np0005480824 nova_compute[260089]: 2025-10-11 03:53:04.486 2 DEBUG oslo_concurrency.lockutils [req-8dd15036-50e8-4350-8d00-6adf00a4e87a req-229b3fc6-5162-4512-b6e2-dcf30ed33b67 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "0f4ead16-8af5-427a-9543-772b0c23733d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 23:53:04 np0005480824 nova_compute[260089]: 2025-10-11 03:53:04.486 2 DEBUG nova.compute.manager [req-8dd15036-50e8-4350-8d00-6adf00a4e87a req-229b3fc6-5162-4512-b6e2-dcf30ed33b67 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] No waiting events found dispatching network-vif-plugged-d24beac6-fe81-4cb4-b500-c4446f3106b3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 10 23:53:04 np0005480824 nova_compute[260089]: 2025-10-11 03:53:04.487 2 WARNING nova.compute.manager [req-8dd15036-50e8-4350-8d00-6adf00a4e87a req-229b3fc6-5162-4512-b6e2-dcf30ed33b67 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] Received unexpected event network-vif-plugged-d24beac6-fe81-4cb4-b500-c4446f3106b3 for instance with vm_state active and task_state None.
Oct 10 23:53:04 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:53:04 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1263: 321 pgs: 321 active+clean; 392 MiB data, 482 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 76 KiB/s wr, 27 op/s
Oct 10 23:53:05 np0005480824 serene_cerf[285046]: {
Oct 10 23:53:05 np0005480824 serene_cerf[285046]:    "1d0d82ce-20ea-470d-959e-f67202028a60": {
Oct 10 23:53:05 np0005480824 serene_cerf[285046]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:53:05 np0005480824 serene_cerf[285046]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 10 23:53:05 np0005480824 serene_cerf[285046]:        "osd_id": 0,
Oct 10 23:53:05 np0005480824 serene_cerf[285046]:        "osd_uuid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:53:05 np0005480824 serene_cerf[285046]:        "type": "bluestore"
Oct 10 23:53:05 np0005480824 serene_cerf[285046]:    },
Oct 10 23:53:05 np0005480824 serene_cerf[285046]:    "6875119e-c210-4ad1-aca9-6a8084a5ecc8": {
Oct 10 23:53:05 np0005480824 serene_cerf[285046]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:53:05 np0005480824 serene_cerf[285046]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 10 23:53:05 np0005480824 serene_cerf[285046]:        "osd_id": 1,
Oct 10 23:53:05 np0005480824 serene_cerf[285046]:        "osd_uuid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:53:05 np0005480824 serene_cerf[285046]:        "type": "bluestore"
Oct 10 23:53:05 np0005480824 serene_cerf[285046]:    },
Oct 10 23:53:05 np0005480824 serene_cerf[285046]:    "e86945e8-6909-4584-9098-cee0dfe9add4": {
Oct 10 23:53:05 np0005480824 serene_cerf[285046]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:53:05 np0005480824 serene_cerf[285046]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 10 23:53:05 np0005480824 serene_cerf[285046]:        "osd_id": 2,
Oct 10 23:53:05 np0005480824 serene_cerf[285046]:        "osd_uuid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:53:05 np0005480824 serene_cerf[285046]:        "type": "bluestore"
Oct 10 23:53:05 np0005480824 serene_cerf[285046]:    }
Oct 10 23:53:05 np0005480824 serene_cerf[285046]: }
Oct 10 23:53:05 np0005480824 systemd[1]: libpod-7d4ce28845e62b97816ad954f8bf6f43369230347595004a0239140d857a7f98.scope: Deactivated successfully.
Oct 10 23:53:05 np0005480824 podman[285079]: 2025-10-11 03:53:05.259736076 +0000 UTC m=+0.025220233 container died 7d4ce28845e62b97816ad954f8bf6f43369230347595004a0239140d857a7f98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_cerf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 10 23:53:05 np0005480824 systemd[1]: var-lib-containers-storage-overlay-adff192c9176388756e8d6ca7ba57301e1c13958e87ac74d590aa5c2574eba88-merged.mount: Deactivated successfully.
Oct 10 23:53:05 np0005480824 podman[285079]: 2025-10-11 03:53:05.318058885 +0000 UTC m=+0.083543012 container remove 7d4ce28845e62b97816ad954f8bf6f43369230347595004a0239140d857a7f98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 10 23:53:05 np0005480824 systemd[1]: libpod-conmon-7d4ce28845e62b97816ad954f8bf6f43369230347595004a0239140d857a7f98.scope: Deactivated successfully.
Oct 10 23:53:05 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 10 23:53:05 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:53:05 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 10 23:53:05 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:53:05 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev d8975804-44e5-476f-b3da-82afbe6ba3ec does not exist
Oct 10 23:53:05 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 55da9978-ea02-4c8b-b860-4f9ac28c849d does not exist
Oct 10 23:53:05 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:53:05 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:53:06 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1264: 321 pgs: 321 active+clean; 392 MiB data, 482 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 76 KiB/s wr, 27 op/s
Oct 10 23:53:06 np0005480824 nova_compute[260089]: 2025-10-11 03:53:06.834 2 DEBUG nova.network.neutron [-] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 23:53:06 np0005480824 nova_compute[260089]: 2025-10-11 03:53:06.853 2 INFO nova.compute.manager [-] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Took 2.96 seconds to deallocate network for instance.
Oct 10 23:53:06 np0005480824 nova_compute[260089]: 2025-10-11 03:53:06.897 2 DEBUG nova.compute.manager [req-032b4027-80a5-41e2-b68e-1676af1b25f8 req-b3d246a7-6166-470e-ba10-81d17ec46e2c 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Received event network-vif-plugged-5af98ddd-2cff-4fe8-abcf-414110faa17d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 23:53:06 np0005480824 nova_compute[260089]: 2025-10-11 03:53:06.898 2 DEBUG oslo_concurrency.lockutils [req-032b4027-80a5-41e2-b68e-1676af1b25f8 req-b3d246a7-6166-470e-ba10-81d17ec46e2c 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "7452e9a5-0e1b-4c0c-816b-57e0ea976747-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 23:53:06 np0005480824 nova_compute[260089]: 2025-10-11 03:53:06.898 2 DEBUG oslo_concurrency.lockutils [req-032b4027-80a5-41e2-b68e-1676af1b25f8 req-b3d246a7-6166-470e-ba10-81d17ec46e2c 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "7452e9a5-0e1b-4c0c-816b-57e0ea976747-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 23:53:06 np0005480824 nova_compute[260089]: 2025-10-11 03:53:06.899 2 DEBUG oslo_concurrency.lockutils [req-032b4027-80a5-41e2-b68e-1676af1b25f8 req-b3d246a7-6166-470e-ba10-81d17ec46e2c 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "7452e9a5-0e1b-4c0c-816b-57e0ea976747-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 23:53:06 np0005480824 nova_compute[260089]: 2025-10-11 03:53:06.899 2 DEBUG nova.compute.manager [req-032b4027-80a5-41e2-b68e-1676af1b25f8 req-b3d246a7-6166-470e-ba10-81d17ec46e2c 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] No waiting events found dispatching network-vif-plugged-5af98ddd-2cff-4fe8-abcf-414110faa17d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 10 23:53:06 np0005480824 nova_compute[260089]: 2025-10-11 03:53:06.899 2 WARNING nova.compute.manager [req-032b4027-80a5-41e2-b68e-1676af1b25f8 req-b3d246a7-6166-470e-ba10-81d17ec46e2c 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Received unexpected event network-vif-plugged-5af98ddd-2cff-4fe8-abcf-414110faa17d for instance with vm_state active and task_state deleting.
Oct 10 23:53:06 np0005480824 nova_compute[260089]: 2025-10-11 03:53:06.902 2 DEBUG oslo_concurrency.lockutils [None req-9aed83fc-b01f-49ca-bd61-2841e103133c cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 23:53:06 np0005480824 nova_compute[260089]: 2025-10-11 03:53:06.902 2 DEBUG oslo_concurrency.lockutils [None req-9aed83fc-b01f-49ca-bd61-2841e103133c cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 23:53:07 np0005480824 nova_compute[260089]: 2025-10-11 03:53:07.001 2 DEBUG oslo_concurrency.processutils [None req-9aed83fc-b01f-49ca-bd61-2841e103133c cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 23:53:07 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:53:07 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1916889227' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:53:07 np0005480824 nova_compute[260089]: 2025-10-11 03:53:07.479 2 DEBUG oslo_concurrency.processutils [None req-9aed83fc-b01f-49ca-bd61-2841e103133c cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 23:53:07 np0005480824 nova_compute[260089]: 2025-10-11 03:53:07.490 2 DEBUG nova.compute.provider_tree [None req-9aed83fc-b01f-49ca-bd61-2841e103133c cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 23:53:07 np0005480824 nova_compute[260089]: 2025-10-11 03:53:07.510 2 DEBUG nova.scheduler.client.report [None req-9aed83fc-b01f-49ca-bd61-2841e103133c cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 23:53:07 np0005480824 nova_compute[260089]: 2025-10-11 03:53:07.528 2 DEBUG oslo_concurrency.lockutils [None req-9aed83fc-b01f-49ca-bd61-2841e103133c cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.626s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 23:53:07 np0005480824 nova_compute[260089]: 2025-10-11 03:53:07.551 2 INFO nova.scheduler.client.report [None req-9aed83fc-b01f-49ca-bd61-2841e103133c cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Deleted allocations for instance 7452e9a5-0e1b-4c0c-816b-57e0ea976747
Oct 10 23:53:07 np0005480824 nova_compute[260089]: 2025-10-11 03:53:07.602 2 DEBUG oslo_concurrency.lockutils [None req-9aed83fc-b01f-49ca-bd61-2841e103133c cde6845b6b8d482b95a72e38b1db93d3 a65cd418eaad4366991b123d6535a576 - - default default] Lock "7452e9a5-0e1b-4c0c-816b-57e0ea976747" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.570s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 23:53:07 np0005480824 nova_compute[260089]: 2025-10-11 03:53:07.653 2 DEBUG oslo_concurrency.lockutils [None req-7088bc15-e778-4a70-8b45-7c6a1563b262 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Acquiring lock "0f4ead16-8af5-427a-9543-772b0c23733d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 23:53:07 np0005480824 nova_compute[260089]: 2025-10-11 03:53:07.654 2 DEBUG oslo_concurrency.lockutils [None req-7088bc15-e778-4a70-8b45-7c6a1563b262 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "0f4ead16-8af5-427a-9543-772b0c23733d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 23:53:07 np0005480824 nova_compute[260089]: 2025-10-11 03:53:07.654 2 DEBUG oslo_concurrency.lockutils [None req-7088bc15-e778-4a70-8b45-7c6a1563b262 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Acquiring lock "0f4ead16-8af5-427a-9543-772b0c23733d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 23:53:07 np0005480824 nova_compute[260089]: 2025-10-11 03:53:07.654 2 DEBUG oslo_concurrency.lockutils [None req-7088bc15-e778-4a70-8b45-7c6a1563b262 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "0f4ead16-8af5-427a-9543-772b0c23733d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 23:53:07 np0005480824 nova_compute[260089]: 2025-10-11 03:53:07.655 2 DEBUG oslo_concurrency.lockutils [None req-7088bc15-e778-4a70-8b45-7c6a1563b262 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "0f4ead16-8af5-427a-9543-772b0c23733d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 23:53:07 np0005480824 nova_compute[260089]: 2025-10-11 03:53:07.656 2 INFO nova.compute.manager [None req-7088bc15-e778-4a70-8b45-7c6a1563b262 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] Terminating instance
Oct 10 23:53:07 np0005480824 nova_compute[260089]: 2025-10-11 03:53:07.657 2 DEBUG nova.compute.manager [None req-7088bc15-e778-4a70-8b45-7c6a1563b262 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 10 23:53:07 np0005480824 kernel: tapd24beac6-fe (unregistering): left promiscuous mode
Oct 10 23:53:07 np0005480824 NetworkManager[44969]: <info>  [1760154787.7381] device (tapd24beac6-fe): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 10 23:53:07 np0005480824 ovn_controller[152667]: 2025-10-11T03:53:07Z|00128|binding|INFO|Releasing lport d24beac6-fe81-4cb4-b500-c4446f3106b3 from this chassis (sb_readonly=0)
Oct 10 23:53:07 np0005480824 nova_compute[260089]: 2025-10-11 03:53:07.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 23:53:07 np0005480824 ovn_controller[152667]: 2025-10-11T03:53:07Z|00129|binding|INFO|Setting lport d24beac6-fe81-4cb4-b500-c4446f3106b3 down in Southbound
Oct 10 23:53:07 np0005480824 ovn_controller[152667]: 2025-10-11T03:53:07Z|00130|binding|INFO|Removing iface tapd24beac6-fe ovn-installed in OVS
Oct 10 23:53:07 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:07.756 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:e2:d3 10.100.0.9'], port_security=['fa:16:3e:26:e2:d3 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '0f4ead16-8af5-427a-9543-772b0c23733d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-359720eb-a957-4bcd-b9b2-3cf7dad947e4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '55d21391a321476eb133317b3402b0f0', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2fbe6632-cce1-48fb-95c1-bed1096fc071', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d98e64fb-092d-4777-b741-426f3e849bc3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], logical_port=d24beac6-fe81-4cb4-b500-c4446f3106b3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 10 23:53:07 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:07.757 162245 INFO neutron.agent.ovn.metadata.agent [-] Port d24beac6-fe81-4cb4-b500-c4446f3106b3 in datapath 359720eb-a957-4bcd-b9b2-3cf7dad947e4 unbound from our chassis
Oct 10 23:53:07 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:07.758 162245 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 359720eb-a957-4bcd-b9b2-3cf7dad947e4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 10 23:53:07 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:07.760 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[59d1274e-b9d5-4644-b528-4f569cfd8349]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 23:53:07 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:07.761 162245 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4 namespace which is not needed anymore
Oct 10 23:53:07 np0005480824 nova_compute[260089]: 2025-10-11 03:53:07.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 23:53:07 np0005480824 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000e.scope: Deactivated successfully.
Oct 10 23:53:07 np0005480824 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000e.scope: Consumed 5.258s CPU time.
Oct 10 23:53:07 np0005480824 systemd-machined[215071]: Machine qemu-14-instance-0000000e terminated.
Oct 10 23:53:07 np0005480824 neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4[284698]: [NOTICE]   (284751) : haproxy version is 2.8.14-c23fe91
Oct 10 23:53:07 np0005480824 neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4[284698]: [NOTICE]   (284751) : path to executable is /usr/sbin/haproxy
Oct 10 23:53:07 np0005480824 neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4[284698]: [WARNING]  (284751) : Exiting Master process...
Oct 10 23:53:07 np0005480824 neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4[284698]: [ALERT]    (284751) : Current worker (284761) exited with code 143 (Terminated)
Oct 10 23:53:07 np0005480824 neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4[284698]: [WARNING]  (284751) : All workers exited. Exiting... (0)
Oct 10 23:53:07 np0005480824 systemd[1]: libpod-a41a772cab82d77256f159f2214cc5986986a8ed8f4af16d20bd49d9024f0b42.scope: Deactivated successfully.
Oct 10 23:53:07 np0005480824 podman[285189]: 2025-10-11 03:53:07.977635155 +0000 UTC m=+0.040242826 container stop a41a772cab82d77256f159f2214cc5986986a8ed8f4af16d20bd49d9024f0b42 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 10 23:53:08 np0005480824 podman[285189]: 2025-10-11 03:53:08.007993757 +0000 UTC m=+0.070601428 container died a41a772cab82d77256f159f2214cc5986986a8ed8f4af16d20bd49d9024f0b42 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:53:08 np0005480824 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a41a772cab82d77256f159f2214cc5986986a8ed8f4af16d20bd49d9024f0b42-userdata-shm.mount: Deactivated successfully.
Oct 10 23:53:08 np0005480824 systemd[1]: var-lib-containers-storage-overlay-3d3a61827ebc1ca38ceee25acec05cd4c3afe8399c17c5b441a6af05da656c40-merged.mount: Deactivated successfully.
Oct 10 23:53:08 np0005480824 podman[285189]: 2025-10-11 03:53:08.048180711 +0000 UTC m=+0.110788382 container cleanup a41a772cab82d77256f159f2214cc5986986a8ed8f4af16d20bd49d9024f0b42 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:53:08 np0005480824 systemd[1]: libpod-conmon-a41a772cab82d77256f159f2214cc5986986a8ed8f4af16d20bd49d9024f0b42.scope: Deactivated successfully.
Oct 10 23:53:08 np0005480824 nova_compute[260089]: 2025-10-11 03:53:08.103 2 INFO nova.virt.libvirt.driver [-] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] Instance destroyed successfully.#033[00m
Oct 10 23:53:08 np0005480824 nova_compute[260089]: 2025-10-11 03:53:08.104 2 DEBUG nova.objects.instance [None req-7088bc15-e778-4a70-8b45-7c6a1563b262 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lazy-loading 'resources' on Instance uuid 0f4ead16-8af5-427a-9543-772b0c23733d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:53:08 np0005480824 nova_compute[260089]: 2025-10-11 03:53:08.120 2 DEBUG nova.virt.libvirt.vif [None req-7088bc15-e778-4a70-8b45-7c6a1563b262 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T03:52:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-885663709',display_name='tempest-TestVolumeBootPattern-server-885663709',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-885663709',id=14,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T03:53:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='55d21391a321476eb133317b3402b0f0',ramdisk_id='',reservation_id='r-84ihe9eh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-739984652',owner_user_
name='tempest-TestVolumeBootPattern-739984652-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T03:53:04Z,user_data=None,user_id='38ebc503771e417aaf1f3aea0c835994',uuid=0f4ead16-8af5-427a-9543-772b0c23733d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d24beac6-fe81-4cb4-b500-c4446f3106b3", "address": "fa:16:3e:26:e2:d3", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd24beac6-fe", "ovs_interfaceid": "d24beac6-fe81-4cb4-b500-c4446f3106b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 10 23:53:08 np0005480824 nova_compute[260089]: 2025-10-11 03:53:08.121 2 DEBUG nova.network.os_vif_util [None req-7088bc15-e778-4a70-8b45-7c6a1563b262 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Converting VIF {"id": "d24beac6-fe81-4cb4-b500-c4446f3106b3", "address": "fa:16:3e:26:e2:d3", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd24beac6-fe", "ovs_interfaceid": "d24beac6-fe81-4cb4-b500-c4446f3106b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 23:53:08 np0005480824 nova_compute[260089]: 2025-10-11 03:53:08.121 2 DEBUG nova.network.os_vif_util [None req-7088bc15-e778-4a70-8b45-7c6a1563b262 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:e2:d3,bridge_name='br-int',has_traffic_filtering=True,id=d24beac6-fe81-4cb4-b500-c4446f3106b3,network=Network(359720eb-a957-4bcd-b9b2-3cf7dad947e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd24beac6-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 23:53:08 np0005480824 nova_compute[260089]: 2025-10-11 03:53:08.122 2 DEBUG os_vif [None req-7088bc15-e778-4a70-8b45-7c6a1563b262 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:e2:d3,bridge_name='br-int',has_traffic_filtering=True,id=d24beac6-fe81-4cb4-b500-c4446f3106b3,network=Network(359720eb-a957-4bcd-b9b2-3cf7dad947e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd24beac6-fe') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 10 23:53:08 np0005480824 nova_compute[260089]: 2025-10-11 03:53:08.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:53:08 np0005480824 nova_compute[260089]: 2025-10-11 03:53:08.126 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd24beac6-fe, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:53:08 np0005480824 nova_compute[260089]: 2025-10-11 03:53:08.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:53:08 np0005480824 nova_compute[260089]: 2025-10-11 03:53:08.130 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 10 23:53:08 np0005480824 nova_compute[260089]: 2025-10-11 03:53:08.131 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:53:08 np0005480824 podman[285218]: 2025-10-11 03:53:08.131667802 +0000 UTC m=+0.056416726 container remove a41a772cab82d77256f159f2214cc5986986a8ed8f4af16d20bd49d9024f0b42 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 10 23:53:08 np0005480824 nova_compute[260089]: 2025-10-11 03:53:08.133 2 INFO os_vif [None req-7088bc15-e778-4a70-8b45-7c6a1563b262 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:e2:d3,bridge_name='br-int',has_traffic_filtering=True,id=d24beac6-fe81-4cb4-b500-c4446f3106b3,network=Network(359720eb-a957-4bcd-b9b2-3cf7dad947e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd24beac6-fe')#033[00m
Oct 10 23:53:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:08.143 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[3e372bed-61af-44c6-9fcb-1cc2d77f3f68]: (4, ('Sat Oct 11 03:53:07 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4 (a41a772cab82d77256f159f2214cc5986986a8ed8f4af16d20bd49d9024f0b42)\na41a772cab82d77256f159f2214cc5986986a8ed8f4af16d20bd49d9024f0b42\nSat Oct 11 03:53:08 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4 (a41a772cab82d77256f159f2214cc5986986a8ed8f4af16d20bd49d9024f0b42)\na41a772cab82d77256f159f2214cc5986986a8ed8f4af16d20bd49d9024f0b42\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:53:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:08.145 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[71dc6186-8a3e-445f-80a9-5ef491fd0dfc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:53:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:08.146 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap359720eb-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:53:08 np0005480824 nova_compute[260089]: 2025-10-11 03:53:08.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:53:08 np0005480824 nova_compute[260089]: 2025-10-11 03:53:08.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:53:08 np0005480824 kernel: tap359720eb-a0: left promiscuous mode
Oct 10 23:53:08 np0005480824 nova_compute[260089]: 2025-10-11 03:53:08.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:53:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:08.172 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[16b51bca-f2e7-4d1a-809c-f7384d9ddac9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:53:08 np0005480824 nova_compute[260089]: 2025-10-11 03:53:08.176 2 DEBUG nova.compute.manager [req-d927e449-a2b5-4664-83cc-d519f09c47f9 req-0a659731-7277-4979-999b-bce129dca016 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] Received event network-vif-unplugged-d24beac6-fe81-4cb4-b500-c4446f3106b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:53:08 np0005480824 nova_compute[260089]: 2025-10-11 03:53:08.177 2 DEBUG oslo_concurrency.lockutils [req-d927e449-a2b5-4664-83cc-d519f09c47f9 req-0a659731-7277-4979-999b-bce129dca016 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "0f4ead16-8af5-427a-9543-772b0c23733d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:53:08 np0005480824 nova_compute[260089]: 2025-10-11 03:53:08.177 2 DEBUG oslo_concurrency.lockutils [req-d927e449-a2b5-4664-83cc-d519f09c47f9 req-0a659731-7277-4979-999b-bce129dca016 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "0f4ead16-8af5-427a-9543-772b0c23733d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:53:08 np0005480824 nova_compute[260089]: 2025-10-11 03:53:08.178 2 DEBUG oslo_concurrency.lockutils [req-d927e449-a2b5-4664-83cc-d519f09c47f9 req-0a659731-7277-4979-999b-bce129dca016 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "0f4ead16-8af5-427a-9543-772b0c23733d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:53:08 np0005480824 nova_compute[260089]: 2025-10-11 03:53:08.178 2 DEBUG nova.compute.manager [req-d927e449-a2b5-4664-83cc-d519f09c47f9 req-0a659731-7277-4979-999b-bce129dca016 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] No waiting events found dispatching network-vif-unplugged-d24beac6-fe81-4cb4-b500-c4446f3106b3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 10 23:53:08 np0005480824 nova_compute[260089]: 2025-10-11 03:53:08.178 2 DEBUG nova.compute.manager [req-d927e449-a2b5-4664-83cc-d519f09c47f9 req-0a659731-7277-4979-999b-bce129dca016 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] Received event network-vif-unplugged-d24beac6-fe81-4cb4-b500-c4446f3106b3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 10 23:53:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:08.197 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[33a1cd33-5a01-4eb6-9609-bce0700d9f1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:53:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:08.199 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[2f255259-bbc2-4865-a8b9-c441e4c9eb69]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:53:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:08.218 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[5dce8745-0ef2-4f54-a523-8756d460187e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 424994, 'reachable_time': 25217, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285262, 'error': None, 'target': 'ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:53:08 np0005480824 systemd[1]: run-netns-ovnmeta\x2d359720eb\x2da957\x2d4bcd\x2db9b2\x2d3cf7dad947e4.mount: Deactivated successfully.
Oct 10 23:53:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:08.223 162666 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 10 23:53:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:08.223 162666 DEBUG oslo.privsep.daemon [-] privsep: reply[9875533a-13d9-41b9-8975-5f0a6f7fe217]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:53:08 np0005480824 nova_compute[260089]: 2025-10-11 03:53:08.332 2 INFO nova.virt.libvirt.driver [None req-7088bc15-e778-4a70-8b45-7c6a1563b262 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] Deleting instance files /var/lib/nova/instances/0f4ead16-8af5-427a-9543-772b0c23733d_del#033[00m
Oct 10 23:53:08 np0005480824 nova_compute[260089]: 2025-10-11 03:53:08.333 2 INFO nova.virt.libvirt.driver [None req-7088bc15-e778-4a70-8b45-7c6a1563b262 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] Deletion of /var/lib/nova/instances/0f4ead16-8af5-427a-9543-772b0c23733d_del complete#033[00m
Oct 10 23:53:08 np0005480824 nova_compute[260089]: 2025-10-11 03:53:08.388 2 INFO nova.compute.manager [None req-7088bc15-e778-4a70-8b45-7c6a1563b262 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] Took 0.73 seconds to destroy the instance on the hypervisor.#033[00m
Oct 10 23:53:08 np0005480824 nova_compute[260089]: 2025-10-11 03:53:08.389 2 DEBUG oslo.service.loopingcall [None req-7088bc15-e778-4a70-8b45-7c6a1563b262 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 10 23:53:08 np0005480824 nova_compute[260089]: 2025-10-11 03:53:08.389 2 DEBUG nova.compute.manager [-] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 10 23:53:08 np0005480824 nova_compute[260089]: 2025-10-11 03:53:08.390 2 DEBUG nova.network.neutron [-] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 10 23:53:08 np0005480824 nova_compute[260089]: 2025-10-11 03:53:08.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:53:08 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1265: 321 pgs: 321 active+clean; 312 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 79 KiB/s wr, 129 op/s
Oct 10 23:53:08 np0005480824 nova_compute[260089]: 2025-10-11 03:53:08.844 2 DEBUG nova.network.neutron [-] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:53:08 np0005480824 nova_compute[260089]: 2025-10-11 03:53:08.872 2 INFO nova.compute.manager [-] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] Took 0.48 seconds to deallocate network for instance.#033[00m
Oct 10 23:53:08 np0005480824 nova_compute[260089]: 2025-10-11 03:53:08.972 2 DEBUG nova.compute.manager [req-68534ad1-d635-4b76-bd93-0a33ad80bb98 req-ea34e50a-40f1-40eb-bfdc-9a5149c4cf96 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Received event network-vif-deleted-5af98ddd-2cff-4fe8-abcf-414110faa17d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:53:08 np0005480824 nova_compute[260089]: 2025-10-11 03:53:08.973 2 DEBUG nova.compute.manager [req-68534ad1-d635-4b76-bd93-0a33ad80bb98 req-ea34e50a-40f1-40eb-bfdc-9a5149c4cf96 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] Received event network-vif-deleted-d24beac6-fe81-4cb4-b500-c4446f3106b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:53:09 np0005480824 nova_compute[260089]: 2025-10-11 03:53:08.998 2 INFO nova.compute.manager [None req-7088bc15-e778-4a70-8b45-7c6a1563b262 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] Took 0.13 seconds to detach 1 volumes for instance.#033[00m
Oct 10 23:53:09 np0005480824 nova_compute[260089]: 2025-10-11 03:53:08.999 2 DEBUG nova.compute.manager [None req-7088bc15-e778-4a70-8b45-7c6a1563b262 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] Deleting volume: 433a45af-bba3-48ac-ab26-868daf44aba6 _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217#033[00m
Oct 10 23:53:09 np0005480824 podman[285264]: 2025-10-11 03:53:09.033746133 +0000 UTC m=+0.084784861 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, 
org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 10 23:53:09 np0005480824 nova_compute[260089]: 2025-10-11 03:53:09.152 2 DEBUG oslo_concurrency.lockutils [None req-7088bc15-e778-4a70-8b45-7c6a1563b262 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:53:09 np0005480824 nova_compute[260089]: 2025-10-11 03:53:09.153 2 DEBUG oslo_concurrency.lockutils [None req-7088bc15-e778-4a70-8b45-7c6a1563b262 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:53:09 np0005480824 nova_compute[260089]: 2025-10-11 03:53:09.229 2 DEBUG oslo_concurrency.processutils [None req-7088bc15-e778-4a70-8b45-7c6a1563b262 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:53:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:53:09 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/715146631' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:53:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:53:09 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/715146631' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:53:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:53:09 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/646670608' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:53:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:53:09 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/646670608' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:53:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:53:09 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1928564845' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:53:09 np0005480824 nova_compute[260089]: 2025-10-11 03:53:09.657 2 DEBUG oslo_concurrency.processutils [None req-7088bc15-e778-4a70-8b45-7c6a1563b262 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:53:09 np0005480824 nova_compute[260089]: 2025-10-11 03:53:09.666 2 DEBUG nova.compute.provider_tree [None req-7088bc15-e778-4a70-8b45-7c6a1563b262 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 23:53:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:53:09 np0005480824 nova_compute[260089]: 2025-10-11 03:53:09.687 2 DEBUG nova.scheduler.client.report [None req-7088bc15-e778-4a70-8b45-7c6a1563b262 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 23:53:09 np0005480824 nova_compute[260089]: 2025-10-11 03:53:09.713 2 DEBUG oslo_concurrency.lockutils [None req-7088bc15-e778-4a70-8b45-7c6a1563b262 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.560s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:53:09 np0005480824 nova_compute[260089]: 2025-10-11 03:53:09.737 2 INFO nova.scheduler.client.report [None req-7088bc15-e778-4a70-8b45-7c6a1563b262 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Deleted allocations for instance 0f4ead16-8af5-427a-9543-772b0c23733d#033[00m
Oct 10 23:53:09 np0005480824 nova_compute[260089]: 2025-10-11 03:53:09.899 2 DEBUG oslo_concurrency.lockutils [None req-7088bc15-e778-4a70-8b45-7c6a1563b262 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "0f4ead16-8af5-427a-9543-772b0c23733d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.246s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:53:10 np0005480824 nova_compute[260089]: 2025-10-11 03:53:10.250 2 DEBUG nova.compute.manager [req-3e2d3337-16c0-457f-8eea-082dcdd6e3c9 req-2007b7e8-7728-4985-82f4-1cd0a8825262 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] Received event network-vif-plugged-d24beac6-fe81-4cb4-b500-c4446f3106b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:53:10 np0005480824 nova_compute[260089]: 2025-10-11 03:53:10.250 2 DEBUG oslo_concurrency.lockutils [req-3e2d3337-16c0-457f-8eea-082dcdd6e3c9 req-2007b7e8-7728-4985-82f4-1cd0a8825262 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "0f4ead16-8af5-427a-9543-772b0c23733d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:53:10 np0005480824 nova_compute[260089]: 2025-10-11 03:53:10.250 2 DEBUG oslo_concurrency.lockutils [req-3e2d3337-16c0-457f-8eea-082dcdd6e3c9 req-2007b7e8-7728-4985-82f4-1cd0a8825262 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "0f4ead16-8af5-427a-9543-772b0c23733d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:53:10 np0005480824 nova_compute[260089]: 2025-10-11 03:53:10.250 2 DEBUG oslo_concurrency.lockutils [req-3e2d3337-16c0-457f-8eea-082dcdd6e3c9 req-2007b7e8-7728-4985-82f4-1cd0a8825262 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "0f4ead16-8af5-427a-9543-772b0c23733d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:53:10 np0005480824 nova_compute[260089]: 2025-10-11 03:53:10.251 2 DEBUG nova.compute.manager [req-3e2d3337-16c0-457f-8eea-082dcdd6e3c9 req-2007b7e8-7728-4985-82f4-1cd0a8825262 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] No waiting events found dispatching network-vif-plugged-d24beac6-fe81-4cb4-b500-c4446f3106b3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 10 23:53:10 np0005480824 nova_compute[260089]: 2025-10-11 03:53:10.251 2 WARNING nova.compute.manager [req-3e2d3337-16c0-457f-8eea-082dcdd6e3c9 req-2007b7e8-7728-4985-82f4-1cd0a8825262 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] Received unexpected event network-vif-plugged-d24beac6-fe81-4cb4-b500-c4446f3106b3 for instance with vm_state deleted and task_state None.#033[00m
Oct 10 23:53:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:10.498 162245 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:53:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:10.499 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:53:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:10.499 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:53:10 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1266: 321 pgs: 321 active+clean; 312 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 30 KiB/s wr, 115 op/s
Oct 10 23:53:11 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e264 do_prune osdmap full prune enabled
Oct 10 23:53:11 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e265 e265: 3 total, 3 up, 3 in
Oct 10 23:53:11 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e265: 3 total, 3 up, 3 in
Oct 10 23:53:11 np0005480824 ceph-mon[74326]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Oct 10 23:53:11 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:53:11.822657) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 23:53:11 np0005480824 ceph-mon[74326]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Oct 10 23:53:11 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154791822703, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 1193, "num_deletes": 260, "total_data_size": 1599460, "memory_usage": 1623552, "flush_reason": "Manual Compaction"}
Oct 10 23:53:11 np0005480824 ceph-mon[74326]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Oct 10 23:53:11 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154791834311, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 1558734, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25066, "largest_seqno": 26258, "table_properties": {"data_size": 1552982, "index_size": 3083, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 13186, "raw_average_key_size": 20, "raw_value_size": 1541078, "raw_average_value_size": 2423, "num_data_blocks": 138, "num_entries": 636, "num_filter_entries": 636, "num_deletions": 260, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760154710, "oldest_key_time": 1760154710, "file_creation_time": 1760154791, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc2c00b6-74ab-4bd1-957a-6c6a75fb61ca", "db_session_id": "RJ9TM4FJNNQ2AWDFT4YB", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Oct 10 23:53:11 np0005480824 ceph-mon[74326]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 12078 microseconds, and 6607 cpu microseconds.
Oct 10 23:53:11 np0005480824 ceph-mon[74326]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 23:53:11 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:53:11.834736) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 1558734 bytes OK
Oct 10 23:53:11 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:53:11.834865) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Oct 10 23:53:11 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:53:11.836824) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Oct 10 23:53:11 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:53:11.836841) EVENT_LOG_v1 {"time_micros": 1760154791836835, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 23:53:11 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:53:11.836863) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 23:53:11 np0005480824 ceph-mon[74326]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 1593825, prev total WAL file size 1593825, number of live WAL files 2.
Oct 10 23:53:11 np0005480824 ceph-mon[74326]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 23:53:11 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:53:11.838429) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Oct 10 23:53:11 np0005480824 ceph-mon[74326]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 23:53:11 np0005480824 ceph-mon[74326]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(1522KB)], [56(10MB)]
Oct 10 23:53:11 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154791838519, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 12260924, "oldest_snapshot_seqno": -1}
Oct 10 23:53:11 np0005480824 ceph-mon[74326]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 5355 keys, 10589012 bytes, temperature: kUnknown
Oct 10 23:53:11 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154791911188, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 10589012, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10547151, "index_size": 27351, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13445, "raw_key_size": 133244, "raw_average_key_size": 24, "raw_value_size": 10444674, "raw_average_value_size": 1950, "num_data_blocks": 1129, "num_entries": 5355, "num_filter_entries": 5355, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760152715, "oldest_key_time": 0, "file_creation_time": 1760154791, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc2c00b6-74ab-4bd1-957a-6c6a75fb61ca", "db_session_id": "RJ9TM4FJNNQ2AWDFT4YB", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Oct 10 23:53:11 np0005480824 ceph-mon[74326]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 23:53:11 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:53:11.911667) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 10589012 bytes
Oct 10 23:53:11 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:53:11.916444) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 168.4 rd, 145.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 10.2 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(14.7) write-amplify(6.8) OK, records in: 5887, records dropped: 532 output_compression: NoCompression
Oct 10 23:53:11 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:53:11.916484) EVENT_LOG_v1 {"time_micros": 1760154791916467, "job": 30, "event": "compaction_finished", "compaction_time_micros": 72796, "compaction_time_cpu_micros": 41931, "output_level": 6, "num_output_files": 1, "total_output_size": 10589012, "num_input_records": 5887, "num_output_records": 5355, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 23:53:11 np0005480824 ceph-mon[74326]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 23:53:11 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154791917131, "job": 30, "event": "table_file_deletion", "file_number": 58}
Oct 10 23:53:11 np0005480824 ceph-mon[74326]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 23:53:11 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154791920143, "job": 30, "event": "table_file_deletion", "file_number": 56}
Oct 10 23:53:11 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:53:11.838241) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:53:11 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:53:11.920272) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:53:11 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:53:11.920281) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:53:11 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:53:11.920283) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:53:11 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:53:11.920285) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:53:11 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:53:11.920288) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:53:12 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:53:12 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2322202107' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:53:12 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:53:12 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2322202107' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:53:12 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1268: 321 pgs: 2 active+clean+snaptrim, 319 active+clean; 312 MiB data, 439 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 7.3 KiB/s wr, 185 op/s
Oct 10 23:53:12 np0005480824 ovn_controller[152667]: 2025-10-11T03:53:12Z|00131|binding|INFO|Releasing lport fd35b05a-29b5-4478-aa1a-5883664f9c48 from this chassis (sb_readonly=0)
Oct 10 23:53:12 np0005480824 nova_compute[260089]: 2025-10-11 03:53:12.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:53:13 np0005480824 nova_compute[260089]: 2025-10-11 03:53:13.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:53:13 np0005480824 nova_compute[260089]: 2025-10-11 03:53:13.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:53:14 np0005480824 nova_compute[260089]: 2025-10-11 03:53:14.237 2 DEBUG oslo_concurrency.lockutils [None req-5ce9438d-c9ed-4999-8599-832423990a25 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Acquiring lock "3b8741f5-afdc-4745-b74c-2578bc643be4" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:53:14 np0005480824 nova_compute[260089]: 2025-10-11 03:53:14.238 2 DEBUG oslo_concurrency.lockutils [None req-5ce9438d-c9ed-4999-8599-832423990a25 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Lock "3b8741f5-afdc-4745-b74c-2578bc643be4" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:53:14 np0005480824 nova_compute[260089]: 2025-10-11 03:53:14.257 2 DEBUG nova.objects.instance [None req-5ce9438d-c9ed-4999-8599-832423990a25 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Lazy-loading 'flavor' on Instance uuid 3b8741f5-afdc-4745-b74c-2578bc643be4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:53:14 np0005480824 nova_compute[260089]: 2025-10-11 03:53:14.345 2 DEBUG oslo_concurrency.lockutils [None req-5ce9438d-c9ed-4999-8599-832423990a25 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Lock "3b8741f5-afdc-4745-b74c-2578bc643be4" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.107s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:53:14 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:53:14 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1269: 321 pgs: 2 active+clean+snaptrim, 319 active+clean; 312 MiB data, 439 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 7.3 KiB/s wr, 185 op/s
Oct 10 23:53:14 np0005480824 nova_compute[260089]: 2025-10-11 03:53:14.737 2 DEBUG oslo_concurrency.lockutils [None req-5ce9438d-c9ed-4999-8599-832423990a25 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Acquiring lock "3b8741f5-afdc-4745-b74c-2578bc643be4" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:53:14 np0005480824 nova_compute[260089]: 2025-10-11 03:53:14.738 2 DEBUG oslo_concurrency.lockutils [None req-5ce9438d-c9ed-4999-8599-832423990a25 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Lock "3b8741f5-afdc-4745-b74c-2578bc643be4" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:53:14 np0005480824 nova_compute[260089]: 2025-10-11 03:53:14.738 2 INFO nova.compute.manager [None req-5ce9438d-c9ed-4999-8599-832423990a25 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Attaching volume 93867b12-31ce-42dc-b29e-58f1f73e6a31 to /dev/vdb#033[00m
Oct 10 23:53:14 np0005480824 nova_compute[260089]: 2025-10-11 03:53:14.963 2 DEBUG os_brick.utils [None req-5ce9438d-c9ed-4999-8599-832423990a25 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct 10 23:53:14 np0005480824 nova_compute[260089]: 2025-10-11 03:53:14.965 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:53:14 np0005480824 nova_compute[260089]: 2025-10-11 03:53:14.977 676 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:53:14 np0005480824 nova_compute[260089]: 2025-10-11 03:53:14.978 676 DEBUG oslo.privsep.daemon [-] privsep: reply[c69064b1-a7f5-4ffa-8d58-987c586762d8]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:53:14 np0005480824 nova_compute[260089]: 2025-10-11 03:53:14.980 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:53:14 np0005480824 nova_compute[260089]: 2025-10-11 03:53:14.994 676 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:53:14 np0005480824 nova_compute[260089]: 2025-10-11 03:53:14.994 676 DEBUG oslo.privsep.daemon [-] privsep: reply[38c69573-66b2-4354-ba47-1c49eb284065]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d5d671ddab5a', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:53:14 np0005480824 nova_compute[260089]: 2025-10-11 03:53:14.996 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:53:15 np0005480824 nova_compute[260089]: 2025-10-11 03:53:15.010 676 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:53:15 np0005480824 nova_compute[260089]: 2025-10-11 03:53:15.011 676 DEBUG oslo.privsep.daemon [-] privsep: reply[2772acfe-7bb6-48ca-9ae3-fa4cfbd567c8]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:53:15 np0005480824 nova_compute[260089]: 2025-10-11 03:53:15.012 676 DEBUG oslo.privsep.daemon [-] privsep: reply[2b2caf3a-ca81-4ef9-8bfb-54bcaf25206c]: (4, 'fb3a2fb1-9efa-43f0-a057-bf422ac6b8d7') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:53:15 np0005480824 nova_compute[260089]: 2025-10-11 03:53:15.013 2 DEBUG oslo_concurrency.processutils [None req-5ce9438d-c9ed-4999-8599-832423990a25 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:53:15 np0005480824 nova_compute[260089]: 2025-10-11 03:53:15.046 2 DEBUG oslo_concurrency.processutils [None req-5ce9438d-c9ed-4999-8599-832423990a25 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] CMD "nvme version" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:53:15 np0005480824 nova_compute[260089]: 2025-10-11 03:53:15.050 2 DEBUG os_brick.initiator.connectors.lightos [None req-5ce9438d-c9ed-4999-8599-832423990a25 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct 10 23:53:15 np0005480824 nova_compute[260089]: 2025-10-11 03:53:15.050 2 DEBUG os_brick.initiator.connectors.lightos [None req-5ce9438d-c9ed-4999-8599-832423990a25 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct 10 23:53:15 np0005480824 nova_compute[260089]: 2025-10-11 03:53:15.050 2 DEBUG os_brick.initiator.connectors.lightos [None req-5ce9438d-c9ed-4999-8599-832423990a25 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct 10 23:53:15 np0005480824 nova_compute[260089]: 2025-10-11 03:53:15.051 2 DEBUG os_brick.utils [None req-5ce9438d-c9ed-4999-8599-832423990a25 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] <== get_connector_properties: return (86ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d5d671ddab5a', 'do_local_attach': False, 'nvme_hostid': '83042a20-0f72-4c47-8453-e72ead378624', 'system uuid': 'fb3a2fb1-9efa-43f0-a057-bf422ac6b8d7', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct 10 23:53:15 np0005480824 nova_compute[260089]: 2025-10-11 03:53:15.051 2 DEBUG nova.virt.block_device [None req-5ce9438d-c9ed-4999-8599-832423990a25 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Updating existing volume attachment record: c77aefb2-cd0a-4f22-b85d-29c529217e15 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct 10 23:53:15 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:53:15 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1627376138' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:53:15 np0005480824 nova_compute[260089]: 2025-10-11 03:53:15.780 2 DEBUG nova.objects.instance [None req-5ce9438d-c9ed-4999-8599-832423990a25 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Lazy-loading 'flavor' on Instance uuid 3b8741f5-afdc-4745-b74c-2578bc643be4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:53:15 np0005480824 nova_compute[260089]: 2025-10-11 03:53:15.811 2 DEBUG nova.virt.libvirt.driver [None req-5ce9438d-c9ed-4999-8599-832423990a25 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Attempting to attach volume 93867b12-31ce-42dc-b29e-58f1f73e6a31 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Oct 10 23:53:15 np0005480824 nova_compute[260089]: 2025-10-11 03:53:15.814 2 DEBUG nova.virt.libvirt.guest [None req-5ce9438d-c9ed-4999-8599-832423990a25 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] attach device xml: <disk type="network" device="disk">
Oct 10 23:53:15 np0005480824 nova_compute[260089]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 10 23:53:15 np0005480824 nova_compute[260089]:  <source protocol="rbd" name="volumes/volume-93867b12-31ce-42dc-b29e-58f1f73e6a31">
Oct 10 23:53:15 np0005480824 nova_compute[260089]:    <host name="192.168.122.100" port="6789"/>
Oct 10 23:53:15 np0005480824 nova_compute[260089]:  </source>
Oct 10 23:53:15 np0005480824 nova_compute[260089]:  <auth username="openstack">
Oct 10 23:53:15 np0005480824 nova_compute[260089]:    <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 10 23:53:15 np0005480824 nova_compute[260089]:  </auth>
Oct 10 23:53:15 np0005480824 nova_compute[260089]:  <target dev="vdb" bus="virtio"/>
Oct 10 23:53:15 np0005480824 nova_compute[260089]:  <serial>93867b12-31ce-42dc-b29e-58f1f73e6a31</serial>
Oct 10 23:53:15 np0005480824 nova_compute[260089]: </disk>
Oct 10 23:53:15 np0005480824 nova_compute[260089]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct 10 23:53:15 np0005480824 nova_compute[260089]: 2025-10-11 03:53:15.952 2 DEBUG nova.virt.libvirt.driver [None req-5ce9438d-c9ed-4999-8599-832423990a25 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 23:53:15 np0005480824 nova_compute[260089]: 2025-10-11 03:53:15.953 2 DEBUG nova.virt.libvirt.driver [None req-5ce9438d-c9ed-4999-8599-832423990a25 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 23:53:15 np0005480824 nova_compute[260089]: 2025-10-11 03:53:15.954 2 DEBUG nova.virt.libvirt.driver [None req-5ce9438d-c9ed-4999-8599-832423990a25 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 23:53:15 np0005480824 nova_compute[260089]: 2025-10-11 03:53:15.955 2 DEBUG nova.virt.libvirt.driver [None req-5ce9438d-c9ed-4999-8599-832423990a25 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] No VIF found with MAC fa:16:3e:0d:51:d8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 10 23:53:16 np0005480824 nova_compute[260089]: 2025-10-11 03:53:16.176 2 DEBUG oslo_concurrency.lockutils [None req-5ce9438d-c9ed-4999-8599-832423990a25 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Lock "3b8741f5-afdc-4745-b74c-2578bc643be4" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.439s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:53:16 np0005480824 ovn_controller[152667]: 2025-10-11T03:53:16Z|00132|binding|INFO|Releasing lport fd35b05a-29b5-4478-aa1a-5883664f9c48 from this chassis (sb_readonly=0)
Oct 10 23:53:16 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1270: 321 pgs: 2 active+clean+snaptrim, 319 active+clean; 312 MiB data, 439 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 7.3 KiB/s wr, 185 op/s
Oct 10 23:53:16 np0005480824 nova_compute[260089]: 2025-10-11 03:53:16.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:53:18 np0005480824 nova_compute[260089]: 2025-10-11 03:53:18.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:53:18 np0005480824 nova_compute[260089]: 2025-10-11 03:53:18.269 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760154783.268971, 7452e9a5-0e1b-4c0c-816b-57e0ea976747 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:53:18 np0005480824 nova_compute[260089]: 2025-10-11 03:53:18.270 2 INFO nova.compute.manager [-] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] VM Stopped (Lifecycle Event)#033[00m
Oct 10 23:53:18 np0005480824 nova_compute[260089]: 2025-10-11 03:53:18.289 2 DEBUG nova.compute.manager [None req-d6144048-20d4-4c9b-abc1-b6406981d05b - - - - - -] [instance: 7452e9a5-0e1b-4c0c-816b-57e0ea976747] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:53:18 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1271: 321 pgs: 321 active+clean; 266 MiB data, 418 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 4.2 KiB/s wr, 97 op/s
Oct 10 23:53:18 np0005480824 nova_compute[260089]: 2025-10-11 03:53:18.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:53:18 np0005480824 nova_compute[260089]: 2025-10-11 03:53:18.858 2 DEBUG oslo_concurrency.lockutils [None req-35ce38dc-2287-477d-aee1-4e99386e104a d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Acquiring lock "3b8741f5-afdc-4745-b74c-2578bc643be4" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:53:18 np0005480824 nova_compute[260089]: 2025-10-11 03:53:18.859 2 DEBUG oslo_concurrency.lockutils [None req-35ce38dc-2287-477d-aee1-4e99386e104a d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Lock "3b8741f5-afdc-4745-b74c-2578bc643be4" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:53:18 np0005480824 nova_compute[260089]: 2025-10-11 03:53:18.875 2 INFO nova.compute.manager [None req-35ce38dc-2287-477d-aee1-4e99386e104a d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Detaching volume 93867b12-31ce-42dc-b29e-58f1f73e6a31#033[00m
Oct 10 23:53:19 np0005480824 nova_compute[260089]: 2025-10-11 03:53:19.109 2 INFO nova.virt.block_device [None req-35ce38dc-2287-477d-aee1-4e99386e104a d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Attempting to driver detach volume 93867b12-31ce-42dc-b29e-58f1f73e6a31 from mountpoint /dev/vdb#033[00m
Oct 10 23:53:19 np0005480824 nova_compute[260089]: 2025-10-11 03:53:19.118 2 DEBUG nova.virt.libvirt.driver [None req-35ce38dc-2287-477d-aee1-4e99386e104a d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Attempting to detach device vdb from instance 3b8741f5-afdc-4745-b74c-2578bc643be4 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct 10 23:53:19 np0005480824 nova_compute[260089]: 2025-10-11 03:53:19.119 2 DEBUG nova.virt.libvirt.guest [None req-35ce38dc-2287-477d-aee1-4e99386e104a d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] detach device xml: <disk type="network" device="disk">
Oct 10 23:53:19 np0005480824 nova_compute[260089]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 10 23:53:19 np0005480824 nova_compute[260089]:  <source protocol="rbd" name="volumes/volume-93867b12-31ce-42dc-b29e-58f1f73e6a31">
Oct 10 23:53:19 np0005480824 nova_compute[260089]:    <host name="192.168.122.100" port="6789"/>
Oct 10 23:53:19 np0005480824 nova_compute[260089]:  </source>
Oct 10 23:53:19 np0005480824 nova_compute[260089]:  <target dev="vdb" bus="virtio"/>
Oct 10 23:53:19 np0005480824 nova_compute[260089]:  <serial>93867b12-31ce-42dc-b29e-58f1f73e6a31</serial>
Oct 10 23:53:19 np0005480824 nova_compute[260089]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 10 23:53:19 np0005480824 nova_compute[260089]: </disk>
Oct 10 23:53:19 np0005480824 nova_compute[260089]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct 10 23:53:19 np0005480824 nova_compute[260089]: 2025-10-11 03:53:19.125 2 INFO nova.virt.libvirt.driver [None req-35ce38dc-2287-477d-aee1-4e99386e104a d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Successfully detached device vdb from instance 3b8741f5-afdc-4745-b74c-2578bc643be4 from the persistent domain config.#033[00m
Oct 10 23:53:19 np0005480824 nova_compute[260089]: 2025-10-11 03:53:19.125 2 DEBUG nova.virt.libvirt.driver [None req-35ce38dc-2287-477d-aee1-4e99386e104a d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 3b8741f5-afdc-4745-b74c-2578bc643be4 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Oct 10 23:53:19 np0005480824 nova_compute[260089]: 2025-10-11 03:53:19.125 2 DEBUG nova.virt.libvirt.guest [None req-35ce38dc-2287-477d-aee1-4e99386e104a d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] detach device xml: <disk type="network" device="disk">
Oct 10 23:53:19 np0005480824 nova_compute[260089]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 10 23:53:19 np0005480824 nova_compute[260089]:  <source protocol="rbd" name="volumes/volume-93867b12-31ce-42dc-b29e-58f1f73e6a31">
Oct 10 23:53:19 np0005480824 nova_compute[260089]:    <host name="192.168.122.100" port="6789"/>
Oct 10 23:53:19 np0005480824 nova_compute[260089]:  </source>
Oct 10 23:53:19 np0005480824 nova_compute[260089]:  <target dev="vdb" bus="virtio"/>
Oct 10 23:53:19 np0005480824 nova_compute[260089]:  <serial>93867b12-31ce-42dc-b29e-58f1f73e6a31</serial>
Oct 10 23:53:19 np0005480824 nova_compute[260089]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 10 23:53:19 np0005480824 nova_compute[260089]: </disk>
Oct 10 23:53:19 np0005480824 nova_compute[260089]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct 10 23:53:19 np0005480824 nova_compute[260089]: 2025-10-11 03:53:19.242 2 DEBUG nova.virt.libvirt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Received event <DeviceRemovedEvent: 1760154799.2413685, 3b8741f5-afdc-4745-b74c-2578bc643be4 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Oct 10 23:53:19 np0005480824 nova_compute[260089]: 2025-10-11 03:53:19.244 2 DEBUG nova.virt.libvirt.driver [None req-35ce38dc-2287-477d-aee1-4e99386e104a d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 3b8741f5-afdc-4745-b74c-2578bc643be4 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Oct 10 23:53:19 np0005480824 nova_compute[260089]: 2025-10-11 03:53:19.247 2 INFO nova.virt.libvirt.driver [None req-35ce38dc-2287-477d-aee1-4e99386e104a d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Successfully detached device vdb from instance 3b8741f5-afdc-4745-b74c-2578bc643be4 from the live domain config.#033[00m
Oct 10 23:53:19 np0005480824 nova_compute[260089]: 2025-10-11 03:53:19.412 2 DEBUG nova.objects.instance [None req-35ce38dc-2287-477d-aee1-4e99386e104a d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Lazy-loading 'flavor' on Instance uuid 3b8741f5-afdc-4745-b74c-2578bc643be4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:53:19 np0005480824 nova_compute[260089]: 2025-10-11 03:53:19.450 2 DEBUG oslo_concurrency.lockutils [None req-35ce38dc-2287-477d-aee1-4e99386e104a d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Lock "3b8741f5-afdc-4745-b74c-2578bc643be4" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.591s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:53:19 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:53:19 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e265 do_prune osdmap full prune enabled
Oct 10 23:53:19 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e266 e266: 3 total, 3 up, 3 in
Oct 10 23:53:19 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e266: 3 total, 3 up, 3 in
Oct 10 23:53:20 np0005480824 ceph-osd[88325]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Oct 10 23:53:20 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1273: 321 pgs: 321 active+clean; 266 MiB data, 418 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.6 KiB/s wr, 99 op/s
Oct 10 23:53:20 np0005480824 nova_compute[260089]: 2025-10-11 03:53:20.954 2 DEBUG nova.compute.manager [req-1e5f0b4c-1ea6-4ace-a28c-faf73ddd28ca req-ff992560-2b06-4d59-bd78-87b8d191bda5 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Received event network-changed-7ef1c20b-9548-4ca9-9c3e-fc006ba2f7e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:53:20 np0005480824 nova_compute[260089]: 2025-10-11 03:53:20.955 2 DEBUG nova.compute.manager [req-1e5f0b4c-1ea6-4ace-a28c-faf73ddd28ca req-ff992560-2b06-4d59-bd78-87b8d191bda5 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Refreshing instance network info cache due to event network-changed-7ef1c20b-9548-4ca9-9c3e-fc006ba2f7e8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 10 23:53:20 np0005480824 nova_compute[260089]: 2025-10-11 03:53:20.955 2 DEBUG oslo_concurrency.lockutils [req-1e5f0b4c-1ea6-4ace-a28c-faf73ddd28ca req-ff992560-2b06-4d59-bd78-87b8d191bda5 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "refresh_cache-3b8741f5-afdc-4745-b74c-2578bc643be4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:53:20 np0005480824 nova_compute[260089]: 2025-10-11 03:53:20.955 2 DEBUG oslo_concurrency.lockutils [req-1e5f0b4c-1ea6-4ace-a28c-faf73ddd28ca req-ff992560-2b06-4d59-bd78-87b8d191bda5 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquired lock "refresh_cache-3b8741f5-afdc-4745-b74c-2578bc643be4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:53:20 np0005480824 nova_compute[260089]: 2025-10-11 03:53:20.955 2 DEBUG nova.network.neutron [req-1e5f0b4c-1ea6-4ace-a28c-faf73ddd28ca req-ff992560-2b06-4d59-bd78-87b8d191bda5 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Refreshing network info cache for port 7ef1c20b-9548-4ca9-9c3e-fc006ba2f7e8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 10 23:53:20 np0005480824 nova_compute[260089]: 2025-10-11 03:53:20.956 2 DEBUG oslo_concurrency.lockutils [None req-d14535f3-1386-4516-8dd8-c03fa1d84256 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Acquiring lock "3b8741f5-afdc-4745-b74c-2578bc643be4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:53:20 np0005480824 nova_compute[260089]: 2025-10-11 03:53:20.957 2 DEBUG oslo_concurrency.lockutils [None req-d14535f3-1386-4516-8dd8-c03fa1d84256 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Lock "3b8741f5-afdc-4745-b74c-2578bc643be4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:53:20 np0005480824 nova_compute[260089]: 2025-10-11 03:53:20.957 2 DEBUG oslo_concurrency.lockutils [None req-d14535f3-1386-4516-8dd8-c03fa1d84256 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Acquiring lock "3b8741f5-afdc-4745-b74c-2578bc643be4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:53:20 np0005480824 nova_compute[260089]: 2025-10-11 03:53:20.957 2 DEBUG oslo_concurrency.lockutils [None req-d14535f3-1386-4516-8dd8-c03fa1d84256 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Lock "3b8741f5-afdc-4745-b74c-2578bc643be4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:53:20 np0005480824 nova_compute[260089]: 2025-10-11 03:53:20.957 2 DEBUG oslo_concurrency.lockutils [None req-d14535f3-1386-4516-8dd8-c03fa1d84256 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Lock "3b8741f5-afdc-4745-b74c-2578bc643be4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:53:20 np0005480824 nova_compute[260089]: 2025-10-11 03:53:20.958 2 INFO nova.compute.manager [None req-d14535f3-1386-4516-8dd8-c03fa1d84256 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Terminating instance#033[00m
Oct 10 23:53:20 np0005480824 nova_compute[260089]: 2025-10-11 03:53:20.959 2 DEBUG nova.compute.manager [None req-d14535f3-1386-4516-8dd8-c03fa1d84256 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 10 23:53:21 np0005480824 kernel: tap7ef1c20b-95 (unregistering): left promiscuous mode
Oct 10 23:53:21 np0005480824 NetworkManager[44969]: <info>  [1760154801.0358] device (tap7ef1c20b-95): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 10 23:53:21 np0005480824 ovn_controller[152667]: 2025-10-11T03:53:21Z|00133|binding|INFO|Releasing lport 7ef1c20b-9548-4ca9-9c3e-fc006ba2f7e8 from this chassis (sb_readonly=0)
Oct 10 23:53:21 np0005480824 ovn_controller[152667]: 2025-10-11T03:53:21Z|00134|binding|INFO|Setting lport 7ef1c20b-9548-4ca9-9c3e-fc006ba2f7e8 down in Southbound
Oct 10 23:53:21 np0005480824 ovn_controller[152667]: 2025-10-11T03:53:21Z|00135|binding|INFO|Removing iface tap7ef1c20b-95 ovn-installed in OVS
Oct 10 23:53:21 np0005480824 nova_compute[260089]: 2025-10-11 03:53:21.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:53:21 np0005480824 nova_compute[260089]: 2025-10-11 03:53:21.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:53:21 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:21.067 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0d:51:d8 10.100.0.5'], port_security=['fa:16:3e:0d:51:d8 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '3b8741f5-afdc-4745-b74c-2578bc643be4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '944395b4a11c4a9182fda518dc7bd2d8', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e77eea50-c642-4f6c-8fc0-1335adf52ced', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9edb3820-196e-493d-adad-15b8aa8d51aa, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], logical_port=7ef1c20b-9548-4ca9-9c3e-fc006ba2f7e8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 23:53:21 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:21.069 162245 INFO neutron.agent.ovn.metadata.agent [-] Port 7ef1c20b-9548-4ca9-9c3e-fc006ba2f7e8 in datapath f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e unbound from our chassis#033[00m
Oct 10 23:53:21 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:21.071 162245 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e#033[00m
Oct 10 23:53:21 np0005480824 nova_compute[260089]: 2025-10-11 03:53:21.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:53:21 np0005480824 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Oct 10 23:53:21 np0005480824 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000c.scope: Consumed 17.162s CPU time.
Oct 10 23:53:21 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:21.098 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[e1d7f5d1-2ab7-44b2-a753-e9b556670ff0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:53:21 np0005480824 systemd-machined[215071]: Machine qemu-12-instance-0000000c terminated.
Oct 10 23:53:21 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:21.141 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[d2f05929-54d2-489c-99da-a9fe3e294450]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:53:21 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:21.145 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[1d528733-cd06-4cf2-b868-3ceedc56a3ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:53:21 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:21.187 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[7158eeb0-90bc-4eb7-9dfd-b07ee4adf18f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:53:21 np0005480824 nova_compute[260089]: 2025-10-11 03:53:21.207 2 INFO nova.virt.libvirt.driver [-] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Instance destroyed successfully.#033[00m
Oct 10 23:53:21 np0005480824 nova_compute[260089]: 2025-10-11 03:53:21.208 2 DEBUG nova.objects.instance [None req-d14535f3-1386-4516-8dd8-c03fa1d84256 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Lazy-loading 'resources' on Instance uuid 3b8741f5-afdc-4745-b74c-2578bc643be4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:53:21 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:21.216 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[464c9096-1b11-44cb-b5a4-43993155b1b9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf0e7e6a7-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:23:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 38], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 416214, 'reachable_time': 37142, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285352, 'error': None, 'target': 'ovnmeta-f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:53:21 np0005480824 nova_compute[260089]: 2025-10-11 03:53:21.225 2 DEBUG nova.virt.libvirt.vif [None req-d14535f3-1386-4516-8dd8-c03fa1d84256 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T03:52:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestStampPattern-server-2046188635',display_name='tempest-TestStampPattern-server-2046188635',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-2046188635',id=12,image_ref='bb54f500-8a3d-4161-bee0-566f2411c985',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAimCn5RB/FvLKTbWbetTfaBYWY7YsxfCSNDCqy+0n9wsCRn+L8WUumxgKvSs5fbSkxaZ0JLw7ssb691wNMVrABVHOJ2APu3cO2oHOABFF8LDk8lk3BSAJi4zZFoYj4Rjw==',key_name='tempest-TestStampPattern-1826930411',keypairs=<?>,launch_index=0,launched_at=2025-10-11T03:52:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='944395b4a11c4a9182fda518dc7bd2d8',ramdisk_id='',reservation_id='r-6e64hiaf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='d22b35e9-badc-40d1-952e-60cdfd60decb',image_min_disk='1',image_min_ram='0',image_owner_id='944395b4a11c4a9182fda518dc7bd2d8',image_owner_project_name='tempest-TestStampPattern-358096571',image_owner_user_name='tempest-TestStampPattern-358096571-project-member',image_user_id='d6596329d9c842b78638fdbcf50b8ec8',owner_project_name='tempest-TestStampPattern-358096571',owner_user_name='tempest-TestStampPattern-358096571-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T03:52:37Z,user_data=None,user_id='d6596329d9c842b78638fdbcf50b8ec8',uuid=3b8741f5-afdc-4745-b74c-2578bc643be4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='ac
tive') vif={"id": "7ef1c20b-9548-4ca9-9c3e-fc006ba2f7e8", "address": "fa:16:3e:0d:51:d8", "network": {"id": "f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e", "bridge": "br-int", "label": "tempest-TestStampPattern-337427362-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "944395b4a11c4a9182fda518dc7bd2d8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7ef1c20b-95", "ovs_interfaceid": "7ef1c20b-9548-4ca9-9c3e-fc006ba2f7e8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 10 23:53:21 np0005480824 nova_compute[260089]: 2025-10-11 03:53:21.225 2 DEBUG nova.network.os_vif_util [None req-d14535f3-1386-4516-8dd8-c03fa1d84256 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Converting VIF {"id": "7ef1c20b-9548-4ca9-9c3e-fc006ba2f7e8", "address": "fa:16:3e:0d:51:d8", "network": {"id": "f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e", "bridge": "br-int", "label": "tempest-TestStampPattern-337427362-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "944395b4a11c4a9182fda518dc7bd2d8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7ef1c20b-95", "ovs_interfaceid": "7ef1c20b-9548-4ca9-9c3e-fc006ba2f7e8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 23:53:21 np0005480824 nova_compute[260089]: 2025-10-11 03:53:21.226 2 DEBUG nova.network.os_vif_util [None req-d14535f3-1386-4516-8dd8-c03fa1d84256 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0d:51:d8,bridge_name='br-int',has_traffic_filtering=True,id=7ef1c20b-9548-4ca9-9c3e-fc006ba2f7e8,network=Network(f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7ef1c20b-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 23:53:21 np0005480824 nova_compute[260089]: 2025-10-11 03:53:21.226 2 DEBUG os_vif [None req-d14535f3-1386-4516-8dd8-c03fa1d84256 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0d:51:d8,bridge_name='br-int',has_traffic_filtering=True,id=7ef1c20b-9548-4ca9-9c3e-fc006ba2f7e8,network=Network(f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7ef1c20b-95') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 10 23:53:21 np0005480824 nova_compute[260089]: 2025-10-11 03:53:21.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:53:21 np0005480824 nova_compute[260089]: 2025-10-11 03:53:21.228 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ef1c20b-95, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:53:21 np0005480824 nova_compute[260089]: 2025-10-11 03:53:21.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:53:21 np0005480824 nova_compute[260089]: 2025-10-11 03:53:21.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 10 23:53:21 np0005480824 nova_compute[260089]: 2025-10-11 03:53:21.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:53:21 np0005480824 nova_compute[260089]: 2025-10-11 03:53:21.237 2 INFO os_vif [None req-d14535f3-1386-4516-8dd8-c03fa1d84256 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0d:51:d8,bridge_name='br-int',has_traffic_filtering=True,id=7ef1c20b-9548-4ca9-9c3e-fc006ba2f7e8,network=Network(f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7ef1c20b-95')#033[00m
Oct 10 23:53:21 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:21.243 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[ea2516d6-2346-4f1f-86f5-3c2a097ac8ef]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf0e7e6a7-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 416226, 'tstamp': 416226}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285357, 'error': None, 'target': 'ovnmeta-f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf0e7e6a7-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 416228, 'tstamp': 416228}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285357, 'error': None, 'target': 'ovnmeta-f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:53:21 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:21.246 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf0e7e6a7-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:53:21 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:21.250 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf0e7e6a7-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:53:21 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:21.250 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 10 23:53:21 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:21.251 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf0e7e6a7-10, col_values=(('external_ids', {'iface-id': 'fd35b05a-29b5-4478-aa1a-5883664f9c48'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:53:21 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:21.253 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 10 23:53:21 np0005480824 nova_compute[260089]: 2025-10-11 03:53:21.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:53:21 np0005480824 nova_compute[260089]: 2025-10-11 03:53:21.416 2 DEBUG nova.compute.manager [req-3e5ad1da-f4e7-4326-a6fd-5bed7852bd51 req-d6419e40-044a-4cf5-b3e6-58edaf7a3479 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Received event network-vif-unplugged-7ef1c20b-9548-4ca9-9c3e-fc006ba2f7e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:53:21 np0005480824 nova_compute[260089]: 2025-10-11 03:53:21.416 2 DEBUG oslo_concurrency.lockutils [req-3e5ad1da-f4e7-4326-a6fd-5bed7852bd51 req-d6419e40-044a-4cf5-b3e6-58edaf7a3479 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "3b8741f5-afdc-4745-b74c-2578bc643be4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:53:21 np0005480824 nova_compute[260089]: 2025-10-11 03:53:21.417 2 DEBUG oslo_concurrency.lockutils [req-3e5ad1da-f4e7-4326-a6fd-5bed7852bd51 req-d6419e40-044a-4cf5-b3e6-58edaf7a3479 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "3b8741f5-afdc-4745-b74c-2578bc643be4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:53:21 np0005480824 nova_compute[260089]: 2025-10-11 03:53:21.417 2 DEBUG oslo_concurrency.lockutils [req-3e5ad1da-f4e7-4326-a6fd-5bed7852bd51 req-d6419e40-044a-4cf5-b3e6-58edaf7a3479 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "3b8741f5-afdc-4745-b74c-2578bc643be4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:53:21 np0005480824 nova_compute[260089]: 2025-10-11 03:53:21.417 2 DEBUG nova.compute.manager [req-3e5ad1da-f4e7-4326-a6fd-5bed7852bd51 req-d6419e40-044a-4cf5-b3e6-58edaf7a3479 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] No waiting events found dispatching network-vif-unplugged-7ef1c20b-9548-4ca9-9c3e-fc006ba2f7e8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 10 23:53:21 np0005480824 nova_compute[260089]: 2025-10-11 03:53:21.417 2 DEBUG nova.compute.manager [req-3e5ad1da-f4e7-4326-a6fd-5bed7852bd51 req-d6419e40-044a-4cf5-b3e6-58edaf7a3479 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Received event network-vif-unplugged-7ef1c20b-9548-4ca9-9c3e-fc006ba2f7e8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 10 23:53:21 np0005480824 nova_compute[260089]: 2025-10-11 03:53:21.637 2 INFO nova.virt.libvirt.driver [None req-d14535f3-1386-4516-8dd8-c03fa1d84256 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Deleting instance files /var/lib/nova/instances/3b8741f5-afdc-4745-b74c-2578bc643be4_del#033[00m
Oct 10 23:53:21 np0005480824 nova_compute[260089]: 2025-10-11 03:53:21.638 2 INFO nova.virt.libvirt.driver [None req-d14535f3-1386-4516-8dd8-c03fa1d84256 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Deletion of /var/lib/nova/instances/3b8741f5-afdc-4745-b74c-2578bc643be4_del complete#033[00m
Oct 10 23:53:21 np0005480824 nova_compute[260089]: 2025-10-11 03:53:21.702 2 INFO nova.compute.manager [None req-d14535f3-1386-4516-8dd8-c03fa1d84256 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Took 0.74 seconds to destroy the instance on the hypervisor.#033[00m
Oct 10 23:53:21 np0005480824 nova_compute[260089]: 2025-10-11 03:53:21.703 2 DEBUG oslo.service.loopingcall [None req-d14535f3-1386-4516-8dd8-c03fa1d84256 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 10 23:53:21 np0005480824 nova_compute[260089]: 2025-10-11 03:53:21.703 2 DEBUG nova.compute.manager [-] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 10 23:53:21 np0005480824 nova_compute[260089]: 2025-10-11 03:53:21.704 2 DEBUG nova.network.neutron [-] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 10 23:53:22 np0005480824 nova_compute[260089]: 2025-10-11 03:53:22.374 2 DEBUG nova.network.neutron [req-1e5f0b4c-1ea6-4ace-a28c-faf73ddd28ca req-ff992560-2b06-4d59-bd78-87b8d191bda5 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Updated VIF entry in instance network info cache for port 7ef1c20b-9548-4ca9-9c3e-fc006ba2f7e8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 10 23:53:22 np0005480824 nova_compute[260089]: 2025-10-11 03:53:22.375 2 DEBUG nova.network.neutron [req-1e5f0b4c-1ea6-4ace-a28c-faf73ddd28ca req-ff992560-2b06-4d59-bd78-87b8d191bda5 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Updating instance_info_cache with network_info: [{"id": "7ef1c20b-9548-4ca9-9c3e-fc006ba2f7e8", "address": "fa:16:3e:0d:51:d8", "network": {"id": "f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e", "bridge": "br-int", "label": "tempest-TestStampPattern-337427362-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "944395b4a11c4a9182fda518dc7bd2d8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7ef1c20b-95", "ovs_interfaceid": "7ef1c20b-9548-4ca9-9c3e-fc006ba2f7e8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:53:22 np0005480824 nova_compute[260089]: 2025-10-11 03:53:22.397 2 DEBUG oslo_concurrency.lockutils [req-1e5f0b4c-1ea6-4ace-a28c-faf73ddd28ca req-ff992560-2b06-4d59-bd78-87b8d191bda5 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Releasing lock "refresh_cache-3b8741f5-afdc-4745-b74c-2578bc643be4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:53:22 np0005480824 nova_compute[260089]: 2025-10-11 03:53:22.410 2 DEBUG nova.network.neutron [-] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:53:22 np0005480824 nova_compute[260089]: 2025-10-11 03:53:22.426 2 INFO nova.compute.manager [-] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Took 0.72 seconds to deallocate network for instance.#033[00m
Oct 10 23:53:22 np0005480824 nova_compute[260089]: 2025-10-11 03:53:22.470 2 DEBUG oslo_concurrency.lockutils [None req-d14535f3-1386-4516-8dd8-c03fa1d84256 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:53:22 np0005480824 nova_compute[260089]: 2025-10-11 03:53:22.471 2 DEBUG oslo_concurrency.lockutils [None req-d14535f3-1386-4516-8dd8-c03fa1d84256 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:53:22 np0005480824 nova_compute[260089]: 2025-10-11 03:53:22.547 2 DEBUG oslo_concurrency.processutils [None req-d14535f3-1386-4516-8dd8-c03fa1d84256 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:53:22 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1274: 321 pgs: 321 active+clean; 296 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.6 MiB/s wr, 121 op/s
Oct 10 23:53:22 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:53:22 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3254527220' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:53:22 np0005480824 nova_compute[260089]: 2025-10-11 03:53:22.988 2 DEBUG oslo_concurrency.processutils [None req-d14535f3-1386-4516-8dd8-c03fa1d84256 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:53:22 np0005480824 nova_compute[260089]: 2025-10-11 03:53:22.998 2 DEBUG nova.compute.provider_tree [None req-d14535f3-1386-4516-8dd8-c03fa1d84256 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 23:53:23 np0005480824 nova_compute[260089]: 2025-10-11 03:53:23.015 2 DEBUG nova.scheduler.client.report [None req-d14535f3-1386-4516-8dd8-c03fa1d84256 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 23:53:23 np0005480824 nova_compute[260089]: 2025-10-11 03:53:23.043 2 DEBUG oslo_concurrency.lockutils [None req-d14535f3-1386-4516-8dd8-c03fa1d84256 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.572s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:53:23 np0005480824 nova_compute[260089]: 2025-10-11 03:53:23.069 2 INFO nova.scheduler.client.report [None req-d14535f3-1386-4516-8dd8-c03fa1d84256 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Deleted allocations for instance 3b8741f5-afdc-4745-b74c-2578bc643be4#033[00m
Oct 10 23:53:23 np0005480824 nova_compute[260089]: 2025-10-11 03:53:23.101 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760154788.0995026, 0f4ead16-8af5-427a-9543-772b0c23733d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:53:23 np0005480824 nova_compute[260089]: 2025-10-11 03:53:23.102 2 INFO nova.compute.manager [-] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] VM Stopped (Lifecycle Event)#033[00m
Oct 10 23:53:23 np0005480824 nova_compute[260089]: 2025-10-11 03:53:23.124 2 DEBUG nova.compute.manager [None req-9c15bda7-374b-4464-ac04-e5db8c2cb507 - - - - - -] [instance: 0f4ead16-8af5-427a-9543-772b0c23733d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:53:23 np0005480824 nova_compute[260089]: 2025-10-11 03:53:23.147 2 DEBUG oslo_concurrency.lockutils [None req-d14535f3-1386-4516-8dd8-c03fa1d84256 d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Lock "3b8741f5-afdc-4745-b74c-2578bc643be4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.191s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:53:23 np0005480824 nova_compute[260089]: 2025-10-11 03:53:23.632 2 DEBUG nova.compute.manager [req-747f8097-96c0-4089-90f8-d917a9fb7f45 req-c43548b5-a22e-40f9-a1f0-db875da80641 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Received event network-vif-plugged-7ef1c20b-9548-4ca9-9c3e-fc006ba2f7e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:53:23 np0005480824 nova_compute[260089]: 2025-10-11 03:53:23.634 2 DEBUG oslo_concurrency.lockutils [req-747f8097-96c0-4089-90f8-d917a9fb7f45 req-c43548b5-a22e-40f9-a1f0-db875da80641 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "3b8741f5-afdc-4745-b74c-2578bc643be4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:53:23 np0005480824 nova_compute[260089]: 2025-10-11 03:53:23.635 2 DEBUG oslo_concurrency.lockutils [req-747f8097-96c0-4089-90f8-d917a9fb7f45 req-c43548b5-a22e-40f9-a1f0-db875da80641 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "3b8741f5-afdc-4745-b74c-2578bc643be4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:53:23 np0005480824 nova_compute[260089]: 2025-10-11 03:53:23.636 2 DEBUG oslo_concurrency.lockutils [req-747f8097-96c0-4089-90f8-d917a9fb7f45 req-c43548b5-a22e-40f9-a1f0-db875da80641 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "3b8741f5-afdc-4745-b74c-2578bc643be4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:53:23 np0005480824 nova_compute[260089]: 2025-10-11 03:53:23.636 2 DEBUG nova.compute.manager [req-747f8097-96c0-4089-90f8-d917a9fb7f45 req-c43548b5-a22e-40f9-a1f0-db875da80641 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] No waiting events found dispatching network-vif-plugged-7ef1c20b-9548-4ca9-9c3e-fc006ba2f7e8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 10 23:53:23 np0005480824 nova_compute[260089]: 2025-10-11 03:53:23.637 2 WARNING nova.compute.manager [req-747f8097-96c0-4089-90f8-d917a9fb7f45 req-c43548b5-a22e-40f9-a1f0-db875da80641 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Received unexpected event network-vif-plugged-7ef1c20b-9548-4ca9-9c3e-fc006ba2f7e8 for instance with vm_state deleted and task_state None.#033[00m
Oct 10 23:53:23 np0005480824 nova_compute[260089]: 2025-10-11 03:53:23.637 2 DEBUG nova.compute.manager [req-747f8097-96c0-4089-90f8-d917a9fb7f45 req-c43548b5-a22e-40f9-a1f0-db875da80641 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Received event network-vif-deleted-7ef1c20b-9548-4ca9-9c3e-fc006ba2f7e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:53:23 np0005480824 nova_compute[260089]: 2025-10-11 03:53:23.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:53:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:53:24 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2431521888' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:53:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:53:24 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2431521888' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:53:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:53:24 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3333506909' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:53:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:53:24 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3333506909' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:53:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:53:24 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1275: 321 pgs: 321 active+clean; 296 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.6 MiB/s wr, 121 op/s
Oct 10 23:53:25 np0005480824 nova_compute[260089]: 2025-10-11 03:53:25.705 2 DEBUG oslo_concurrency.lockutils [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Acquiring lock "ba9c01f8-cb0e-4564-879e-fb3102e2e76a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:53:25 np0005480824 nova_compute[260089]: 2025-10-11 03:53:25.708 2 DEBUG oslo_concurrency.lockutils [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "ba9c01f8-cb0e-4564-879e-fb3102e2e76a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:53:25 np0005480824 nova_compute[260089]: 2025-10-11 03:53:25.728 2 DEBUG nova.compute.manager [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 10 23:53:25 np0005480824 nova_compute[260089]: 2025-10-11 03:53:25.817 2 DEBUG oslo_concurrency.lockutils [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:53:25 np0005480824 nova_compute[260089]: 2025-10-11 03:53:25.818 2 DEBUG oslo_concurrency.lockutils [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:53:25 np0005480824 nova_compute[260089]: 2025-10-11 03:53:25.831 2 DEBUG nova.virt.hardware [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 10 23:53:25 np0005480824 nova_compute[260089]: 2025-10-11 03:53:25.832 2 INFO nova.compute.claims [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 10 23:53:25 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e266 do_prune osdmap full prune enabled
Oct 10 23:53:25 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e267 e267: 3 total, 3 up, 3 in
Oct 10 23:53:25 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e267: 3 total, 3 up, 3 in
Oct 10 23:53:25 np0005480824 nova_compute[260089]: 2025-10-11 03:53:25.945 2 DEBUG oslo_concurrency.processutils [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:53:26 np0005480824 podman[285399]: 2025-10-11 03:53:26.070628009 +0000 UTC m=+0.105837827 container health_status 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 10 23:53:26 np0005480824 podman[285400]: 2025-10-11 03:53:26.077585812 +0000 UTC m=+0.108498198 container health_status f2b19cad22d0fbdd185b264c3c8b6443d09a02319dc9fb0585dc69a0f24a758d (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_managed=true, container_name=iscsid, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 10 23:53:26 np0005480824 nova_compute[260089]: 2025-10-11 03:53:26.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:53:26 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:53:26 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3603998516' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:53:26 np0005480824 nova_compute[260089]: 2025-10-11 03:53:26.418 2 DEBUG oslo_concurrency.processutils [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:53:26 np0005480824 nova_compute[260089]: 2025-10-11 03:53:26.426 2 DEBUG nova.compute.provider_tree [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 23:53:26 np0005480824 nova_compute[260089]: 2025-10-11 03:53:26.443 2 DEBUG nova.scheduler.client.report [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 23:53:26 np0005480824 nova_compute[260089]: 2025-10-11 03:53:26.469 2 DEBUG oslo_concurrency.lockutils [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:53:26 np0005480824 nova_compute[260089]: 2025-10-11 03:53:26.470 2 DEBUG nova.compute.manager [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 10 23:53:26 np0005480824 nova_compute[260089]: 2025-10-11 03:53:26.526 2 DEBUG nova.compute.manager [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 10 23:53:26 np0005480824 nova_compute[260089]: 2025-10-11 03:53:26.526 2 DEBUG nova.network.neutron [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 10 23:53:26 np0005480824 nova_compute[260089]: 2025-10-11 03:53:26.545 2 INFO nova.virt.libvirt.driver [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 10 23:53:26 np0005480824 nova_compute[260089]: 2025-10-11 03:53:26.565 2 DEBUG nova.compute.manager [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 10 23:53:26 np0005480824 nova_compute[260089]: 2025-10-11 03:53:26.604 2 INFO nova.virt.block_device [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Booting with volume be5dc6c3-9ee3-45f1-9e6c-1fecc35321b7 at /dev/vda#033[00m
Oct 10 23:53:26 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1277: 321 pgs: 321 active+clean; 296 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 913 KiB/s rd, 3.2 MiB/s wr, 109 op/s
Oct 10 23:53:26 np0005480824 nova_compute[260089]: 2025-10-11 03:53:26.729 2 DEBUG os_brick.utils [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct 10 23:53:26 np0005480824 nova_compute[260089]: 2025-10-11 03:53:26.731 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:53:26 np0005480824 nova_compute[260089]: 2025-10-11 03:53:26.744 676 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:53:26 np0005480824 nova_compute[260089]: 2025-10-11 03:53:26.744 676 DEBUG oslo.privsep.daemon [-] privsep: reply[dc581276-0458-4a89-a805-a2ce3335bad2]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:53:26 np0005480824 nova_compute[260089]: 2025-10-11 03:53:26.745 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:53:26 np0005480824 nova_compute[260089]: 2025-10-11 03:53:26.759 676 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:53:26 np0005480824 nova_compute[260089]: 2025-10-11 03:53:26.760 676 DEBUG oslo.privsep.daemon [-] privsep: reply[c0519ad0-adbc-435f-a65d-a4a6b84f8557]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d5d671ddab5a', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:53:26 np0005480824 nova_compute[260089]: 2025-10-11 03:53:26.762 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:53:26 np0005480824 nova_compute[260089]: 2025-10-11 03:53:26.778 676 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:53:26 np0005480824 nova_compute[260089]: 2025-10-11 03:53:26.778 676 DEBUG oslo.privsep.daemon [-] privsep: reply[af319c60-8337-4e18-aa38-131acb78708c]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:53:26 np0005480824 nova_compute[260089]: 2025-10-11 03:53:26.780 676 DEBUG oslo.privsep.daemon [-] privsep: reply[a37cdf63-2301-4c43-908b-cdfc486bd916]: (4, 'fb3a2fb1-9efa-43f0-a057-bf422ac6b8d7') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:53:26 np0005480824 nova_compute[260089]: 2025-10-11 03:53:26.781 2 DEBUG oslo_concurrency.processutils [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:53:26 np0005480824 nova_compute[260089]: 2025-10-11 03:53:26.816 2 DEBUG oslo_concurrency.processutils [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] CMD "nvme version" returned: 0 in 0.036s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:53:26 np0005480824 nova_compute[260089]: 2025-10-11 03:53:26.820 2 DEBUG os_brick.initiator.connectors.lightos [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct 10 23:53:26 np0005480824 nova_compute[260089]: 2025-10-11 03:53:26.821 2 DEBUG os_brick.initiator.connectors.lightos [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct 10 23:53:26 np0005480824 nova_compute[260089]: 2025-10-11 03:53:26.821 2 DEBUG os_brick.initiator.connectors.lightos [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct 10 23:53:26 np0005480824 nova_compute[260089]: 2025-10-11 03:53:26.821 2 DEBUG os_brick.utils [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] <== get_connector_properties: return (91ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d5d671ddab5a', 'do_local_attach': False, 'nvme_hostid': '83042a20-0f72-4c47-8453-e72ead378624', 'system uuid': 'fb3a2fb1-9efa-43f0-a057-bf422ac6b8d7', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct 10 23:53:26 np0005480824 nova_compute[260089]: 2025-10-11 03:53:26.822 2 DEBUG nova.virt.block_device [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Updating existing volume attachment record: 4c460dbe-0d3f-4712-9faf-b701e690c834 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct 10 23:53:26 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e267 do_prune osdmap full prune enabled
Oct 10 23:53:26 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e268 e268: 3 total, 3 up, 3 in
Oct 10 23:53:26 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e268: 3 total, 3 up, 3 in
Oct 10 23:53:26 np0005480824 nova_compute[260089]: 2025-10-11 03:53:26.974 2 DEBUG nova.policy [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '38ebc503771e417aaf1f3aea0c835994', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '55d21391a321476eb133317b3402b0f0', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 10 23:53:27 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:53:27 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1111786364' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:53:27 np0005480824 nova_compute[260089]: 2025-10-11 03:53:27.789 2 DEBUG nova.compute.manager [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 10 23:53:27 np0005480824 nova_compute[260089]: 2025-10-11 03:53:27.792 2 DEBUG nova.virt.libvirt.driver [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 10 23:53:27 np0005480824 nova_compute[260089]: 2025-10-11 03:53:27.793 2 INFO nova.virt.libvirt.driver [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Creating image(s)#033[00m
Oct 10 23:53:27 np0005480824 nova_compute[260089]: 2025-10-11 03:53:27.794 2 DEBUG nova.virt.libvirt.driver [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Oct 10 23:53:27 np0005480824 nova_compute[260089]: 2025-10-11 03:53:27.795 2 DEBUG nova.virt.libvirt.driver [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Ensure instance console log exists: /var/lib/nova/instances/ba9c01f8-cb0e-4564-879e-fb3102e2e76a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 10 23:53:27 np0005480824 nova_compute[260089]: 2025-10-11 03:53:27.795 2 DEBUG oslo_concurrency.lockutils [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:53:27 np0005480824 nova_compute[260089]: 2025-10-11 03:53:27.796 2 DEBUG oslo_concurrency.lockutils [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:53:27 np0005480824 nova_compute[260089]: 2025-10-11 03:53:27.796 2 DEBUG oslo_concurrency.lockutils [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:53:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:53:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:53:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:53:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:53:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:53:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:53:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Optimize plan auto_2025-10-11_03:53:27
Oct 10 23:53:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 23:53:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] do_upmap
Oct 10 23:53:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta', 'default.rgw.log', 'vms', 'images', 'default.rgw.control', 'volumes', 'backups']
Oct 10 23:53:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] prepared 0/10 changes
Oct 10 23:53:27 np0005480824 nova_compute[260089]: 2025-10-11 03:53:27.940 2 DEBUG nova.network.neutron [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Successfully created port: 16c1f566-62ec-4bf8-ae0e-225e1fad3288 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 10 23:53:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 23:53:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:53:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 23:53:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:53:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:53:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:53:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:53:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:53:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:53:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:53:28 np0005480824 nova_compute[260089]: 2025-10-11 03:53:28.296 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:53:28 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1279: 321 pgs: 321 active+clean; 215 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 968 KiB/s rd, 3.2 MiB/s wr, 188 op/s
Oct 10 23:53:28 np0005480824 nova_compute[260089]: 2025-10-11 03:53:28.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:53:28 np0005480824 nova_compute[260089]: 2025-10-11 03:53:28.904 2 DEBUG nova.compute.manager [req-3b6039e9-bd04-4e46-9c44-3fcaaafe32b6 req-e438c278-43e2-40db-a1f8-5b49bf2e9a16 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Received event network-changed-a6d0ac82-b500-4962-8bfd-d36ef3ba2948 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:53:28 np0005480824 nova_compute[260089]: 2025-10-11 03:53:28.905 2 DEBUG nova.compute.manager [req-3b6039e9-bd04-4e46-9c44-3fcaaafe32b6 req-e438c278-43e2-40db-a1f8-5b49bf2e9a16 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Refreshing instance network info cache due to event network-changed-a6d0ac82-b500-4962-8bfd-d36ef3ba2948. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 10 23:53:28 np0005480824 nova_compute[260089]: 2025-10-11 03:53:28.905 2 DEBUG oslo_concurrency.lockutils [req-3b6039e9-bd04-4e46-9c44-3fcaaafe32b6 req-e438c278-43e2-40db-a1f8-5b49bf2e9a16 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "refresh_cache-d22b35e9-badc-40d1-952e-60cdfd60decb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:53:28 np0005480824 nova_compute[260089]: 2025-10-11 03:53:28.906 2 DEBUG oslo_concurrency.lockutils [req-3b6039e9-bd04-4e46-9c44-3fcaaafe32b6 req-e438c278-43e2-40db-a1f8-5b49bf2e9a16 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquired lock "refresh_cache-d22b35e9-badc-40d1-952e-60cdfd60decb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:53:28 np0005480824 nova_compute[260089]: 2025-10-11 03:53:28.906 2 DEBUG nova.network.neutron [req-3b6039e9-bd04-4e46-9c44-3fcaaafe32b6 req-e438c278-43e2-40db-a1f8-5b49bf2e9a16 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Refreshing network info cache for port a6d0ac82-b500-4962-8bfd-d36ef3ba2948 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 10 23:53:28 np0005480824 nova_compute[260089]: 2025-10-11 03:53:28.937 2 DEBUG oslo_concurrency.lockutils [None req-8085dc8c-b4aa-466b-9df5-618f611119eb d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Acquiring lock "d22b35e9-badc-40d1-952e-60cdfd60decb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:53:28 np0005480824 nova_compute[260089]: 2025-10-11 03:53:28.938 2 DEBUG oslo_concurrency.lockutils [None req-8085dc8c-b4aa-466b-9df5-618f611119eb d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Lock "d22b35e9-badc-40d1-952e-60cdfd60decb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:53:28 np0005480824 nova_compute[260089]: 2025-10-11 03:53:28.939 2 DEBUG oslo_concurrency.lockutils [None req-8085dc8c-b4aa-466b-9df5-618f611119eb d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Acquiring lock "d22b35e9-badc-40d1-952e-60cdfd60decb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:53:28 np0005480824 nova_compute[260089]: 2025-10-11 03:53:28.939 2 DEBUG oslo_concurrency.lockutils [None req-8085dc8c-b4aa-466b-9df5-618f611119eb d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Lock "d22b35e9-badc-40d1-952e-60cdfd60decb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:53:28 np0005480824 nova_compute[260089]: 2025-10-11 03:53:28.939 2 DEBUG oslo_concurrency.lockutils [None req-8085dc8c-b4aa-466b-9df5-618f611119eb d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Lock "d22b35e9-badc-40d1-952e-60cdfd60decb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:53:28 np0005480824 nova_compute[260089]: 2025-10-11 03:53:28.942 2 INFO nova.compute.manager [None req-8085dc8c-b4aa-466b-9df5-618f611119eb d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Terminating instance#033[00m
Oct 10 23:53:28 np0005480824 nova_compute[260089]: 2025-10-11 03:53:28.944 2 DEBUG nova.compute.manager [None req-8085dc8c-b4aa-466b-9df5-618f611119eb d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 10 23:53:28 np0005480824 kernel: tapa6d0ac82-b5 (unregistering): left promiscuous mode
Oct 10 23:53:29 np0005480824 NetworkManager[44969]: <info>  [1760154809.0070] device (tapa6d0ac82-b5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 10 23:53:29 np0005480824 nova_compute[260089]: 2025-10-11 03:53:29.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:53:29 np0005480824 ovn_controller[152667]: 2025-10-11T03:53:29Z|00136|binding|INFO|Releasing lport a6d0ac82-b500-4962-8bfd-d36ef3ba2948 from this chassis (sb_readonly=0)
Oct 10 23:53:29 np0005480824 ovn_controller[152667]: 2025-10-11T03:53:29Z|00137|binding|INFO|Setting lport a6d0ac82-b500-4962-8bfd-d36ef3ba2948 down in Southbound
Oct 10 23:53:29 np0005480824 ovn_controller[152667]: 2025-10-11T03:53:29Z|00138|binding|INFO|Removing iface tapa6d0ac82-b5 ovn-installed in OVS
Oct 10 23:53:29 np0005480824 nova_compute[260089]: 2025-10-11 03:53:29.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:53:29 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:29.030 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:10:2b:86 10.100.0.11'], port_security=['fa:16:3e:10:2b:86 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'd22b35e9-badc-40d1-952e-60cdfd60decb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '944395b4a11c4a9182fda518dc7bd2d8', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e77eea50-c642-4f6c-8fc0-1335adf52ced', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9edb3820-196e-493d-adad-15b8aa8d51aa, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], logical_port=a6d0ac82-b500-4962-8bfd-d36ef3ba2948) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 23:53:29 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:29.031 162245 INFO neutron.agent.ovn.metadata.agent [-] Port a6d0ac82-b500-4962-8bfd-d36ef3ba2948 in datapath f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e unbound from our chassis#033[00m
Oct 10 23:53:29 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:29.032 162245 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 10 23:53:29 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:29.033 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[ed4666c6-5ae2-414f-99fe-ee0afed936f1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:53:29 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:29.033 162245 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e namespace which is not needed anymore#033[00m
Oct 10 23:53:29 np0005480824 nova_compute[260089]: 2025-10-11 03:53:29.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:53:29 np0005480824 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Oct 10 23:53:29 np0005480824 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Consumed 19.111s CPU time.
Oct 10 23:53:29 np0005480824 systemd-machined[215071]: Machine qemu-10-instance-0000000a terminated.
Oct 10 23:53:29 np0005480824 nova_compute[260089]: 2025-10-11 03:53:29.150 2 DEBUG nova.network.neutron [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Successfully updated port: 16c1f566-62ec-4bf8-ae0e-225e1fad3288 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 10 23:53:29 np0005480824 nova_compute[260089]: 2025-10-11 03:53:29.165 2 DEBUG oslo_concurrency.lockutils [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Acquiring lock "refresh_cache-ba9c01f8-cb0e-4564-879e-fb3102e2e76a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:53:29 np0005480824 nova_compute[260089]: 2025-10-11 03:53:29.166 2 DEBUG oslo_concurrency.lockutils [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Acquired lock "refresh_cache-ba9c01f8-cb0e-4564-879e-fb3102e2e76a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:53:29 np0005480824 nova_compute[260089]: 2025-10-11 03:53:29.166 2 DEBUG nova.network.neutron [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 10 23:53:29 np0005480824 neutron-haproxy-ovnmeta-f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e[278218]: [NOTICE]   (278222) : haproxy version is 2.8.14-c23fe91
Oct 10 23:53:29 np0005480824 neutron-haproxy-ovnmeta-f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e[278218]: [NOTICE]   (278222) : path to executable is /usr/sbin/haproxy
Oct 10 23:53:29 np0005480824 neutron-haproxy-ovnmeta-f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e[278218]: [WARNING]  (278222) : Exiting Master process...
Oct 10 23:53:29 np0005480824 neutron-haproxy-ovnmeta-f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e[278218]: [WARNING]  (278222) : Exiting Master process...
Oct 10 23:53:29 np0005480824 nova_compute[260089]: 2025-10-11 03:53:29.197 2 INFO nova.virt.libvirt.driver [-] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Instance destroyed successfully.#033[00m
Oct 10 23:53:29 np0005480824 nova_compute[260089]: 2025-10-11 03:53:29.198 2 DEBUG nova.objects.instance [None req-8085dc8c-b4aa-466b-9df5-618f611119eb d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Lazy-loading 'resources' on Instance uuid d22b35e9-badc-40d1-952e-60cdfd60decb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:53:29 np0005480824 neutron-haproxy-ovnmeta-f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e[278218]: [ALERT]    (278222) : Current worker (278224) exited with code 143 (Terminated)
Oct 10 23:53:29 np0005480824 neutron-haproxy-ovnmeta-f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e[278218]: [WARNING]  (278222) : All workers exited. Exiting... (0)
Oct 10 23:53:29 np0005480824 systemd[1]: libpod-64ace6697da668597146157833bc20a7df2b1a93ed900f6b4a91dd115a16d1ba.scope: Deactivated successfully.
Oct 10 23:53:29 np0005480824 podman[285493]: 2025-10-11 03:53:29.207724403 +0000 UTC m=+0.054601713 container died 64ace6697da668597146157833bc20a7df2b1a93ed900f6b4a91dd115a16d1ba (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:53:29 np0005480824 nova_compute[260089]: 2025-10-11 03:53:29.213 2 DEBUG nova.virt.libvirt.vif [None req-8085dc8c-b4aa-466b-9df5-618f611119eb d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T03:51:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestStampPattern-server-1402368014',display_name='tempest-TestStampPattern-server-1402368014',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-1402368014',id=10,image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAimCn5RB/FvLKTbWbetTfaBYWY7YsxfCSNDCqy+0n9wsCRn+L8WUumxgKvSs5fbSkxaZ0JLw7ssb691wNMVrABVHOJ2APu3cO2oHOABFF8LDk8lk3BSAJi4zZFoYj4Rjw==',key_name='tempest-TestStampPattern-1826930411',keypairs=<?>,launch_index=0,launched_at=2025-10-11T03:51:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='944395b4a11c4a9182fda518dc7bd2d8',ramdisk_id='',reservation_id='r-m0jh1bgj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestStampPattern-358096571',owner_user_name='tempest-TestStampPattern-358096571-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T03:52:24Z,user_data=None,user_id='d6596329d9c842b78638fdbcf50b8ec8',uuid=d22b35e9-badc-40d1-952e-60cdfd60decb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a6d0ac82-b500-4962-8bfd-d36ef3ba2948", "address": "fa:16:3e:10:2b:86", "network": {"id": "f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e", "bridge": "br-int", "label": "tempest-TestStampPattern-337427362-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "944395b4a11c4a9182fda518dc7bd2d8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6d0ac82-b5", "ovs_interfaceid": "a6d0ac82-b500-4962-8bfd-d36ef3ba2948", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 10 23:53:29 np0005480824 nova_compute[260089]: 2025-10-11 03:53:29.214 2 DEBUG nova.network.os_vif_util [None req-8085dc8c-b4aa-466b-9df5-618f611119eb d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Converting VIF {"id": "a6d0ac82-b500-4962-8bfd-d36ef3ba2948", "address": "fa:16:3e:10:2b:86", "network": {"id": "f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e", "bridge": "br-int", "label": "tempest-TestStampPattern-337427362-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "944395b4a11c4a9182fda518dc7bd2d8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6d0ac82-b5", "ovs_interfaceid": "a6d0ac82-b500-4962-8bfd-d36ef3ba2948", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 23:53:29 np0005480824 nova_compute[260089]: 2025-10-11 03:53:29.215 2 DEBUG nova.network.os_vif_util [None req-8085dc8c-b4aa-466b-9df5-618f611119eb d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:10:2b:86,bridge_name='br-int',has_traffic_filtering=True,id=a6d0ac82-b500-4962-8bfd-d36ef3ba2948,network=Network(f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6d0ac82-b5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 23:53:29 np0005480824 nova_compute[260089]: 2025-10-11 03:53:29.216 2 DEBUG os_vif [None req-8085dc8c-b4aa-466b-9df5-618f611119eb d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:10:2b:86,bridge_name='br-int',has_traffic_filtering=True,id=a6d0ac82-b500-4962-8bfd-d36ef3ba2948,network=Network(f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6d0ac82-b5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 10 23:53:29 np0005480824 nova_compute[260089]: 2025-10-11 03:53:29.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:53:29 np0005480824 nova_compute[260089]: 2025-10-11 03:53:29.217 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa6d0ac82-b5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:53:29 np0005480824 nova_compute[260089]: 2025-10-11 03:53:29.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:53:29 np0005480824 nova_compute[260089]: 2025-10-11 03:53:29.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:53:29 np0005480824 nova_compute[260089]: 2025-10-11 03:53:29.224 2 INFO os_vif [None req-8085dc8c-b4aa-466b-9df5-618f611119eb d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:10:2b:86,bridge_name='br-int',has_traffic_filtering=True,id=a6d0ac82-b500-4962-8bfd-d36ef3ba2948,network=Network(f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6d0ac82-b5')#033[00m
Oct 10 23:53:29 np0005480824 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-64ace6697da668597146157833bc20a7df2b1a93ed900f6b4a91dd115a16d1ba-userdata-shm.mount: Deactivated successfully.
Oct 10 23:53:29 np0005480824 systemd[1]: var-lib-containers-storage-overlay-2595de1d409fa70f4f87ad56ca9e3dfec7671a9510145d36a7f3cdba77f5e0ac-merged.mount: Deactivated successfully.
Oct 10 23:53:29 np0005480824 podman[285493]: 2025-10-11 03:53:29.258865113 +0000 UTC m=+0.105742403 container cleanup 64ace6697da668597146157833bc20a7df2b1a93ed900f6b4a91dd115a16d1ba (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 10 23:53:29 np0005480824 systemd[1]: libpod-conmon-64ace6697da668597146157833bc20a7df2b1a93ed900f6b4a91dd115a16d1ba.scope: Deactivated successfully.
Oct 10 23:53:29 np0005480824 nova_compute[260089]: 2025-10-11 03:53:29.279 2 DEBUG nova.compute.manager [req-1b017af6-0fb3-4b6e-9b94-bd206592b829 req-d1171691-b804-4741-bfbd-72636a5eaf96 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Received event network-changed-16c1f566-62ec-4bf8-ae0e-225e1fad3288 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:53:29 np0005480824 nova_compute[260089]: 2025-10-11 03:53:29.280 2 DEBUG nova.compute.manager [req-1b017af6-0fb3-4b6e-9b94-bd206592b829 req-d1171691-b804-4741-bfbd-72636a5eaf96 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Refreshing instance network info cache due to event network-changed-16c1f566-62ec-4bf8-ae0e-225e1fad3288. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 10 23:53:29 np0005480824 nova_compute[260089]: 2025-10-11 03:53:29.280 2 DEBUG oslo_concurrency.lockutils [req-1b017af6-0fb3-4b6e-9b94-bd206592b829 req-d1171691-b804-4741-bfbd-72636a5eaf96 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "refresh_cache-ba9c01f8-cb0e-4564-879e-fb3102e2e76a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:53:29 np0005480824 nova_compute[260089]: 2025-10-11 03:53:29.297 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:53:29 np0005480824 podman[285548]: 2025-10-11 03:53:29.33073937 +0000 UTC m=+0.048372516 container remove 64ace6697da668597146157833bc20a7df2b1a93ed900f6b4a91dd115a16d1ba (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:53:29 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:29.342 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[9bb5d7a8-158f-436e-8546-3a62e82c4335]: (4, ('Sat Oct 11 03:53:29 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e (64ace6697da668597146157833bc20a7df2b1a93ed900f6b4a91dd115a16d1ba)\n64ace6697da668597146157833bc20a7df2b1a93ed900f6b4a91dd115a16d1ba\nSat Oct 11 03:53:29 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e (64ace6697da668597146157833bc20a7df2b1a93ed900f6b4a91dd115a16d1ba)\n64ace6697da668597146157833bc20a7df2b1a93ed900f6b4a91dd115a16d1ba\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:53:29 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:29.344 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[d9eecc49-f8ab-4da5-82f9-48944a6e40a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:53:29 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:29.345 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf0e7e6a7-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:53:29 np0005480824 kernel: tapf0e7e6a7-10: left promiscuous mode
Oct 10 23:53:29 np0005480824 nova_compute[260089]: 2025-10-11 03:53:29.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:53:29 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:29.351 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[060f30a1-da74-48dc-868e-c9f8106e7b6b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:53:29 np0005480824 nova_compute[260089]: 2025-10-11 03:53:29.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:53:29 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:29.380 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[d4e27376-b3d8-4b9d-a0c2-a16d6eaf5343]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:53:29 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:29.382 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[279b1f67-b9de-4786-a0bc-58042b1955c3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:53:29 np0005480824 nova_compute[260089]: 2025-10-11 03:53:29.403 2 DEBUG nova.network.neutron [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 10 23:53:29 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:29.411 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[542c78a6-4a7a-4624-96b1-ba73884b9250]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 416207, 'reachable_time': 22535, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285569, 'error': None, 'target': 'ovnmeta-f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:53:29 np0005480824 systemd[1]: run-netns-ovnmeta\x2df0e7e6a7\x2d1b58\x2d43b7\x2da4cd\x2d36a1a50fe57e.mount: Deactivated successfully.
Oct 10 23:53:29 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:29.416 162666 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 10 23:53:29 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:29.416 162666 DEBUG oslo.privsep.daemon [-] privsep: reply[07db9810-e58c-4420-8ba8-73a49eeae5da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:53:29 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:53:29 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e268 do_prune osdmap full prune enabled
Oct 10 23:53:29 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e269 e269: 3 total, 3 up, 3 in
Oct 10 23:53:29 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e269: 3 total, 3 up, 3 in
Oct 10 23:53:30 np0005480824 nova_compute[260089]: 2025-10-11 03:53:30.065 2 INFO nova.virt.libvirt.driver [None req-8085dc8c-b4aa-466b-9df5-618f611119eb d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Deleting instance files /var/lib/nova/instances/d22b35e9-badc-40d1-952e-60cdfd60decb_del#033[00m
Oct 10 23:53:30 np0005480824 nova_compute[260089]: 2025-10-11 03:53:30.066 2 INFO nova.virt.libvirt.driver [None req-8085dc8c-b4aa-466b-9df5-618f611119eb d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Deletion of /var/lib/nova/instances/d22b35e9-badc-40d1-952e-60cdfd60decb_del complete#033[00m
Oct 10 23:53:30 np0005480824 nova_compute[260089]: 2025-10-11 03:53:30.114 2 INFO nova.compute.manager [None req-8085dc8c-b4aa-466b-9df5-618f611119eb d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Took 1.17 seconds to destroy the instance on the hypervisor.#033[00m
Oct 10 23:53:30 np0005480824 nova_compute[260089]: 2025-10-11 03:53:30.115 2 DEBUG oslo.service.loopingcall [None req-8085dc8c-b4aa-466b-9df5-618f611119eb d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 10 23:53:30 np0005480824 nova_compute[260089]: 2025-10-11 03:53:30.115 2 DEBUG nova.compute.manager [-] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 10 23:53:30 np0005480824 nova_compute[260089]: 2025-10-11 03:53:30.115 2 DEBUG nova.network.neutron [-] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 10 23:53:30 np0005480824 nova_compute[260089]: 2025-10-11 03:53:30.299 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:53:30 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1281: 321 pgs: 321 active+clean; 215 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 74 KiB/s rd, 6.3 KiB/s wr, 105 op/s
Oct 10 23:53:30 np0005480824 nova_compute[260089]: 2025-10-11 03:53:30.812 2 DEBUG nova.network.neutron [-] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:53:30 np0005480824 nova_compute[260089]: 2025-10-11 03:53:30.840 2 INFO nova.compute.manager [-] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Took 0.72 seconds to deallocate network for instance.#033[00m
Oct 10 23:53:30 np0005480824 nova_compute[260089]: 2025-10-11 03:53:30.896 2 DEBUG oslo_concurrency.lockutils [None req-8085dc8c-b4aa-466b-9df5-618f611119eb d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:53:30 np0005480824 nova_compute[260089]: 2025-10-11 03:53:30.897 2 DEBUG oslo_concurrency.lockutils [None req-8085dc8c-b4aa-466b-9df5-618f611119eb d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.116 2 DEBUG oslo_concurrency.processutils [None req-8085dc8c-b4aa-466b-9df5-618f611119eb d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.141 2 DEBUG nova.compute.manager [req-71d366b9-3beb-4748-8bab-76393027d6a6 req-11b172a4-e539-49c6-97b1-fce5b4e91ddd 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Received event network-vif-deleted-a6d0ac82-b500-4962-8bfd-d36ef3ba2948 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.250 2 DEBUG nova.network.neutron [req-3b6039e9-bd04-4e46-9c44-3fcaaafe32b6 req-e438c278-43e2-40db-a1f8-5b49bf2e9a16 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Updated VIF entry in instance network info cache for port a6d0ac82-b500-4962-8bfd-d36ef3ba2948. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.251 2 DEBUG nova.network.neutron [req-3b6039e9-bd04-4e46-9c44-3fcaaafe32b6 req-e438c278-43e2-40db-a1f8-5b49bf2e9a16 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Updating instance_info_cache with network_info: [{"id": "a6d0ac82-b500-4962-8bfd-d36ef3ba2948", "address": "fa:16:3e:10:2b:86", "network": {"id": "f0e7e6a7-1b58-43b7-a4cd-36a1a50fe57e", "bridge": "br-int", "label": "tempest-TestStampPattern-337427362-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "944395b4a11c4a9182fda518dc7bd2d8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6d0ac82-b5", "ovs_interfaceid": "a6d0ac82-b500-4962-8bfd-d36ef3ba2948", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.256 2 DEBUG nova.network.neutron [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Updating instance_info_cache with network_info: [{"id": "16c1f566-62ec-4bf8-ae0e-225e1fad3288", "address": "fa:16:3e:2e:c5:07", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap16c1f566-62", "ovs_interfaceid": "16c1f566-62ec-4bf8-ae0e-225e1fad3288", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.271 2 DEBUG oslo_concurrency.lockutils [req-3b6039e9-bd04-4e46-9c44-3fcaaafe32b6 req-e438c278-43e2-40db-a1f8-5b49bf2e9a16 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Releasing lock "refresh_cache-d22b35e9-badc-40d1-952e-60cdfd60decb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.273 2 DEBUG oslo_concurrency.lockutils [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Releasing lock "refresh_cache-ba9c01f8-cb0e-4564-879e-fb3102e2e76a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.273 2 DEBUG nova.compute.manager [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Instance network_info: |[{"id": "16c1f566-62ec-4bf8-ae0e-225e1fad3288", "address": "fa:16:3e:2e:c5:07", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap16c1f566-62", "ovs_interfaceid": "16c1f566-62ec-4bf8-ae0e-225e1fad3288", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.274 2 DEBUG oslo_concurrency.lockutils [req-1b017af6-0fb3-4b6e-9b94-bd206592b829 req-d1171691-b804-4741-bfbd-72636a5eaf96 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquired lock "refresh_cache-ba9c01f8-cb0e-4564-879e-fb3102e2e76a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.274 2 DEBUG nova.network.neutron [req-1b017af6-0fb3-4b6e-9b94-bd206592b829 req-d1171691-b804-4741-bfbd-72636a5eaf96 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Refreshing network info cache for port 16c1f566-62ec-4bf8-ae0e-225e1fad3288 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.276 2 DEBUG nova.virt.libvirt.driver [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Start _get_guest_xml network_info=[{"id": "16c1f566-62ec-4bf8-ae0e-225e1fad3288", "address": "fa:16:3e:2e:c5:07", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap16c1f566-62", "ovs_interfaceid": "16c1f566-62ec-4bf8-ae0e-225e1fad3288", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'attachment_id': '4c460dbe-0d3f-4712-9faf-b701e690c834', 'mount_device': '/dev/vda', 'delete_on_termination': True, 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-be5dc6c3-9ee3-45f1-9e6c-1fecc35321b7', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'be5dc6c3-9ee3-45f1-9e6c-1fecc35321b7', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'ba9c01f8-cb0e-4564-879e-fb3102e2e76a', 'attached_at': '', 'detached_at': '', 'volume_id': 'be5dc6c3-9ee3-45f1-9e6c-1fecc35321b7', 'serial': 'be5dc6c3-9ee3-45f1-9e6c-1fecc35321b7'}, 'device_type': 'disk', 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.281 2 WARNING nova.virt.libvirt.driver [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.285 2 DEBUG nova.virt.libvirt.host [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.285 2 DEBUG nova.virt.libvirt.host [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.288 2 DEBUG nova.virt.libvirt.host [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.289 2 DEBUG nova.virt.libvirt.host [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.289 2 DEBUG nova.virt.libvirt.driver [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.289 2 DEBUG nova.virt.hardware [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T03:44:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6707ecae-2ae2-4c2d-86dc-409bac38f6a5',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.290 2 DEBUG nova.virt.hardware [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.290 2 DEBUG nova.virt.hardware [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.291 2 DEBUG nova.virt.hardware [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.291 2 DEBUG nova.virt.hardware [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.291 2 DEBUG nova.virt.hardware [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.292 2 DEBUG nova.virt.hardware [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.292 2 DEBUG nova.virt.hardware [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.292 2 DEBUG nova.virt.hardware [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.292 2 DEBUG nova.virt.hardware [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.293 2 DEBUG nova.virt.hardware [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.320 2 DEBUG nova.storage.rbd_utils [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] rbd image ba9c01f8-cb0e-4564-879e-fb3102e2e76a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.324 2 DEBUG oslo_concurrency.processutils [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.351 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.352 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.359 2 DEBUG nova.compute.manager [req-e2e54ae8-8f8e-4b8a-8541-7b695e7bef66 req-0474f0bb-4372-4b8e-a8bb-411d407c2916 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Received event network-vif-unplugged-a6d0ac82-b500-4962-8bfd-d36ef3ba2948 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.360 2 DEBUG oslo_concurrency.lockutils [req-e2e54ae8-8f8e-4b8a-8541-7b695e7bef66 req-0474f0bb-4372-4b8e-a8bb-411d407c2916 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "d22b35e9-badc-40d1-952e-60cdfd60decb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.360 2 DEBUG oslo_concurrency.lockutils [req-e2e54ae8-8f8e-4b8a-8541-7b695e7bef66 req-0474f0bb-4372-4b8e-a8bb-411d407c2916 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "d22b35e9-badc-40d1-952e-60cdfd60decb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.360 2 DEBUG oslo_concurrency.lockutils [req-e2e54ae8-8f8e-4b8a-8541-7b695e7bef66 req-0474f0bb-4372-4b8e-a8bb-411d407c2916 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "d22b35e9-badc-40d1-952e-60cdfd60decb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.360 2 DEBUG nova.compute.manager [req-e2e54ae8-8f8e-4b8a-8541-7b695e7bef66 req-0474f0bb-4372-4b8e-a8bb-411d407c2916 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] No waiting events found dispatching network-vif-unplugged-a6d0ac82-b500-4962-8bfd-d36ef3ba2948 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.361 2 WARNING nova.compute.manager [req-e2e54ae8-8f8e-4b8a-8541-7b695e7bef66 req-0474f0bb-4372-4b8e-a8bb-411d407c2916 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Received unexpected event network-vif-unplugged-a6d0ac82-b500-4962-8bfd-d36ef3ba2948 for instance with vm_state deleted and task_state None.#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.361 2 DEBUG nova.compute.manager [req-e2e54ae8-8f8e-4b8a-8541-7b695e7bef66 req-0474f0bb-4372-4b8e-a8bb-411d407c2916 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Received event network-vif-plugged-a6d0ac82-b500-4962-8bfd-d36ef3ba2948 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.361 2 DEBUG oslo_concurrency.lockutils [req-e2e54ae8-8f8e-4b8a-8541-7b695e7bef66 req-0474f0bb-4372-4b8e-a8bb-411d407c2916 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "d22b35e9-badc-40d1-952e-60cdfd60decb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.362 2 DEBUG oslo_concurrency.lockutils [req-e2e54ae8-8f8e-4b8a-8541-7b695e7bef66 req-0474f0bb-4372-4b8e-a8bb-411d407c2916 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "d22b35e9-badc-40d1-952e-60cdfd60decb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.362 2 DEBUG oslo_concurrency.lockutils [req-e2e54ae8-8f8e-4b8a-8541-7b695e7bef66 req-0474f0bb-4372-4b8e-a8bb-411d407c2916 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "d22b35e9-badc-40d1-952e-60cdfd60decb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.362 2 DEBUG nova.compute.manager [req-e2e54ae8-8f8e-4b8a-8541-7b695e7bef66 req-0474f0bb-4372-4b8e-a8bb-411d407c2916 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] No waiting events found dispatching network-vif-plugged-a6d0ac82-b500-4962-8bfd-d36ef3ba2948 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.362 2 WARNING nova.compute.manager [req-e2e54ae8-8f8e-4b8a-8541-7b695e7bef66 req-0474f0bb-4372-4b8e-a8bb-411d407c2916 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Received unexpected event network-vif-plugged-a6d0ac82-b500-4962-8bfd-d36ef3ba2948 for instance with vm_state deleted and task_state None.#033[00m
Oct 10 23:53:31 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:53:31 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/740309495' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.578 2 DEBUG oslo_concurrency.processutils [None req-8085dc8c-b4aa-466b-9df5-618f611119eb d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.585 2 DEBUG nova.compute.provider_tree [None req-8085dc8c-b4aa-466b-9df5-618f611119eb d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.600 2 DEBUG nova.scheduler.client.report [None req-8085dc8c-b4aa-466b-9df5-618f611119eb d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.619 2 DEBUG oslo_concurrency.lockutils [None req-8085dc8c-b4aa-466b-9df5-618f611119eb d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.722s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.642 2 INFO nova.scheduler.client.report [None req-8085dc8c-b4aa-466b-9df5-618f611119eb d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Deleted allocations for instance d22b35e9-badc-40d1-952e-60cdfd60decb#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.719 2 DEBUG oslo_concurrency.lockutils [None req-8085dc8c-b4aa-466b-9df5-618f611119eb d6596329d9c842b78638fdbcf50b8ec8 944395b4a11c4a9182fda518dc7bd2d8 - - default default] Lock "d22b35e9-badc-40d1-952e-60cdfd60decb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.781s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:53:31 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:53:31 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2225840254' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.777 2 DEBUG oslo_concurrency.processutils [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.800 2 DEBUG nova.virt.libvirt.vif [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T03:53:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-volume-backed-server-389630532',display_name='tempest-TestVolumeBootPattern-volume-backed-server-389630532',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-volume-backed-server-389630532',id=15,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNdsq0q6B8LLSTQOpXwgXtrUk68A/EZelLgWiyuKR8TpW9qyzq4tTNFzxDWNQ+8A+Y3cKPBcyFdStuqUeSJbmXMELun344mij5AlgaCiQijig8YhYJfvn1letXvyUQf2SA==',key_name='tempest-keypair-1172500857',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='55d21391a321476eb133317b3402b0f0',ramdisk_id='',reservation_id='r-r3bv8vh4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-739984652',owner_user_name='tempest-TestVolumeBootPattern-739984652-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T03:53:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='38ebc503771e417aaf1f3aea0c835994',uuid=ba9c01f8-cb0e-4564-879e-fb3102e2e76a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "16c1f566-62ec-4bf8-ae0e-225e1fad3288", "address": "fa:16:3e:2e:c5:07", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": 
[], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap16c1f566-62", "ovs_interfaceid": "16c1f566-62ec-4bf8-ae0e-225e1fad3288", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.800 2 DEBUG nova.network.os_vif_util [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Converting VIF {"id": "16c1f566-62ec-4bf8-ae0e-225e1fad3288", "address": "fa:16:3e:2e:c5:07", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap16c1f566-62", "ovs_interfaceid": "16c1f566-62ec-4bf8-ae0e-225e1fad3288", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.801 2 DEBUG nova.network.os_vif_util [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:c5:07,bridge_name='br-int',has_traffic_filtering=True,id=16c1f566-62ec-4bf8-ae0e-225e1fad3288,network=Network(359720eb-a957-4bcd-b9b2-3cf7dad947e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap16c1f566-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.803 2 DEBUG nova.objects.instance [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lazy-loading 'pci_devices' on Instance uuid ba9c01f8-cb0e-4564-879e-fb3102e2e76a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.816 2 DEBUG nova.virt.libvirt.driver [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] End _get_guest_xml xml=<domain type="kvm">
Oct 10 23:53:31 np0005480824 nova_compute[260089]:  <uuid>ba9c01f8-cb0e-4564-879e-fb3102e2e76a</uuid>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:  <name>instance-0000000f</name>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:  <memory>131072</memory>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:  <vcpu>1</vcpu>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:  <metadata>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 10 23:53:31 np0005480824 nova_compute[260089]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:      <nova:name>tempest-TestVolumeBootPattern-volume-backed-server-389630532</nova:name>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:      <nova:creationTime>2025-10-11 03:53:31</nova:creationTime>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:      <nova:flavor name="m1.nano">
Oct 10 23:53:31 np0005480824 nova_compute[260089]:        <nova:memory>128</nova:memory>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:        <nova:disk>1</nova:disk>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:        <nova:swap>0</nova:swap>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:        <nova:ephemeral>0</nova:ephemeral>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:        <nova:vcpus>1</nova:vcpus>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:      </nova:flavor>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:      <nova:owner>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:        <nova:user uuid="38ebc503771e417aaf1f3aea0c835994">tempest-TestVolumeBootPattern-739984652-project-member</nova:user>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:        <nova:project uuid="55d21391a321476eb133317b3402b0f0">tempest-TestVolumeBootPattern-739984652</nova:project>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:      </nova:owner>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:      <nova:ports>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:        <nova:port uuid="16c1f566-62ec-4bf8-ae0e-225e1fad3288">
Oct 10 23:53:31 np0005480824 nova_compute[260089]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:        </nova:port>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:      </nova:ports>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:    </nova:instance>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:  </metadata>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:  <sysinfo type="smbios">
Oct 10 23:53:31 np0005480824 nova_compute[260089]:    <system>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:      <entry name="manufacturer">RDO</entry>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:      <entry name="product">OpenStack Compute</entry>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:      <entry name="serial">ba9c01f8-cb0e-4564-879e-fb3102e2e76a</entry>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:      <entry name="uuid">ba9c01f8-cb0e-4564-879e-fb3102e2e76a</entry>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:      <entry name="family">Virtual Machine</entry>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:    </system>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:  </sysinfo>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:  <os>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:    <boot dev="hd"/>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:    <smbios mode="sysinfo"/>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:  </os>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:  <features>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:    <acpi/>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:    <apic/>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:    <vmcoreinfo/>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:  </features>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:  <clock offset="utc">
Oct 10 23:53:31 np0005480824 nova_compute[260089]:    <timer name="pit" tickpolicy="delay"/>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:    <timer name="hpet" present="no"/>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:  </clock>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:  <cpu mode="host-model" match="exact">
Oct 10 23:53:31 np0005480824 nova_compute[260089]:    <topology sockets="1" cores="1" threads="1"/>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:  </cpu>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:  <devices>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:    <disk type="network" device="cdrom">
Oct 10 23:53:31 np0005480824 nova_compute[260089]:      <driver type="raw" cache="none"/>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:      <source protocol="rbd" name="vms/ba9c01f8-cb0e-4564-879e-fb3102e2e76a_disk.config">
Oct 10 23:53:31 np0005480824 nova_compute[260089]:        <host name="192.168.122.100" port="6789"/>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:      </source>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:      <auth username="openstack">
Oct 10 23:53:31 np0005480824 nova_compute[260089]:        <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:      </auth>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:      <target dev="sda" bus="sata"/>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:    </disk>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:    <disk type="network" device="disk">
Oct 10 23:53:31 np0005480824 nova_compute[260089]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:      <source protocol="rbd" name="volumes/volume-be5dc6c3-9ee3-45f1-9e6c-1fecc35321b7">
Oct 10 23:53:31 np0005480824 nova_compute[260089]:        <host name="192.168.122.100" port="6789"/>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:      </source>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:      <auth username="openstack">
Oct 10 23:53:31 np0005480824 nova_compute[260089]:        <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:      </auth>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:      <target dev="vda" bus="virtio"/>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:      <serial>be5dc6c3-9ee3-45f1-9e6c-1fecc35321b7</serial>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:    </disk>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:    <interface type="ethernet">
Oct 10 23:53:31 np0005480824 nova_compute[260089]:      <mac address="fa:16:3e:2e:c5:07"/>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:      <model type="virtio"/>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:      <driver name="vhost" rx_queue_size="512"/>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:      <mtu size="1442"/>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:      <target dev="tap16c1f566-62"/>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:    </interface>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:    <serial type="pty">
Oct 10 23:53:31 np0005480824 nova_compute[260089]:      <log file="/var/lib/nova/instances/ba9c01f8-cb0e-4564-879e-fb3102e2e76a/console.log" append="off"/>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:    </serial>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:    <video>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:      <model type="virtio"/>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:    </video>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:    <input type="tablet" bus="usb"/>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:    <rng model="virtio">
Oct 10 23:53:31 np0005480824 nova_compute[260089]:      <backend model="random">/dev/urandom</backend>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:    </rng>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root"/>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:    <controller type="usb" index="0"/>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:    <memballoon model="virtio">
Oct 10 23:53:31 np0005480824 nova_compute[260089]:      <stats period="10"/>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:    </memballoon>
Oct 10 23:53:31 np0005480824 nova_compute[260089]:  </devices>
Oct 10 23:53:31 np0005480824 nova_compute[260089]: </domain>
Oct 10 23:53:31 np0005480824 nova_compute[260089]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.817 2 DEBUG nova.compute.manager [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Preparing to wait for external event network-vif-plugged-16c1f566-62ec-4bf8-ae0e-225e1fad3288 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.817 2 DEBUG oslo_concurrency.lockutils [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Acquiring lock "ba9c01f8-cb0e-4564-879e-fb3102e2e76a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.818 2 DEBUG oslo_concurrency.lockutils [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "ba9c01f8-cb0e-4564-879e-fb3102e2e76a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.818 2 DEBUG oslo_concurrency.lockutils [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "ba9c01f8-cb0e-4564-879e-fb3102e2e76a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.818 2 DEBUG nova.virt.libvirt.vif [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T03:53:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-volume-backed-server-389630532',display_name='tempest-TestVolumeBootPattern-volume-backed-server-389630532',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-volume-backed-server-389630532',id=15,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNdsq0q6B8LLSTQOpXwgXtrUk68A/EZelLgWiyuKR8TpW9qyzq4tTNFzxDWNQ+8A+Y3cKPBcyFdStuqUeSJbmXMELun344mij5AlgaCiQijig8YhYJfvn1letXvyUQf2SA==',key_name='tempest-keypair-1172500857',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='55d21391a321476eb133317b3402b0f0',ramdisk_id='',reservation_id='r-r3bv8vh4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-739984652',owner_user_name='tempest-TestVolumeBootPattern-739984652-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T03:53:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='38ebc503771e417aaf1f3aea0c835994',uuid=ba9c01f8-cb0e-4564-879e-fb3102e2e76a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "16c1f566-62ec-4bf8-ae0e-225e1fad3288", "address": "fa:16:3e:2e:c5:07", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap16c1f566-62", "ovs_interfaceid": "16c1f566-62ec-4bf8-ae0e-225e1fad3288", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.819 2 DEBUG nova.network.os_vif_util [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Converting VIF {"id": "16c1f566-62ec-4bf8-ae0e-225e1fad3288", "address": "fa:16:3e:2e:c5:07", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap16c1f566-62", "ovs_interfaceid": "16c1f566-62ec-4bf8-ae0e-225e1fad3288", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.819 2 DEBUG nova.network.os_vif_util [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:c5:07,bridge_name='br-int',has_traffic_filtering=True,id=16c1f566-62ec-4bf8-ae0e-225e1fad3288,network=Network(359720eb-a957-4bcd-b9b2-3cf7dad947e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap16c1f566-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.819 2 DEBUG os_vif [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:c5:07,bridge_name='br-int',has_traffic_filtering=True,id=16c1f566-62ec-4bf8-ae0e-225e1fad3288,network=Network(359720eb-a957-4bcd-b9b2-3cf7dad947e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap16c1f566-62') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.821 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.821 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.825 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap16c1f566-62, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.826 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap16c1f566-62, col_values=(('external_ids', {'iface-id': '16c1f566-62ec-4bf8-ae0e-225e1fad3288', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2e:c5:07', 'vm-uuid': 'ba9c01f8-cb0e-4564-879e-fb3102e2e76a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 10 23:53:31 np0005480824 NetworkManager[44969]: <info>  [1760154811.8298] manager: (tap16c1f566-62): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/82)
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.836 2 INFO os_vif [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:c5:07,bridge_name='br-int',has_traffic_filtering=True,id=16c1f566-62ec-4bf8-ae0e-225e1fad3288,network=Network(359720eb-a957-4bcd-b9b2-3cf7dad947e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap16c1f566-62')#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.908 2 DEBUG nova.virt.libvirt.driver [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.908 2 DEBUG nova.virt.libvirt.driver [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.908 2 DEBUG nova.virt.libvirt.driver [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] No VIF found with MAC fa:16:3e:2e:c5:07, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.909 2 INFO nova.virt.libvirt.driver [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Using config drive#033[00m
Oct 10 23:53:31 np0005480824 nova_compute[260089]: 2025-10-11 03:53:31.931 2 DEBUG nova.storage.rbd_utils [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] rbd image ba9c01f8-cb0e-4564-879e-fb3102e2e76a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:53:32 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:53:32 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3723065650' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:53:32 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:53:32 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3723065650' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:53:32 np0005480824 nova_compute[260089]: 2025-10-11 03:53:32.318 2 INFO nova.virt.libvirt.driver [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Creating config drive at /var/lib/nova/instances/ba9c01f8-cb0e-4564-879e-fb3102e2e76a/disk.config#033[00m
Oct 10 23:53:32 np0005480824 nova_compute[260089]: 2025-10-11 03:53:32.326 2 DEBUG oslo_concurrency.processutils [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ba9c01f8-cb0e-4564-879e-fb3102e2e76a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp63pqn72c execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:53:32 np0005480824 nova_compute[260089]: 2025-10-11 03:53:32.483 2 DEBUG oslo_concurrency.processutils [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ba9c01f8-cb0e-4564-879e-fb3102e2e76a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp63pqn72c" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:53:32 np0005480824 nova_compute[260089]: 2025-10-11 03:53:32.532 2 DEBUG nova.storage.rbd_utils [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] rbd image ba9c01f8-cb0e-4564-879e-fb3102e2e76a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:53:32 np0005480824 nova_compute[260089]: 2025-10-11 03:53:32.536 2 DEBUG oslo_concurrency.processutils [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ba9c01f8-cb0e-4564-879e-fb3102e2e76a/disk.config ba9c01f8-cb0e-4564-879e-fb3102e2e76a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:53:32 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:53:32 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/980871940' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:53:32 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:53:32 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/980871940' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:53:32 np0005480824 nova_compute[260089]: 2025-10-11 03:53:32.685 2 DEBUG oslo_concurrency.processutils [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ba9c01f8-cb0e-4564-879e-fb3102e2e76a/disk.config ba9c01f8-cb0e-4564-879e-fb3102e2e76a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:53:32 np0005480824 nova_compute[260089]: 2025-10-11 03:53:32.686 2 INFO nova.virt.libvirt.driver [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Deleting local config drive /var/lib/nova/instances/ba9c01f8-cb0e-4564-879e-fb3102e2e76a/disk.config because it was imported into RBD.#033[00m
Oct 10 23:53:32 np0005480824 nova_compute[260089]: 2025-10-11 03:53:32.706 2 DEBUG nova.network.neutron [req-1b017af6-0fb3-4b6e-9b94-bd206592b829 req-d1171691-b804-4741-bfbd-72636a5eaf96 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Updated VIF entry in instance network info cache for port 16c1f566-62ec-4bf8-ae0e-225e1fad3288. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 10 23:53:32 np0005480824 nova_compute[260089]: 2025-10-11 03:53:32.707 2 DEBUG nova.network.neutron [req-1b017af6-0fb3-4b6e-9b94-bd206592b829 req-d1171691-b804-4741-bfbd-72636a5eaf96 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Updating instance_info_cache with network_info: [{"id": "16c1f566-62ec-4bf8-ae0e-225e1fad3288", "address": "fa:16:3e:2e:c5:07", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap16c1f566-62", "ovs_interfaceid": "16c1f566-62ec-4bf8-ae0e-225e1fad3288", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:53:32 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1282: 321 pgs: 321 active+clean; 136 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 112 KiB/s rd, 8.2 KiB/s wr, 160 op/s
Oct 10 23:53:32 np0005480824 nova_compute[260089]: 2025-10-11 03:53:32.725 2 DEBUG oslo_concurrency.lockutils [req-1b017af6-0fb3-4b6e-9b94-bd206592b829 req-d1171691-b804-4741-bfbd-72636a5eaf96 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Releasing lock "refresh_cache-ba9c01f8-cb0e-4564-879e-fb3102e2e76a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:53:32 np0005480824 kernel: tap16c1f566-62: entered promiscuous mode
Oct 10 23:53:32 np0005480824 NetworkManager[44969]: <info>  [1760154812.7604] manager: (tap16c1f566-62): new Tun device (/org/freedesktop/NetworkManager/Devices/83)
Oct 10 23:53:32 np0005480824 nova_compute[260089]: 2025-10-11 03:53:32.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:53:32 np0005480824 ovn_controller[152667]: 2025-10-11T03:53:32Z|00139|binding|INFO|Claiming lport 16c1f566-62ec-4bf8-ae0e-225e1fad3288 for this chassis.
Oct 10 23:53:32 np0005480824 ovn_controller[152667]: 2025-10-11T03:53:32Z|00140|binding|INFO|16c1f566-62ec-4bf8-ae0e-225e1fad3288: Claiming fa:16:3e:2e:c5:07 10.100.0.8
Oct 10 23:53:32 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:32.769 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2e:c5:07 10.100.0.8'], port_security=['fa:16:3e:2e:c5:07 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'ba9c01f8-cb0e-4564-879e-fb3102e2e76a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-359720eb-a957-4bcd-b9b2-3cf7dad947e4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '55d21391a321476eb133317b3402b0f0', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b740a105-f534-494b-b496-8cac5be77a8c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d98e64fb-092d-4777-b741-426f3e849bc3, chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], logical_port=16c1f566-62ec-4bf8-ae0e-225e1fad3288) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 23:53:32 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:32.771 162245 INFO neutron.agent.ovn.metadata.agent [-] Port 16c1f566-62ec-4bf8-ae0e-225e1fad3288 in datapath 359720eb-a957-4bcd-b9b2-3cf7dad947e4 bound to our chassis#033[00m
Oct 10 23:53:32 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:32.773 162245 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 359720eb-a957-4bcd-b9b2-3cf7dad947e4#033[00m
Oct 10 23:53:32 np0005480824 ovn_controller[152667]: 2025-10-11T03:53:32Z|00141|binding|INFO|Setting lport 16c1f566-62ec-4bf8-ae0e-225e1fad3288 ovn-installed in OVS
Oct 10 23:53:32 np0005480824 ovn_controller[152667]: 2025-10-11T03:53:32Z|00142|binding|INFO|Setting lport 16c1f566-62ec-4bf8-ae0e-225e1fad3288 up in Southbound
Oct 10 23:53:32 np0005480824 nova_compute[260089]: 2025-10-11 03:53:32.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:53:32 np0005480824 nova_compute[260089]: 2025-10-11 03:53:32.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:53:32 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:32.788 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[2273b956-5898-440b-85b2-d5f93cd9f081]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:53:32 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:32.789 162245 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap359720eb-a1 in ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 10 23:53:32 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:32.795 267859 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap359720eb-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 10 23:53:32 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:32.796 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[ff938539-9fc9-48c9-ad6e-777764f79aba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:53:32 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:32.798 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[8eb4c366-3b16-45fc-adfb-690091d17e26]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:53:32 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:32.815 162666 DEBUG oslo.privsep.daemon [-] privsep: reply[6595d41c-d353-4e8d-b274-6df9ed49146d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:53:32 np0005480824 systemd-machined[215071]: New machine qemu-15-instance-0000000f.
Oct 10 23:53:32 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:32.836 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[426664ca-d5e0-4f9d-ad53-f450815fe7a5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:53:32 np0005480824 systemd[1]: Started Virtual Machine qemu-15-instance-0000000f.
Oct 10 23:53:32 np0005480824 systemd-udevd[285709]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 23:53:32 np0005480824 NetworkManager[44969]: <info>  [1760154812.8604] device (tap16c1f566-62): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 10 23:53:32 np0005480824 NetworkManager[44969]: <info>  [1760154812.8612] device (tap16c1f566-62): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 10 23:53:32 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:32.883 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[776dd222-72d4-4dfa-b0ac-59cbda7dfa3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:53:32 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:32.888 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[af249c90-1530-4801-8231-99229504f302]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:53:32 np0005480824 NetworkManager[44969]: <info>  [1760154812.8932] manager: (tap359720eb-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/84)
Oct 10 23:53:32 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:32.924 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[0e71adf4-150d-41ef-acdc-b21117083e99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:53:32 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:32.928 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[f9233edd-dca8-4025-9db7-b653bc36f326]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:53:32 np0005480824 NetworkManager[44969]: <info>  [1760154812.9595] device (tap359720eb-a0): carrier: link connected
Oct 10 23:53:32 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:32.968 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[9e10a9e9-04de-4e8e-b74b-a6e444cd2e3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:53:32 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:32.994 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[544fffeb-d076-4744-ad47-ca87a6a97f1d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap359720eb-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:22:90:b3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 53], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 428062, 'reachable_time': 19294, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285739, 'error': None, 'target': 'ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:53:33 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:33.016 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[77075902-b222-413a-9d07-1df917797037]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe22:90b3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 428062, 'tstamp': 428062}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285740, 'error': None, 'target': 'ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:53:33 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:33.036 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[69a8622a-8281-436e-88db-9875b56169bb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap359720eb-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:22:90:b3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 53], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 428062, 'reachable_time': 19294, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 285741, 'error': None, 'target': 'ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:53:33 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:33.076 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[b2f04bb3-c64b-4061-999d-2a68a8a10b96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:53:33 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:33.162 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[d07507e9-2063-4e64-94d6-bca804f17f68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:53:33 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:33.163 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap359720eb-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:53:33 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:33.164 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 10 23:53:33 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:33.164 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap359720eb-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:53:33 np0005480824 NetworkManager[44969]: <info>  [1760154813.1679] manager: (tap359720eb-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/85)
Oct 10 23:53:33 np0005480824 kernel: tap359720eb-a0: entered promiscuous mode
Oct 10 23:53:33 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:33.170 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap359720eb-a0, col_values=(('external_ids', {'iface-id': '039c7668-0b85-4466-9c66-62531405028d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:53:33 np0005480824 nova_compute[260089]: 2025-10-11 03:53:33.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:53:33 np0005480824 ovn_controller[152667]: 2025-10-11T03:53:33Z|00143|binding|INFO|Releasing lport 039c7668-0b85-4466-9c66-62531405028d from this chassis (sb_readonly=0)
Oct 10 23:53:33 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:33.191 162245 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/359720eb-a957-4bcd-b9b2-3cf7dad947e4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/359720eb-a957-4bcd-b9b2-3cf7dad947e4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 10 23:53:33 np0005480824 nova_compute[260089]: 2025-10-11 03:53:33.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:53:33 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:33.193 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[76360287-33e9-4206-bf59-c533629930ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:53:33 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:33.193 162245 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 10 23:53:33 np0005480824 ovn_metadata_agent[162240]: global
Oct 10 23:53:33 np0005480824 ovn_metadata_agent[162240]:    log         /dev/log local0 debug
Oct 10 23:53:33 np0005480824 ovn_metadata_agent[162240]:    log-tag     haproxy-metadata-proxy-359720eb-a957-4bcd-b9b2-3cf7dad947e4
Oct 10 23:53:33 np0005480824 ovn_metadata_agent[162240]:    user        root
Oct 10 23:53:33 np0005480824 ovn_metadata_agent[162240]:    group       root
Oct 10 23:53:33 np0005480824 ovn_metadata_agent[162240]:    maxconn     1024
Oct 10 23:53:33 np0005480824 ovn_metadata_agent[162240]:    pidfile     /var/lib/neutron/external/pids/359720eb-a957-4bcd-b9b2-3cf7dad947e4.pid.haproxy
Oct 10 23:53:33 np0005480824 ovn_metadata_agent[162240]:    daemon
Oct 10 23:53:33 np0005480824 ovn_metadata_agent[162240]: 
Oct 10 23:53:33 np0005480824 ovn_metadata_agent[162240]: defaults
Oct 10 23:53:33 np0005480824 ovn_metadata_agent[162240]:    log global
Oct 10 23:53:33 np0005480824 ovn_metadata_agent[162240]:    mode http
Oct 10 23:53:33 np0005480824 ovn_metadata_agent[162240]:    option httplog
Oct 10 23:53:33 np0005480824 ovn_metadata_agent[162240]:    option dontlognull
Oct 10 23:53:33 np0005480824 ovn_metadata_agent[162240]:    option http-server-close
Oct 10 23:53:33 np0005480824 ovn_metadata_agent[162240]:    option forwardfor
Oct 10 23:53:33 np0005480824 ovn_metadata_agent[162240]:    retries                 3
Oct 10 23:53:33 np0005480824 ovn_metadata_agent[162240]:    timeout http-request    30s
Oct 10 23:53:33 np0005480824 ovn_metadata_agent[162240]:    timeout connect         30s
Oct 10 23:53:33 np0005480824 ovn_metadata_agent[162240]:    timeout client          32s
Oct 10 23:53:33 np0005480824 ovn_metadata_agent[162240]:    timeout server          32s
Oct 10 23:53:33 np0005480824 ovn_metadata_agent[162240]:    timeout http-keep-alive 30s
Oct 10 23:53:33 np0005480824 ovn_metadata_agent[162240]: 
Oct 10 23:53:33 np0005480824 ovn_metadata_agent[162240]: 
Oct 10 23:53:33 np0005480824 ovn_metadata_agent[162240]: listen listener
Oct 10 23:53:33 np0005480824 ovn_metadata_agent[162240]:    bind 169.254.169.254:80
Oct 10 23:53:33 np0005480824 ovn_metadata_agent[162240]:    server metadata /var/lib/neutron/metadata_proxy
Oct 10 23:53:33 np0005480824 ovn_metadata_agent[162240]:    http-request add-header X-OVN-Network-ID 359720eb-a957-4bcd-b9b2-3cf7dad947e4
Oct 10 23:53:33 np0005480824 ovn_metadata_agent[162240]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 10 23:53:33 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:33.195 162245 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4', 'env', 'PROCESS_TAG=haproxy-359720eb-a957-4bcd-b9b2-3cf7dad947e4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/359720eb-a957-4bcd-b9b2-3cf7dad947e4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 10 23:53:33 np0005480824 nova_compute[260089]: 2025-10-11 03:53:33.259 2 DEBUG nova.compute.manager [req-c7e54197-e6d7-41de-b550-4995bde7bcf4 req-25480180-fc99-4f4a-a037-474ef16474fd 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Received event network-vif-plugged-16c1f566-62ec-4bf8-ae0e-225e1fad3288 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:53:33 np0005480824 nova_compute[260089]: 2025-10-11 03:53:33.260 2 DEBUG oslo_concurrency.lockutils [req-c7e54197-e6d7-41de-b550-4995bde7bcf4 req-25480180-fc99-4f4a-a037-474ef16474fd 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "ba9c01f8-cb0e-4564-879e-fb3102e2e76a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:53:33 np0005480824 nova_compute[260089]: 2025-10-11 03:53:33.260 2 DEBUG oslo_concurrency.lockutils [req-c7e54197-e6d7-41de-b550-4995bde7bcf4 req-25480180-fc99-4f4a-a037-474ef16474fd 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "ba9c01f8-cb0e-4564-879e-fb3102e2e76a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:53:33 np0005480824 nova_compute[260089]: 2025-10-11 03:53:33.261 2 DEBUG oslo_concurrency.lockutils [req-c7e54197-e6d7-41de-b550-4995bde7bcf4 req-25480180-fc99-4f4a-a037-474ef16474fd 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "ba9c01f8-cb0e-4564-879e-fb3102e2e76a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:53:33 np0005480824 nova_compute[260089]: 2025-10-11 03:53:33.261 2 DEBUG nova.compute.manager [req-c7e54197-e6d7-41de-b550-4995bde7bcf4 req-25480180-fc99-4f4a-a037-474ef16474fd 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Processing event network-vif-plugged-16c1f566-62ec-4bf8-ae0e-225e1fad3288 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 10 23:53:33 np0005480824 nova_compute[260089]: 2025-10-11 03:53:33.303 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:53:33 np0005480824 nova_compute[260089]: 2025-10-11 03:53:33.303 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:53:33 np0005480824 nova_compute[260089]: 2025-10-11 03:53:33.303 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 10 23:53:33 np0005480824 nova_compute[260089]: 2025-10-11 03:53:33.304 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 10 23:53:33 np0005480824 nova_compute[260089]: 2025-10-11 03:53:33.321 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct 10 23:53:33 np0005480824 nova_compute[260089]: 2025-10-11 03:53:33.321 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 10 23:53:33 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:53:33 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3962164342' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:53:33 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:53:33 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3962164342' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:53:33 np0005480824 podman[285773]: 2025-10-11 03:53:33.602398541 +0000 UTC m=+0.061736910 container create 608e6ef85b1f6976b361ecde87ef91721673e5300f5cff68574bd66926f93d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true)
Oct 10 23:53:33 np0005480824 systemd[1]: Started libpod-conmon-608e6ef85b1f6976b361ecde87ef91721673e5300f5cff68574bd66926f93d83.scope.
Oct 10 23:53:33 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:53:33 np0005480824 podman[285773]: 2025-10-11 03:53:33.567886042 +0000 UTC m=+0.027224461 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 10 23:53:33 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68228903e1eedf2a283aebf45c9d1ef49e682cc67396be3c8ddafbc2f2570ea4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 10 23:53:33 np0005480824 podman[285773]: 2025-10-11 03:53:33.674465134 +0000 UTC m=+0.133803533 container init 608e6ef85b1f6976b361ecde87ef91721673e5300f5cff68574bd66926f93d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Oct 10 23:53:33 np0005480824 podman[285773]: 2025-10-11 03:53:33.679955543 +0000 UTC m=+0.139293912 container start 608e6ef85b1f6976b361ecde87ef91721673e5300f5cff68574bd66926f93d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 10 23:53:33 np0005480824 neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4[285827]: [NOTICE]   (285845) : New worker (285853) forked
Oct 10 23:53:33 np0005480824 neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4[285827]: [NOTICE]   (285845) : Loading success.
Oct 10 23:53:33 np0005480824 podman[285821]: 2025-10-11 03:53:33.741502148 +0000 UTC m=+0.092761340 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.build-date=20251009, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 10 23:53:33 np0005480824 nova_compute[260089]: 2025-10-11 03:53:33.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:53:34 np0005480824 nova_compute[260089]: 2025-10-11 03:53:34.134 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760154814.1334896, ba9c01f8-cb0e-4564-879e-fb3102e2e76a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:53:34 np0005480824 nova_compute[260089]: 2025-10-11 03:53:34.134 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] VM Started (Lifecycle Event)#033[00m
Oct 10 23:53:34 np0005480824 nova_compute[260089]: 2025-10-11 03:53:34.137 2 DEBUG nova.compute.manager [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 10 23:53:34 np0005480824 nova_compute[260089]: 2025-10-11 03:53:34.140 2 DEBUG nova.virt.libvirt.driver [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 10 23:53:34 np0005480824 nova_compute[260089]: 2025-10-11 03:53:34.145 2 INFO nova.virt.libvirt.driver [-] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Instance spawned successfully.#033[00m
Oct 10 23:53:34 np0005480824 nova_compute[260089]: 2025-10-11 03:53:34.145 2 DEBUG nova.virt.libvirt.driver [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 10 23:53:34 np0005480824 nova_compute[260089]: 2025-10-11 03:53:34.156 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:53:34 np0005480824 nova_compute[260089]: 2025-10-11 03:53:34.161 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 10 23:53:34 np0005480824 nova_compute[260089]: 2025-10-11 03:53:34.164 2 DEBUG nova.virt.libvirt.driver [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:53:34 np0005480824 nova_compute[260089]: 2025-10-11 03:53:34.165 2 DEBUG nova.virt.libvirt.driver [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:53:34 np0005480824 nova_compute[260089]: 2025-10-11 03:53:34.165 2 DEBUG nova.virt.libvirt.driver [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:53:34 np0005480824 nova_compute[260089]: 2025-10-11 03:53:34.166 2 DEBUG nova.virt.libvirt.driver [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:53:34 np0005480824 nova_compute[260089]: 2025-10-11 03:53:34.166 2 DEBUG nova.virt.libvirt.driver [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:53:34 np0005480824 nova_compute[260089]: 2025-10-11 03:53:34.166 2 DEBUG nova.virt.libvirt.driver [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:53:34 np0005480824 nova_compute[260089]: 2025-10-11 03:53:34.199 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 10 23:53:34 np0005480824 nova_compute[260089]: 2025-10-11 03:53:34.200 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760154814.1336818, ba9c01f8-cb0e-4564-879e-fb3102e2e76a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:53:34 np0005480824 nova_compute[260089]: 2025-10-11 03:53:34.200 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] VM Paused (Lifecycle Event)#033[00m
Oct 10 23:53:34 np0005480824 nova_compute[260089]: 2025-10-11 03:53:34.226 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:53:34 np0005480824 nova_compute[260089]: 2025-10-11 03:53:34.230 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760154814.139766, ba9c01f8-cb0e-4564-879e-fb3102e2e76a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:53:34 np0005480824 nova_compute[260089]: 2025-10-11 03:53:34.230 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] VM Resumed (Lifecycle Event)#033[00m
Oct 10 23:53:34 np0005480824 nova_compute[260089]: 2025-10-11 03:53:34.237 2 INFO nova.compute.manager [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Took 6.45 seconds to spawn the instance on the hypervisor.#033[00m
Oct 10 23:53:34 np0005480824 nova_compute[260089]: 2025-10-11 03:53:34.237 2 DEBUG nova.compute.manager [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:53:34 np0005480824 nova_compute[260089]: 2025-10-11 03:53:34.246 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:53:34 np0005480824 nova_compute[260089]: 2025-10-11 03:53:34.249 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 10 23:53:34 np0005480824 nova_compute[260089]: 2025-10-11 03:53:34.277 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 10 23:53:34 np0005480824 nova_compute[260089]: 2025-10-11 03:53:34.294 2 INFO nova.compute.manager [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Took 8.50 seconds to build instance.#033[00m
Oct 10 23:53:34 np0005480824 nova_compute[260089]: 2025-10-11 03:53:34.296 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:53:34 np0005480824 nova_compute[260089]: 2025-10-11 03:53:34.307 2 DEBUG oslo_concurrency.lockutils [None req-9c217870-1f0a-45fd-b87e-0851bc3ab84c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "ba9c01f8-cb0e-4564-879e-fb3102e2e76a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.599s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:53:34 np0005480824 nova_compute[260089]: 2025-10-11 03:53:34.320 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:53:34 np0005480824 nova_compute[260089]: 2025-10-11 03:53:34.320 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:53:34 np0005480824 nova_compute[260089]: 2025-10-11 03:53:34.320 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:53:34 np0005480824 nova_compute[260089]: 2025-10-11 03:53:34.321 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 10 23:53:34 np0005480824 nova_compute[260089]: 2025-10-11 03:53:34.321 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:53:34 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1283: 321 pgs: 321 active+clean; 136 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 95 KiB/s rd, 7.0 KiB/s wr, 136 op/s
Oct 10 23:53:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:53:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e269 do_prune osdmap full prune enabled
Oct 10 23:53:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e270 e270: 3 total, 3 up, 3 in
Oct 10 23:53:34 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e270: 3 total, 3 up, 3 in
Oct 10 23:53:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:53:34 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2533638795' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:53:34 np0005480824 nova_compute[260089]: 2025-10-11 03:53:34.794 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:53:34 np0005480824 nova_compute[260089]: 2025-10-11 03:53:34.869 2 DEBUG nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 10 23:53:34 np0005480824 nova_compute[260089]: 2025-10-11 03:53:34.870 2 DEBUG nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 10 23:53:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:53:34 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/965160790' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:53:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:53:34 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/965160790' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:53:35 np0005480824 nova_compute[260089]: 2025-10-11 03:53:35.048 2 WARNING nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 10 23:53:35 np0005480824 nova_compute[260089]: 2025-10-11 03:53:35.049 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4419MB free_disk=59.988277435302734GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 10 23:53:35 np0005480824 nova_compute[260089]: 2025-10-11 03:53:35.049 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:53:35 np0005480824 nova_compute[260089]: 2025-10-11 03:53:35.050 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:53:35 np0005480824 nova_compute[260089]: 2025-10-11 03:53:35.114 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Instance ba9c01f8-cb0e-4564-879e-fb3102e2e76a actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 10 23:53:35 np0005480824 nova_compute[260089]: 2025-10-11 03:53:35.115 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 10 23:53:35 np0005480824 nova_compute[260089]: 2025-10-11 03:53:35.115 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 10 23:53:35 np0005480824 nova_compute[260089]: 2025-10-11 03:53:35.148 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:53:35 np0005480824 nova_compute[260089]: 2025-10-11 03:53:35.356 2 DEBUG nova.compute.manager [req-bdbe9a50-f330-402a-8962-7bea5dfb1949 req-5105c657-01f4-49f2-97f4-bcca610abd95 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Received event network-vif-plugged-16c1f566-62ec-4bf8-ae0e-225e1fad3288 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:53:35 np0005480824 nova_compute[260089]: 2025-10-11 03:53:35.357 2 DEBUG oslo_concurrency.lockutils [req-bdbe9a50-f330-402a-8962-7bea5dfb1949 req-5105c657-01f4-49f2-97f4-bcca610abd95 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "ba9c01f8-cb0e-4564-879e-fb3102e2e76a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:53:35 np0005480824 nova_compute[260089]: 2025-10-11 03:53:35.357 2 DEBUG oslo_concurrency.lockutils [req-bdbe9a50-f330-402a-8962-7bea5dfb1949 req-5105c657-01f4-49f2-97f4-bcca610abd95 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "ba9c01f8-cb0e-4564-879e-fb3102e2e76a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:53:35 np0005480824 nova_compute[260089]: 2025-10-11 03:53:35.358 2 DEBUG oslo_concurrency.lockutils [req-bdbe9a50-f330-402a-8962-7bea5dfb1949 req-5105c657-01f4-49f2-97f4-bcca610abd95 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "ba9c01f8-cb0e-4564-879e-fb3102e2e76a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:53:35 np0005480824 nova_compute[260089]: 2025-10-11 03:53:35.358 2 DEBUG nova.compute.manager [req-bdbe9a50-f330-402a-8962-7bea5dfb1949 req-5105c657-01f4-49f2-97f4-bcca610abd95 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] No waiting events found dispatching network-vif-plugged-16c1f566-62ec-4bf8-ae0e-225e1fad3288 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 10 23:53:35 np0005480824 nova_compute[260089]: 2025-10-11 03:53:35.358 2 WARNING nova.compute.manager [req-bdbe9a50-f330-402a-8962-7bea5dfb1949 req-5105c657-01f4-49f2-97f4-bcca610abd95 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Received unexpected event network-vif-plugged-16c1f566-62ec-4bf8-ae0e-225e1fad3288 for instance with vm_state active and task_state None.#033[00m
Oct 10 23:53:35 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:53:35 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3453764162' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:53:35 np0005480824 nova_compute[260089]: 2025-10-11 03:53:35.605 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:53:35 np0005480824 nova_compute[260089]: 2025-10-11 03:53:35.614 2 DEBUG nova.compute.provider_tree [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 23:53:35 np0005480824 nova_compute[260089]: 2025-10-11 03:53:35.636 2 DEBUG nova.scheduler.client.report [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 23:53:35 np0005480824 nova_compute[260089]: 2025-10-11 03:53:35.655 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 10 23:53:35 np0005480824 nova_compute[260089]: 2025-10-11 03:53:35.655 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.605s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:53:36 np0005480824 nova_compute[260089]: 2025-10-11 03:53:36.200 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760154801.199094, 3b8741f5-afdc-4745-b74c-2578bc643be4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:53:36 np0005480824 nova_compute[260089]: 2025-10-11 03:53:36.201 2 INFO nova.compute.manager [-] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] VM Stopped (Lifecycle Event)#033[00m
Oct 10 23:53:36 np0005480824 nova_compute[260089]: 2025-10-11 03:53:36.227 2 DEBUG nova.compute.manager [None req-b2fe9cc8-5b2e-4a2f-82a7-05f090aca1fb - - - - - -] [instance: 3b8741f5-afdc-4745-b74c-2578bc643be4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:53:36 np0005480824 nova_compute[260089]: 2025-10-11 03:53:36.656 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:53:36 np0005480824 nova_compute[260089]: 2025-10-11 03:53:36.675 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:53:36 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1285: 321 pgs: 321 active+clean; 136 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 2.2 KiB/s wr, 57 op/s
Oct 10 23:53:36 np0005480824 nova_compute[260089]: 2025-10-11 03:53:36.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:53:37 np0005480824 ovn_controller[152667]: 2025-10-11T03:53:37Z|00144|binding|INFO|Releasing lport 039c7668-0b85-4466-9c66-62531405028d from this chassis (sb_readonly=0)
Oct 10 23:53:37 np0005480824 nova_compute[260089]: 2025-10-11 03:53:37.380 2 DEBUG nova.compute.manager [req-793f1961-e095-4996-bf6b-14e5691210ed req-7d74b50a-4143-4fc9-b84b-933cc6313025 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Received event network-changed-16c1f566-62ec-4bf8-ae0e-225e1fad3288 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:53:37 np0005480824 nova_compute[260089]: 2025-10-11 03:53:37.380 2 DEBUG nova.compute.manager [req-793f1961-e095-4996-bf6b-14e5691210ed req-7d74b50a-4143-4fc9-b84b-933cc6313025 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Refreshing instance network info cache due to event network-changed-16c1f566-62ec-4bf8-ae0e-225e1fad3288. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 10 23:53:37 np0005480824 nova_compute[260089]: 2025-10-11 03:53:37.381 2 DEBUG oslo_concurrency.lockutils [req-793f1961-e095-4996-bf6b-14e5691210ed req-7d74b50a-4143-4fc9-b84b-933cc6313025 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "refresh_cache-ba9c01f8-cb0e-4564-879e-fb3102e2e76a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:53:37 np0005480824 nova_compute[260089]: 2025-10-11 03:53:37.381 2 DEBUG oslo_concurrency.lockutils [req-793f1961-e095-4996-bf6b-14e5691210ed req-7d74b50a-4143-4fc9-b84b-933cc6313025 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquired lock "refresh_cache-ba9c01f8-cb0e-4564-879e-fb3102e2e76a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:53:37 np0005480824 nova_compute[260089]: 2025-10-11 03:53:37.381 2 DEBUG nova.network.neutron [req-793f1961-e095-4996-bf6b-14e5691210ed req-7d74b50a-4143-4fc9-b84b-933cc6313025 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Refreshing network info cache for port 16c1f566-62ec-4bf8-ae0e-225e1fad3288 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 10 23:53:37 np0005480824 nova_compute[260089]: 2025-10-11 03:53:37.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:53:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 23:53:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:53:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 23:53:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:53:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 23:53:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:53:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0007256335669401576 of space, bias 1.0, pg target 0.2176900700820473 quantized to 32 (current 32)
Oct 10 23:53:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:53:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 23:53:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:53:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 10 23:53:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:53:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 23:53:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:53:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:53:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:53:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 10 23:53:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:53:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 23:53:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:53:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:53:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:53:37 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 10 23:53:38 np0005480824 nova_compute[260089]: 2025-10-11 03:53:38.297 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:53:38 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1286: 321 pgs: 321 active+clean; 134 MiB data, 355 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 23 KiB/s wr, 211 op/s
Oct 10 23:53:38 np0005480824 nova_compute[260089]: 2025-10-11 03:53:38.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:53:38 np0005480824 nova_compute[260089]: 2025-10-11 03:53:38.884 2 DEBUG nova.network.neutron [req-793f1961-e095-4996-bf6b-14e5691210ed req-7d74b50a-4143-4fc9-b84b-933cc6313025 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Updated VIF entry in instance network info cache for port 16c1f566-62ec-4bf8-ae0e-225e1fad3288. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 10 23:53:38 np0005480824 nova_compute[260089]: 2025-10-11 03:53:38.885 2 DEBUG nova.network.neutron [req-793f1961-e095-4996-bf6b-14e5691210ed req-7d74b50a-4143-4fc9-b84b-933cc6313025 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Updating instance_info_cache with network_info: [{"id": "16c1f566-62ec-4bf8-ae0e-225e1fad3288", "address": "fa:16:3e:2e:c5:07", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap16c1f566-62", "ovs_interfaceid": "16c1f566-62ec-4bf8-ae0e-225e1fad3288", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:53:38 np0005480824 nova_compute[260089]: 2025-10-11 03:53:38.903 2 DEBUG oslo_concurrency.lockutils [req-793f1961-e095-4996-bf6b-14e5691210ed req-7d74b50a-4143-4fc9-b84b-933cc6313025 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Releasing lock "refresh_cache-ba9c01f8-cb0e-4564-879e-fb3102e2e76a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:53:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:53:39 np0005480824 podman[285917]: 2025-10-11 03:53:39.988371913 +0000 UTC m=+0.048377986 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 10 23:53:40 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1287: 321 pgs: 321 active+clean; 134 MiB data, 355 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 21 KiB/s wr, 189 op/s
Oct 10 23:53:40 np0005480824 ovn_controller[152667]: 2025-10-11T03:53:40Z|00145|binding|INFO|Releasing lport 039c7668-0b85-4466-9c66-62531405028d from this chassis (sb_readonly=0)
Oct 10 23:53:40 np0005480824 nova_compute[260089]: 2025-10-11 03:53:40.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:53:41 np0005480824 nova_compute[260089]: 2025-10-11 03:53:41.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:53:41 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:41.414 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:30:f4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'fe:89:7c:57:3f:71'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 23:53:41 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:41.415 162245 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 10 23:53:41 np0005480824 nova_compute[260089]: 2025-10-11 03:53:41.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:53:42 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1288: 321 pgs: 321 active+clean; 134 MiB data, 355 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 19 KiB/s wr, 143 op/s
Oct 10 23:53:42 np0005480824 nova_compute[260089]: 2025-10-11 03:53:42.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:53:43 np0005480824 nova_compute[260089]: 2025-10-11 03:53:43.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:53:44 np0005480824 nova_compute[260089]: 2025-10-11 03:53:44.185 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760154809.1841145, d22b35e9-badc-40d1-952e-60cdfd60decb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:53:44 np0005480824 nova_compute[260089]: 2025-10-11 03:53:44.186 2 INFO nova.compute.manager [-] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] VM Stopped (Lifecycle Event)#033[00m
Oct 10 23:53:44 np0005480824 nova_compute[260089]: 2025-10-11 03:53:44.202 2 DEBUG nova.compute.manager [None req-c9319ac2-8e94-49ba-87e3-d5d26b0e384c - - - - - -] [instance: d22b35e9-badc-40d1-952e-60cdfd60decb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:53:44 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:53:44.417 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=14b06507-d00b-4e27-a47d-46a5c2644635, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:53:44 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1289: 321 pgs: 321 active+clean; 134 MiB data, 355 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 19 KiB/s wr, 143 op/s
Oct 10 23:53:44 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:53:45 np0005480824 nova_compute[260089]: 2025-10-11 03:53:45.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:53:46 np0005480824 ovn_controller[152667]: 2025-10-11T03:53:46Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2e:c5:07 10.100.0.8
Oct 10 23:53:46 np0005480824 ovn_controller[152667]: 2025-10-11T03:53:46Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2e:c5:07 10.100.0.8
Oct 10 23:53:46 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1290: 321 pgs: 321 active+clean; 135 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 286 KiB/s wr, 137 op/s
Oct 10 23:53:46 np0005480824 nova_compute[260089]: 2025-10-11 03:53:46.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:53:48 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1291: 321 pgs: 321 active+clean; 160 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 170 op/s
Oct 10 23:53:48 np0005480824 nova_compute[260089]: 2025-10-11 03:53:48.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:53:49 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:53:49 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3782460454' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:53:49 np0005480824 nova_compute[260089]: 2025-10-11 03:53:49.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:53:49 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:53:50 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e270 do_prune osdmap full prune enabled
Oct 10 23:53:50 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e271 e271: 3 total, 3 up, 3 in
Oct 10 23:53:50 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e271: 3 total, 3 up, 3 in
Oct 10 23:53:50 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1293: 321 pgs: 321 active+clean; 160 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 386 KiB/s rd, 2.5 MiB/s wr, 61 op/s
Oct 10 23:53:51 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e271 do_prune osdmap full prune enabled
Oct 10 23:53:51 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e272 e272: 3 total, 3 up, 3 in
Oct 10 23:53:51 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e272: 3 total, 3 up, 3 in
Oct 10 23:53:51 np0005480824 nova_compute[260089]: 2025-10-11 03:53:51.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:53:52 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1295: 321 pgs: 321 active+clean; 167 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 592 KiB/s rd, 3.2 MiB/s wr, 123 op/s
Oct 10 23:53:53 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e272 do_prune osdmap full prune enabled
Oct 10 23:53:53 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e273 e273: 3 total, 3 up, 3 in
Oct 10 23:53:53 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e273: 3 total, 3 up, 3 in
Oct 10 23:53:53 np0005480824 nova_compute[260089]: 2025-10-11 03:53:53.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:53:54 np0005480824 nova_compute[260089]: 2025-10-11 03:53:54.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:53:54 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1297: 321 pgs: 321 active+clean; 167 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 145 KiB/s rd, 170 KiB/s wr, 61 op/s
Oct 10 23:53:54 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:53:56 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e273 do_prune osdmap full prune enabled
Oct 10 23:53:56 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e274 e274: 3 total, 3 up, 3 in
Oct 10 23:53:56 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e274: 3 total, 3 up, 3 in
Oct 10 23:53:56 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:53:56 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/990641184' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:53:56 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1299: 321 pgs: 321 active+clean; 167 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 171 KiB/s rd, 195 KiB/s wr, 97 op/s
Oct 10 23:53:56 np0005480824 nova_compute[260089]: 2025-10-11 03:53:56.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:53:57 np0005480824 podman[285938]: 2025-10-11 03:53:57.020034907 +0000 UTC m=+0.070721922 container health_status 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 10 23:53:57 np0005480824 podman[285939]: 2025-10-11 03:53:57.033705397 +0000 UTC m=+0.081956735 container health_status f2b19cad22d0fbdd185b264c3c8b6443d09a02319dc9fb0585dc69a0f24a758d (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 10 23:53:57 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e274 do_prune osdmap full prune enabled
Oct 10 23:53:57 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e275 e275: 3 total, 3 up, 3 in
Oct 10 23:53:57 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e275: 3 total, 3 up, 3 in
Oct 10 23:53:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:53:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:53:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:53:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:53:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:53:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:53:58 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e275 do_prune osdmap full prune enabled
Oct 10 23:53:58 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1301: 321 pgs: 321 active+clean; 167 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 30 KiB/s wr, 68 op/s
Oct 10 23:53:58 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e276 e276: 3 total, 3 up, 3 in
Oct 10 23:53:58 np0005480824 nova_compute[260089]: 2025-10-11 03:53:58.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:53:58 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e276: 3 total, 3 up, 3 in
Oct 10 23:53:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:53:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e276 do_prune osdmap full prune enabled
Oct 10 23:53:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e277 e277: 3 total, 3 up, 3 in
Oct 10 23:53:59 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e277: 3 total, 3 up, 3 in
Oct 10 23:54:00 np0005480824 nova_compute[260089]: 2025-10-11 03:54:00.013 2 DEBUG oslo_concurrency.lockutils [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Acquiring lock "6fc56e59-9278-4ac2-89ed-ca93f2f17d1d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:54:00 np0005480824 nova_compute[260089]: 2025-10-11 03:54:00.014 2 DEBUG oslo_concurrency.lockutils [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Lock "6fc56e59-9278-4ac2-89ed-ca93f2f17d1d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:54:00 np0005480824 nova_compute[260089]: 2025-10-11 03:54:00.027 2 DEBUG nova.compute.manager [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 10 23:54:00 np0005480824 nova_compute[260089]: 2025-10-11 03:54:00.126 2 DEBUG oslo_concurrency.lockutils [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:54:00 np0005480824 nova_compute[260089]: 2025-10-11 03:54:00.127 2 DEBUG oslo_concurrency.lockutils [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:54:00 np0005480824 nova_compute[260089]: 2025-10-11 03:54:00.132 2 DEBUG nova.virt.hardware [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 10 23:54:00 np0005480824 nova_compute[260089]: 2025-10-11 03:54:00.133 2 INFO nova.compute.claims [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 10 23:54:00 np0005480824 nova_compute[260089]: 2025-10-11 03:54:00.244 2 DEBUG oslo_concurrency.processutils [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:54:00 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:54:00 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2003998945' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:54:00 np0005480824 nova_compute[260089]: 2025-10-11 03:54:00.690 2 DEBUG oslo_concurrency.processutils [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:54:00 np0005480824 nova_compute[260089]: 2025-10-11 03:54:00.698 2 DEBUG nova.compute.provider_tree [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 23:54:00 np0005480824 nova_compute[260089]: 2025-10-11 03:54:00.714 2 DEBUG nova.scheduler.client.report [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 23:54:00 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1304: 321 pgs: 321 active+clean; 167 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 41 KiB/s wr, 92 op/s
Oct 10 23:54:00 np0005480824 nova_compute[260089]: 2025-10-11 03:54:00.736 2 DEBUG oslo_concurrency.lockutils [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.609s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:54:00 np0005480824 nova_compute[260089]: 2025-10-11 03:54:00.737 2 DEBUG nova.compute.manager [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 10 23:54:00 np0005480824 nova_compute[260089]: 2025-10-11 03:54:00.778 2 DEBUG nova.compute.manager [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 10 23:54:00 np0005480824 nova_compute[260089]: 2025-10-11 03:54:00.779 2 DEBUG nova.network.neutron [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 10 23:54:00 np0005480824 nova_compute[260089]: 2025-10-11 03:54:00.805 2 INFO nova.virt.libvirt.driver [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 10 23:54:00 np0005480824 nova_compute[260089]: 2025-10-11 03:54:00.821 2 DEBUG nova.compute.manager [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 10 23:54:00 np0005480824 nova_compute[260089]: 2025-10-11 03:54:00.897 2 DEBUG nova.compute.manager [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 10 23:54:00 np0005480824 nova_compute[260089]: 2025-10-11 03:54:00.898 2 DEBUG nova.virt.libvirt.driver [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 10 23:54:00 np0005480824 nova_compute[260089]: 2025-10-11 03:54:00.898 2 INFO nova.virt.libvirt.driver [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Creating image(s)#033[00m
Oct 10 23:54:00 np0005480824 nova_compute[260089]: 2025-10-11 03:54:00.956 2 DEBUG nova.storage.rbd_utils [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] rbd image 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:54:01 np0005480824 nova_compute[260089]: 2025-10-11 03:54:01.071 2 DEBUG nova.storage.rbd_utils [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] rbd image 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:54:01 np0005480824 nova_compute[260089]: 2025-10-11 03:54:01.114 2 DEBUG nova.storage.rbd_utils [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] rbd image 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:54:01 np0005480824 nova_compute[260089]: 2025-10-11 03:54:01.120 2 DEBUG oslo_concurrency.processutils [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cfffd1283a157d100c77a9cb8e3d536b83503a4e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:54:01 np0005480824 nova_compute[260089]: 2025-10-11 03:54:01.198 2 DEBUG oslo_concurrency.processutils [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cfffd1283a157d100c77a9cb8e3d536b83503a4e --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:54:01 np0005480824 nova_compute[260089]: 2025-10-11 03:54:01.199 2 DEBUG oslo_concurrency.lockutils [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Acquiring lock "cfffd1283a157d100c77a9cb8e3d536b83503a4e" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:54:01 np0005480824 nova_compute[260089]: 2025-10-11 03:54:01.200 2 DEBUG oslo_concurrency.lockutils [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Lock "cfffd1283a157d100c77a9cb8e3d536b83503a4e" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:54:01 np0005480824 nova_compute[260089]: 2025-10-11 03:54:01.201 2 DEBUG oslo_concurrency.lockutils [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Lock "cfffd1283a157d100c77a9cb8e3d536b83503a4e" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:54:01 np0005480824 nova_compute[260089]: 2025-10-11 03:54:01.230 2 DEBUG nova.storage.rbd_utils [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] rbd image 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:54:01 np0005480824 nova_compute[260089]: 2025-10-11 03:54:01.237 2 DEBUG oslo_concurrency.processutils [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/cfffd1283a157d100c77a9cb8e3d536b83503a4e 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:54:01 np0005480824 nova_compute[260089]: 2025-10-11 03:54:01.432 2 DEBUG nova.policy [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f24819cdb3ee4b1f8a4a9e811a760a2c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bc155be8024d49b0ab4279dfca944e7d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 10 23:54:01 np0005480824 nova_compute[260089]: 2025-10-11 03:54:01.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:54:01 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e277 do_prune osdmap full prune enabled
Oct 10 23:54:02 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e278 e278: 3 total, 3 up, 3 in
Oct 10 23:54:02 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e278: 3 total, 3 up, 3 in
Oct 10 23:54:02 np0005480824 nova_compute[260089]: 2025-10-11 03:54:02.569 2 DEBUG oslo_concurrency.processutils [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/cfffd1283a157d100c77a9cb8e3d536b83503a4e 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.332s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:54:02 np0005480824 nova_compute[260089]: 2025-10-11 03:54:02.661 2 DEBUG nova.storage.rbd_utils [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] resizing rbd image 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 10 23:54:02 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1306: 321 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 315 active+clean; 176 MiB data, 377 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 661 KiB/s wr, 78 op/s
Oct 10 23:54:03 np0005480824 nova_compute[260089]: 2025-10-11 03:54:03.182 2 DEBUG nova.objects.instance [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Lazy-loading 'migration_context' on Instance uuid 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:54:03 np0005480824 nova_compute[260089]: 2025-10-11 03:54:03.203 2 DEBUG nova.virt.libvirt.driver [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 10 23:54:03 np0005480824 nova_compute[260089]: 2025-10-11 03:54:03.204 2 DEBUG nova.virt.libvirt.driver [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Ensure instance console log exists: /var/lib/nova/instances/6fc56e59-9278-4ac2-89ed-ca93f2f17d1d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 10 23:54:03 np0005480824 nova_compute[260089]: 2025-10-11 03:54:03.205 2 DEBUG oslo_concurrency.lockutils [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:54:03 np0005480824 nova_compute[260089]: 2025-10-11 03:54:03.206 2 DEBUG oslo_concurrency.lockutils [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:54:03 np0005480824 nova_compute[260089]: 2025-10-11 03:54:03.206 2 DEBUG oslo_concurrency.lockutils [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:54:03 np0005480824 nova_compute[260089]: 2025-10-11 03:54:03.584 2 DEBUG nova.network.neutron [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Successfully created port: 96788aff-c48f-4de5-a500-c62a76db51e3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 10 23:54:03 np0005480824 nova_compute[260089]: 2025-10-11 03:54:03.798 2 DEBUG oslo_concurrency.lockutils [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Acquiring lock "17e293fc-58db-41da-a59c-d4a11dcbe09e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:54:03 np0005480824 nova_compute[260089]: 2025-10-11 03:54:03.799 2 DEBUG oslo_concurrency.lockutils [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "17e293fc-58db-41da-a59c-d4a11dcbe09e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:54:03 np0005480824 nova_compute[260089]: 2025-10-11 03:54:03.813 2 DEBUG nova.compute.manager [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 10 23:54:03 np0005480824 nova_compute[260089]: 2025-10-11 03:54:03.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:54:03 np0005480824 nova_compute[260089]: 2025-10-11 03:54:03.885 2 DEBUG oslo_concurrency.lockutils [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:54:03 np0005480824 nova_compute[260089]: 2025-10-11 03:54:03.886 2 DEBUG oslo_concurrency.lockutils [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:54:03 np0005480824 nova_compute[260089]: 2025-10-11 03:54:03.895 2 DEBUG nova.virt.hardware [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 10 23:54:03 np0005480824 nova_compute[260089]: 2025-10-11 03:54:03.895 2 INFO nova.compute.claims [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 10 23:54:04 np0005480824 nova_compute[260089]: 2025-10-11 03:54:04.043 2 DEBUG oslo_concurrency.processutils [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:54:04 np0005480824 podman[286168]: 2025-10-11 03:54:04.051299497 +0000 UTC m=+0.107428842 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct 10 23:54:04 np0005480824 nova_compute[260089]: 2025-10-11 03:54:04.273 2 DEBUG nova.network.neutron [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Successfully updated port: 96788aff-c48f-4de5-a500-c62a76db51e3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 10 23:54:04 np0005480824 nova_compute[260089]: 2025-10-11 03:54:04.291 2 DEBUG oslo_concurrency.lockutils [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Acquiring lock "refresh_cache-6fc56e59-9278-4ac2-89ed-ca93f2f17d1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:54:04 np0005480824 nova_compute[260089]: 2025-10-11 03:54:04.291 2 DEBUG oslo_concurrency.lockutils [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Acquired lock "refresh_cache-6fc56e59-9278-4ac2-89ed-ca93f2f17d1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:54:04 np0005480824 nova_compute[260089]: 2025-10-11 03:54:04.291 2 DEBUG nova.network.neutron [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 10 23:54:04 np0005480824 nova_compute[260089]: 2025-10-11 03:54:04.368 2 DEBUG nova.compute.manager [req-e196a346-1c41-4bee-9641-2888b9f18168 req-bfa8f8f1-d90e-41b6-baec-af7714716af1 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Received event network-changed-96788aff-c48f-4de5-a500-c62a76db51e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:54:04 np0005480824 nova_compute[260089]: 2025-10-11 03:54:04.369 2 DEBUG nova.compute.manager [req-e196a346-1c41-4bee-9641-2888b9f18168 req-bfa8f8f1-d90e-41b6-baec-af7714716af1 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Refreshing instance network info cache due to event network-changed-96788aff-c48f-4de5-a500-c62a76db51e3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 10 23:54:04 np0005480824 nova_compute[260089]: 2025-10-11 03:54:04.369 2 DEBUG oslo_concurrency.lockutils [req-e196a346-1c41-4bee-9641-2888b9f18168 req-bfa8f8f1-d90e-41b6-baec-af7714716af1 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "refresh_cache-6fc56e59-9278-4ac2-89ed-ca93f2f17d1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:54:04 np0005480824 nova_compute[260089]: 2025-10-11 03:54:04.470 2 DEBUG nova.network.neutron [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 10 23:54:04 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:54:04 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2892659232' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:54:04 np0005480824 nova_compute[260089]: 2025-10-11 03:54:04.509 2 DEBUG oslo_concurrency.processutils [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:54:04 np0005480824 nova_compute[260089]: 2025-10-11 03:54:04.514 2 DEBUG nova.compute.provider_tree [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 23:54:04 np0005480824 nova_compute[260089]: 2025-10-11 03:54:04.544 2 DEBUG nova.scheduler.client.report [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 23:54:04 np0005480824 nova_compute[260089]: 2025-10-11 03:54:04.567 2 DEBUG oslo_concurrency.lockutils [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:54:04 np0005480824 nova_compute[260089]: 2025-10-11 03:54:04.568 2 DEBUG nova.compute.manager [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 10 23:54:04 np0005480824 nova_compute[260089]: 2025-10-11 03:54:04.617 2 INFO nova.virt.libvirt.driver [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 10 23:54:04 np0005480824 nova_compute[260089]: 2025-10-11 03:54:04.619 2 DEBUG nova.compute.manager [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 10 23:54:04 np0005480824 nova_compute[260089]: 2025-10-11 03:54:04.620 2 DEBUG nova.network.neutron [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 10 23:54:04 np0005480824 nova_compute[260089]: 2025-10-11 03:54:04.641 2 DEBUG nova.compute.manager [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 10 23:54:04 np0005480824 nova_compute[260089]: 2025-10-11 03:54:04.690 2 INFO nova.virt.block_device [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] Booting with volume snapshot 40c98c75-8efe-4c60-baea-078725b213f8 at /dev/vda#033[00m
Oct 10 23:54:04 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1307: 321 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 315 active+clean; 176 MiB data, 377 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 564 KiB/s wr, 67 op/s
Oct 10 23:54:04 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:54:05 np0005480824 nova_compute[260089]: 2025-10-11 03:54:05.153 2 DEBUG nova.policy [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '38ebc503771e417aaf1f3aea0c835994', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '55d21391a321476eb133317b3402b0f0', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 10 23:54:05 np0005480824 nova_compute[260089]: 2025-10-11 03:54:05.525 2 DEBUG nova.network.neutron [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Updating instance_info_cache with network_info: [{"id": "96788aff-c48f-4de5-a500-c62a76db51e3", "address": "fa:16:3e:46:26:da", "network": {"id": "5bb06f57-fdf3-4bab-b3b4-81f9264d8f31", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1614333098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bc155be8024d49b0ab4279dfca944e7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96788aff-c4", "ovs_interfaceid": "96788aff-c48f-4de5-a500-c62a76db51e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:54:05 np0005480824 nova_compute[260089]: 2025-10-11 03:54:05.554 2 DEBUG oslo_concurrency.lockutils [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Releasing lock "refresh_cache-6fc56e59-9278-4ac2-89ed-ca93f2f17d1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:54:05 np0005480824 nova_compute[260089]: 2025-10-11 03:54:05.554 2 DEBUG nova.compute.manager [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Instance network_info: |[{"id": "96788aff-c48f-4de5-a500-c62a76db51e3", "address": "fa:16:3e:46:26:da", "network": {"id": "5bb06f57-fdf3-4bab-b3b4-81f9264d8f31", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1614333098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bc155be8024d49b0ab4279dfca944e7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96788aff-c4", "ovs_interfaceid": "96788aff-c48f-4de5-a500-c62a76db51e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 10 23:54:05 np0005480824 nova_compute[260089]: 2025-10-11 03:54:05.555 2 DEBUG oslo_concurrency.lockutils [req-e196a346-1c41-4bee-9641-2888b9f18168 req-bfa8f8f1-d90e-41b6-baec-af7714716af1 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquired lock "refresh_cache-6fc56e59-9278-4ac2-89ed-ca93f2f17d1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:54:05 np0005480824 nova_compute[260089]: 2025-10-11 03:54:05.555 2 DEBUG nova.network.neutron [req-e196a346-1c41-4bee-9641-2888b9f18168 req-bfa8f8f1-d90e-41b6-baec-af7714716af1 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Refreshing network info cache for port 96788aff-c48f-4de5-a500-c62a76db51e3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 10 23:54:05 np0005480824 nova_compute[260089]: 2025-10-11 03:54:05.558 2 DEBUG nova.virt.libvirt.driver [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Start _get_guest_xml network_info=[{"id": "96788aff-c48f-4de5-a500-c62a76db51e3", "address": "fa:16:3e:46:26:da", "network": {"id": "5bb06f57-fdf3-4bab-b3b4-81f9264d8f31", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1614333098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bc155be8024d49b0ab4279dfca944e7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96788aff-c4", "ovs_interfaceid": "96788aff-c48f-4de5-a500-c62a76db51e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T03:44:59Z,direct_url=<?>,disk_format='qcow2',id=7caca022-7dcc-40a9-8bd8-eb7d91b29390,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a9b71164a3274fcfb966194e51cb4849',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T03:45:02Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'device_type': 'disk', 'image_id': '7caca022-7dcc-40a9-8bd8-eb7d91b29390'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 10 23:54:05 np0005480824 nova_compute[260089]: 2025-10-11 03:54:05.562 2 WARNING nova.virt.libvirt.driver [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 10 23:54:05 np0005480824 nova_compute[260089]: 2025-10-11 03:54:05.569 2 DEBUG nova.virt.libvirt.host [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 10 23:54:05 np0005480824 nova_compute[260089]: 2025-10-11 03:54:05.570 2 DEBUG nova.virt.libvirt.host [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 10 23:54:05 np0005480824 nova_compute[260089]: 2025-10-11 03:54:05.575 2 DEBUG nova.virt.libvirt.host [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 10 23:54:05 np0005480824 nova_compute[260089]: 2025-10-11 03:54:05.576 2 DEBUG nova.virt.libvirt.host [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 10 23:54:05 np0005480824 nova_compute[260089]: 2025-10-11 03:54:05.576 2 DEBUG nova.virt.libvirt.driver [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 10 23:54:05 np0005480824 nova_compute[260089]: 2025-10-11 03:54:05.576 2 DEBUG nova.virt.hardware [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T03:44:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6707ecae-2ae2-4c2d-86dc-409bac38f6a5',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T03:44:59Z,direct_url=<?>,disk_format='qcow2',id=7caca022-7dcc-40a9-8bd8-eb7d91b29390,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a9b71164a3274fcfb966194e51cb4849',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T03:45:02Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 10 23:54:05 np0005480824 nova_compute[260089]: 2025-10-11 03:54:05.577 2 DEBUG nova.virt.hardware [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 10 23:54:05 np0005480824 nova_compute[260089]: 2025-10-11 03:54:05.577 2 DEBUG nova.virt.hardware [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 10 23:54:05 np0005480824 nova_compute[260089]: 2025-10-11 03:54:05.577 2 DEBUG nova.virt.hardware [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 10 23:54:05 np0005480824 nova_compute[260089]: 2025-10-11 03:54:05.577 2 DEBUG nova.virt.hardware [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 10 23:54:05 np0005480824 nova_compute[260089]: 2025-10-11 03:54:05.577 2 DEBUG nova.virt.hardware [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 10 23:54:05 np0005480824 nova_compute[260089]: 2025-10-11 03:54:05.578 2 DEBUG nova.virt.hardware [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 10 23:54:05 np0005480824 nova_compute[260089]: 2025-10-11 03:54:05.578 2 DEBUG nova.virt.hardware [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 10 23:54:05 np0005480824 nova_compute[260089]: 2025-10-11 03:54:05.578 2 DEBUG nova.virt.hardware [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 10 23:54:05 np0005480824 nova_compute[260089]: 2025-10-11 03:54:05.578 2 DEBUG nova.virt.hardware [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 10 23:54:05 np0005480824 nova_compute[260089]: 2025-10-11 03:54:05.578 2 DEBUG nova.virt.hardware [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 10 23:54:05 np0005480824 nova_compute[260089]: 2025-10-11 03:54:05.581 2 DEBUG oslo_concurrency.processutils [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:54:06 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:54:06 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/411780187' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:54:06 np0005480824 nova_compute[260089]: 2025-10-11 03:54:06.075 2 DEBUG oslo_concurrency.processutils [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:54:06 np0005480824 nova_compute[260089]: 2025-10-11 03:54:06.102 2 DEBUG nova.storage.rbd_utils [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] rbd image 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:54:06 np0005480824 nova_compute[260089]: 2025-10-11 03:54:06.106 2 DEBUG oslo_concurrency.processutils [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:54:06 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:54:06 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:54:06 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 10 23:54:06 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:54:06 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 10 23:54:06 np0005480824 nova_compute[260089]: 2025-10-11 03:54:06.297 2 DEBUG nova.network.neutron [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] Successfully created port: fbeb2d21-6b02-4a09-88b3-ceb28c9b76c7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 10 23:54:06 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:54:06 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 314f27d6-5f7d-4f92-ae66-75a1701c9b15 does not exist
Oct 10 23:54:06 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 8eb3a8c2-b315-454d-abc0-a314167c3def does not exist
Oct 10 23:54:06 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev f42ff461-b825-4efb-a532-6dae40432360 does not exist
Oct 10 23:54:06 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 10 23:54:06 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 23:54:06 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 10 23:54:06 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:54:06 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:54:06 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:54:06 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:54:06 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2646030553' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:54:06 np0005480824 nova_compute[260089]: 2025-10-11 03:54:06.537 2 DEBUG oslo_concurrency.processutils [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:54:06 np0005480824 nova_compute[260089]: 2025-10-11 03:54:06.539 2 DEBUG nova.virt.libvirt.vif [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T03:53:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-239614500',display_name='tempest-TestEncryptedCinderVolumes-server-239614500',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-239614500',id=16,image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMwxG/zesopBoPVz/9rAVJe3A5xc8Hswv45IHelamhTcP5G1hd1+D+iZm+B8qAqlvTb69iH7x/3vOfviPjx+iwLDGXWTBUSEGeDUceEgUvv2oMFHBA+QIfr5/C1y+DQYKw==',key_name='tempest-keypair-2024620623',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bc155be8024d49b0ab4279dfca944e7d',ramdisk_id='',reservation_id='r-2b97b1ft',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-1105996806',owner_user_name='tempest-TestEncryptedCinderVolumes-1105996806-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T03:54:00Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f24819cdb3ee4b1f8a4a9e811a760a2c',uuid=6fc56e59-9278-4ac2-89ed-ca93f2f17d1d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "96788aff-c48f-4de5-a500-c62a76db51e3", "address": "fa:16:3e:46:26:da", "network": {"id": "5bb06f57-fdf3-4bab-b3b4-81f9264d8f31", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1614333098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bc155be8024d49b0ab4279dfca944e7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96788aff-c4", "ovs_interfaceid": "96788aff-c48f-4de5-a500-c62a76db51e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 10 23:54:06 np0005480824 nova_compute[260089]: 2025-10-11 03:54:06.539 2 DEBUG nova.network.os_vif_util [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Converting VIF {"id": "96788aff-c48f-4de5-a500-c62a76db51e3", "address": "fa:16:3e:46:26:da", "network": {"id": "5bb06f57-fdf3-4bab-b3b4-81f9264d8f31", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1614333098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bc155be8024d49b0ab4279dfca944e7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96788aff-c4", "ovs_interfaceid": "96788aff-c48f-4de5-a500-c62a76db51e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 23:54:06 np0005480824 nova_compute[260089]: 2025-10-11 03:54:06.540 2 DEBUG nova.network.os_vif_util [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:46:26:da,bridge_name='br-int',has_traffic_filtering=True,id=96788aff-c48f-4de5-a500-c62a76db51e3,network=Network(5bb06f57-fdf3-4bab-b3b4-81f9264d8f31),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96788aff-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 23:54:06 np0005480824 nova_compute[260089]: 2025-10-11 03:54:06.541 2 DEBUG nova.objects.instance [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Lazy-loading 'pci_devices' on Instance uuid 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:54:06 np0005480824 nova_compute[260089]: 2025-10-11 03:54:06.597 2 DEBUG nova.virt.libvirt.driver [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] End _get_guest_xml xml=<domain type="kvm">
Oct 10 23:54:06 np0005480824 nova_compute[260089]:  <uuid>6fc56e59-9278-4ac2-89ed-ca93f2f17d1d</uuid>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:  <name>instance-00000010</name>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:  <memory>131072</memory>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:  <vcpu>1</vcpu>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:  <metadata>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 10 23:54:06 np0005480824 nova_compute[260089]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:      <nova:name>tempest-TestEncryptedCinderVolumes-server-239614500</nova:name>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:      <nova:creationTime>2025-10-11 03:54:05</nova:creationTime>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:      <nova:flavor name="m1.nano">
Oct 10 23:54:06 np0005480824 nova_compute[260089]:        <nova:memory>128</nova:memory>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:        <nova:disk>1</nova:disk>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:        <nova:swap>0</nova:swap>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:        <nova:ephemeral>0</nova:ephemeral>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:        <nova:vcpus>1</nova:vcpus>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:      </nova:flavor>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:      <nova:owner>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:        <nova:user uuid="f24819cdb3ee4b1f8a4a9e811a760a2c">tempest-TestEncryptedCinderVolumes-1105996806-project-member</nova:user>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:        <nova:project uuid="bc155be8024d49b0ab4279dfca944e7d">tempest-TestEncryptedCinderVolumes-1105996806</nova:project>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:      </nova:owner>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:      <nova:root type="image" uuid="7caca022-7dcc-40a9-8bd8-eb7d91b29390"/>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:      <nova:ports>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:        <nova:port uuid="96788aff-c48f-4de5-a500-c62a76db51e3">
Oct 10 23:54:06 np0005480824 nova_compute[260089]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:        </nova:port>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:      </nova:ports>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:    </nova:instance>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:  </metadata>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:  <sysinfo type="smbios">
Oct 10 23:54:06 np0005480824 nova_compute[260089]:    <system>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:      <entry name="manufacturer">RDO</entry>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:      <entry name="product">OpenStack Compute</entry>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:      <entry name="serial">6fc56e59-9278-4ac2-89ed-ca93f2f17d1d</entry>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:      <entry name="uuid">6fc56e59-9278-4ac2-89ed-ca93f2f17d1d</entry>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:      <entry name="family">Virtual Machine</entry>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:    </system>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:  </sysinfo>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:  <os>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:    <boot dev="hd"/>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:    <smbios mode="sysinfo"/>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:  </os>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:  <features>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:    <acpi/>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:    <apic/>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:    <vmcoreinfo/>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:  </features>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:  <clock offset="utc">
Oct 10 23:54:06 np0005480824 nova_compute[260089]:    <timer name="pit" tickpolicy="delay"/>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:    <timer name="hpet" present="no"/>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:  </clock>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:  <cpu mode="host-model" match="exact">
Oct 10 23:54:06 np0005480824 nova_compute[260089]:    <topology sockets="1" cores="1" threads="1"/>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:  </cpu>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:  <devices>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:    <disk type="network" device="disk">
Oct 10 23:54:06 np0005480824 nova_compute[260089]:      <driver type="raw" cache="none"/>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:      <source protocol="rbd" name="vms/6fc56e59-9278-4ac2-89ed-ca93f2f17d1d_disk">
Oct 10 23:54:06 np0005480824 nova_compute[260089]:        <host name="192.168.122.100" port="6789"/>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:      </source>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:      <auth username="openstack">
Oct 10 23:54:06 np0005480824 nova_compute[260089]:        <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:      </auth>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:      <target dev="vda" bus="virtio"/>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:    </disk>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:    <disk type="network" device="cdrom">
Oct 10 23:54:06 np0005480824 nova_compute[260089]:      <driver type="raw" cache="none"/>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:      <source protocol="rbd" name="vms/6fc56e59-9278-4ac2-89ed-ca93f2f17d1d_disk.config">
Oct 10 23:54:06 np0005480824 nova_compute[260089]:        <host name="192.168.122.100" port="6789"/>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:      </source>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:      <auth username="openstack">
Oct 10 23:54:06 np0005480824 nova_compute[260089]:        <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:      </auth>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:      <target dev="sda" bus="sata"/>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:    </disk>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:    <interface type="ethernet">
Oct 10 23:54:06 np0005480824 nova_compute[260089]:      <mac address="fa:16:3e:46:26:da"/>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:      <model type="virtio"/>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:      <driver name="vhost" rx_queue_size="512"/>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:      <mtu size="1442"/>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:      <target dev="tap96788aff-c4"/>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:    </interface>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:    <serial type="pty">
Oct 10 23:54:06 np0005480824 nova_compute[260089]:      <log file="/var/lib/nova/instances/6fc56e59-9278-4ac2-89ed-ca93f2f17d1d/console.log" append="off"/>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:    </serial>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:    <video>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:      <model type="virtio"/>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:    </video>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:    <input type="tablet" bus="usb"/>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:    <rng model="virtio">
Oct 10 23:54:06 np0005480824 nova_compute[260089]:      <backend model="random">/dev/urandom</backend>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:    </rng>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root"/>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:    <controller type="usb" index="0"/>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:    <memballoon model="virtio">
Oct 10 23:54:06 np0005480824 nova_compute[260089]:      <stats period="10"/>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:    </memballoon>
Oct 10 23:54:06 np0005480824 nova_compute[260089]:  </devices>
Oct 10 23:54:06 np0005480824 nova_compute[260089]: </domain>
Oct 10 23:54:06 np0005480824 nova_compute[260089]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 10 23:54:06 np0005480824 nova_compute[260089]: 2025-10-11 03:54:06.598 2 DEBUG nova.compute.manager [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Preparing to wait for external event network-vif-plugged-96788aff-c48f-4de5-a500-c62a76db51e3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 10 23:54:06 np0005480824 nova_compute[260089]: 2025-10-11 03:54:06.598 2 DEBUG oslo_concurrency.lockutils [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Acquiring lock "6fc56e59-9278-4ac2-89ed-ca93f2f17d1d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:54:06 np0005480824 nova_compute[260089]: 2025-10-11 03:54:06.598 2 DEBUG oslo_concurrency.lockutils [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Lock "6fc56e59-9278-4ac2-89ed-ca93f2f17d1d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:54:06 np0005480824 nova_compute[260089]: 2025-10-11 03:54:06.598 2 DEBUG oslo_concurrency.lockutils [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Lock "6fc56e59-9278-4ac2-89ed-ca93f2f17d1d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:54:06 np0005480824 nova_compute[260089]: 2025-10-11 03:54:06.604 2 DEBUG nova.virt.libvirt.vif [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T03:53:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-239614500',display_name='tempest-TestEncryptedCinderVolumes-server-239614500',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-239614500',id=16,image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMwxG/zesopBoPVz/9rAVJe3A5xc8Hswv45IHelamhTcP5G1hd1+D+iZm+B8qAqlvTb69iH7x/3vOfviPjx+iwLDGXWTBUSEGeDUceEgUvv2oMFHBA+QIfr5/C1y+DQYKw==',key_name='tempest-keypair-2024620623',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bc155be8024d49b0ab4279dfca944e7d',ramdisk_id='',reservation_id='r-2b97b1ft',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-1105996806',owner_user_name='tempest-TestEncryptedCinderVolumes-1105996806-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T03:54:00Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f24819cdb3ee4b1f8a4a9e811a760a2c',uuid=6fc56e59-9278-4ac2-89ed-ca93f2f17d1d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "96788aff-c48f-4de5-a500-c62a76db51e3", "address": "fa:16:3e:46:26:da", "network": {"id": "5bb06f57-fdf3-4bab-b3b4-81f9264d8f31", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1614333098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bc155be8024d49b0ab4279dfca944e7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96788aff-c4", "ovs_interfaceid": "96788aff-c48f-4de5-a500-c62a76db51e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 10 23:54:06 np0005480824 nova_compute[260089]: 2025-10-11 03:54:06.605 2 DEBUG nova.network.os_vif_util [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Converting VIF {"id": "96788aff-c48f-4de5-a500-c62a76db51e3", "address": "fa:16:3e:46:26:da", "network": {"id": "5bb06f57-fdf3-4bab-b3b4-81f9264d8f31", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1614333098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bc155be8024d49b0ab4279dfca944e7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96788aff-c4", "ovs_interfaceid": "96788aff-c48f-4de5-a500-c62a76db51e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 23:54:06 np0005480824 nova_compute[260089]: 2025-10-11 03:54:06.605 2 DEBUG nova.network.os_vif_util [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:46:26:da,bridge_name='br-int',has_traffic_filtering=True,id=96788aff-c48f-4de5-a500-c62a76db51e3,network=Network(5bb06f57-fdf3-4bab-b3b4-81f9264d8f31),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96788aff-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 23:54:06 np0005480824 nova_compute[260089]: 2025-10-11 03:54:06.606 2 DEBUG os_vif [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:26:da,bridge_name='br-int',has_traffic_filtering=True,id=96788aff-c48f-4de5-a500-c62a76db51e3,network=Network(5bb06f57-fdf3-4bab-b3b4-81f9264d8f31),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96788aff-c4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 10 23:54:06 np0005480824 nova_compute[260089]: 2025-10-11 03:54:06.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:54:06 np0005480824 nova_compute[260089]: 2025-10-11 03:54:06.607 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:54:06 np0005480824 nova_compute[260089]: 2025-10-11 03:54:06.608 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 10 23:54:06 np0005480824 nova_compute[260089]: 2025-10-11 03:54:06.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:54:06 np0005480824 nova_compute[260089]: 2025-10-11 03:54:06.613 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap96788aff-c4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:54:06 np0005480824 nova_compute[260089]: 2025-10-11 03:54:06.613 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap96788aff-c4, col_values=(('external_ids', {'iface-id': '96788aff-c48f-4de5-a500-c62a76db51e3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:46:26:da', 'vm-uuid': '6fc56e59-9278-4ac2-89ed-ca93f2f17d1d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:54:06 np0005480824 nova_compute[260089]: 2025-10-11 03:54:06.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:54:06 np0005480824 NetworkManager[44969]: <info>  [1760154846.6160] manager: (tap96788aff-c4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/86)
Oct 10 23:54:06 np0005480824 nova_compute[260089]: 2025-10-11 03:54:06.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 10 23:54:06 np0005480824 nova_compute[260089]: 2025-10-11 03:54:06.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:54:06 np0005480824 nova_compute[260089]: 2025-10-11 03:54:06.623 2 INFO os_vif [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:26:da,bridge_name='br-int',has_traffic_filtering=True,id=96788aff-c48f-4de5-a500-c62a76db51e3,network=Network(5bb06f57-fdf3-4bab-b3b4-81f9264d8f31),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96788aff-c4')#033[00m
Oct 10 23:54:06 np0005480824 nova_compute[260089]: 2025-10-11 03:54:06.698 2 DEBUG nova.virt.libvirt.driver [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 23:54:06 np0005480824 nova_compute[260089]: 2025-10-11 03:54:06.698 2 DEBUG nova.virt.libvirt.driver [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 23:54:06 np0005480824 nova_compute[260089]: 2025-10-11 03:54:06.698 2 DEBUG nova.virt.libvirt.driver [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] No VIF found with MAC fa:16:3e:46:26:da, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 10 23:54:06 np0005480824 nova_compute[260089]: 2025-10-11 03:54:06.699 2 INFO nova.virt.libvirt.driver [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Using config drive#033[00m
Oct 10 23:54:06 np0005480824 nova_compute[260089]: 2025-10-11 03:54:06.718 2 DEBUG nova.storage.rbd_utils [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] rbd image 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:54:06 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1308: 321 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 315 active+clean; 196 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 1.5 MiB/s wr, 80 op/s
Oct 10 23:54:06 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:54:06 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:54:06 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:54:06 np0005480824 podman[286569]: 2025-10-11 03:54:06.883932644 +0000 UTC m=+0.019188382 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:54:06 np0005480824 podman[286569]: 2025-10-11 03:54:06.998085263 +0000 UTC m=+0.133341021 container create fd6d0ef66a6311815e91e0d563359f6e0aeace6146464f7fa5793cc5bfee85f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_buck, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 10 23:54:07 np0005480824 systemd[1]: Started libpod-conmon-fd6d0ef66a6311815e91e0d563359f6e0aeace6146464f7fa5793cc5bfee85f1.scope.
Oct 10 23:54:07 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:54:07 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:54:07 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/832447104' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:54:07 np0005480824 nova_compute[260089]: 2025-10-11 03:54:07.191 2 INFO nova.virt.libvirt.driver [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Creating config drive at /var/lib/nova/instances/6fc56e59-9278-4ac2-89ed-ca93f2f17d1d/disk.config#033[00m
Oct 10 23:54:07 np0005480824 nova_compute[260089]: 2025-10-11 03:54:07.200 2 DEBUG oslo_concurrency.processutils [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6fc56e59-9278-4ac2-89ed-ca93f2f17d1d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8ww7f2w7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:54:07 np0005480824 podman[286569]: 2025-10-11 03:54:07.211432222 +0000 UTC m=+0.346687990 container init fd6d0ef66a6311815e91e0d563359f6e0aeace6146464f7fa5793cc5bfee85f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 10 23:54:07 np0005480824 podman[286569]: 2025-10-11 03:54:07.221353426 +0000 UTC m=+0.356609154 container start fd6d0ef66a6311815e91e0d563359f6e0aeace6146464f7fa5793cc5bfee85f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_buck, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:54:07 np0005480824 funny_buck[286586]: 167 167
Oct 10 23:54:07 np0005480824 systemd[1]: libpod-fd6d0ef66a6311815e91e0d563359f6e0aeace6146464f7fa5793cc5bfee85f1.scope: Deactivated successfully.
Oct 10 23:54:07 np0005480824 conmon[286586]: conmon fd6d0ef66a6311815e91 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fd6d0ef66a6311815e91e0d563359f6e0aeace6146464f7fa5793cc5bfee85f1.scope/container/memory.events
Oct 10 23:54:07 np0005480824 nova_compute[260089]: 2025-10-11 03:54:07.230 2 DEBUG nova.network.neutron [req-e196a346-1c41-4bee-9641-2888b9f18168 req-bfa8f8f1-d90e-41b6-baec-af7714716af1 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Updated VIF entry in instance network info cache for port 96788aff-c48f-4de5-a500-c62a76db51e3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 10 23:54:07 np0005480824 nova_compute[260089]: 2025-10-11 03:54:07.231 2 DEBUG nova.network.neutron [req-e196a346-1c41-4bee-9641-2888b9f18168 req-bfa8f8f1-d90e-41b6-baec-af7714716af1 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Updating instance_info_cache with network_info: [{"id": "96788aff-c48f-4de5-a500-c62a76db51e3", "address": "fa:16:3e:46:26:da", "network": {"id": "5bb06f57-fdf3-4bab-b3b4-81f9264d8f31", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1614333098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bc155be8024d49b0ab4279dfca944e7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96788aff-c4", "ovs_interfaceid": "96788aff-c48f-4de5-a500-c62a76db51e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:54:07 np0005480824 nova_compute[260089]: 2025-10-11 03:54:07.247 2 DEBUG oslo_concurrency.lockutils [req-e196a346-1c41-4bee-9641-2888b9f18168 req-bfa8f8f1-d90e-41b6-baec-af7714716af1 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Releasing lock "refresh_cache-6fc56e59-9278-4ac2-89ed-ca93f2f17d1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:54:07 np0005480824 podman[286569]: 2025-10-11 03:54:07.262062181 +0000 UTC m=+0.397317929 container attach fd6d0ef66a6311815e91e0d563359f6e0aeace6146464f7fa5793cc5bfee85f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_buck, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 10 23:54:07 np0005480824 podman[286569]: 2025-10-11 03:54:07.263209308 +0000 UTC m=+0.398465036 container died fd6d0ef66a6311815e91e0d563359f6e0aeace6146464f7fa5793cc5bfee85f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_buck, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 10 23:54:07 np0005480824 nova_compute[260089]: 2025-10-11 03:54:07.269 2 DEBUG nova.network.neutron [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] Successfully updated port: fbeb2d21-6b02-4a09-88b3-ceb28c9b76c7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 10 23:54:07 np0005480824 nova_compute[260089]: 2025-10-11 03:54:07.287 2 DEBUG oslo_concurrency.lockutils [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Acquiring lock "refresh_cache-17e293fc-58db-41da-a59c-d4a11dcbe09e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:54:07 np0005480824 nova_compute[260089]: 2025-10-11 03:54:07.288 2 DEBUG oslo_concurrency.lockutils [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Acquired lock "refresh_cache-17e293fc-58db-41da-a59c-d4a11dcbe09e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:54:07 np0005480824 nova_compute[260089]: 2025-10-11 03:54:07.288 2 DEBUG nova.network.neutron [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 10 23:54:07 np0005480824 nova_compute[260089]: 2025-10-11 03:54:07.334 2 DEBUG nova.compute.manager [req-4b098973-e5be-476d-8756-3a75c97825c8 req-278d0775-50c5-49bd-9ba0-17f489cd9a38 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] Received event network-changed-fbeb2d21-6b02-4a09-88b3-ceb28c9b76c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:54:07 np0005480824 nova_compute[260089]: 2025-10-11 03:54:07.334 2 DEBUG nova.compute.manager [req-4b098973-e5be-476d-8756-3a75c97825c8 req-278d0775-50c5-49bd-9ba0-17f489cd9a38 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] Refreshing instance network info cache due to event network-changed-fbeb2d21-6b02-4a09-88b3-ceb28c9b76c7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 10 23:54:07 np0005480824 nova_compute[260089]: 2025-10-11 03:54:07.334 2 DEBUG oslo_concurrency.lockutils [req-4b098973-e5be-476d-8756-3a75c97825c8 req-278d0775-50c5-49bd-9ba0-17f489cd9a38 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "refresh_cache-17e293fc-58db-41da-a59c-d4a11dcbe09e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:54:07 np0005480824 nova_compute[260089]: 2025-10-11 03:54:07.338 2 DEBUG oslo_concurrency.processutils [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6fc56e59-9278-4ac2-89ed-ca93f2f17d1d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8ww7f2w7" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:54:07 np0005480824 nova_compute[260089]: 2025-10-11 03:54:07.394 2 DEBUG nova.storage.rbd_utils [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] rbd image 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:54:07 np0005480824 nova_compute[260089]: 2025-10-11 03:54:07.403 2 DEBUG oslo_concurrency.processutils [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6fc56e59-9278-4ac2-89ed-ca93f2f17d1d/disk.config 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:54:07 np0005480824 systemd[1]: var-lib-containers-storage-overlay-24834e33132ab88ae29dd92cfeceefb2090d365bf5fa93d2d5951606ad5cebcb-merged.mount: Deactivated successfully.
Oct 10 23:54:07 np0005480824 nova_compute[260089]: 2025-10-11 03:54:07.440 2 DEBUG nova.network.neutron [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 10 23:54:07 np0005480824 podman[286569]: 2025-10-11 03:54:07.60444924 +0000 UTC m=+0.739704968 container remove fd6d0ef66a6311815e91e0d563359f6e0aeace6146464f7fa5793cc5bfee85f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_buck, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 10 23:54:07 np0005480824 systemd[1]: libpod-conmon-fd6d0ef66a6311815e91e0d563359f6e0aeace6146464f7fa5793cc5bfee85f1.scope: Deactivated successfully.
Oct 10 23:54:07 np0005480824 podman[286650]: 2025-10-11 03:54:07.825434318 +0000 UTC m=+0.075525664 container create aecbb9c66575b0b35d59d63cc4a35dae2bf5c2ba16242724b22b8c0d5a57704e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_kilby, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:54:07 np0005480824 podman[286650]: 2025-10-11 03:54:07.777297138 +0000 UTC m=+0.027388514 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:54:07 np0005480824 systemd[1]: Started libpod-conmon-aecbb9c66575b0b35d59d63cc4a35dae2bf5c2ba16242724b22b8c0d5a57704e.scope.
Oct 10 23:54:07 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:54:07 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/989b7fe8e8e885b3bf7806c7d4a5d8231d0a166b98c0d055be0e253a56005658/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:54:07 np0005480824 nova_compute[260089]: 2025-10-11 03:54:07.977 2 DEBUG oslo_concurrency.processutils [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6fc56e59-9278-4ac2-89ed-ca93f2f17d1d/disk.config 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:54:07 np0005480824 nova_compute[260089]: 2025-10-11 03:54:07.979 2 INFO nova.virt.libvirt.driver [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Deleting local config drive /var/lib/nova/instances/6fc56e59-9278-4ac2-89ed-ca93f2f17d1d/disk.config because it was imported into RBD.#033[00m
Oct 10 23:54:07 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/989b7fe8e8e885b3bf7806c7d4a5d8231d0a166b98c0d055be0e253a56005658/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:54:07 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/989b7fe8e8e885b3bf7806c7d4a5d8231d0a166b98c0d055be0e253a56005658/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:54:07 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/989b7fe8e8e885b3bf7806c7d4a5d8231d0a166b98c0d055be0e253a56005658/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:54:07 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/989b7fe8e8e885b3bf7806c7d4a5d8231d0a166b98c0d055be0e253a56005658/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 23:54:08 np0005480824 podman[286650]: 2025-10-11 03:54:08.019055074 +0000 UTC m=+0.269146430 container init aecbb9c66575b0b35d59d63cc4a35dae2bf5c2ba16242724b22b8c0d5a57704e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 10 23:54:08 np0005480824 podman[286650]: 2025-10-11 03:54:08.027239296 +0000 UTC m=+0.277330632 container start aecbb9c66575b0b35d59d63cc4a35dae2bf5c2ba16242724b22b8c0d5a57704e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 10 23:54:08 np0005480824 NetworkManager[44969]: <info>  [1760154848.0357] manager: (tap96788aff-c4): new Tun device (/org/freedesktop/NetworkManager/Devices/87)
Oct 10 23:54:08 np0005480824 kernel: tap96788aff-c4: entered promiscuous mode
Oct 10 23:54:08 np0005480824 nova_compute[260089]: 2025-10-11 03:54:08.042 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:54:08 np0005480824 ovn_controller[152667]: 2025-10-11T03:54:08Z|00146|binding|INFO|Claiming lport 96788aff-c48f-4de5-a500-c62a76db51e3 for this chassis.
Oct 10 23:54:08 np0005480824 ovn_controller[152667]: 2025-10-11T03:54:08Z|00147|binding|INFO|96788aff-c48f-4de5-a500-c62a76db51e3: Claiming fa:16:3e:46:26:da 10.100.0.7
Oct 10 23:54:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:08.049 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:46:26:da 10.100.0.7'], port_security=['fa:16:3e:46:26:da 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '6fc56e59-9278-4ac2-89ed-ca93f2f17d1d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5bb06f57-fdf3-4bab-b3b4-81f9264d8f31', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bc155be8024d49b0ab4279dfca944e7d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b1bdb3f3-fea2-4df6-9718-0ba3c20debac', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=72ccff5f-b852-4556-9ac1-543256a57a7a, chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], logical_port=96788aff-c48f-4de5-a500-c62a76db51e3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 23:54:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:08.051 162245 INFO neutron.agent.ovn.metadata.agent [-] Port 96788aff-c48f-4de5-a500-c62a76db51e3 in datapath 5bb06f57-fdf3-4bab-b3b4-81f9264d8f31 bound to our chassis#033[00m
Oct 10 23:54:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:08.053 162245 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5bb06f57-fdf3-4bab-b3b4-81f9264d8f31#033[00m
Oct 10 23:54:08 np0005480824 podman[286650]: 2025-10-11 03:54:08.065096425 +0000 UTC m=+0.315187821 container attach aecbb9c66575b0b35d59d63cc4a35dae2bf5c2ba16242724b22b8c0d5a57704e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 10 23:54:08 np0005480824 ovn_controller[152667]: 2025-10-11T03:54:08Z|00148|binding|INFO|Setting lport 96788aff-c48f-4de5-a500-c62a76db51e3 ovn-installed in OVS
Oct 10 23:54:08 np0005480824 ovn_controller[152667]: 2025-10-11T03:54:08Z|00149|binding|INFO|Setting lport 96788aff-c48f-4de5-a500-c62a76db51e3 up in Southbound
Oct 10 23:54:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:08.068 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[192f809d-b938-4c1c-a94a-b9a177e5881e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:54:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:08.069 162245 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5bb06f57-f1 in ovnmeta-5bb06f57-fdf3-4bab-b3b4-81f9264d8f31 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 10 23:54:08 np0005480824 nova_compute[260089]: 2025-10-11 03:54:08.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:54:08 np0005480824 nova_compute[260089]: 2025-10-11 03:54:08.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:54:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:08.073 267859 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5bb06f57-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 10 23:54:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:08.073 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[2301a23a-ee2d-4f57-a76f-4292f4910cb3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:54:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:08.075 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[5e8175f9-1dfa-43c3-bd31-812ba9a6e829]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:54:08 np0005480824 systemd-machined[215071]: New machine qemu-16-instance-00000010.
Oct 10 23:54:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:08.088 162666 DEBUG oslo.privsep.daemon [-] privsep: reply[b8321cc6-33bc-481f-863a-53ae49f10a82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:54:08 np0005480824 systemd[1]: Started Virtual Machine qemu-16-instance-00000010.
Oct 10 23:54:08 np0005480824 systemd-udevd[286688]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 23:54:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:08.112 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[53eb6179-a626-4b2e-9654-a61832e86677]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:54:08 np0005480824 NetworkManager[44969]: <info>  [1760154848.1245] device (tap96788aff-c4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 10 23:54:08 np0005480824 NetworkManager[44969]: <info>  [1760154848.1259] device (tap96788aff-c4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 10 23:54:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:08.148 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[76fd1d52-8e5d-4ed9-ba68-72b7c6b6c233]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:54:08 np0005480824 systemd-udevd[286691]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 23:54:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:08.167 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[90139020-db01-4e12-bd0c-70ac6d9380bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:54:08 np0005480824 NetworkManager[44969]: <info>  [1760154848.1723] manager: (tap5bb06f57-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/88)
Oct 10 23:54:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:08.203 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[9346ff98-8e22-4ad8-882b-87def01917fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:54:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:08.208 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[8a1b464a-2c5a-46f5-823a-5eac3b6d4bb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:54:08 np0005480824 NetworkManager[44969]: <info>  [1760154848.2346] device (tap5bb06f57-f0): carrier: link connected
Oct 10 23:54:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:08.243 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[c6444498-cb6e-4148-b89b-13d66007c7fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:54:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:08.263 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[35efa1ad-83e1-46c7-bfa0-343fa06b956b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5bb06f57-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7f:2f:44'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 55], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 431589, 'reachable_time': 41789, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286718, 'error': None, 'target': 'ovnmeta-5bb06f57-fdf3-4bab-b3b4-81f9264d8f31', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:54:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:08.279 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[c025505e-d9d4-4c01-ae4e-a59193f49704]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7f:2f44'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 431589, 'tstamp': 431589}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286719, 'error': None, 'target': 'ovnmeta-5bb06f57-fdf3-4bab-b3b4-81f9264d8f31', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:54:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:08.298 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[dd5a919d-a590-4e50-9ce5-371e678a7847]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5bb06f57-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7f:2f:44'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 55], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 431589, 'reachable_time': 41789, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 286720, 'error': None, 'target': 'ovnmeta-5bb06f57-fdf3-4bab-b3b4-81f9264d8f31', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:54:08 np0005480824 nova_compute[260089]: 2025-10-11 03:54:08.329 2 DEBUG nova.network.neutron [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] Updating instance_info_cache with network_info: [{"id": "fbeb2d21-6b02-4a09-88b3-ceb28c9b76c7", "address": "fa:16:3e:b7:cc:f7", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbeb2d21-6b", "ovs_interfaceid": "fbeb2d21-6b02-4a09-88b3-ceb28c9b76c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:54:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:08.331 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[e4d38bd5-77c0-485a-b1fd-893b7c9a7b34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:54:08 np0005480824 nova_compute[260089]: 2025-10-11 03:54:08.345 2 DEBUG oslo_concurrency.lockutils [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Releasing lock "refresh_cache-17e293fc-58db-41da-a59c-d4a11dcbe09e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:54:08 np0005480824 nova_compute[260089]: 2025-10-11 03:54:08.345 2 DEBUG nova.compute.manager [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] Instance network_info: |[{"id": "fbeb2d21-6b02-4a09-88b3-ceb28c9b76c7", "address": "fa:16:3e:b7:cc:f7", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbeb2d21-6b", "ovs_interfaceid": "fbeb2d21-6b02-4a09-88b3-ceb28c9b76c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 10 23:54:08 np0005480824 nova_compute[260089]: 2025-10-11 03:54:08.345 2 DEBUG oslo_concurrency.lockutils [req-4b098973-e5be-476d-8756-3a75c97825c8 req-278d0775-50c5-49bd-9ba0-17f489cd9a38 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquired lock "refresh_cache-17e293fc-58db-41da-a59c-d4a11dcbe09e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:54:08 np0005480824 nova_compute[260089]: 2025-10-11 03:54:08.346 2 DEBUG nova.network.neutron [req-4b098973-e5be-476d-8756-3a75c97825c8 req-278d0775-50c5-49bd-9ba0-17f489cd9a38 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] Refreshing network info cache for port fbeb2d21-6b02-4a09-88b3-ceb28c9b76c7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 10 23:54:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:08.391 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[f2de7137-b1f6-4f20-a973-6a8cc4bb2dc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:54:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:08.393 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5bb06f57-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:54:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:08.393 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 10 23:54:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:08.394 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5bb06f57-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:54:08 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e278 do_prune osdmap full prune enabled
Oct 10 23:54:08 np0005480824 NetworkManager[44969]: <info>  [1760154848.4309] manager: (tap5bb06f57-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/89)
Oct 10 23:54:08 np0005480824 kernel: tap5bb06f57-f0: entered promiscuous mode
Oct 10 23:54:08 np0005480824 nova_compute[260089]: 2025-10-11 03:54:08.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:54:08 np0005480824 nova_compute[260089]: 2025-10-11 03:54:08.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:54:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:08.434 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5bb06f57-f0, col_values=(('external_ids', {'iface-id': 'ddcf40ae-e805-480e-86dd-b5cb03efd086'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:54:08 np0005480824 nova_compute[260089]: 2025-10-11 03:54:08.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:54:08 np0005480824 ovn_controller[152667]: 2025-10-11T03:54:08Z|00150|binding|INFO|Releasing lport ddcf40ae-e805-480e-86dd-b5cb03efd086 from this chassis (sb_readonly=0)
Oct 10 23:54:08 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e279 e279: 3 total, 3 up, 3 in
Oct 10 23:54:08 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e279: 3 total, 3 up, 3 in
Oct 10 23:54:08 np0005480824 nova_compute[260089]: 2025-10-11 03:54:08.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:54:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:08.454 162245 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5bb06f57-fdf3-4bab-b3b4-81f9264d8f31.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5bb06f57-fdf3-4bab-b3b4-81f9264d8f31.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 10 23:54:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:08.455 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[3c7f31bf-55d5-4a41-b02a-1982b49e518e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:54:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:08.458 162245 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 10 23:54:08 np0005480824 ovn_metadata_agent[162240]: global
Oct 10 23:54:08 np0005480824 ovn_metadata_agent[162240]:    log         /dev/log local0 debug
Oct 10 23:54:08 np0005480824 ovn_metadata_agent[162240]:    log-tag     haproxy-metadata-proxy-5bb06f57-fdf3-4bab-b3b4-81f9264d8f31
Oct 10 23:54:08 np0005480824 ovn_metadata_agent[162240]:    user        root
Oct 10 23:54:08 np0005480824 ovn_metadata_agent[162240]:    group       root
Oct 10 23:54:08 np0005480824 ovn_metadata_agent[162240]:    maxconn     1024
Oct 10 23:54:08 np0005480824 ovn_metadata_agent[162240]:    pidfile     /var/lib/neutron/external/pids/5bb06f57-fdf3-4bab-b3b4-81f9264d8f31.pid.haproxy
Oct 10 23:54:08 np0005480824 ovn_metadata_agent[162240]:    daemon
Oct 10 23:54:08 np0005480824 ovn_metadata_agent[162240]: 
Oct 10 23:54:08 np0005480824 ovn_metadata_agent[162240]: defaults
Oct 10 23:54:08 np0005480824 ovn_metadata_agent[162240]:    log global
Oct 10 23:54:08 np0005480824 ovn_metadata_agent[162240]:    mode http
Oct 10 23:54:08 np0005480824 ovn_metadata_agent[162240]:    option httplog
Oct 10 23:54:08 np0005480824 ovn_metadata_agent[162240]:    option dontlognull
Oct 10 23:54:08 np0005480824 ovn_metadata_agent[162240]:    option http-server-close
Oct 10 23:54:08 np0005480824 ovn_metadata_agent[162240]:    option forwardfor
Oct 10 23:54:08 np0005480824 ovn_metadata_agent[162240]:    retries                 3
Oct 10 23:54:08 np0005480824 ovn_metadata_agent[162240]:    timeout http-request    30s
Oct 10 23:54:08 np0005480824 ovn_metadata_agent[162240]:    timeout connect         30s
Oct 10 23:54:08 np0005480824 ovn_metadata_agent[162240]:    timeout client          32s
Oct 10 23:54:08 np0005480824 ovn_metadata_agent[162240]:    timeout server          32s
Oct 10 23:54:08 np0005480824 ovn_metadata_agent[162240]:    timeout http-keep-alive 30s
Oct 10 23:54:08 np0005480824 ovn_metadata_agent[162240]: 
Oct 10 23:54:08 np0005480824 ovn_metadata_agent[162240]: 
Oct 10 23:54:08 np0005480824 ovn_metadata_agent[162240]: listen listener
Oct 10 23:54:08 np0005480824 ovn_metadata_agent[162240]:    bind 169.254.169.254:80
Oct 10 23:54:08 np0005480824 ovn_metadata_agent[162240]:    server metadata /var/lib/neutron/metadata_proxy
Oct 10 23:54:08 np0005480824 ovn_metadata_agent[162240]:    http-request add-header X-OVN-Network-ID 5bb06f57-fdf3-4bab-b3b4-81f9264d8f31
Oct 10 23:54:08 np0005480824 ovn_metadata_agent[162240]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 10 23:54:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:08.460 162245 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5bb06f57-fdf3-4bab-b3b4-81f9264d8f31', 'env', 'PROCESS_TAG=haproxy-5bb06f57-fdf3-4bab-b3b4-81f9264d8f31', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5bb06f57-fdf3-4bab-b3b4-81f9264d8f31.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 10 23:54:08 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1310: 321 pgs: 321 active+clean; 214 MiB data, 386 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 2.7 MiB/s wr, 120 op/s
Oct 10 23:54:08 np0005480824 nova_compute[260089]: 2025-10-11 03:54:08.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:54:08 np0005480824 nova_compute[260089]: 2025-10-11 03:54:08.887 2 DEBUG os_brick.utils [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct 10 23:54:08 np0005480824 nova_compute[260089]: 2025-10-11 03:54:08.888 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:54:08 np0005480824 nova_compute[260089]: 2025-10-11 03:54:08.901 676 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:54:08 np0005480824 nova_compute[260089]: 2025-10-11 03:54:08.901 676 DEBUG oslo.privsep.daemon [-] privsep: reply[684c40b8-376a-418c-a4f6-17bc6b01546a]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:54:08 np0005480824 nova_compute[260089]: 2025-10-11 03:54:08.903 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:54:08 np0005480824 nova_compute[260089]: 2025-10-11 03:54:08.910 676 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:54:08 np0005480824 nova_compute[260089]: 2025-10-11 03:54:08.911 676 DEBUG oslo.privsep.daemon [-] privsep: reply[52d0355f-7459-4a4e-ba9a-bfc19d8ccf93]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d5d671ddab5a', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:54:08 np0005480824 podman[286797]: 2025-10-11 03:54:08.816672251 +0000 UTC m=+0.024377484 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 10 23:54:08 np0005480824 nova_compute[260089]: 2025-10-11 03:54:08.912 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:54:08 np0005480824 nova_compute[260089]: 2025-10-11 03:54:08.921 676 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:54:08 np0005480824 nova_compute[260089]: 2025-10-11 03:54:08.921 676 DEBUG oslo.privsep.daemon [-] privsep: reply[dd41084b-1bcf-4b64-afc0-1bb84e5a0bf2]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:54:08 np0005480824 nova_compute[260089]: 2025-10-11 03:54:08.922 676 DEBUG oslo.privsep.daemon [-] privsep: reply[5d2495cf-e22d-4d47-a4f0-e8f0f2e814bf]: (4, 'fb3a2fb1-9efa-43f0-a057-bf422ac6b8d7') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:54:08 np0005480824 nova_compute[260089]: 2025-10-11 03:54:08.923 2 DEBUG oslo_concurrency.processutils [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:54:08 np0005480824 nova_compute[260089]: 2025-10-11 03:54:08.951 2 DEBUG oslo_concurrency.processutils [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] CMD "nvme version" returned: 0 in 0.028s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:54:08 np0005480824 nova_compute[260089]: 2025-10-11 03:54:08.954 2 DEBUG os_brick.initiator.connectors.lightos [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct 10 23:54:08 np0005480824 nova_compute[260089]: 2025-10-11 03:54:08.955 2 DEBUG os_brick.initiator.connectors.lightos [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct 10 23:54:08 np0005480824 nova_compute[260089]: 2025-10-11 03:54:08.955 2 DEBUG os_brick.initiator.connectors.lightos [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct 10 23:54:08 np0005480824 nova_compute[260089]: 2025-10-11 03:54:08.955 2 DEBUG os_brick.utils [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] <== get_connector_properties: return (67ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d5d671ddab5a', 'do_local_attach': False, 'nvme_hostid': '83042a20-0f72-4c47-8453-e72ead378624', 'system uuid': 'fb3a2fb1-9efa-43f0-a057-bf422ac6b8d7', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct 10 23:54:08 np0005480824 nova_compute[260089]: 2025-10-11 03:54:08.955 2 DEBUG nova.virt.block_device [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] Updating existing volume attachment record: ac2dac37-c8e2-4bd4-8b43-26f23c6c3506 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct 10 23:54:08 np0005480824 podman[286797]: 2025-10-11 03:54:08.957400674 +0000 UTC m=+0.165105917 container create 0b14503ea8c99d19f59456ae720e14a91f5502258bc84033fe3065733d8d0290 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5bb06f57-fdf3-4bab-b3b4-81f9264d8f31, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 10 23:54:09 np0005480824 nova_compute[260089]: 2025-10-11 03:54:09.065 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760154849.0639384, 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:54:09 np0005480824 nova_compute[260089]: 2025-10-11 03:54:09.066 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] VM Started (Lifecycle Event)#033[00m
Oct 10 23:54:09 np0005480824 nova_compute[260089]: 2025-10-11 03:54:09.092 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:54:09 np0005480824 systemd[1]: Started libpod-conmon-0b14503ea8c99d19f59456ae720e14a91f5502258bc84033fe3065733d8d0290.scope.
Oct 10 23:54:09 np0005480824 nova_compute[260089]: 2025-10-11 03:54:09.097 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760154849.064101, 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:54:09 np0005480824 nova_compute[260089]: 2025-10-11 03:54:09.098 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] VM Paused (Lifecycle Event)#033[00m
Oct 10 23:54:09 np0005480824 nova_compute[260089]: 2025-10-11 03:54:09.119 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:54:09 np0005480824 nova_compute[260089]: 2025-10-11 03:54:09.124 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 10 23:54:09 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:54:09 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b44eea4f7bff4e1f1dbd44e51c9fd598ea50608d581f0332d444852df9eb3fe/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 10 23:54:09 np0005480824 nova_compute[260089]: 2025-10-11 03:54:09.158 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 10 23:54:09 np0005480824 thirsty_kilby[286667]: --> passed data devices: 0 physical, 3 LVM
Oct 10 23:54:09 np0005480824 thirsty_kilby[286667]: --> relative data size: 1.0
Oct 10 23:54:09 np0005480824 thirsty_kilby[286667]: --> All data devices are unavailable
Oct 10 23:54:09 np0005480824 systemd[1]: libpod-aecbb9c66575b0b35d59d63cc4a35dae2bf5c2ba16242724b22b8c0d5a57704e.scope: Deactivated successfully.
Oct 10 23:54:09 np0005480824 systemd[1]: libpod-aecbb9c66575b0b35d59d63cc4a35dae2bf5c2ba16242724b22b8c0d5a57704e.scope: Consumed 1.055s CPU time.
Oct 10 23:54:09 np0005480824 nova_compute[260089]: 2025-10-11 03:54:09.402 2 DEBUG nova.compute.manager [req-f53cf726-2577-472e-9727-9dab8faa164c req-57eb8df4-bacc-4a2a-9cc0-a8ee392472cc 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Received event network-vif-plugged-96788aff-c48f-4de5-a500-c62a76db51e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:54:09 np0005480824 nova_compute[260089]: 2025-10-11 03:54:09.404 2 DEBUG oslo_concurrency.lockutils [req-f53cf726-2577-472e-9727-9dab8faa164c req-57eb8df4-bacc-4a2a-9cc0-a8ee392472cc 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "6fc56e59-9278-4ac2-89ed-ca93f2f17d1d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:54:09 np0005480824 nova_compute[260089]: 2025-10-11 03:54:09.404 2 DEBUG oslo_concurrency.lockutils [req-f53cf726-2577-472e-9727-9dab8faa164c req-57eb8df4-bacc-4a2a-9cc0-a8ee392472cc 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "6fc56e59-9278-4ac2-89ed-ca93f2f17d1d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:54:09 np0005480824 nova_compute[260089]: 2025-10-11 03:54:09.405 2 DEBUG oslo_concurrency.lockutils [req-f53cf726-2577-472e-9727-9dab8faa164c req-57eb8df4-bacc-4a2a-9cc0-a8ee392472cc 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "6fc56e59-9278-4ac2-89ed-ca93f2f17d1d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:54:09 np0005480824 nova_compute[260089]: 2025-10-11 03:54:09.405 2 DEBUG nova.compute.manager [req-f53cf726-2577-472e-9727-9dab8faa164c req-57eb8df4-bacc-4a2a-9cc0-a8ee392472cc 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Processing event network-vif-plugged-96788aff-c48f-4de5-a500-c62a76db51e3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 10 23:54:09 np0005480824 nova_compute[260089]: 2025-10-11 03:54:09.406 2 DEBUG nova.compute.manager [req-f53cf726-2577-472e-9727-9dab8faa164c req-57eb8df4-bacc-4a2a-9cc0-a8ee392472cc 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Received event network-vif-plugged-96788aff-c48f-4de5-a500-c62a76db51e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:54:09 np0005480824 nova_compute[260089]: 2025-10-11 03:54:09.406 2 DEBUG oslo_concurrency.lockutils [req-f53cf726-2577-472e-9727-9dab8faa164c req-57eb8df4-bacc-4a2a-9cc0-a8ee392472cc 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "6fc56e59-9278-4ac2-89ed-ca93f2f17d1d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:54:09 np0005480824 nova_compute[260089]: 2025-10-11 03:54:09.406 2 DEBUG oslo_concurrency.lockutils [req-f53cf726-2577-472e-9727-9dab8faa164c req-57eb8df4-bacc-4a2a-9cc0-a8ee392472cc 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "6fc56e59-9278-4ac2-89ed-ca93f2f17d1d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:54:09 np0005480824 nova_compute[260089]: 2025-10-11 03:54:09.407 2 DEBUG oslo_concurrency.lockutils [req-f53cf726-2577-472e-9727-9dab8faa164c req-57eb8df4-bacc-4a2a-9cc0-a8ee392472cc 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "6fc56e59-9278-4ac2-89ed-ca93f2f17d1d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:54:09 np0005480824 nova_compute[260089]: 2025-10-11 03:54:09.407 2 DEBUG nova.compute.manager [req-f53cf726-2577-472e-9727-9dab8faa164c req-57eb8df4-bacc-4a2a-9cc0-a8ee392472cc 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] No waiting events found dispatching network-vif-plugged-96788aff-c48f-4de5-a500-c62a76db51e3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 10 23:54:09 np0005480824 nova_compute[260089]: 2025-10-11 03:54:09.407 2 WARNING nova.compute.manager [req-f53cf726-2577-472e-9727-9dab8faa164c req-57eb8df4-bacc-4a2a-9cc0-a8ee392472cc 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Received unexpected event network-vif-plugged-96788aff-c48f-4de5-a500-c62a76db51e3 for instance with vm_state building and task_state spawning.#033[00m
Oct 10 23:54:09 np0005480824 nova_compute[260089]: 2025-10-11 03:54:09.408 2 DEBUG nova.compute.manager [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 10 23:54:09 np0005480824 nova_compute[260089]: 2025-10-11 03:54:09.418 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760154849.4160628, 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:54:09 np0005480824 nova_compute[260089]: 2025-10-11 03:54:09.418 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] VM Resumed (Lifecycle Event)#033[00m
Oct 10 23:54:09 np0005480824 nova_compute[260089]: 2025-10-11 03:54:09.420 2 DEBUG nova.virt.libvirt.driver [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 10 23:54:09 np0005480824 nova_compute[260089]: 2025-10-11 03:54:09.424 2 INFO nova.virt.libvirt.driver [-] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Instance spawned successfully.#033[00m
Oct 10 23:54:09 np0005480824 nova_compute[260089]: 2025-10-11 03:54:09.424 2 DEBUG nova.virt.libvirt.driver [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 10 23:54:09 np0005480824 nova_compute[260089]: 2025-10-11 03:54:09.440 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:54:09 np0005480824 nova_compute[260089]: 2025-10-11 03:54:09.446 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 10 23:54:09 np0005480824 nova_compute[260089]: 2025-10-11 03:54:09.450 2 DEBUG nova.virt.libvirt.driver [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:54:09 np0005480824 nova_compute[260089]: 2025-10-11 03:54:09.451 2 DEBUG nova.virt.libvirt.driver [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:54:09 np0005480824 nova_compute[260089]: 2025-10-11 03:54:09.452 2 DEBUG nova.virt.libvirt.driver [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:54:09 np0005480824 nova_compute[260089]: 2025-10-11 03:54:09.452 2 DEBUG nova.virt.libvirt.driver [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:54:09 np0005480824 nova_compute[260089]: 2025-10-11 03:54:09.453 2 DEBUG nova.virt.libvirt.driver [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:54:09 np0005480824 nova_compute[260089]: 2025-10-11 03:54:09.454 2 DEBUG nova.virt.libvirt.driver [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:54:09 np0005480824 nova_compute[260089]: 2025-10-11 03:54:09.486 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 10 23:54:09 np0005480824 podman[286797]: 2025-10-11 03:54:09.50068444 +0000 UTC m=+0.708389753 container init 0b14503ea8c99d19f59456ae720e14a91f5502258bc84033fe3065733d8d0290 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5bb06f57-fdf3-4bab-b3b4-81f9264d8f31, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009)
Oct 10 23:54:09 np0005480824 podman[286797]: 2025-10-11 03:54:09.514988286 +0000 UTC m=+0.722693539 container start 0b14503ea8c99d19f59456ae720e14a91f5502258bc84033fe3065733d8d0290 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5bb06f57-fdf3-4bab-b3b4-81f9264d8f31, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2)
Oct 10 23:54:09 np0005480824 nova_compute[260089]: 2025-10-11 03:54:09.531 2 INFO nova.compute.manager [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Took 8.63 seconds to spawn the instance on the hypervisor.#033[00m
Oct 10 23:54:09 np0005480824 nova_compute[260089]: 2025-10-11 03:54:09.532 2 DEBUG nova.compute.manager [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:54:09 np0005480824 neutron-haproxy-ovnmeta-5bb06f57-fdf3-4bab-b3b4-81f9264d8f31[286835]: [NOTICE]   (286854) : New worker (286856) forked
Oct 10 23:54:09 np0005480824 neutron-haproxy-ovnmeta-5bb06f57-fdf3-4bab-b3b4-81f9264d8f31[286835]: [NOTICE]   (286854) : Loading success.
Oct 10 23:54:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e279 do_prune osdmap full prune enabled
Oct 10 23:54:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:54:09 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1031839750' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:54:09 np0005480824 nova_compute[260089]: 2025-10-11 03:54:09.611 2 INFO nova.compute.manager [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Took 9.51 seconds to build instance.#033[00m
Oct 10 23:54:09 np0005480824 nova_compute[260089]: 2025-10-11 03:54:09.629 2 DEBUG oslo_concurrency.lockutils [None req-dfdcf318-5877-4a57-afdb-1abb185d0682 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Lock "6fc56e59-9278-4ac2-89ed-ca93f2f17d1d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.615s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:54:09 np0005480824 podman[286650]: 2025-10-11 03:54:09.82563892 +0000 UTC m=+2.075730326 container died aecbb9c66575b0b35d59d63cc4a35dae2bf5c2ba16242724b22b8c0d5a57704e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_kilby, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:54:09 np0005480824 nova_compute[260089]: 2025-10-11 03:54:09.838 2 DEBUG nova.compute.manager [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 10 23:54:09 np0005480824 nova_compute[260089]: 2025-10-11 03:54:09.840 2 DEBUG nova.virt.libvirt.driver [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 10 23:54:09 np0005480824 nova_compute[260089]: 2025-10-11 03:54:09.841 2 INFO nova.virt.libvirt.driver [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] Creating image(s)#033[00m
Oct 10 23:54:09 np0005480824 nova_compute[260089]: 2025-10-11 03:54:09.842 2 DEBUG nova.virt.libvirt.driver [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Oct 10 23:54:09 np0005480824 nova_compute[260089]: 2025-10-11 03:54:09.842 2 DEBUG nova.virt.libvirt.driver [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] Ensure instance console log exists: /var/lib/nova/instances/17e293fc-58db-41da-a59c-d4a11dcbe09e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 10 23:54:09 np0005480824 nova_compute[260089]: 2025-10-11 03:54:09.843 2 DEBUG oslo_concurrency.lockutils [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:54:09 np0005480824 nova_compute[260089]: 2025-10-11 03:54:09.843 2 DEBUG oslo_concurrency.lockutils [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:54:09 np0005480824 nova_compute[260089]: 2025-10-11 03:54:09.844 2 DEBUG oslo_concurrency.lockutils [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:54:09 np0005480824 nova_compute[260089]: 2025-10-11 03:54:09.848 2 DEBUG nova.virt.libvirt.driver [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] Start _get_guest_xml network_info=[{"id": "fbeb2d21-6b02-4a09-88b3-ceb28c9b76c7", "address": "fa:16:3e:b7:cc:f7", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbeb2d21-6b", "ovs_interfaceid": "fbeb2d21-6b02-4a09-88b3-ceb28c9b76c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='d41d8cd98f00b204e9800998ecf8427e',container_format='bare',created_at=2025-10-11T03:53:55Z,direct_url=<?>,disk_format='qcow2',id=2b07f57b-601d-45a2-951f-e059c29ac235,min_disk=1,min_ram=0,name='tempest-TestVolumeBootPatternsnapshot-1547643425',owner='55d21391a321476eb133317b3402b0f0',properties=ImageMetaProps,protected=<?>,size=0,status='active',tags=<?>,updated_at=2025-10-11T03:53:57Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'attachment_id': 'ac2dac37-c8e2-4bd4-8b43-26f23c6c3506', 'mount_device': '/dev/vda', 'delete_on_termination': True, 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-cde42382-994a-48b2-919e-146ff619a3ac', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'cde42382-994a-48b2-919e-146ff619a3ac', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '17e293fc-58db-41da-a59c-d4a11dcbe09e', 'attached_at': '', 'detached_at': '', 'volume_id': 'cde42382-994a-48b2-919e-146ff619a3ac', 'serial': 'cde42382-994a-48b2-919e-146ff619a3ac'}, 'device_type': 'disk', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 10 23:54:09 np0005480824 nova_compute[260089]: 2025-10-11 03:54:09.851 2 DEBUG nova.network.neutron [req-4b098973-e5be-476d-8756-3a75c97825c8 req-278d0775-50c5-49bd-9ba0-17f489cd9a38 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] Updated VIF entry in instance network info cache for port fbeb2d21-6b02-4a09-88b3-ceb28c9b76c7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 10 23:54:09 np0005480824 nova_compute[260089]: 2025-10-11 03:54:09.852 2 DEBUG nova.network.neutron [req-4b098973-e5be-476d-8756-3a75c97825c8 req-278d0775-50c5-49bd-9ba0-17f489cd9a38 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] Updating instance_info_cache with network_info: [{"id": "fbeb2d21-6b02-4a09-88b3-ceb28c9b76c7", "address": "fa:16:3e:b7:cc:f7", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbeb2d21-6b", "ovs_interfaceid": "fbeb2d21-6b02-4a09-88b3-ceb28c9b76c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:54:09 np0005480824 nova_compute[260089]: 2025-10-11 03:54:09.859 2 WARNING nova.virt.libvirt.driver [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 10 23:54:09 np0005480824 nova_compute[260089]: 2025-10-11 03:54:09.867 2 DEBUG nova.virt.libvirt.host [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 10 23:54:09 np0005480824 nova_compute[260089]: 2025-10-11 03:54:09.868 2 DEBUG nova.virt.libvirt.host [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 10 23:54:09 np0005480824 nova_compute[260089]: 2025-10-11 03:54:09.872 2 DEBUG oslo_concurrency.lockutils [req-4b098973-e5be-476d-8756-3a75c97825c8 req-278d0775-50c5-49bd-9ba0-17f489cd9a38 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Releasing lock "refresh_cache-17e293fc-58db-41da-a59c-d4a11dcbe09e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:54:09 np0005480824 nova_compute[260089]: 2025-10-11 03:54:09.875 2 DEBUG nova.virt.libvirt.host [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 10 23:54:09 np0005480824 nova_compute[260089]: 2025-10-11 03:54:09.876 2 DEBUG nova.virt.libvirt.host [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 10 23:54:09 np0005480824 nova_compute[260089]: 2025-10-11 03:54:09.877 2 DEBUG nova.virt.libvirt.driver [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 10 23:54:09 np0005480824 nova_compute[260089]: 2025-10-11 03:54:09.877 2 DEBUG nova.virt.hardware [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T03:44:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6707ecae-2ae2-4c2d-86dc-409bac38f6a5',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='d41d8cd98f00b204e9800998ecf8427e',container_format='bare',created_at=2025-10-11T03:53:55Z,direct_url=<?>,disk_format='qcow2',id=2b07f57b-601d-45a2-951f-e059c29ac235,min_disk=1,min_ram=0,name='tempest-TestVolumeBootPatternsnapshot-1547643425',owner='55d21391a321476eb133317b3402b0f0',properties=ImageMetaProps,protected=<?>,size=0,status='active',tags=<?>,updated_at=2025-10-11T03:53:57Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 10 23:54:09 np0005480824 nova_compute[260089]: 2025-10-11 03:54:09.878 2 DEBUG nova.virt.hardware [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 10 23:54:09 np0005480824 nova_compute[260089]: 2025-10-11 03:54:09.878 2 DEBUG nova.virt.hardware [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 10 23:54:09 np0005480824 nova_compute[260089]: 2025-10-11 03:54:09.879 2 DEBUG nova.virt.hardware [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 10 23:54:09 np0005480824 nova_compute[260089]: 2025-10-11 03:54:09.879 2 DEBUG nova.virt.hardware [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 10 23:54:09 np0005480824 nova_compute[260089]: 2025-10-11 03:54:09.879 2 DEBUG nova.virt.hardware [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 10 23:54:09 np0005480824 nova_compute[260089]: 2025-10-11 03:54:09.880 2 DEBUG nova.virt.hardware [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 10 23:54:09 np0005480824 nova_compute[260089]: 2025-10-11 03:54:09.880 2 DEBUG nova.virt.hardware [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 10 23:54:09 np0005480824 nova_compute[260089]: 2025-10-11 03:54:09.880 2 DEBUG nova.virt.hardware [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 10 23:54:09 np0005480824 nova_compute[260089]: 2025-10-11 03:54:09.881 2 DEBUG nova.virt.hardware [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 10 23:54:09 np0005480824 nova_compute[260089]: 2025-10-11 03:54:09.881 2 DEBUG nova.virt.hardware [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 10 23:54:10 np0005480824 nova_compute[260089]: 2025-10-11 03:54:10.281 2 DEBUG nova.storage.rbd_utils [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] rbd image 17e293fc-58db-41da-a59c-d4a11dcbe09e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:54:10 np0005480824 nova_compute[260089]: 2025-10-11 03:54:10.288 2 DEBUG oslo_concurrency.processutils [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:54:10 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e280 e280: 3 total, 3 up, 3 in
Oct 10 23:54:10 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e280: 3 total, 3 up, 3 in
Oct 10 23:54:10 np0005480824 systemd[1]: var-lib-containers-storage-overlay-989b7fe8e8e885b3bf7806c7d4a5d8231d0a166b98c0d055be0e253a56005658-merged.mount: Deactivated successfully.
Oct 10 23:54:10 np0005480824 podman[286650]: 2025-10-11 03:54:10.410583012 +0000 UTC m=+2.660674348 container remove aecbb9c66575b0b35d59d63cc4a35dae2bf5c2ba16242724b22b8c0d5a57704e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_kilby, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:54:10 np0005480824 systemd[1]: libpod-conmon-aecbb9c66575b0b35d59d63cc4a35dae2bf5c2ba16242724b22b8c0d5a57704e.scope: Deactivated successfully.
Oct 10 23:54:10 np0005480824 podman[286885]: 2025-10-11 03:54:10.456181104 +0000 UTC m=+0.077133312 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 10 23:54:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:10.499 162245 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:54:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:10.499 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:54:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:10.500 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:54:10 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:54:10 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/209437912' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:54:10 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1312: 321 pgs: 321 active+clean; 214 MiB data, 386 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 2.3 MiB/s wr, 69 op/s
Oct 10 23:54:10 np0005480824 nova_compute[260089]: 2025-10-11 03:54:10.751 2 DEBUG oslo_concurrency.processutils [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:54:10 np0005480824 nova_compute[260089]: 2025-10-11 03:54:10.794 2 DEBUG nova.virt.libvirt.vif [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T03:54:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-image-snapshot-server-1495777670',display_name='tempest-TestVolumeBootPattern-image-snapshot-server-1495777670',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-image-snapshot-server-1495777670',id=17,image_ref='2b07f57b-601d-45a2-951f-e059c29ac235',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEeexnNbf0Ewcnzlfahch8vpVt8DV1s4pp3AdYQt5o1rEf/nKrI59oii/zZgIaNBaqG0YVDxIG4syyDNXbiktWJ3d9SJAD8rQ4rJCogh2BkvtrOJho1QQo72Hv0U+zAqeg==',key_name='tempest-keypair-330787313',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='55d21391a321476eb133317b3402b0f0',ramdisk_id='',reservation_id='r-n4j56ht1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_bdm_v2='True',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_project_name='tempest-TestVolumeBootPattern-739984652',image_owner_user_name='tempest-TestVolumeBootPattern-739984652-project-member',image_root_device_name='/dev/vda',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-739984652',owner_user_name='tempest-TestVolumeBootPattern-739984652-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T03:54:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='38ebc503771e417aaf1f3aea0c835994',uuid=17e293fc-58db-41da-a59c-d4a11dcbe09e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') 
vif={"id": "fbeb2d21-6b02-4a09-88b3-ceb28c9b76c7", "address": "fa:16:3e:b7:cc:f7", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbeb2d21-6b", "ovs_interfaceid": "fbeb2d21-6b02-4a09-88b3-ceb28c9b76c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 10 23:54:10 np0005480824 nova_compute[260089]: 2025-10-11 03:54:10.795 2 DEBUG nova.network.os_vif_util [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Converting VIF {"id": "fbeb2d21-6b02-4a09-88b3-ceb28c9b76c7", "address": "fa:16:3e:b7:cc:f7", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbeb2d21-6b", "ovs_interfaceid": "fbeb2d21-6b02-4a09-88b3-ceb28c9b76c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 23:54:10 np0005480824 nova_compute[260089]: 2025-10-11 03:54:10.796 2 DEBUG nova.network.os_vif_util [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:cc:f7,bridge_name='br-int',has_traffic_filtering=True,id=fbeb2d21-6b02-4a09-88b3-ceb28c9b76c7,network=Network(359720eb-a957-4bcd-b9b2-3cf7dad947e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbeb2d21-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 23:54:10 np0005480824 nova_compute[260089]: 2025-10-11 03:54:10.797 2 DEBUG nova.objects.instance [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 17e293fc-58db-41da-a59c-d4a11dcbe09e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:54:10 np0005480824 nova_compute[260089]: 2025-10-11 03:54:10.810 2 DEBUG nova.virt.libvirt.driver [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] End _get_guest_xml xml=<domain type="kvm">
Oct 10 23:54:10 np0005480824 nova_compute[260089]:  <uuid>17e293fc-58db-41da-a59c-d4a11dcbe09e</uuid>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:  <name>instance-00000011</name>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:  <memory>131072</memory>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:  <vcpu>1</vcpu>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:  <metadata>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 10 23:54:10 np0005480824 nova_compute[260089]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:      <nova:name>tempest-TestVolumeBootPattern-image-snapshot-server-1495777670</nova:name>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:      <nova:creationTime>2025-10-11 03:54:09</nova:creationTime>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:      <nova:flavor name="m1.nano">
Oct 10 23:54:10 np0005480824 nova_compute[260089]:        <nova:memory>128</nova:memory>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:        <nova:disk>1</nova:disk>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:        <nova:swap>0</nova:swap>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:        <nova:ephemeral>0</nova:ephemeral>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:        <nova:vcpus>1</nova:vcpus>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:      </nova:flavor>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:      <nova:owner>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:        <nova:user uuid="38ebc503771e417aaf1f3aea0c835994">tempest-TestVolumeBootPattern-739984652-project-member</nova:user>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:        <nova:project uuid="55d21391a321476eb133317b3402b0f0">tempest-TestVolumeBootPattern-739984652</nova:project>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:      </nova:owner>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:      <nova:root type="image" uuid="2b07f57b-601d-45a2-951f-e059c29ac235"/>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:      <nova:ports>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:        <nova:port uuid="fbeb2d21-6b02-4a09-88b3-ceb28c9b76c7">
Oct 10 23:54:10 np0005480824 nova_compute[260089]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:        </nova:port>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:      </nova:ports>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:    </nova:instance>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:  </metadata>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:  <sysinfo type="smbios">
Oct 10 23:54:10 np0005480824 nova_compute[260089]:    <system>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:      <entry name="manufacturer">RDO</entry>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:      <entry name="product">OpenStack Compute</entry>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:      <entry name="serial">17e293fc-58db-41da-a59c-d4a11dcbe09e</entry>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:      <entry name="uuid">17e293fc-58db-41da-a59c-d4a11dcbe09e</entry>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:      <entry name="family">Virtual Machine</entry>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:    </system>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:  </sysinfo>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:  <os>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:    <boot dev="hd"/>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:    <smbios mode="sysinfo"/>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:  </os>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:  <features>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:    <acpi/>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:    <apic/>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:    <vmcoreinfo/>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:  </features>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:  <clock offset="utc">
Oct 10 23:54:10 np0005480824 nova_compute[260089]:    <timer name="pit" tickpolicy="delay"/>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:    <timer name="hpet" present="no"/>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:  </clock>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:  <cpu mode="host-model" match="exact">
Oct 10 23:54:10 np0005480824 nova_compute[260089]:    <topology sockets="1" cores="1" threads="1"/>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:  </cpu>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:  <devices>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:    <disk type="network" device="cdrom">
Oct 10 23:54:10 np0005480824 nova_compute[260089]:      <driver type="raw" cache="none"/>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:      <source protocol="rbd" name="vms/17e293fc-58db-41da-a59c-d4a11dcbe09e_disk.config">
Oct 10 23:54:10 np0005480824 nova_compute[260089]:        <host name="192.168.122.100" port="6789"/>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:      </source>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:      <auth username="openstack">
Oct 10 23:54:10 np0005480824 nova_compute[260089]:        <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:      </auth>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:      <target dev="sda" bus="sata"/>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:    </disk>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:    <disk type="network" device="disk">
Oct 10 23:54:10 np0005480824 nova_compute[260089]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:      <source protocol="rbd" name="volumes/volume-cde42382-994a-48b2-919e-146ff619a3ac">
Oct 10 23:54:10 np0005480824 nova_compute[260089]:        <host name="192.168.122.100" port="6789"/>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:      </source>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:      <auth username="openstack">
Oct 10 23:54:10 np0005480824 nova_compute[260089]:        <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:      </auth>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:      <target dev="vda" bus="virtio"/>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:      <serial>cde42382-994a-48b2-919e-146ff619a3ac</serial>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:    </disk>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:    <interface type="ethernet">
Oct 10 23:54:10 np0005480824 nova_compute[260089]:      <mac address="fa:16:3e:b7:cc:f7"/>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:      <model type="virtio"/>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:      <driver name="vhost" rx_queue_size="512"/>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:      <mtu size="1442"/>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:      <target dev="tapfbeb2d21-6b"/>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:    </interface>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:    <serial type="pty">
Oct 10 23:54:10 np0005480824 nova_compute[260089]:      <log file="/var/lib/nova/instances/17e293fc-58db-41da-a59c-d4a11dcbe09e/console.log" append="off"/>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:    </serial>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:    <video>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:      <model type="virtio"/>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:    </video>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:    <input type="tablet" bus="usb"/>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:    <input type="keyboard" bus="usb"/>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:    <rng model="virtio">
Oct 10 23:54:10 np0005480824 nova_compute[260089]:      <backend model="random">/dev/urandom</backend>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:    </rng>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root"/>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:    <controller type="usb" index="0"/>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:    <memballoon model="virtio">
Oct 10 23:54:10 np0005480824 nova_compute[260089]:      <stats period="10"/>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:    </memballoon>
Oct 10 23:54:10 np0005480824 nova_compute[260089]:  </devices>
Oct 10 23:54:10 np0005480824 nova_compute[260089]: </domain>
Oct 10 23:54:10 np0005480824 nova_compute[260089]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 10 23:54:10 np0005480824 nova_compute[260089]: 2025-10-11 03:54:10.811 2 DEBUG nova.compute.manager [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] Preparing to wait for external event network-vif-plugged-fbeb2d21-6b02-4a09-88b3-ceb28c9b76c7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 10 23:54:10 np0005480824 nova_compute[260089]: 2025-10-11 03:54:10.811 2 DEBUG oslo_concurrency.lockutils [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Acquiring lock "17e293fc-58db-41da-a59c-d4a11dcbe09e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:54:10 np0005480824 nova_compute[260089]: 2025-10-11 03:54:10.811 2 DEBUG oslo_concurrency.lockutils [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "17e293fc-58db-41da-a59c-d4a11dcbe09e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:54:10 np0005480824 nova_compute[260089]: 2025-10-11 03:54:10.811 2 DEBUG oslo_concurrency.lockutils [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "17e293fc-58db-41da-a59c-d4a11dcbe09e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:54:10 np0005480824 nova_compute[260089]: 2025-10-11 03:54:10.812 2 DEBUG nova.virt.libvirt.vif [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T03:54:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-image-snapshot-server-1495777670',display_name='tempest-TestVolumeBootPattern-image-snapshot-server-1495777670',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-image-snapshot-server-1495777670',id=17,image_ref='2b07f57b-601d-45a2-951f-e059c29ac235',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEeexnNbf0Ewcnzlfahch8vpVt8DV1s4pp3AdYQt5o1rEf/nKrI59oii/zZgIaNBaqG0YVDxIG4syyDNXbiktWJ3d9SJAD8rQ4rJCogh2BkvtrOJho1QQo72Hv0U+zAqeg==',key_name='tempest-keypair-330787313',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='55d21391a321476eb133317b3402b0f0',ramdisk_id='',reservation_id='r-n4j56ht1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_bdm_v2='True',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_project_name='tempest-TestVolumeBootPattern-739984652',image_owner_user_name='tempest-TestVolumeBootPattern-739984652-project-member',image_root_device_name='/dev/vda',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-739984652',owner_user_name='tempest-TestVolumeBootPattern-739984652-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T03:54:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='38ebc503771e417aaf1f3aea0c835994',uuid=17e293fc-58db-41da-a59c-d4a11dcbe09e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='buil
ding') vif={"id": "fbeb2d21-6b02-4a09-88b3-ceb28c9b76c7", "address": "fa:16:3e:b7:cc:f7", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbeb2d21-6b", "ovs_interfaceid": "fbeb2d21-6b02-4a09-88b3-ceb28c9b76c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 10 23:54:10 np0005480824 nova_compute[260089]: 2025-10-11 03:54:10.812 2 DEBUG nova.network.os_vif_util [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Converting VIF {"id": "fbeb2d21-6b02-4a09-88b3-ceb28c9b76c7", "address": "fa:16:3e:b7:cc:f7", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbeb2d21-6b", "ovs_interfaceid": "fbeb2d21-6b02-4a09-88b3-ceb28c9b76c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 23:54:10 np0005480824 nova_compute[260089]: 2025-10-11 03:54:10.813 2 DEBUG nova.network.os_vif_util [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:cc:f7,bridge_name='br-int',has_traffic_filtering=True,id=fbeb2d21-6b02-4a09-88b3-ceb28c9b76c7,network=Network(359720eb-a957-4bcd-b9b2-3cf7dad947e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbeb2d21-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 23:54:10 np0005480824 nova_compute[260089]: 2025-10-11 03:54:10.813 2 DEBUG os_vif [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:cc:f7,bridge_name='br-int',has_traffic_filtering=True,id=fbeb2d21-6b02-4a09-88b3-ceb28c9b76c7,network=Network(359720eb-a957-4bcd-b9b2-3cf7dad947e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbeb2d21-6b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 10 23:54:10 np0005480824 nova_compute[260089]: 2025-10-11 03:54:10.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:54:10 np0005480824 nova_compute[260089]: 2025-10-11 03:54:10.815 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:54:10 np0005480824 nova_compute[260089]: 2025-10-11 03:54:10.815 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 10 23:54:10 np0005480824 nova_compute[260089]: 2025-10-11 03:54:10.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:54:10 np0005480824 nova_compute[260089]: 2025-10-11 03:54:10.817 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfbeb2d21-6b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:54:10 np0005480824 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 10 23:54:10 np0005480824 nova_compute[260089]: 2025-10-11 03:54:10.818 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfbeb2d21-6b, col_values=(('external_ids', {'iface-id': 'fbeb2d21-6b02-4a09-88b3-ceb28c9b76c7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b7:cc:f7', 'vm-uuid': '17e293fc-58db-41da-a59c-d4a11dcbe09e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:54:10 np0005480824 nova_compute[260089]: 2025-10-11 03:54:10.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:54:10 np0005480824 NetworkManager[44969]: <info>  [1760154850.8558] manager: (tapfbeb2d21-6b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/90)
Oct 10 23:54:10 np0005480824 nova_compute[260089]: 2025-10-11 03:54:10.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 10 23:54:10 np0005480824 nova_compute[260089]: 2025-10-11 03:54:10.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:54:10 np0005480824 nova_compute[260089]: 2025-10-11 03:54:10.866 2 INFO os_vif [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:cc:f7,bridge_name='br-int',has_traffic_filtering=True,id=fbeb2d21-6b02-4a09-88b3-ceb28c9b76c7,network=Network(359720eb-a957-4bcd-b9b2-3cf7dad947e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbeb2d21-6b')#033[00m
Oct 10 23:54:11 np0005480824 nova_compute[260089]: 2025-10-11 03:54:11.043 2 DEBUG nova.virt.libvirt.driver [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 23:54:11 np0005480824 nova_compute[260089]: 2025-10-11 03:54:11.043 2 DEBUG nova.virt.libvirt.driver [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 23:54:11 np0005480824 nova_compute[260089]: 2025-10-11 03:54:11.044 2 DEBUG nova.virt.libvirt.driver [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] No VIF found with MAC fa:16:3e:b7:cc:f7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 10 23:54:11 np0005480824 nova_compute[260089]: 2025-10-11 03:54:11.044 2 INFO nova.virt.libvirt.driver [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] Using config drive#033[00m
Oct 10 23:54:11 np0005480824 nova_compute[260089]: 2025-10-11 03:54:11.073 2 DEBUG nova.storage.rbd_utils [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] rbd image 17e293fc-58db-41da-a59c-d4a11dcbe09e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:54:11 np0005480824 podman[287087]: 2025-10-11 03:54:11.160099681 +0000 UTC m=+0.025536322 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:54:11 np0005480824 podman[287087]: 2025-10-11 03:54:11.360333182 +0000 UTC m=+0.225769783 container create df67e1b655e0bba8882b35148e80d1f34678a01d2f4dc67ef1feec344a66a1c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:54:11 np0005480824 nova_compute[260089]: 2025-10-11 03:54:11.406 2 INFO nova.virt.libvirt.driver [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] Creating config drive at /var/lib/nova/instances/17e293fc-58db-41da-a59c-d4a11dcbe09e/disk.config#033[00m
Oct 10 23:54:11 np0005480824 nova_compute[260089]: 2025-10-11 03:54:11.418 2 DEBUG oslo_concurrency.processutils [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/17e293fc-58db-41da-a59c-d4a11dcbe09e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp70sbdtph execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:54:11 np0005480824 systemd[1]: Started libpod-conmon-df67e1b655e0bba8882b35148e80d1f34678a01d2f4dc67ef1feec344a66a1c7.scope.
Oct 10 23:54:11 np0005480824 nova_compute[260089]: 2025-10-11 03:54:11.534 2 DEBUG nova.compute.manager [req-f8c87018-28ba-4461-bc20-57fe73cf1001 req-3559bb3c-22fd-4dbc-baaf-711bf75d020c 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Received event network-changed-96788aff-c48f-4de5-a500-c62a76db51e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:54:11 np0005480824 nova_compute[260089]: 2025-10-11 03:54:11.535 2 DEBUG nova.compute.manager [req-f8c87018-28ba-4461-bc20-57fe73cf1001 req-3559bb3c-22fd-4dbc-baaf-711bf75d020c 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Refreshing instance network info cache due to event network-changed-96788aff-c48f-4de5-a500-c62a76db51e3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 10 23:54:11 np0005480824 nova_compute[260089]: 2025-10-11 03:54:11.535 2 DEBUG oslo_concurrency.lockutils [req-f8c87018-28ba-4461-bc20-57fe73cf1001 req-3559bb3c-22fd-4dbc-baaf-711bf75d020c 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "refresh_cache-6fc56e59-9278-4ac2-89ed-ca93f2f17d1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:54:11 np0005480824 nova_compute[260089]: 2025-10-11 03:54:11.535 2 DEBUG oslo_concurrency.lockutils [req-f8c87018-28ba-4461-bc20-57fe73cf1001 req-3559bb3c-22fd-4dbc-baaf-711bf75d020c 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquired lock "refresh_cache-6fc56e59-9278-4ac2-89ed-ca93f2f17d1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:54:11 np0005480824 nova_compute[260089]: 2025-10-11 03:54:11.535 2 DEBUG nova.network.neutron [req-f8c87018-28ba-4461-bc20-57fe73cf1001 req-3559bb3c-22fd-4dbc-baaf-711bf75d020c 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Refreshing network info cache for port 96788aff-c48f-4de5-a500-c62a76db51e3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 10 23:54:11 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:54:11 np0005480824 nova_compute[260089]: 2025-10-11 03:54:11.579 2 DEBUG oslo_concurrency.processutils [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/17e293fc-58db-41da-a59c-d4a11dcbe09e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp70sbdtph" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:54:11 np0005480824 nova_compute[260089]: 2025-10-11 03:54:11.620 2 DEBUG nova.storage.rbd_utils [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] rbd image 17e293fc-58db-41da-a59c-d4a11dcbe09e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:54:11 np0005480824 nova_compute[260089]: 2025-10-11 03:54:11.627 2 DEBUG oslo_concurrency.processutils [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/17e293fc-58db-41da-a59c-d4a11dcbe09e/disk.config 17e293fc-58db-41da-a59c-d4a11dcbe09e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:54:11 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:54:11 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2196305604' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:54:11 np0005480824 podman[287087]: 2025-10-11 03:54:11.829881796 +0000 UTC m=+0.695318447 container init df67e1b655e0bba8882b35148e80d1f34678a01d2f4dc67ef1feec344a66a1c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:54:11 np0005480824 podman[287087]: 2025-10-11 03:54:11.840087775 +0000 UTC m=+0.705524416 container start df67e1b655e0bba8882b35148e80d1f34678a01d2f4dc67ef1feec344a66a1c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lichterman, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:54:11 np0005480824 nervous_lichterman[287106]: 167 167
Oct 10 23:54:11 np0005480824 systemd[1]: libpod-df67e1b655e0bba8882b35148e80d1f34678a01d2f4dc67ef1feec344a66a1c7.scope: Deactivated successfully.
Oct 10 23:54:11 np0005480824 podman[287087]: 2025-10-11 03:54:11.886978096 +0000 UTC m=+0.752414797 container attach df67e1b655e0bba8882b35148e80d1f34678a01d2f4dc67ef1feec344a66a1c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lichterman, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:54:11 np0005480824 podman[287087]: 2025-10-11 03:54:11.887541869 +0000 UTC m=+0.752978510 container died df67e1b655e0bba8882b35148e80d1f34678a01d2f4dc67ef1feec344a66a1c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lichterman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 10 23:54:12 np0005480824 systemd[1]: var-lib-containers-storage-overlay-b80f92a9d219a17050a66b60e7a406b06d06033b0eb2f6aa0b768be0ccd952e1-merged.mount: Deactivated successfully.
Oct 10 23:54:12 np0005480824 podman[287087]: 2025-10-11 03:54:12.330641753 +0000 UTC m=+1.196078354 container remove df67e1b655e0bba8882b35148e80d1f34678a01d2f4dc67ef1feec344a66a1c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 10 23:54:12 np0005480824 systemd[1]: libpod-conmon-df67e1b655e0bba8882b35148e80d1f34678a01d2f4dc67ef1feec344a66a1c7.scope: Deactivated successfully.
Oct 10 23:54:12 np0005480824 podman[287168]: 2025-10-11 03:54:12.543436309 +0000 UTC m=+0.032640088 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:54:12 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1313: 321 pgs: 321 active+clean; 214 MiB data, 387 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.3 MiB/s wr, 211 op/s
Oct 10 23:54:12 np0005480824 podman[287168]: 2025-10-11 03:54:12.819048229 +0000 UTC m=+0.308251998 container create 4acbb066da96e762037ec8b7deed8ae77bf273bbb7159e71050209777fa798f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_wozniak, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef)
Oct 10 23:54:12 np0005480824 nova_compute[260089]: 2025-10-11 03:54:12.865 2 DEBUG oslo_concurrency.processutils [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/17e293fc-58db-41da-a59c-d4a11dcbe09e/disk.config 17e293fc-58db-41da-a59c-d4a11dcbe09e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.239s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:54:12 np0005480824 nova_compute[260089]: 2025-10-11 03:54:12.868 2 INFO nova.virt.libvirt.driver [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] Deleting local config drive /var/lib/nova/instances/17e293fc-58db-41da-a59c-d4a11dcbe09e/disk.config because it was imported into RBD.#033[00m
Oct 10 23:54:12 np0005480824 systemd[1]: Started libpod-conmon-4acbb066da96e762037ec8b7deed8ae77bf273bbb7159e71050209777fa798f2.scope.
Oct 10 23:54:12 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:54:12 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bfc1ed68730af66f558fb62ede28e6104e28d05f40b078ef0de6c9e30c23f19/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:54:12 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bfc1ed68730af66f558fb62ede28e6104e28d05f40b078ef0de6c9e30c23f19/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:54:12 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bfc1ed68730af66f558fb62ede28e6104e28d05f40b078ef0de6c9e30c23f19/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:54:12 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bfc1ed68730af66f558fb62ede28e6104e28d05f40b078ef0de6c9e30c23f19/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:54:12 np0005480824 kernel: tapfbeb2d21-6b: entered promiscuous mode
Oct 10 23:54:12 np0005480824 NetworkManager[44969]: <info>  [1760154852.9521] manager: (tapfbeb2d21-6b): new Tun device (/org/freedesktop/NetworkManager/Devices/91)
Oct 10 23:54:12 np0005480824 ovn_controller[152667]: 2025-10-11T03:54:12Z|00151|binding|INFO|Claiming lport fbeb2d21-6b02-4a09-88b3-ceb28c9b76c7 for this chassis.
Oct 10 23:54:12 np0005480824 ovn_controller[152667]: 2025-10-11T03:54:12Z|00152|binding|INFO|fbeb2d21-6b02-4a09-88b3-ceb28c9b76c7: Claiming fa:16:3e:b7:cc:f7 10.100.0.14
Oct 10 23:54:12 np0005480824 nova_compute[260089]: 2025-10-11 03:54:12.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:54:12 np0005480824 podman[287168]: 2025-10-11 03:54:12.963119122 +0000 UTC m=+0.452322921 container init 4acbb066da96e762037ec8b7deed8ae77bf273bbb7159e71050209777fa798f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_wozniak, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 10 23:54:12 np0005480824 nova_compute[260089]: 2025-10-11 03:54:12.966 2 DEBUG nova.network.neutron [req-f8c87018-28ba-4461-bc20-57fe73cf1001 req-3559bb3c-22fd-4dbc-baaf-711bf75d020c 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Updated VIF entry in instance network info cache for port 96788aff-c48f-4de5-a500-c62a76db51e3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 10 23:54:12 np0005480824 nova_compute[260089]: 2025-10-11 03:54:12.967 2 DEBUG nova.network.neutron [req-f8c87018-28ba-4461-bc20-57fe73cf1001 req-3559bb3c-22fd-4dbc-baaf-711bf75d020c 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Updating instance_info_cache with network_info: [{"id": "96788aff-c48f-4de5-a500-c62a76db51e3", "address": "fa:16:3e:46:26:da", "network": {"id": "5bb06f57-fdf3-4bab-b3b4-81f9264d8f31", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1614333098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bc155be8024d49b0ab4279dfca944e7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96788aff-c4", "ovs_interfaceid": "96788aff-c48f-4de5-a500-c62a76db51e3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:54:12 np0005480824 ovn_controller[152667]: 2025-10-11T03:54:12Z|00153|binding|INFO|Setting lport fbeb2d21-6b02-4a09-88b3-ceb28c9b76c7 ovn-installed in OVS
Oct 10 23:54:12 np0005480824 podman[287168]: 2025-10-11 03:54:12.975237557 +0000 UTC m=+0.464441316 container start 4acbb066da96e762037ec8b7deed8ae77bf273bbb7159e71050209777fa798f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_wozniak, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 10 23:54:12 np0005480824 nova_compute[260089]: 2025-10-11 03:54:12.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:54:12 np0005480824 podman[287168]: 2025-10-11 03:54:12.979404324 +0000 UTC m=+0.468608133 container attach 4acbb066da96e762037ec8b7deed8ae77bf273bbb7159e71050209777fa798f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_wozniak, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 10 23:54:12 np0005480824 ovn_controller[152667]: 2025-10-11T03:54:12Z|00154|binding|INFO|Setting lport fbeb2d21-6b02-4a09-88b3-ceb28c9b76c7 up in Southbound
Oct 10 23:54:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:12.981 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:cc:f7 10.100.0.14'], port_security=['fa:16:3e:b7:cc:f7 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '17e293fc-58db-41da-a59c-d4a11dcbe09e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-359720eb-a957-4bcd-b9b2-3cf7dad947e4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '55d21391a321476eb133317b3402b0f0', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f789c11b-b6c2-4d7b-80ca-8c28d7662d10', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d98e64fb-092d-4777-b741-426f3e849bc3, chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], logical_port=fbeb2d21-6b02-4a09-88b3-ceb28c9b76c7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 23:54:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:12.982 162245 INFO neutron.agent.ovn.metadata.agent [-] Port fbeb2d21-6b02-4a09-88b3-ceb28c9b76c7 in datapath 359720eb-a957-4bcd-b9b2-3cf7dad947e4 bound to our chassis#033[00m
Oct 10 23:54:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:12.986 162245 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 359720eb-a957-4bcd-b9b2-3cf7dad947e4#033[00m
Oct 10 23:54:12 np0005480824 systemd-machined[215071]: New machine qemu-17-instance-00000011.
Oct 10 23:54:13 np0005480824 nova_compute[260089]: 2025-10-11 03:54:12.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:54:13 np0005480824 nova_compute[260089]: 2025-10-11 03:54:13.001 2 DEBUG oslo_concurrency.lockutils [req-f8c87018-28ba-4461-bc20-57fe73cf1001 req-3559bb3c-22fd-4dbc-baaf-711bf75d020c 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Releasing lock "refresh_cache-6fc56e59-9278-4ac2-89ed-ca93f2f17d1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:54:13 np0005480824 systemd[1]: Started Virtual Machine qemu-17-instance-00000011.
Oct 10 23:54:13 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e280 do_prune osdmap full prune enabled
Oct 10 23:54:13 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:13.009 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[e84fccd6-2566-49c7-9af6-2554579e7e8b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:54:13 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e281 e281: 3 total, 3 up, 3 in
Oct 10 23:54:13 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e281: 3 total, 3 up, 3 in
Oct 10 23:54:13 np0005480824 systemd-udevd[287205]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 23:54:13 np0005480824 NetworkManager[44969]: <info>  [1760154853.0584] device (tapfbeb2d21-6b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 10 23:54:13 np0005480824 NetworkManager[44969]: <info>  [1760154853.0596] device (tapfbeb2d21-6b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 10 23:54:13 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:13.061 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[433c5343-5eca-49a6-9564-026715abee9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:54:13 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:13.067 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[d9a67f89-1dbd-486a-876d-7febf2e2dcc7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:54:13 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:13.108 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[bed0641a-c5a0-47d8-89fc-60c2d35ddfc1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:54:13 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:13.133 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[df546655-7227-47ec-bc34-dd1f8da15e87]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap359720eb-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:22:90:b3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 53], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 428062, 'reachable_time': 19294, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287217, 'error': None, 'target': 'ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:54:13 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:13.166 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[c79d4c6a-ebee-42bf-b577-1d03bf491d9e]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap359720eb-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 428077, 'tstamp': 428077}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287218, 'error': None, 'target': 'ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap359720eb-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 428081, 'tstamp': 428081}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287218, 'error': None, 'target': 'ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:54:13 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:13.168 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap359720eb-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:54:13 np0005480824 nova_compute[260089]: 2025-10-11 03:54:13.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:54:13 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:13.204 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap359720eb-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:54:13 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:13.204 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 10 23:54:13 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:13.205 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap359720eb-a0, col_values=(('external_ids', {'iface-id': '039c7668-0b85-4466-9c66-62531405028d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:54:13 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:13.205 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 10 23:54:13 np0005480824 nova_compute[260089]: 2025-10-11 03:54:13.467 2 DEBUG nova.compute.manager [req-47667139-f154-46ab-9f8b-e4004ee3ecee req-88ae14bb-5f71-4bcc-bd51-f64498dded32 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] Received event network-vif-plugged-fbeb2d21-6b02-4a09-88b3-ceb28c9b76c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:54:13 np0005480824 nova_compute[260089]: 2025-10-11 03:54:13.468 2 DEBUG oslo_concurrency.lockutils [req-47667139-f154-46ab-9f8b-e4004ee3ecee req-88ae14bb-5f71-4bcc-bd51-f64498dded32 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "17e293fc-58db-41da-a59c-d4a11dcbe09e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:54:13 np0005480824 nova_compute[260089]: 2025-10-11 03:54:13.468 2 DEBUG oslo_concurrency.lockutils [req-47667139-f154-46ab-9f8b-e4004ee3ecee req-88ae14bb-5f71-4bcc-bd51-f64498dded32 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "17e293fc-58db-41da-a59c-d4a11dcbe09e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:54:13 np0005480824 nova_compute[260089]: 2025-10-11 03:54:13.468 2 DEBUG oslo_concurrency.lockutils [req-47667139-f154-46ab-9f8b-e4004ee3ecee req-88ae14bb-5f71-4bcc-bd51-f64498dded32 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "17e293fc-58db-41da-a59c-d4a11dcbe09e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:54:13 np0005480824 nova_compute[260089]: 2025-10-11 03:54:13.469 2 DEBUG nova.compute.manager [req-47667139-f154-46ab-9f8b-e4004ee3ecee req-88ae14bb-5f71-4bcc-bd51-f64498dded32 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] Processing event network-vif-plugged-fbeb2d21-6b02-4a09-88b3-ceb28c9b76c7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]: {
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:    "0": [
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:        {
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:            "devices": [
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:                "/dev/loop3"
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:            ],
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:            "lv_name": "ceph_lv0",
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:            "lv_size": "21470642176",
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0d82ce-20ea-470d-959e-f67202028a60,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:            "lv_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:            "name": "ceph_lv0",
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:            "tags": {
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:                "ceph.block_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:                "ceph.cluster_name": "ceph",
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:                "ceph.crush_device_class": "",
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:                "ceph.encrypted": "0",
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:                "ceph.osd_fsid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:                "ceph.osd_id": "0",
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:                "ceph.type": "block",
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:                "ceph.vdo": "0"
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:            },
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:            "type": "block",
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:            "vg_name": "ceph_vg0"
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:        }
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:    ],
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:    "1": [
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:        {
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:            "devices": [
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:                "/dev/loop4"
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:            ],
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:            "lv_name": "ceph_lv1",
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:            "lv_size": "21470642176",
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6875119e-c210-4ad1-aca9-6a8084a5ecc8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:            "lv_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:            "name": "ceph_lv1",
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:            "tags": {
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:                "ceph.block_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:                "ceph.cluster_name": "ceph",
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:                "ceph.crush_device_class": "",
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:                "ceph.encrypted": "0",
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:                "ceph.osd_fsid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:                "ceph.osd_id": "1",
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:                "ceph.type": "block",
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:                "ceph.vdo": "0"
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:            },
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:            "type": "block",
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:            "vg_name": "ceph_vg1"
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:        }
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:    ],
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:    "2": [
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:        {
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:            "devices": [
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:                "/dev/loop5"
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:            ],
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:            "lv_name": "ceph_lv2",
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:            "lv_size": "21470642176",
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e86945e8-6909-4584-9098-cee0dfe9add4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:            "lv_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:            "name": "ceph_lv2",
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:            "tags": {
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:                "ceph.block_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:                "ceph.cluster_name": "ceph",
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:                "ceph.crush_device_class": "",
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:                "ceph.encrypted": "0",
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:                "ceph.osd_fsid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:                "ceph.osd_id": "2",
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:                "ceph.type": "block",
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:                "ceph.vdo": "0"
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:            },
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:            "type": "block",
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:            "vg_name": "ceph_vg2"
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:        }
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]:    ]
Oct 10 23:54:13 np0005480824 compassionate_wozniak[287188]: }
Oct 10 23:54:13 np0005480824 systemd[1]: libpod-4acbb066da96e762037ec8b7deed8ae77bf273bbb7159e71050209777fa798f2.scope: Deactivated successfully.
Oct 10 23:54:13 np0005480824 podman[287168]: 2025-10-11 03:54:13.844997997 +0000 UTC m=+1.334201756 container died 4acbb066da96e762037ec8b7deed8ae77bf273bbb7159e71050209777fa798f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_wozniak, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 10 23:54:13 np0005480824 nova_compute[260089]: 2025-10-11 03:54:13.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:54:13 np0005480824 systemd[1]: var-lib-containers-storage-overlay-4bfc1ed68730af66f558fb62ede28e6104e28d05f40b078ef0de6c9e30c23f19-merged.mount: Deactivated successfully.
Oct 10 23:54:13 np0005480824 podman[287168]: 2025-10-11 03:54:13.914237985 +0000 UTC m=+1.403441744 container remove 4acbb066da96e762037ec8b7deed8ae77bf273bbb7159e71050209777fa798f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:54:13 np0005480824 systemd[1]: libpod-conmon-4acbb066da96e762037ec8b7deed8ae77bf273bbb7159e71050209777fa798f2.scope: Deactivated successfully.
Oct 10 23:54:13 np0005480824 nova_compute[260089]: 2025-10-11 03:54:13.995 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760154853.9947314, 17e293fc-58db-41da-a59c-d4a11dcbe09e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:54:13 np0005480824 nova_compute[260089]: 2025-10-11 03:54:13.995 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] VM Started (Lifecycle Event)#033[00m
Oct 10 23:54:13 np0005480824 nova_compute[260089]: 2025-10-11 03:54:13.998 2 DEBUG nova.compute.manager [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 10 23:54:14 np0005480824 nova_compute[260089]: 2025-10-11 03:54:14.005 2 DEBUG nova.virt.libvirt.driver [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 10 23:54:14 np0005480824 nova_compute[260089]: 2025-10-11 03:54:14.010 2 INFO nova.virt.libvirt.driver [-] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] Instance spawned successfully.#033[00m
Oct 10 23:54:14 np0005480824 nova_compute[260089]: 2025-10-11 03:54:14.010 2 INFO nova.compute.manager [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] Took 4.17 seconds to spawn the instance on the hypervisor.#033[00m
Oct 10 23:54:14 np0005480824 nova_compute[260089]: 2025-10-11 03:54:14.011 2 DEBUG nova.compute.manager [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:54:14 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e281 do_prune osdmap full prune enabled
Oct 10 23:54:14 np0005480824 nova_compute[260089]: 2025-10-11 03:54:14.017 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:54:14 np0005480824 nova_compute[260089]: 2025-10-11 03:54:14.023 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 10 23:54:14 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e282 e282: 3 total, 3 up, 3 in
Oct 10 23:54:14 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e282: 3 total, 3 up, 3 in
Oct 10 23:54:14 np0005480824 nova_compute[260089]: 2025-10-11 03:54:14.042 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 10 23:54:14 np0005480824 nova_compute[260089]: 2025-10-11 03:54:14.043 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760154853.9983068, 17e293fc-58db-41da-a59c-d4a11dcbe09e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:54:14 np0005480824 nova_compute[260089]: 2025-10-11 03:54:14.043 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] VM Paused (Lifecycle Event)#033[00m
Oct 10 23:54:14 np0005480824 nova_compute[260089]: 2025-10-11 03:54:14.071 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:54:14 np0005480824 nova_compute[260089]: 2025-10-11 03:54:14.086 2 INFO nova.compute.manager [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] Took 10.23 seconds to build instance.#033[00m
Oct 10 23:54:14 np0005480824 nova_compute[260089]: 2025-10-11 03:54:14.089 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760154854.0009382, 17e293fc-58db-41da-a59c-d4a11dcbe09e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:54:14 np0005480824 nova_compute[260089]: 2025-10-11 03:54:14.089 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] VM Resumed (Lifecycle Event)#033[00m
Oct 10 23:54:14 np0005480824 nova_compute[260089]: 2025-10-11 03:54:14.116 2 DEBUG oslo_concurrency.lockutils [None req-31c2ee16-37c4-4ccc-af73-6b0204241081 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "17e293fc-58db-41da-a59c-d4a11dcbe09e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.317s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:54:14 np0005480824 nova_compute[260089]: 2025-10-11 03:54:14.117 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:54:14 np0005480824 nova_compute[260089]: 2025-10-11 03:54:14.125 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 10 23:54:14 np0005480824 podman[287422]: 2025-10-11 03:54:14.725083666 +0000 UTC m=+0.052776249 container create 8ff7370b53f8d76a173095740b72aa7349efef7c2a102ccb9af91e92930f077c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_villani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 10 23:54:14 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1316: 321 pgs: 321 active+clean; 214 MiB data, 387 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 31 KiB/s wr, 189 op/s
Oct 10 23:54:14 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:54:14 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e282 do_prune osdmap full prune enabled
Oct 10 23:54:14 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e283 e283: 3 total, 3 up, 3 in
Oct 10 23:54:14 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e283: 3 total, 3 up, 3 in
Oct 10 23:54:14 np0005480824 systemd[1]: Started libpod-conmon-8ff7370b53f8d76a173095740b72aa7349efef7c2a102ccb9af91e92930f077c.scope.
Oct 10 23:54:14 np0005480824 podman[287422]: 2025-10-11 03:54:14.700353671 +0000 UTC m=+0.028046304 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:54:14 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:54:14 np0005480824 podman[287422]: 2025-10-11 03:54:14.845653527 +0000 UTC m=+0.173346150 container init 8ff7370b53f8d76a173095740b72aa7349efef7c2a102ccb9af91e92930f077c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_villani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 10 23:54:14 np0005480824 podman[287422]: 2025-10-11 03:54:14.85679428 +0000 UTC m=+0.184486863 container start 8ff7370b53f8d76a173095740b72aa7349efef7c2a102ccb9af91e92930f077c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_villani, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:54:14 np0005480824 podman[287422]: 2025-10-11 03:54:14.86018641 +0000 UTC m=+0.187879043 container attach 8ff7370b53f8d76a173095740b72aa7349efef7c2a102ccb9af91e92930f077c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_villani, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:54:14 np0005480824 affectionate_villani[287438]: 167 167
Oct 10 23:54:14 np0005480824 systemd[1]: libpod-8ff7370b53f8d76a173095740b72aa7349efef7c2a102ccb9af91e92930f077c.scope: Deactivated successfully.
Oct 10 23:54:14 np0005480824 podman[287422]: 2025-10-11 03:54:14.865440175 +0000 UTC m=+0.193132758 container died 8ff7370b53f8d76a173095740b72aa7349efef7c2a102ccb9af91e92930f077c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_villani, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 10 23:54:14 np0005480824 systemd[1]: var-lib-containers-storage-overlay-0f09519e16bb7e8cd25b3b7bacd77943e417153571085e6e4477bc12776879ca-merged.mount: Deactivated successfully.
Oct 10 23:54:14 np0005480824 podman[287422]: 2025-10-11 03:54:14.910037059 +0000 UTC m=+0.237729642 container remove 8ff7370b53f8d76a173095740b72aa7349efef7c2a102ccb9af91e92930f077c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_villani, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 10 23:54:14 np0005480824 systemd[1]: libpod-conmon-8ff7370b53f8d76a173095740b72aa7349efef7c2a102ccb9af91e92930f077c.scope: Deactivated successfully.
Oct 10 23:54:15 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:54:15 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3774906182' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:54:15 np0005480824 podman[287462]: 2025-10-11 03:54:15.113236443 +0000 UTC m=+0.043706294 container create c89860681f9afe3dec0cd2e451b9ded3365a7500e5654b045558e67e347f3baa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 10 23:54:15 np0005480824 systemd[1]: Started libpod-conmon-c89860681f9afe3dec0cd2e451b9ded3365a7500e5654b045558e67e347f3baa.scope.
Oct 10 23:54:15 np0005480824 podman[287462]: 2025-10-11 03:54:15.092949044 +0000 UTC m=+0.023418915 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:54:15 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:54:15 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcf3d5df8681aaca65396b4a594e40f94915c5ed4c9b48d83dcb86800bcc45f3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:54:15 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcf3d5df8681aaca65396b4a594e40f94915c5ed4c9b48d83dcb86800bcc45f3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:54:15 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcf3d5df8681aaca65396b4a594e40f94915c5ed4c9b48d83dcb86800bcc45f3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:54:15 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcf3d5df8681aaca65396b4a594e40f94915c5ed4c9b48d83dcb86800bcc45f3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:54:15 np0005480824 podman[287462]: 2025-10-11 03:54:15.214043087 +0000 UTC m=+0.144512958 container init c89860681f9afe3dec0cd2e451b9ded3365a7500e5654b045558e67e347f3baa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_mahavira, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:54:15 np0005480824 podman[287462]: 2025-10-11 03:54:15.221551274 +0000 UTC m=+0.152021125 container start c89860681f9afe3dec0cd2e451b9ded3365a7500e5654b045558e67e347f3baa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_mahavira, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 10 23:54:15 np0005480824 podman[287462]: 2025-10-11 03:54:15.224813742 +0000 UTC m=+0.155283593 container attach c89860681f9afe3dec0cd2e451b9ded3365a7500e5654b045558e67e347f3baa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_mahavira, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 10 23:54:15 np0005480824 nova_compute[260089]: 2025-10-11 03:54:15.556 2 DEBUG nova.compute.manager [req-3fbdc56e-bb14-47ea-a3bf-57fa9340edd0 req-e284057e-bda6-492d-8de4-149b5379ad6e 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] Received event network-vif-plugged-fbeb2d21-6b02-4a09-88b3-ceb28c9b76c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:54:15 np0005480824 nova_compute[260089]: 2025-10-11 03:54:15.557 2 DEBUG oslo_concurrency.lockutils [req-3fbdc56e-bb14-47ea-a3bf-57fa9340edd0 req-e284057e-bda6-492d-8de4-149b5379ad6e 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "17e293fc-58db-41da-a59c-d4a11dcbe09e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:54:15 np0005480824 nova_compute[260089]: 2025-10-11 03:54:15.557 2 DEBUG oslo_concurrency.lockutils [req-3fbdc56e-bb14-47ea-a3bf-57fa9340edd0 req-e284057e-bda6-492d-8de4-149b5379ad6e 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "17e293fc-58db-41da-a59c-d4a11dcbe09e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:54:15 np0005480824 nova_compute[260089]: 2025-10-11 03:54:15.557 2 DEBUG oslo_concurrency.lockutils [req-3fbdc56e-bb14-47ea-a3bf-57fa9340edd0 req-e284057e-bda6-492d-8de4-149b5379ad6e 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "17e293fc-58db-41da-a59c-d4a11dcbe09e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:54:15 np0005480824 nova_compute[260089]: 2025-10-11 03:54:15.558 2 DEBUG nova.compute.manager [req-3fbdc56e-bb14-47ea-a3bf-57fa9340edd0 req-e284057e-bda6-492d-8de4-149b5379ad6e 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] No waiting events found dispatching network-vif-plugged-fbeb2d21-6b02-4a09-88b3-ceb28c9b76c7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 10 23:54:15 np0005480824 nova_compute[260089]: 2025-10-11 03:54:15.558 2 WARNING nova.compute.manager [req-3fbdc56e-bb14-47ea-a3bf-57fa9340edd0 req-e284057e-bda6-492d-8de4-149b5379ad6e 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] Received unexpected event network-vif-plugged-fbeb2d21-6b02-4a09-88b3-ceb28c9b76c7 for instance with vm_state active and task_state None.#033[00m
Oct 10 23:54:15 np0005480824 nova_compute[260089]: 2025-10-11 03:54:15.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:54:16 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e283 do_prune osdmap full prune enabled
Oct 10 23:54:16 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e284 e284: 3 total, 3 up, 3 in
Oct 10 23:54:16 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e284: 3 total, 3 up, 3 in
Oct 10 23:54:16 np0005480824 frosty_mahavira[287479]: {
Oct 10 23:54:16 np0005480824 frosty_mahavira[287479]:    "1d0d82ce-20ea-470d-959e-f67202028a60": {
Oct 10 23:54:16 np0005480824 frosty_mahavira[287479]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:54:16 np0005480824 frosty_mahavira[287479]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 10 23:54:16 np0005480824 frosty_mahavira[287479]:        "osd_id": 0,
Oct 10 23:54:16 np0005480824 frosty_mahavira[287479]:        "osd_uuid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:54:16 np0005480824 frosty_mahavira[287479]:        "type": "bluestore"
Oct 10 23:54:16 np0005480824 frosty_mahavira[287479]:    },
Oct 10 23:54:16 np0005480824 frosty_mahavira[287479]:    "6875119e-c210-4ad1-aca9-6a8084a5ecc8": {
Oct 10 23:54:16 np0005480824 frosty_mahavira[287479]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:54:16 np0005480824 frosty_mahavira[287479]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 10 23:54:16 np0005480824 frosty_mahavira[287479]:        "osd_id": 1,
Oct 10 23:54:16 np0005480824 frosty_mahavira[287479]:        "osd_uuid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:54:16 np0005480824 frosty_mahavira[287479]:        "type": "bluestore"
Oct 10 23:54:16 np0005480824 frosty_mahavira[287479]:    },
Oct 10 23:54:16 np0005480824 frosty_mahavira[287479]:    "e86945e8-6909-4584-9098-cee0dfe9add4": {
Oct 10 23:54:16 np0005480824 frosty_mahavira[287479]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:54:16 np0005480824 frosty_mahavira[287479]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 10 23:54:16 np0005480824 frosty_mahavira[287479]:        "osd_id": 2,
Oct 10 23:54:16 np0005480824 frosty_mahavira[287479]:        "osd_uuid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:54:16 np0005480824 frosty_mahavira[287479]:        "type": "bluestore"
Oct 10 23:54:16 np0005480824 frosty_mahavira[287479]:    }
Oct 10 23:54:16 np0005480824 frosty_mahavira[287479]: }
Oct 10 23:54:16 np0005480824 systemd[1]: libpod-c89860681f9afe3dec0cd2e451b9ded3365a7500e5654b045558e67e347f3baa.scope: Deactivated successfully.
Oct 10 23:54:16 np0005480824 podman[287462]: 2025-10-11 03:54:16.225648256 +0000 UTC m=+1.156118127 container died c89860681f9afe3dec0cd2e451b9ded3365a7500e5654b045558e67e347f3baa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_mahavira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:54:16 np0005480824 systemd[1]: var-lib-containers-storage-overlay-bcf3d5df8681aaca65396b4a594e40f94915c5ed4c9b48d83dcb86800bcc45f3-merged.mount: Deactivated successfully.
Oct 10 23:54:16 np0005480824 podman[287462]: 2025-10-11 03:54:16.294069232 +0000 UTC m=+1.224539083 container remove c89860681f9afe3dec0cd2e451b9ded3365a7500e5654b045558e67e347f3baa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 10 23:54:16 np0005480824 systemd[1]: libpod-conmon-c89860681f9afe3dec0cd2e451b9ded3365a7500e5654b045558e67e347f3baa.scope: Deactivated successfully.
Oct 10 23:54:16 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 10 23:54:16 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:54:16 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 10 23:54:16 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:54:16 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 4e160629-4ae3-482e-8a3b-8ee3944a948c does not exist
Oct 10 23:54:16 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 1ad72c32-f33f-44bb-b75d-df6957cad846 does not exist
Oct 10 23:54:16 np0005480824 nova_compute[260089]: 2025-10-11 03:54:16.515 2 DEBUG nova.compute.manager [req-5add6908-418c-4fc8-bd9d-7fea76dd45dd req-9aec0c49-6a94-4060-bd71-dbb3ba413a4c 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] Received event network-changed-fbeb2d21-6b02-4a09-88b3-ceb28c9b76c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:54:16 np0005480824 nova_compute[260089]: 2025-10-11 03:54:16.516 2 DEBUG nova.compute.manager [req-5add6908-418c-4fc8-bd9d-7fea76dd45dd req-9aec0c49-6a94-4060-bd71-dbb3ba413a4c 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] Refreshing instance network info cache due to event network-changed-fbeb2d21-6b02-4a09-88b3-ceb28c9b76c7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 10 23:54:16 np0005480824 nova_compute[260089]: 2025-10-11 03:54:16.516 2 DEBUG oslo_concurrency.lockutils [req-5add6908-418c-4fc8-bd9d-7fea76dd45dd req-9aec0c49-6a94-4060-bd71-dbb3ba413a4c 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "refresh_cache-17e293fc-58db-41da-a59c-d4a11dcbe09e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:54:16 np0005480824 nova_compute[260089]: 2025-10-11 03:54:16.516 2 DEBUG oslo_concurrency.lockutils [req-5add6908-418c-4fc8-bd9d-7fea76dd45dd req-9aec0c49-6a94-4060-bd71-dbb3ba413a4c 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquired lock "refresh_cache-17e293fc-58db-41da-a59c-d4a11dcbe09e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:54:16 np0005480824 nova_compute[260089]: 2025-10-11 03:54:16.516 2 DEBUG nova.network.neutron [req-5add6908-418c-4fc8-bd9d-7fea76dd45dd req-9aec0c49-6a94-4060-bd71-dbb3ba413a4c 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] Refreshing network info cache for port fbeb2d21-6b02-4a09-88b3-ceb28c9b76c7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 10 23:54:16 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1319: 321 pgs: 321 active+clean; 214 MiB data, 387 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 43 KiB/s wr, 76 op/s
Oct 10 23:54:17 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e284 do_prune osdmap full prune enabled
Oct 10 23:54:17 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:54:17 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:54:17 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e285 e285: 3 total, 3 up, 3 in
Oct 10 23:54:17 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e285: 3 total, 3 up, 3 in
Oct 10 23:54:17 np0005480824 nova_compute[260089]: 2025-10-11 03:54:17.559 2 DEBUG nova.network.neutron [req-5add6908-418c-4fc8-bd9d-7fea76dd45dd req-9aec0c49-6a94-4060-bd71-dbb3ba413a4c 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] Updated VIF entry in instance network info cache for port fbeb2d21-6b02-4a09-88b3-ceb28c9b76c7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 10 23:54:17 np0005480824 nova_compute[260089]: 2025-10-11 03:54:17.559 2 DEBUG nova.network.neutron [req-5add6908-418c-4fc8-bd9d-7fea76dd45dd req-9aec0c49-6a94-4060-bd71-dbb3ba413a4c 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] Updating instance_info_cache with network_info: [{"id": "fbeb2d21-6b02-4a09-88b3-ceb28c9b76c7", "address": "fa:16:3e:b7:cc:f7", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbeb2d21-6b", "ovs_interfaceid": "fbeb2d21-6b02-4a09-88b3-ceb28c9b76c7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:54:17 np0005480824 nova_compute[260089]: 2025-10-11 03:54:17.576 2 DEBUG oslo_concurrency.lockutils [req-5add6908-418c-4fc8-bd9d-7fea76dd45dd req-9aec0c49-6a94-4060-bd71-dbb3ba413a4c 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Releasing lock "refresh_cache-17e293fc-58db-41da-a59c-d4a11dcbe09e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:54:18 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:54:18 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1386877183' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:54:18 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1321: 321 pgs: 321 active+clean; 214 MiB data, 387 MiB used, 60 GiB / 60 GiB avail; 5.0 MiB/s rd, 40 KiB/s wr, 266 op/s
Oct 10 23:54:18 np0005480824 nova_compute[260089]: 2025-10-11 03:54:18.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:54:19 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e285 do_prune osdmap full prune enabled
Oct 10 23:54:19 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e286 e286: 3 total, 3 up, 3 in
Oct 10 23:54:19 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e286: 3 total, 3 up, 3 in
Oct 10 23:54:19 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:54:20 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e286 do_prune osdmap full prune enabled
Oct 10 23:54:20 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e287 e287: 3 total, 3 up, 3 in
Oct 10 23:54:20 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e287: 3 total, 3 up, 3 in
Oct 10 23:54:20 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1324: 321 pgs: 321 active+clean; 214 MiB data, 387 MiB used, 60 GiB / 60 GiB avail; 5.0 MiB/s rd, 40 KiB/s wr, 267 op/s
Oct 10 23:54:20 np0005480824 nova_compute[260089]: 2025-10-11 03:54:20.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:54:22 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e287 do_prune osdmap full prune enabled
Oct 10 23:54:22 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e288 e288: 3 total, 3 up, 3 in
Oct 10 23:54:22 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e288: 3 total, 3 up, 3 in
Oct 10 23:54:22 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1326: 321 pgs: 321 active+clean; 229 MiB data, 404 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 3.0 MiB/s wr, 280 op/s
Oct 10 23:54:23 np0005480824 ovn_controller[152667]: 2025-10-11T03:54:23Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:46:26:da 10.100.0.7
Oct 10 23:54:23 np0005480824 ovn_controller[152667]: 2025-10-11T03:54:23Z|00027|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:46:26:da 10.100.0.7
Oct 10 23:54:23 np0005480824 nova_compute[260089]: 2025-10-11 03:54:23.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:54:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e288 do_prune osdmap full prune enabled
Oct 10 23:54:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:54:24 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1480637166' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:54:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:54:24 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1480637166' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:54:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e289 e289: 3 total, 3 up, 3 in
Oct 10 23:54:24 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e289: 3 total, 3 up, 3 in
Oct 10 23:54:24 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1328: 321 pgs: 321 active+clean; 229 MiB data, 404 MiB used, 60 GiB / 60 GiB avail; 118 KiB/s rd, 3.0 MiB/s wr, 113 op/s
Oct 10 23:54:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:54:25 np0005480824 nova_compute[260089]: 2025-10-11 03:54:25.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:54:26 np0005480824 ovn_controller[152667]: 2025-10-11T03:54:26Z|00028|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.8 does not match offer 10.100.0.14
Oct 10 23:54:26 np0005480824 ovn_controller[152667]: 2025-10-11T03:54:26Z|00029|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:b7:cc:f7 10.100.0.14
Oct 10 23:54:26 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e289 do_prune osdmap full prune enabled
Oct 10 23:54:26 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e290 e290: 3 total, 3 up, 3 in
Oct 10 23:54:26 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e290: 3 total, 3 up, 3 in
Oct 10 23:54:26 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1330: 321 pgs: 321 active+clean; 249 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 4.7 MiB/s wr, 207 op/s
Oct 10 23:54:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:54:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:54:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:54:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:54:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:54:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:54:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Optimize plan auto_2025-10-11_03:54:27
Oct 10 23:54:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 23:54:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] do_upmap
Oct 10 23:54:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', '.mgr', 'images', '.rgw.root', 'vms', 'default.rgw.control', 'default.rgw.log', 'backups', 'default.rgw.meta', 'cephfs.cephfs.data']
Oct 10 23:54:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] prepared 0/10 changes
Oct 10 23:54:28 np0005480824 podman[287577]: 2025-10-11 03:54:28.075580391 +0000 UTC m=+0.113748271 container health_status f2b19cad22d0fbdd185b264c3c8b6443d09a02319dc9fb0585dc69a0f24a758d (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 10 23:54:28 np0005480824 podman[287576]: 2025-10-11 03:54:28.098718447 +0000 UTC m=+0.137358739 container health_status 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true)
Oct 10 23:54:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 23:54:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:54:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 23:54:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:54:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:54:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:54:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:54:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:54:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:54:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:54:28 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1331: 321 pgs: 321 active+clean; 261 MiB data, 418 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.3 MiB/s wr, 241 op/s
Oct 10 23:54:28 np0005480824 nova_compute[260089]: 2025-10-11 03:54:28.883 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:54:29 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e290 do_prune osdmap full prune enabled
Oct 10 23:54:29 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e291 e291: 3 total, 3 up, 3 in
Oct 10 23:54:29 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e291: 3 total, 3 up, 3 in
Oct 10 23:54:29 np0005480824 nova_compute[260089]: 2025-10-11 03:54:29.296 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:54:29 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:54:29 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e291 do_prune osdmap full prune enabled
Oct 10 23:54:29 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e292 e292: 3 total, 3 up, 3 in
Oct 10 23:54:29 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e292: 3 total, 3 up, 3 in
Oct 10 23:54:30 np0005480824 nova_compute[260089]: 2025-10-11 03:54:30.297 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:54:30 np0005480824 nova_compute[260089]: 2025-10-11 03:54:30.298 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:54:30 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1334: 321 pgs: 321 active+clean; 261 MiB data, 418 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.4 MiB/s wr, 249 op/s
Oct 10 23:54:30 np0005480824 nova_compute[260089]: 2025-10-11 03:54:30.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:54:30 np0005480824 ovn_controller[152667]: 2025-10-11T03:54:30Z|00030|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.8 does not match offer 10.100.0.14
Oct 10 23:54:30 np0005480824 ovn_controller[152667]: 2025-10-11T03:54:30Z|00031|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:b7:cc:f7 10.100.0.14
Oct 10 23:54:31 np0005480824 ovn_controller[152667]: 2025-10-11T03:54:31Z|00032|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b7:cc:f7 10.100.0.14
Oct 10 23:54:31 np0005480824 ovn_controller[152667]: 2025-10-11T03:54:31Z|00033|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b7:cc:f7 10.100.0.14
Oct 10 23:54:31 np0005480824 nova_compute[260089]: 2025-10-11 03:54:31.296 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:54:32 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e292 do_prune osdmap full prune enabled
Oct 10 23:54:32 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e293 e293: 3 total, 3 up, 3 in
Oct 10 23:54:32 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e293: 3 total, 3 up, 3 in
Oct 10 23:54:32 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1336: 321 pgs: 321 active+clean; 264 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 664 KiB/s wr, 211 op/s
Oct 10 23:54:33 np0005480824 nova_compute[260089]: 2025-10-11 03:54:33.306 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:54:33 np0005480824 nova_compute[260089]: 2025-10-11 03:54:33.307 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:54:33 np0005480824 nova_compute[260089]: 2025-10-11 03:54:33.308 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 10 23:54:33 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e293 do_prune osdmap full prune enabled
Oct 10 23:54:33 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e294 e294: 3 total, 3 up, 3 in
Oct 10 23:54:33 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e294: 3 total, 3 up, 3 in
Oct 10 23:54:33 np0005480824 nova_compute[260089]: 2025-10-11 03:54:33.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:54:34 np0005480824 nova_compute[260089]: 2025-10-11 03:54:34.297 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:54:34 np0005480824 nova_compute[260089]: 2025-10-11 03:54:34.298 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 10 23:54:34 np0005480824 nova_compute[260089]: 2025-10-11 03:54:34.298 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 10 23:54:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e294 do_prune osdmap full prune enabled
Oct 10 23:54:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e295 e295: 3 total, 3 up, 3 in
Oct 10 23:54:34 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e295: 3 total, 3 up, 3 in
Oct 10 23:54:34 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1339: 321 pgs: 321 active+clean; 264 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 133 KiB/s rd, 147 KiB/s wr, 75 op/s
Oct 10 23:54:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:54:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e295 do_prune osdmap full prune enabled
Oct 10 23:54:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e296 e296: 3 total, 3 up, 3 in
Oct 10 23:54:34 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e296: 3 total, 3 up, 3 in
Oct 10 23:54:35 np0005480824 podman[287614]: 2025-10-11 03:54:35.065717547 +0000 UTC m=+0.118455181 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3)
Oct 10 23:54:35 np0005480824 nova_compute[260089]: 2025-10-11 03:54:35.204 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "refresh_cache-ba9c01f8-cb0e-4564-879e-fb3102e2e76a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:54:35 np0005480824 nova_compute[260089]: 2025-10-11 03:54:35.204 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquired lock "refresh_cache-ba9c01f8-cb0e-4564-879e-fb3102e2e76a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:54:35 np0005480824 nova_compute[260089]: 2025-10-11 03:54:35.205 2 DEBUG nova.network.neutron [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 10 23:54:35 np0005480824 nova_compute[260089]: 2025-10-11 03:54:35.205 2 DEBUG nova.objects.instance [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid ba9c01f8-cb0e-4564-879e-fb3102e2e76a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:54:35 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:54:35 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1879420516' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:54:35 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:54:35 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1879420516' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:54:35 np0005480824 nova_compute[260089]: 2025-10-11 03:54:35.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:54:36 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e296 do_prune osdmap full prune enabled
Oct 10 23:54:36 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e297 e297: 3 total, 3 up, 3 in
Oct 10 23:54:36 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e297: 3 total, 3 up, 3 in
Oct 10 23:54:36 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1342: 321 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 315 active+clean; 264 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.7 KiB/s wr, 32 op/s
Oct 10 23:54:37 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:54:37 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1480144066' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:54:37 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:54:37 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1480144066' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:54:37 np0005480824 nova_compute[260089]: 2025-10-11 03:54:37.557 2 DEBUG nova.network.neutron [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Updating instance_info_cache with network_info: [{"id": "16c1f566-62ec-4bf8-ae0e-225e1fad3288", "address": "fa:16:3e:2e:c5:07", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap16c1f566-62", "ovs_interfaceid": "16c1f566-62ec-4bf8-ae0e-225e1fad3288", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:54:37 np0005480824 nova_compute[260089]: 2025-10-11 03:54:37.573 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Releasing lock "refresh_cache-ba9c01f8-cb0e-4564-879e-fb3102e2e76a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:54:37 np0005480824 nova_compute[260089]: 2025-10-11 03:54:37.573 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 10 23:54:37 np0005480824 nova_compute[260089]: 2025-10-11 03:54:37.575 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:54:37 np0005480824 nova_compute[260089]: 2025-10-11 03:54:37.575 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:54:37 np0005480824 nova_compute[260089]: 2025-10-11 03:54:37.599 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:54:37 np0005480824 nova_compute[260089]: 2025-10-11 03:54:37.600 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:54:37 np0005480824 nova_compute[260089]: 2025-10-11 03:54:37.601 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:54:37 np0005480824 nova_compute[260089]: 2025-10-11 03:54:37.601 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 10 23:54:37 np0005480824 nova_compute[260089]: 2025-10-11 03:54:37.602 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:54:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 23:54:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:54:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 23:54:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:54:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007651233964967116 of space, bias 1.0, pg target 0.2295370189490135 quantized to 32 (current 32)
Oct 10 23:54:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:54:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0012095588534544507 of space, bias 1.0, pg target 0.3628676560363352 quantized to 32 (current 32)
Oct 10 23:54:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:54:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 23:54:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:54:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006660490737123136 of space, bias 1.0, pg target 0.19981472211369408 quantized to 32 (current 32)
Oct 10 23:54:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:54:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 23:54:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:54:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:54:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:54:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 10 23:54:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:54:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 23:54:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:54:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:54:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:54:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 10 23:54:38 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:54:38 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/949193636' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:54:38 np0005480824 nova_compute[260089]: 2025-10-11 03:54:38.077 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:54:38 np0005480824 nova_compute[260089]: 2025-10-11 03:54:38.166 2 DEBUG nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 10 23:54:38 np0005480824 nova_compute[260089]: 2025-10-11 03:54:38.167 2 DEBUG nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 10 23:54:38 np0005480824 nova_compute[260089]: 2025-10-11 03:54:38.172 2 DEBUG nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 10 23:54:38 np0005480824 nova_compute[260089]: 2025-10-11 03:54:38.173 2 DEBUG nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 10 23:54:38 np0005480824 nova_compute[260089]: 2025-10-11 03:54:38.177 2 DEBUG nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 10 23:54:38 np0005480824 nova_compute[260089]: 2025-10-11 03:54:38.178 2 DEBUG nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 10 23:54:38 np0005480824 nova_compute[260089]: 2025-10-11 03:54:38.395 2 WARNING nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 10 23:54:38 np0005480824 nova_compute[260089]: 2025-10-11 03:54:38.397 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3873MB free_disk=59.9423828125GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 10 23:54:38 np0005480824 nova_compute[260089]: 2025-10-11 03:54:38.397 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:54:38 np0005480824 nova_compute[260089]: 2025-10-11 03:54:38.397 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:54:38 np0005480824 nova_compute[260089]: 2025-10-11 03:54:38.582 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Instance ba9c01f8-cb0e-4564-879e-fb3102e2e76a actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 10 23:54:38 np0005480824 nova_compute[260089]: 2025-10-11 03:54:38.583 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Instance 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 10 23:54:38 np0005480824 nova_compute[260089]: 2025-10-11 03:54:38.583 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Instance 17e293fc-58db-41da-a59c-d4a11dcbe09e actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 10 23:54:38 np0005480824 nova_compute[260089]: 2025-10-11 03:54:38.583 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 10 23:54:38 np0005480824 nova_compute[260089]: 2025-10-11 03:54:38.583 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 10 23:54:38 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e297 do_prune osdmap full prune enabled
Oct 10 23:54:38 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e298 e298: 3 total, 3 up, 3 in
Oct 10 23:54:38 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e298: 3 total, 3 up, 3 in
Oct 10 23:54:38 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1344: 321 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 315 active+clean; 264 MiB data, 423 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 15 KiB/s wr, 96 op/s
Oct 10 23:54:38 np0005480824 nova_compute[260089]: 2025-10-11 03:54:38.750 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:54:38 np0005480824 nova_compute[260089]: 2025-10-11 03:54:38.888 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:54:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:54:39 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3779630654' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:54:39 np0005480824 nova_compute[260089]: 2025-10-11 03:54:39.194 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:54:39 np0005480824 nova_compute[260089]: 2025-10-11 03:54:39.202 2 DEBUG nova.compute.provider_tree [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 23:54:39 np0005480824 nova_compute[260089]: 2025-10-11 03:54:39.219 2 DEBUG nova.scheduler.client.report [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 23:54:39 np0005480824 nova_compute[260089]: 2025-10-11 03:54:39.238 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 10 23:54:39 np0005480824 nova_compute[260089]: 2025-10-11 03:54:39.238 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.841s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:54:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:54:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e298 do_prune osdmap full prune enabled
Oct 10 23:54:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e299 e299: 3 total, 3 up, 3 in
Oct 10 23:54:39 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e299: 3 total, 3 up, 3 in
Oct 10 23:54:39 np0005480824 nova_compute[260089]: 2025-10-11 03:54:39.961 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:54:40 np0005480824 nova_compute[260089]: 2025-10-11 03:54:40.296 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:54:40 np0005480824 nova_compute[260089]: 2025-10-11 03:54:40.297 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct 10 23:54:40 np0005480824 nova_compute[260089]: 2025-10-11 03:54:40.309 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct 10 23:54:40 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1346: 321 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 315 active+clean; 264 MiB data, 423 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 11 KiB/s wr, 71 op/s
Oct 10 23:54:40 np0005480824 nova_compute[260089]: 2025-10-11 03:54:40.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:54:41 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:54:41 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3645344731' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:54:41 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:54:41 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3645344731' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:54:41 np0005480824 podman[287685]: 2025-10-11 03:54:41.036307417 +0000 UTC m=+0.086056216 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct 10 23:54:42 np0005480824 nova_compute[260089]: 2025-10-11 03:54:42.297 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:54:42 np0005480824 nova_compute[260089]: 2025-10-11 03:54:42.297 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct 10 23:54:42 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1347: 321 pgs: 321 active+clean; 264 MiB data, 423 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 12 KiB/s wr, 125 op/s
Oct 10 23:54:43 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:43.407 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:30:f4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'fe:89:7c:57:3f:71'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 23:54:43 np0005480824 nova_compute[260089]: 2025-10-11 03:54:43.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:54:43 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:43.409 162245 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 10 23:54:43 np0005480824 nova_compute[260089]: 2025-10-11 03:54:43.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:54:44 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1348: 321 pgs: 321 active+clean; 264 MiB data, 423 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 8.4 KiB/s wr, 79 op/s
Oct 10 23:54:44 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:54:44 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e299 do_prune osdmap full prune enabled
Oct 10 23:54:44 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e300 e300: 3 total, 3 up, 3 in
Oct 10 23:54:44 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e300: 3 total, 3 up, 3 in
Oct 10 23:54:45 np0005480824 nova_compute[260089]: 2025-10-11 03:54:45.021 2 DEBUG oslo_concurrency.lockutils [None req-33d1bbe2-f242-4f7a-991d-b470741dc97d f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Acquiring lock "6fc56e59-9278-4ac2-89ed-ca93f2f17d1d" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 23:54:45 np0005480824 nova_compute[260089]: 2025-10-11 03:54:45.022 2 DEBUG oslo_concurrency.lockutils [None req-33d1bbe2-f242-4f7a-991d-b470741dc97d f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Lock "6fc56e59-9278-4ac2-89ed-ca93f2f17d1d" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 23:54:45 np0005480824 nova_compute[260089]: 2025-10-11 03:54:45.039 2 DEBUG nova.objects.instance [None req-33d1bbe2-f242-4f7a-991d-b470741dc97d f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Lazy-loading 'flavor' on Instance uuid 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 10 23:54:45 np0005480824 nova_compute[260089]: 2025-10-11 03:54:45.079 2 DEBUG oslo_concurrency.lockutils [None req-33d1bbe2-f242-4f7a-991d-b470741dc97d f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Lock "6fc56e59-9278-4ac2-89ed-ca93f2f17d1d" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.058s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 23:54:45 np0005480824 nova_compute[260089]: 2025-10-11 03:54:45.233 2 DEBUG oslo_concurrency.lockutils [None req-33d1bbe2-f242-4f7a-991d-b470741dc97d f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Acquiring lock "6fc56e59-9278-4ac2-89ed-ca93f2f17d1d" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 23:54:45 np0005480824 nova_compute[260089]: 2025-10-11 03:54:45.234 2 DEBUG oslo_concurrency.lockutils [None req-33d1bbe2-f242-4f7a-991d-b470741dc97d f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Lock "6fc56e59-9278-4ac2-89ed-ca93f2f17d1d" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 23:54:45 np0005480824 nova_compute[260089]: 2025-10-11 03:54:45.234 2 INFO nova.compute.manager [None req-33d1bbe2-f242-4f7a-991d-b470741dc97d f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Attaching volume f90d54d3-906d-4a9c-ab76-f4fb002378ba to /dev/vdb
Oct 10 23:54:45 np0005480824 nova_compute[260089]: 2025-10-11 03:54:45.533 2 DEBUG os_brick.utils [None req-33d1bbe2-f242-4f7a-991d-b470741dc97d f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 10 23:54:45 np0005480824 nova_compute[260089]: 2025-10-11 03:54:45.535 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 23:54:45 np0005480824 nova_compute[260089]: 2025-10-11 03:54:45.552 676 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 23:54:45 np0005480824 nova_compute[260089]: 2025-10-11 03:54:45.553 676 DEBUG oslo.privsep.daemon [-] privsep: reply[71ba356d-74e1-41da-8fc2-47e0d8fb5dd4]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 23:54:45 np0005480824 nova_compute[260089]: 2025-10-11 03:54:45.555 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 23:54:45 np0005480824 nova_compute[260089]: 2025-10-11 03:54:45.567 676 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 23:54:45 np0005480824 nova_compute[260089]: 2025-10-11 03:54:45.568 676 DEBUG oslo.privsep.daemon [-] privsep: reply[839f3c33-112b-4ace-ae1b-9b78ee340f97]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d5d671ddab5a', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 23:54:45 np0005480824 nova_compute[260089]: 2025-10-11 03:54:45.571 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 23:54:45 np0005480824 nova_compute[260089]: 2025-10-11 03:54:45.584 676 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 23:54:45 np0005480824 nova_compute[260089]: 2025-10-11 03:54:45.584 676 DEBUG oslo.privsep.daemon [-] privsep: reply[b9929cc7-b812-4957-896f-61e7fcb1ba25]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 23:54:45 np0005480824 nova_compute[260089]: 2025-10-11 03:54:45.586 676 DEBUG oslo.privsep.daemon [-] privsep: reply[dce3985c-ef82-4527-ab35-7a41cda02ae9]: (4, 'fb3a2fb1-9efa-43f0-a057-bf422ac6b8d7') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 23:54:45 np0005480824 nova_compute[260089]: 2025-10-11 03:54:45.587 2 DEBUG oslo_concurrency.processutils [None req-33d1bbe2-f242-4f7a-991d-b470741dc97d f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 23:54:45 np0005480824 nova_compute[260089]: 2025-10-11 03:54:45.621 2 DEBUG oslo_concurrency.processutils [None req-33d1bbe2-f242-4f7a-991d-b470741dc97d f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] CMD "nvme version" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 23:54:45 np0005480824 nova_compute[260089]: 2025-10-11 03:54:45.625 2 DEBUG os_brick.initiator.connectors.lightos [None req-33d1bbe2-f242-4f7a-991d-b470741dc97d f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 10 23:54:45 np0005480824 nova_compute[260089]: 2025-10-11 03:54:45.626 2 DEBUG os_brick.initiator.connectors.lightos [None req-33d1bbe2-f242-4f7a-991d-b470741dc97d f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 10 23:54:45 np0005480824 nova_compute[260089]: 2025-10-11 03:54:45.627 2 DEBUG os_brick.initiator.connectors.lightos [None req-33d1bbe2-f242-4f7a-991d-b470741dc97d f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 10 23:54:45 np0005480824 nova_compute[260089]: 2025-10-11 03:54:45.627 2 DEBUG os_brick.utils [None req-33d1bbe2-f242-4f7a-991d-b470741dc97d f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] <== get_connector_properties: return (93ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d5d671ddab5a', 'do_local_attach': False, 'nvme_hostid': '83042a20-0f72-4c47-8453-e72ead378624', 'system uuid': 'fb3a2fb1-9efa-43f0-a057-bf422ac6b8d7', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 10 23:54:45 np0005480824 nova_compute[260089]: 2025-10-11 03:54:45.628 2 DEBUG nova.virt.block_device [None req-33d1bbe2-f242-4f7a-991d-b470741dc97d f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Updating existing volume attachment record: 7d1cdc55-2cc1-46f5-b192-1a9be770f789 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 10 23:54:45 np0005480824 nova_compute[260089]: 2025-10-11 03:54:45.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 23:54:46 np0005480824 ovn_controller[152667]: 2025-10-11T03:54:46Z|00155|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Oct 10 23:54:46 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:54:46 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1114996084' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:54:46 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1350: 321 pgs: 321 active+clean; 264 MiB data, 423 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 17 KiB/s wr, 46 op/s
Oct 10 23:54:46 np0005480824 nova_compute[260089]: 2025-10-11 03:54:46.804 2 DEBUG os_brick.encryptors [None req-33d1bbe2-f242-4f7a-991d-b470741dc97d f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Using volume encryption metadata '{'encryption_key_id': 'c80ef54e-e36d-4556-816f-4a018866f104', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-f90d54d3-906d-4a9c-ab76-f4fb002378ba', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'f90d54d3-906d-4a9c-ab76-f4fb002378ba', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '6fc56e59-9278-4ac2-89ed-ca93f2f17d1d', 'attached_at': '', 'detached_at': '', 'volume_id': 'f90d54d3-906d-4a9c-ab76-f4fb002378ba', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Oct 10 23:54:46 np0005480824 nova_compute[260089]: 2025-10-11 03:54:46.811 2 DEBUG barbicanclient.client [None req-33d1bbe2-f242-4f7a-991d-b470741dc97d f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Oct 10 23:54:46 np0005480824 nova_compute[260089]: 2025-10-11 03:54:46.827 2 DEBUG barbicanclient.v1.secrets [None req-33d1bbe2-f242-4f7a-991d-b470741dc97d f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/c80ef54e-e36d-4556-816f-4a018866f104 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Oct 10 23:54:46 np0005480824 nova_compute[260089]: 2025-10-11 03:54:46.827 2 INFO barbicanclient.base [None req-33d1bbe2-f242-4f7a-991d-b470741dc97d f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Calculated Secrets uuid ref: secrets/c80ef54e-e36d-4556-816f-4a018866f104
Oct 10 23:54:46 np0005480824 nova_compute[260089]: 2025-10-11 03:54:46.851 2 DEBUG barbicanclient.client [None req-33d1bbe2-f242-4f7a-991d-b470741dc97d f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 10 23:54:46 np0005480824 nova_compute[260089]: 2025-10-11 03:54:46.851 2 INFO barbicanclient.base [None req-33d1bbe2-f242-4f7a-991d-b470741dc97d f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Calculated Secrets uuid ref: secrets/c80ef54e-e36d-4556-816f-4a018866f104
Oct 10 23:54:46 np0005480824 nova_compute[260089]: 2025-10-11 03:54:46.900 2 DEBUG barbicanclient.client [None req-33d1bbe2-f242-4f7a-991d-b470741dc97d f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 10 23:54:46 np0005480824 nova_compute[260089]: 2025-10-11 03:54:46.900 2 INFO barbicanclient.base [None req-33d1bbe2-f242-4f7a-991d-b470741dc97d f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Calculated Secrets uuid ref: secrets/c80ef54e-e36d-4556-816f-4a018866f104
Oct 10 23:54:46 np0005480824 nova_compute[260089]: 2025-10-11 03:54:46.928 2 DEBUG barbicanclient.client [None req-33d1bbe2-f242-4f7a-991d-b470741dc97d f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 10 23:54:46 np0005480824 nova_compute[260089]: 2025-10-11 03:54:46.928 2 INFO barbicanclient.base [None req-33d1bbe2-f242-4f7a-991d-b470741dc97d f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Calculated Secrets uuid ref: secrets/c80ef54e-e36d-4556-816f-4a018866f104
Oct 10 23:54:46 np0005480824 nova_compute[260089]: 2025-10-11 03:54:46.970 2 DEBUG barbicanclient.client [None req-33d1bbe2-f242-4f7a-991d-b470741dc97d f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 10 23:54:46 np0005480824 nova_compute[260089]: 2025-10-11 03:54:46.971 2 INFO barbicanclient.base [None req-33d1bbe2-f242-4f7a-991d-b470741dc97d f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Calculated Secrets uuid ref: secrets/c80ef54e-e36d-4556-816f-4a018866f104
Oct 10 23:54:46 np0005480824 nova_compute[260089]: 2025-10-11 03:54:46.990 2 DEBUG barbicanclient.client [None req-33d1bbe2-f242-4f7a-991d-b470741dc97d f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 10 23:54:46 np0005480824 nova_compute[260089]: 2025-10-11 03:54:46.990 2 INFO barbicanclient.base [None req-33d1bbe2-f242-4f7a-991d-b470741dc97d f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Calculated Secrets uuid ref: secrets/c80ef54e-e36d-4556-816f-4a018866f104
Oct 10 23:54:47 np0005480824 nova_compute[260089]: 2025-10-11 03:54:47.021 2 DEBUG barbicanclient.client [None req-33d1bbe2-f242-4f7a-991d-b470741dc97d f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 10 23:54:47 np0005480824 nova_compute[260089]: 2025-10-11 03:54:47.022 2 INFO barbicanclient.base [None req-33d1bbe2-f242-4f7a-991d-b470741dc97d f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Calculated Secrets uuid ref: secrets/c80ef54e-e36d-4556-816f-4a018866f104
Oct 10 23:54:47 np0005480824 nova_compute[260089]: 2025-10-11 03:54:47.040 2 DEBUG barbicanclient.client [None req-33d1bbe2-f242-4f7a-991d-b470741dc97d f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 10 23:54:47 np0005480824 nova_compute[260089]: 2025-10-11 03:54:47.040 2 INFO barbicanclient.base [None req-33d1bbe2-f242-4f7a-991d-b470741dc97d f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Calculated Secrets uuid ref: secrets/c80ef54e-e36d-4556-816f-4a018866f104
Oct 10 23:54:47 np0005480824 nova_compute[260089]: 2025-10-11 03:54:47.067 2 DEBUG barbicanclient.client [None req-33d1bbe2-f242-4f7a-991d-b470741dc97d f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 10 23:54:47 np0005480824 nova_compute[260089]: 2025-10-11 03:54:47.068 2 INFO barbicanclient.base [None req-33d1bbe2-f242-4f7a-991d-b470741dc97d f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Calculated Secrets uuid ref: secrets/c80ef54e-e36d-4556-816f-4a018866f104
Oct 10 23:54:47 np0005480824 nova_compute[260089]: 2025-10-11 03:54:47.108 2 DEBUG barbicanclient.client [None req-33d1bbe2-f242-4f7a-991d-b470741dc97d f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 10 23:54:47 np0005480824 nova_compute[260089]: 2025-10-11 03:54:47.109 2 INFO barbicanclient.base [None req-33d1bbe2-f242-4f7a-991d-b470741dc97d f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Calculated Secrets uuid ref: secrets/c80ef54e-e36d-4556-816f-4a018866f104
Oct 10 23:54:47 np0005480824 nova_compute[260089]: 2025-10-11 03:54:47.135 2 DEBUG barbicanclient.client [None req-33d1bbe2-f242-4f7a-991d-b470741dc97d f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 10 23:54:47 np0005480824 nova_compute[260089]: 2025-10-11 03:54:47.136 2 INFO barbicanclient.base [None req-33d1bbe2-f242-4f7a-991d-b470741dc97d f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Calculated Secrets uuid ref: secrets/c80ef54e-e36d-4556-816f-4a018866f104
Oct 10 23:54:47 np0005480824 nova_compute[260089]: 2025-10-11 03:54:47.163 2 DEBUG barbicanclient.client [None req-33d1bbe2-f242-4f7a-991d-b470741dc97d f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 10 23:54:47 np0005480824 nova_compute[260089]: 2025-10-11 03:54:47.163 2 INFO barbicanclient.base [None req-33d1bbe2-f242-4f7a-991d-b470741dc97d f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Calculated Secrets uuid ref: secrets/c80ef54e-e36d-4556-816f-4a018866f104
Oct 10 23:54:47 np0005480824 nova_compute[260089]: 2025-10-11 03:54:47.186 2 DEBUG barbicanclient.client [None req-33d1bbe2-f242-4f7a-991d-b470741dc97d f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 10 23:54:47 np0005480824 nova_compute[260089]: 2025-10-11 03:54:47.188 2 INFO barbicanclient.base [None req-33d1bbe2-f242-4f7a-991d-b470741dc97d f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Calculated Secrets uuid ref: secrets/c80ef54e-e36d-4556-816f-4a018866f104
Oct 10 23:54:47 np0005480824 nova_compute[260089]: 2025-10-11 03:54:47.212 2 DEBUG barbicanclient.client [None req-33d1bbe2-f242-4f7a-991d-b470741dc97d f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 10 23:54:47 np0005480824 nova_compute[260089]: 2025-10-11 03:54:47.213 2 INFO barbicanclient.base [None req-33d1bbe2-f242-4f7a-991d-b470741dc97d f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Calculated Secrets uuid ref: secrets/c80ef54e-e36d-4556-816f-4a018866f104
Oct 10 23:54:47 np0005480824 nova_compute[260089]: 2025-10-11 03:54:47.236 2 DEBUG barbicanclient.client [None req-33d1bbe2-f242-4f7a-991d-b470741dc97d f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 10 23:54:47 np0005480824 nova_compute[260089]: 2025-10-11 03:54:47.237 2 INFO barbicanclient.base [None req-33d1bbe2-f242-4f7a-991d-b470741dc97d f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Calculated Secrets uuid ref: secrets/c80ef54e-e36d-4556-816f-4a018866f104
Oct 10 23:54:47 np0005480824 nova_compute[260089]: 2025-10-11 03:54:47.268 2 DEBUG barbicanclient.client [None req-33d1bbe2-f242-4f7a-991d-b470741dc97d f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 10 23:54:47 np0005480824 nova_compute[260089]: 2025-10-11 03:54:47.269 2 DEBUG nova.virt.libvirt.host [None req-33d1bbe2-f242-4f7a-991d-b470741dc97d f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Secret XML: <secret ephemeral="no" private="no">
Oct 10 23:54:47 np0005480824 nova_compute[260089]:  <usage type="volume">
Oct 10 23:54:47 np0005480824 nova_compute[260089]:    <volume>f90d54d3-906d-4a9c-ab76-f4fb002378ba</volume>
Oct 10 23:54:47 np0005480824 nova_compute[260089]:  </usage>
Oct 10 23:54:47 np0005480824 nova_compute[260089]: </secret>
Oct 10 23:54:47 np0005480824 nova_compute[260089]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
Oct 10 23:54:47 np0005480824 nova_compute[260089]: 2025-10-11 03:54:47.283 2 DEBUG nova.objects.instance [None req-33d1bbe2-f242-4f7a-991d-b470741dc97d f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Lazy-loading 'flavor' on Instance uuid 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 10 23:54:47 np0005480824 nova_compute[260089]: 2025-10-11 03:54:47.315 2 DEBUG nova.virt.libvirt.driver [None req-33d1bbe2-f242-4f7a-991d-b470741dc97d f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Attempting to attach volume f90d54d3-906d-4a9c-ab76-f4fb002378ba with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Oct 10 23:54:47 np0005480824 nova_compute[260089]: 2025-10-11 03:54:47.319 2 DEBUG nova.virt.libvirt.guest [None req-33d1bbe2-f242-4f7a-991d-b470741dc97d f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] attach device xml: <disk type="network" device="disk">
Oct 10 23:54:47 np0005480824 nova_compute[260089]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 10 23:54:47 np0005480824 nova_compute[260089]:  <source protocol="rbd" name="volumes/volume-f90d54d3-906d-4a9c-ab76-f4fb002378ba">
Oct 10 23:54:47 np0005480824 nova_compute[260089]:    <host name="192.168.122.100" port="6789"/>
Oct 10 23:54:47 np0005480824 nova_compute[260089]:  </source>
Oct 10 23:54:47 np0005480824 nova_compute[260089]:  <auth username="openstack">
Oct 10 23:54:47 np0005480824 nova_compute[260089]:    <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 10 23:54:47 np0005480824 nova_compute[260089]:  </auth>
Oct 10 23:54:47 np0005480824 nova_compute[260089]:  <target dev="vdb" bus="virtio"/>
Oct 10 23:54:47 np0005480824 nova_compute[260089]:  <serial>f90d54d3-906d-4a9c-ab76-f4fb002378ba</serial>
Oct 10 23:54:47 np0005480824 nova_compute[260089]:  <encryption format="luks">
Oct 10 23:54:47 np0005480824 nova_compute[260089]:    <secret type="passphrase" uuid="5f0387b9-44f3-483a-8648-c2ee92f38a13"/>
Oct 10 23:54:47 np0005480824 nova_compute[260089]:  </encryption>
Oct 10 23:54:47 np0005480824 nova_compute[260089]: </disk>
Oct 10 23:54:47 np0005480824 nova_compute[260089]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 10 23:54:48 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1351: 321 pgs: 321 active+clean; 268 MiB data, 425 MiB used, 60 GiB / 60 GiB avail; 281 KiB/s rd, 267 KiB/s wr, 45 op/s
Oct 10 23:54:48 np0005480824 nova_compute[260089]: 2025-10-11 03:54:48.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 23:54:49 np0005480824 nova_compute[260089]: 2025-10-11 03:54:49.245 2 DEBUG oslo_concurrency.lockutils [None req-44e16a0c-f25d-4319-aa65-96dad01edc0d 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Acquiring lock "17e293fc-58db-41da-a59c-d4a11dcbe09e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 23:54:49 np0005480824 nova_compute[260089]: 2025-10-11 03:54:49.245 2 DEBUG oslo_concurrency.lockutils [None req-44e16a0c-f25d-4319-aa65-96dad01edc0d 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "17e293fc-58db-41da-a59c-d4a11dcbe09e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 23:54:49 np0005480824 nova_compute[260089]: 2025-10-11 03:54:49.246 2 DEBUG oslo_concurrency.lockutils [None req-44e16a0c-f25d-4319-aa65-96dad01edc0d 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Acquiring lock "17e293fc-58db-41da-a59c-d4a11dcbe09e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 23:54:49 np0005480824 nova_compute[260089]: 2025-10-11 03:54:49.246 2 DEBUG oslo_concurrency.lockutils [None req-44e16a0c-f25d-4319-aa65-96dad01edc0d 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "17e293fc-58db-41da-a59c-d4a11dcbe09e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 23:54:49 np0005480824 nova_compute[260089]: 2025-10-11 03:54:49.246 2 DEBUG oslo_concurrency.lockutils [None req-44e16a0c-f25d-4319-aa65-96dad01edc0d 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "17e293fc-58db-41da-a59c-d4a11dcbe09e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 23:54:49 np0005480824 nova_compute[260089]: 2025-10-11 03:54:49.249 2 INFO nova.compute.manager [None req-44e16a0c-f25d-4319-aa65-96dad01edc0d 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] Terminating instance
Oct 10 23:54:49 np0005480824 nova_compute[260089]: 2025-10-11 03:54:49.250 2 DEBUG nova.compute.manager [None req-44e16a0c-f25d-4319-aa65-96dad01edc0d 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 10 23:54:49 np0005480824 kernel: tapfbeb2d21-6b (unregistering): left promiscuous mode
Oct 10 23:54:49 np0005480824 NetworkManager[44969]: <info>  [1760154889.3088] device (tapfbeb2d21-6b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 10 23:54:49 np0005480824 ovn_controller[152667]: 2025-10-11T03:54:49Z|00156|binding|INFO|Releasing lport fbeb2d21-6b02-4a09-88b3-ceb28c9b76c7 from this chassis (sb_readonly=0)
Oct 10 23:54:49 np0005480824 ovn_controller[152667]: 2025-10-11T03:54:49Z|00157|binding|INFO|Setting lport fbeb2d21-6b02-4a09-88b3-ceb28c9b76c7 down in Southbound
Oct 10 23:54:49 np0005480824 ovn_controller[152667]: 2025-10-11T03:54:49Z|00158|binding|INFO|Removing iface tapfbeb2d21-6b ovn-installed in OVS
Oct 10 23:54:49 np0005480824 nova_compute[260089]: 2025-10-11 03:54:49.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 23:54:49 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:49.332 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:cc:f7 10.100.0.14'], port_security=['fa:16:3e:b7:cc:f7 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '17e293fc-58db-41da-a59c-d4a11dcbe09e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-359720eb-a957-4bcd-b9b2-3cf7dad947e4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '55d21391a321476eb133317b3402b0f0', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f789c11b-b6c2-4d7b-80ca-8c28d7662d10', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.236'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d98e64fb-092d-4777-b741-426f3e849bc3, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], logical_port=fbeb2d21-6b02-4a09-88b3-ceb28c9b76c7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 10 23:54:49 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:49.333 162245 INFO neutron.agent.ovn.metadata.agent [-] Port fbeb2d21-6b02-4a09-88b3-ceb28c9b76c7 in datapath 359720eb-a957-4bcd-b9b2-3cf7dad947e4 unbound from our chassis
Oct 10 23:54:49 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:49.334 162245 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 359720eb-a957-4bcd-b9b2-3cf7dad947e4
Oct 10 23:54:49 np0005480824 nova_compute[260089]: 2025-10-11 03:54:49.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 23:54:49 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:49.353 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[0f8e9637-1c42-4de3-9fd7-7e77f72f0669]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 23:54:49 np0005480824 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000011.scope: Deactivated successfully.
Oct 10 23:54:49 np0005480824 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000011.scope: Consumed 15.074s CPU time.
Oct 10 23:54:49 np0005480824 systemd-machined[215071]: Machine qemu-17-instance-00000011 terminated.
Oct 10 23:54:49 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:49.381 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[a37902d0-2219-4b77-87d5-30e3aecb164c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:54:49 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:49.384 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[f151c82b-2d7c-47f0-b4fd-5b3e3c1ff646]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:54:49 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:49.407 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[b3d2d77b-5ceb-48dc-91ea-09e1a5cd66a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:54:49 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:49.425 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[394ba658-f0dd-4d25-aefa-0c92f39392a2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap359720eb-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:22:90:b3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 53], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 428062, 'reachable_time': 19294, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287742, 'error': None, 'target': 'ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:54:49 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:49.438 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[7308ef9c-6dd8-4487-af48-75c9b9445652]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap359720eb-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 428077, 'tstamp': 428077}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287743, 'error': None, 'target': 'ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap359720eb-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 428081, 'tstamp': 428081}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287743, 'error': None, 'target': 'ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:54:49 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:49.439 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap359720eb-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:54:49 np0005480824 nova_compute[260089]: 2025-10-11 03:54:49.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:54:49 np0005480824 nova_compute[260089]: 2025-10-11 03:54:49.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:54:49 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:49.445 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap359720eb-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:54:49 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:49.446 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 10 23:54:49 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:49.446 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap359720eb-a0, col_values=(('external_ids', {'iface-id': '039c7668-0b85-4466-9c66-62531405028d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:54:49 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:49.447 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 10 23:54:49 np0005480824 nova_compute[260089]: 2025-10-11 03:54:49.492 2 INFO nova.virt.libvirt.driver [-] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] Instance destroyed successfully.#033[00m
Oct 10 23:54:49 np0005480824 nova_compute[260089]: 2025-10-11 03:54:49.493 2 DEBUG nova.objects.instance [None req-44e16a0c-f25d-4319-aa65-96dad01edc0d 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lazy-loading 'resources' on Instance uuid 17e293fc-58db-41da-a59c-d4a11dcbe09e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:54:49 np0005480824 nova_compute[260089]: 2025-10-11 03:54:49.507 2 DEBUG nova.virt.libvirt.vif [None req-44e16a0c-f25d-4319-aa65-96dad01edc0d 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T03:54:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-image-snapshot-server-1495777670',display_name='tempest-TestVolumeBootPattern-image-snapshot-server-1495777670',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-image-snapshot-server-1495777670',id=17,image_ref='2b07f57b-601d-45a2-951f-e059c29ac235',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEeexnNbf0Ewcnzlfahch8vpVt8DV1s4pp3AdYQt5o1rEf/nKrI59oii/zZgIaNBaqG0YVDxIG4syyDNXbiktWJ3d9SJAD8rQ4rJCogh2BkvtrOJho1QQo72Hv0U+zAqeg==',key_name='tempest-keypair-330787313',keypairs=<?>,launch_index=0,launched_at=2025-10-11T03:54:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='55d21391a321476eb133317b3402b0f0',ramdisk_id='',reservation_id='r-n4j56ht1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_bdm_v2='True',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_project_name='tempest-TestVolumeBootPattern-739984652',image_owner_user_name='tempest-TestVolumeBootPattern-739984652-project-member',image_root_device_name='/dev/vda',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-739984652',owner_user_name='tempest-TestVolumeBootPattern-739984652-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T03:54:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='38ebc503771e417aaf1f3aea0c835994',uuid=17e293fc-58db-41da-a59c-d4a11dcbe09e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fbeb2d21-6b02-4a09-88b3-ceb28c9b76c7", 
"address": "fa:16:3e:b7:cc:f7", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbeb2d21-6b", "ovs_interfaceid": "fbeb2d21-6b02-4a09-88b3-ceb28c9b76c7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 10 23:54:49 np0005480824 nova_compute[260089]: 2025-10-11 03:54:49.508 2 DEBUG nova.network.os_vif_util [None req-44e16a0c-f25d-4319-aa65-96dad01edc0d 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Converting VIF {"id": "fbeb2d21-6b02-4a09-88b3-ceb28c9b76c7", "address": "fa:16:3e:b7:cc:f7", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbeb2d21-6b", "ovs_interfaceid": "fbeb2d21-6b02-4a09-88b3-ceb28c9b76c7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 23:54:49 np0005480824 nova_compute[260089]: 2025-10-11 03:54:49.509 2 DEBUG nova.network.os_vif_util [None req-44e16a0c-f25d-4319-aa65-96dad01edc0d 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b7:cc:f7,bridge_name='br-int',has_traffic_filtering=True,id=fbeb2d21-6b02-4a09-88b3-ceb28c9b76c7,network=Network(359720eb-a957-4bcd-b9b2-3cf7dad947e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbeb2d21-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 23:54:49 np0005480824 nova_compute[260089]: 2025-10-11 03:54:49.510 2 DEBUG os_vif [None req-44e16a0c-f25d-4319-aa65-96dad01edc0d 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b7:cc:f7,bridge_name='br-int',has_traffic_filtering=True,id=fbeb2d21-6b02-4a09-88b3-ceb28c9b76c7,network=Network(359720eb-a957-4bcd-b9b2-3cf7dad947e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbeb2d21-6b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 10 23:54:49 np0005480824 nova_compute[260089]: 2025-10-11 03:54:49.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:54:49 np0005480824 nova_compute[260089]: 2025-10-11 03:54:49.513 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbeb2d21-6b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:54:49 np0005480824 nova_compute[260089]: 2025-10-11 03:54:49.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:54:49 np0005480824 nova_compute[260089]: 2025-10-11 03:54:49.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 10 23:54:49 np0005480824 nova_compute[260089]: 2025-10-11 03:54:49.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:54:49 np0005480824 nova_compute[260089]: 2025-10-11 03:54:49.526 2 INFO os_vif [None req-44e16a0c-f25d-4319-aa65-96dad01edc0d 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b7:cc:f7,bridge_name='br-int',has_traffic_filtering=True,id=fbeb2d21-6b02-4a09-88b3-ceb28c9b76c7,network=Network(359720eb-a957-4bcd-b9b2-3cf7dad947e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbeb2d21-6b')#033[00m
Oct 10 23:54:49 np0005480824 nova_compute[260089]: 2025-10-11 03:54:49.551 2 DEBUG nova.compute.manager [req-817cb798-dc39-4e1e-893f-c870d9e9ce4b req-20cb2f18-aea2-4360-b3ad-a9727b916b4e 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] Received event network-vif-unplugged-fbeb2d21-6b02-4a09-88b3-ceb28c9b76c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:54:49 np0005480824 nova_compute[260089]: 2025-10-11 03:54:49.552 2 DEBUG oslo_concurrency.lockutils [req-817cb798-dc39-4e1e-893f-c870d9e9ce4b req-20cb2f18-aea2-4360-b3ad-a9727b916b4e 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "17e293fc-58db-41da-a59c-d4a11dcbe09e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:54:49 np0005480824 nova_compute[260089]: 2025-10-11 03:54:49.553 2 DEBUG oslo_concurrency.lockutils [req-817cb798-dc39-4e1e-893f-c870d9e9ce4b req-20cb2f18-aea2-4360-b3ad-a9727b916b4e 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "17e293fc-58db-41da-a59c-d4a11dcbe09e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:54:49 np0005480824 nova_compute[260089]: 2025-10-11 03:54:49.553 2 DEBUG oslo_concurrency.lockutils [req-817cb798-dc39-4e1e-893f-c870d9e9ce4b req-20cb2f18-aea2-4360-b3ad-a9727b916b4e 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "17e293fc-58db-41da-a59c-d4a11dcbe09e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:54:49 np0005480824 nova_compute[260089]: 2025-10-11 03:54:49.554 2 DEBUG nova.compute.manager [req-817cb798-dc39-4e1e-893f-c870d9e9ce4b req-20cb2f18-aea2-4360-b3ad-a9727b916b4e 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] No waiting events found dispatching network-vif-unplugged-fbeb2d21-6b02-4a09-88b3-ceb28c9b76c7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 10 23:54:49 np0005480824 nova_compute[260089]: 2025-10-11 03:54:49.554 2 DEBUG nova.compute.manager [req-817cb798-dc39-4e1e-893f-c870d9e9ce4b req-20cb2f18-aea2-4360-b3ad-a9727b916b4e 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] Received event network-vif-unplugged-fbeb2d21-6b02-4a09-88b3-ceb28c9b76c7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 10 23:54:49 np0005480824 nova_compute[260089]: 2025-10-11 03:54:49.754 2 INFO nova.virt.libvirt.driver [None req-44e16a0c-f25d-4319-aa65-96dad01edc0d 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] Deleting instance files /var/lib/nova/instances/17e293fc-58db-41da-a59c-d4a11dcbe09e_del#033[00m
Oct 10 23:54:49 np0005480824 nova_compute[260089]: 2025-10-11 03:54:49.755 2 INFO nova.virt.libvirt.driver [None req-44e16a0c-f25d-4319-aa65-96dad01edc0d 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] Deletion of /var/lib/nova/instances/17e293fc-58db-41da-a59c-d4a11dcbe09e_del complete#033[00m
Oct 10 23:54:49 np0005480824 nova_compute[260089]: 2025-10-11 03:54:49.822 2 INFO nova.compute.manager [None req-44e16a0c-f25d-4319-aa65-96dad01edc0d 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] Took 0.57 seconds to destroy the instance on the hypervisor.#033[00m
Oct 10 23:54:49 np0005480824 nova_compute[260089]: 2025-10-11 03:54:49.823 2 DEBUG oslo.service.loopingcall [None req-44e16a0c-f25d-4319-aa65-96dad01edc0d 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 10 23:54:49 np0005480824 nova_compute[260089]: 2025-10-11 03:54:49.823 2 DEBUG nova.compute.manager [-] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 10 23:54:49 np0005480824 nova_compute[260089]: 2025-10-11 03:54:49.824 2 DEBUG nova.network.neutron [-] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 10 23:54:49 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:54:49 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e300 do_prune osdmap full prune enabled
Oct 10 23:54:49 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e301 e301: 3 total, 3 up, 3 in
Oct 10 23:54:49 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e301: 3 total, 3 up, 3 in
Oct 10 23:54:50 np0005480824 nova_compute[260089]: 2025-10-11 03:54:50.039 2 DEBUG nova.virt.libvirt.driver [None req-33d1bbe2-f242-4f7a-991d-b470741dc97d f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 23:54:50 np0005480824 nova_compute[260089]: 2025-10-11 03:54:50.039 2 DEBUG nova.virt.libvirt.driver [None req-33d1bbe2-f242-4f7a-991d-b470741dc97d f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 23:54:50 np0005480824 nova_compute[260089]: 2025-10-11 03:54:50.040 2 DEBUG nova.virt.libvirt.driver [None req-33d1bbe2-f242-4f7a-991d-b470741dc97d f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 23:54:50 np0005480824 nova_compute[260089]: 2025-10-11 03:54:50.040 2 DEBUG nova.virt.libvirt.driver [None req-33d1bbe2-f242-4f7a-991d-b470741dc97d f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] No VIF found with MAC fa:16:3e:46:26:da, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 10 23:54:50 np0005480824 nova_compute[260089]: 2025-10-11 03:54:50.260 2 DEBUG oslo_concurrency.lockutils [None req-33d1bbe2-f242-4f7a-991d-b470741dc97d f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Lock "6fc56e59-9278-4ac2-89ed-ca93f2f17d1d" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 5.026s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:54:50 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1353: 321 pgs: 321 active+clean; 268 MiB data, 425 MiB used, 60 GiB / 60 GiB avail; 279 KiB/s rd, 295 KiB/s wr, 8 op/s
Oct 10 23:54:51 np0005480824 nova_compute[260089]: 2025-10-11 03:54:51.466 2 DEBUG nova.network.neutron [-] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:54:51 np0005480824 nova_compute[260089]: 2025-10-11 03:54:51.483 2 INFO nova.compute.manager [-] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] Took 1.66 seconds to deallocate network for instance.#033[00m
Oct 10 23:54:51 np0005480824 nova_compute[260089]: 2025-10-11 03:54:51.652 2 DEBUG nova.compute.manager [req-81f0dd62-683d-49cc-bbca-5f8f393a808f req-61880161-e32e-4a8e-8d55-68559456d8fb 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] Received event network-vif-plugged-fbeb2d21-6b02-4a09-88b3-ceb28c9b76c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:54:51 np0005480824 nova_compute[260089]: 2025-10-11 03:54:51.653 2 DEBUG oslo_concurrency.lockutils [req-81f0dd62-683d-49cc-bbca-5f8f393a808f req-61880161-e32e-4a8e-8d55-68559456d8fb 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "17e293fc-58db-41da-a59c-d4a11dcbe09e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:54:51 np0005480824 nova_compute[260089]: 2025-10-11 03:54:51.654 2 DEBUG oslo_concurrency.lockutils [req-81f0dd62-683d-49cc-bbca-5f8f393a808f req-61880161-e32e-4a8e-8d55-68559456d8fb 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "17e293fc-58db-41da-a59c-d4a11dcbe09e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:54:51 np0005480824 nova_compute[260089]: 2025-10-11 03:54:51.655 2 DEBUG oslo_concurrency.lockutils [req-81f0dd62-683d-49cc-bbca-5f8f393a808f req-61880161-e32e-4a8e-8d55-68559456d8fb 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "17e293fc-58db-41da-a59c-d4a11dcbe09e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:54:51 np0005480824 nova_compute[260089]: 2025-10-11 03:54:51.655 2 DEBUG nova.compute.manager [req-81f0dd62-683d-49cc-bbca-5f8f393a808f req-61880161-e32e-4a8e-8d55-68559456d8fb 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] No waiting events found dispatching network-vif-plugged-fbeb2d21-6b02-4a09-88b3-ceb28c9b76c7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 10 23:54:51 np0005480824 nova_compute[260089]: 2025-10-11 03:54:51.656 2 WARNING nova.compute.manager [req-81f0dd62-683d-49cc-bbca-5f8f393a808f req-61880161-e32e-4a8e-8d55-68559456d8fb 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] Received unexpected event network-vif-plugged-fbeb2d21-6b02-4a09-88b3-ceb28c9b76c7 for instance with vm_state active and task_state deleting.#033[00m
Oct 10 23:54:51 np0005480824 nova_compute[260089]: 2025-10-11 03:54:51.657 2 DEBUG nova.compute.manager [req-81f0dd62-683d-49cc-bbca-5f8f393a808f req-61880161-e32e-4a8e-8d55-68559456d8fb 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] Received event network-vif-deleted-fbeb2d21-6b02-4a09-88b3-ceb28c9b76c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:54:51 np0005480824 nova_compute[260089]: 2025-10-11 03:54:51.693 2 DEBUG oslo_concurrency.lockutils [None req-39d9a1db-a2f0-4305-89f9-38db7abb16e0 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Acquiring lock "6fc56e59-9278-4ac2-89ed-ca93f2f17d1d" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:54:51 np0005480824 nova_compute[260089]: 2025-10-11 03:54:51.694 2 DEBUG oslo_concurrency.lockutils [None req-39d9a1db-a2f0-4305-89f9-38db7abb16e0 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Lock "6fc56e59-9278-4ac2-89ed-ca93f2f17d1d" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:54:51 np0005480824 nova_compute[260089]: 2025-10-11 03:54:51.710 2 INFO nova.compute.manager [None req-39d9a1db-a2f0-4305-89f9-38db7abb16e0 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Detaching volume f90d54d3-906d-4a9c-ab76-f4fb002378ba#033[00m
Oct 10 23:54:51 np0005480824 nova_compute[260089]: 2025-10-11 03:54:51.785 2 INFO nova.compute.manager [None req-44e16a0c-f25d-4319-aa65-96dad01edc0d 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] Took 0.30 seconds to detach 1 volumes for instance.#033[00m
Oct 10 23:54:51 np0005480824 nova_compute[260089]: 2025-10-11 03:54:51.787 2 DEBUG nova.compute.manager [None req-44e16a0c-f25d-4319-aa65-96dad01edc0d 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] Deleting volume: cde42382-994a-48b2-919e-146ff619a3ac _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217#033[00m
Oct 10 23:54:51 np0005480824 nova_compute[260089]: 2025-10-11 03:54:51.866 2 INFO nova.virt.block_device [None req-39d9a1db-a2f0-4305-89f9-38db7abb16e0 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Attempting to driver detach volume f90d54d3-906d-4a9c-ab76-f4fb002378ba from mountpoint /dev/vdb#033[00m
Oct 10 23:54:52 np0005480824 nova_compute[260089]: 2025-10-11 03:54:52.003 2 DEBUG oslo_concurrency.lockutils [None req-44e16a0c-f25d-4319-aa65-96dad01edc0d 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:54:52 np0005480824 nova_compute[260089]: 2025-10-11 03:54:52.004 2 DEBUG oslo_concurrency.lockutils [None req-44e16a0c-f25d-4319-aa65-96dad01edc0d 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:54:52 np0005480824 nova_compute[260089]: 2025-10-11 03:54:52.026 2 DEBUG os_brick.encryptors [None req-39d9a1db-a2f0-4305-89f9-38db7abb16e0 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Using volume encryption metadata '{'encryption_key_id': 'c80ef54e-e36d-4556-816f-4a018866f104', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-f90d54d3-906d-4a9c-ab76-f4fb002378ba', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'f90d54d3-906d-4a9c-ab76-f4fb002378ba', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '6fc56e59-9278-4ac2-89ed-ca93f2f17d1d', 'attached_at': '', 'detached_at': '', 'volume_id': 'f90d54d3-906d-4a9c-ab76-f4fb002378ba', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Oct 10 23:54:52 np0005480824 nova_compute[260089]: 2025-10-11 03:54:52.035 2 DEBUG nova.virt.libvirt.driver [None req-39d9a1db-a2f0-4305-89f9-38db7abb16e0 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Attempting to detach device vdb from instance 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct 10 23:54:52 np0005480824 nova_compute[260089]: 2025-10-11 03:54:52.036 2 DEBUG nova.virt.libvirt.guest [None req-39d9a1db-a2f0-4305-89f9-38db7abb16e0 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] detach device xml: <disk type="network" device="disk">
Oct 10 23:54:52 np0005480824 nova_compute[260089]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 10 23:54:52 np0005480824 nova_compute[260089]:  <source protocol="rbd" name="volumes/volume-f90d54d3-906d-4a9c-ab76-f4fb002378ba">
Oct 10 23:54:52 np0005480824 nova_compute[260089]:    <host name="192.168.122.100" port="6789"/>
Oct 10 23:54:52 np0005480824 nova_compute[260089]:  </source>
Oct 10 23:54:52 np0005480824 nova_compute[260089]:  <target dev="vdb" bus="virtio"/>
Oct 10 23:54:52 np0005480824 nova_compute[260089]:  <serial>f90d54d3-906d-4a9c-ab76-f4fb002378ba</serial>
Oct 10 23:54:52 np0005480824 nova_compute[260089]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 10 23:54:52 np0005480824 nova_compute[260089]:  <encryption format="luks">
Oct 10 23:54:52 np0005480824 nova_compute[260089]:    <secret type="passphrase" uuid="5f0387b9-44f3-483a-8648-c2ee92f38a13"/>
Oct 10 23:54:52 np0005480824 nova_compute[260089]:  </encryption>
Oct 10 23:54:52 np0005480824 nova_compute[260089]: </disk>
Oct 10 23:54:52 np0005480824 nova_compute[260089]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct 10 23:54:52 np0005480824 nova_compute[260089]: 2025-10-11 03:54:52.051 2 INFO nova.virt.libvirt.driver [None req-39d9a1db-a2f0-4305-89f9-38db7abb16e0 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Successfully detached device vdb from instance 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d from the persistent domain config.#033[00m
Oct 10 23:54:52 np0005480824 nova_compute[260089]: 2025-10-11 03:54:52.052 2 DEBUG nova.virt.libvirt.driver [None req-39d9a1db-a2f0-4305-89f9-38db7abb16e0 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Oct 10 23:54:52 np0005480824 nova_compute[260089]: 2025-10-11 03:54:52.053 2 DEBUG nova.virt.libvirt.guest [None req-39d9a1db-a2f0-4305-89f9-38db7abb16e0 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] detach device xml: <disk type="network" device="disk">
Oct 10 23:54:52 np0005480824 nova_compute[260089]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 10 23:54:52 np0005480824 nova_compute[260089]:  <source protocol="rbd" name="volumes/volume-f90d54d3-906d-4a9c-ab76-f4fb002378ba">
Oct 10 23:54:52 np0005480824 nova_compute[260089]:    <host name="192.168.122.100" port="6789"/>
Oct 10 23:54:52 np0005480824 nova_compute[260089]:  </source>
Oct 10 23:54:52 np0005480824 nova_compute[260089]:  <target dev="vdb" bus="virtio"/>
Oct 10 23:54:52 np0005480824 nova_compute[260089]:  <serial>f90d54d3-906d-4a9c-ab76-f4fb002378ba</serial>
Oct 10 23:54:52 np0005480824 nova_compute[260089]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 10 23:54:52 np0005480824 nova_compute[260089]:  <encryption format="luks">
Oct 10 23:54:52 np0005480824 nova_compute[260089]:    <secret type="passphrase" uuid="5f0387b9-44f3-483a-8648-c2ee92f38a13"/>
Oct 10 23:54:52 np0005480824 nova_compute[260089]:  </encryption>
Oct 10 23:54:52 np0005480824 nova_compute[260089]: </disk>
Oct 10 23:54:52 np0005480824 nova_compute[260089]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct 10 23:54:52 np0005480824 nova_compute[260089]: 2025-10-11 03:54:52.111 2 DEBUG oslo_concurrency.processutils [None req-44e16a0c-f25d-4319-aa65-96dad01edc0d 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:54:52 np0005480824 nova_compute[260089]: 2025-10-11 03:54:52.169 2 DEBUG nova.virt.libvirt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Received event <DeviceRemovedEvent: 1760154892.1679547, 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Oct 10 23:54:52 np0005480824 nova_compute[260089]: 2025-10-11 03:54:52.173 2 DEBUG nova.virt.libvirt.driver [None req-39d9a1db-a2f0-4305-89f9-38db7abb16e0 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Oct 10 23:54:52 np0005480824 nova_compute[260089]: 2025-10-11 03:54:52.180 2 INFO nova.virt.libvirt.driver [None req-39d9a1db-a2f0-4305-89f9-38db7abb16e0 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Successfully detached device vdb from instance 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d from the live domain config.#033[00m
Oct 10 23:54:52 np0005480824 nova_compute[260089]: 2025-10-11 03:54:52.380 2 DEBUG nova.objects.instance [None req-39d9a1db-a2f0-4305-89f9-38db7abb16e0 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Lazy-loading 'flavor' on Instance uuid 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:54:52 np0005480824 nova_compute[260089]: 2025-10-11 03:54:52.437 2 DEBUG oslo_concurrency.lockutils [None req-39d9a1db-a2f0-4305-89f9-38db7abb16e0 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Lock "6fc56e59-9278-4ac2-89ed-ca93f2f17d1d" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.743s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:54:52 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:54:52 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3190536529' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:54:52 np0005480824 nova_compute[260089]: 2025-10-11 03:54:52.631 2 DEBUG oslo_concurrency.processutils [None req-44e16a0c-f25d-4319-aa65-96dad01edc0d 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:54:52 np0005480824 nova_compute[260089]: 2025-10-11 03:54:52.638 2 DEBUG nova.compute.provider_tree [None req-44e16a0c-f25d-4319-aa65-96dad01edc0d 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 23:54:52 np0005480824 nova_compute[260089]: 2025-10-11 03:54:52.657 2 DEBUG nova.scheduler.client.report [None req-44e16a0c-f25d-4319-aa65-96dad01edc0d 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 23:54:52 np0005480824 nova_compute[260089]: 2025-10-11 03:54:52.681 2 DEBUG oslo_concurrency.lockutils [None req-44e16a0c-f25d-4319-aa65-96dad01edc0d 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:54:52 np0005480824 nova_compute[260089]: 2025-10-11 03:54:52.728 2 INFO nova.scheduler.client.report [None req-44e16a0c-f25d-4319-aa65-96dad01edc0d 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Deleted allocations for instance 17e293fc-58db-41da-a59c-d4a11dcbe09e#033[00m
Oct 10 23:54:52 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1354: 321 pgs: 321 active+clean; 264 MiB data, 425 MiB used, 60 GiB / 60 GiB avail; 352 KiB/s rd, 314 KiB/s wr, 65 op/s
Oct 10 23:54:52 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:54:52 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2955113760' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:54:52 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:54:52 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2955113760' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:54:52 np0005480824 nova_compute[260089]: 2025-10-11 03:54:52.830 2 DEBUG oslo_concurrency.lockutils [None req-44e16a0c-f25d-4319-aa65-96dad01edc0d 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "17e293fc-58db-41da-a59c-d4a11dcbe09e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.584s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:54:53 np0005480824 nova_compute[260089]: 2025-10-11 03:54:53.344 2 DEBUG oslo_concurrency.lockutils [None req-41bfdf53-389b-47ea-a2b7-5c3b89221e82 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Acquiring lock "6fc56e59-9278-4ac2-89ed-ca93f2f17d1d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:54:53 np0005480824 nova_compute[260089]: 2025-10-11 03:54:53.345 2 DEBUG oslo_concurrency.lockutils [None req-41bfdf53-389b-47ea-a2b7-5c3b89221e82 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Lock "6fc56e59-9278-4ac2-89ed-ca93f2f17d1d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:54:53 np0005480824 nova_compute[260089]: 2025-10-11 03:54:53.345 2 DEBUG oslo_concurrency.lockutils [None req-41bfdf53-389b-47ea-a2b7-5c3b89221e82 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Acquiring lock "6fc56e59-9278-4ac2-89ed-ca93f2f17d1d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:54:53 np0005480824 nova_compute[260089]: 2025-10-11 03:54:53.345 2 DEBUG oslo_concurrency.lockutils [None req-41bfdf53-389b-47ea-a2b7-5c3b89221e82 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Lock "6fc56e59-9278-4ac2-89ed-ca93f2f17d1d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:54:53 np0005480824 nova_compute[260089]: 2025-10-11 03:54:53.346 2 DEBUG oslo_concurrency.lockutils [None req-41bfdf53-389b-47ea-a2b7-5c3b89221e82 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Lock "6fc56e59-9278-4ac2-89ed-ca93f2f17d1d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:54:53 np0005480824 nova_compute[260089]: 2025-10-11 03:54:53.347 2 INFO nova.compute.manager [None req-41bfdf53-389b-47ea-a2b7-5c3b89221e82 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Terminating instance#033[00m
Oct 10 23:54:53 np0005480824 nova_compute[260089]: 2025-10-11 03:54:53.347 2 DEBUG nova.compute.manager [None req-41bfdf53-389b-47ea-a2b7-5c3b89221e82 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 10 23:54:53 np0005480824 kernel: tap96788aff-c4 (unregistering): left promiscuous mode
Oct 10 23:54:53 np0005480824 NetworkManager[44969]: <info>  [1760154893.4092] device (tap96788aff-c4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 10 23:54:53 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:53.412 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=14b06507-d00b-4e27-a47d-46a5c2644635, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:54:53 np0005480824 nova_compute[260089]: 2025-10-11 03:54:53.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:54:53 np0005480824 ovn_controller[152667]: 2025-10-11T03:54:53Z|00159|binding|INFO|Releasing lport 96788aff-c48f-4de5-a500-c62a76db51e3 from this chassis (sb_readonly=0)
Oct 10 23:54:53 np0005480824 ovn_controller[152667]: 2025-10-11T03:54:53Z|00160|binding|INFO|Setting lport 96788aff-c48f-4de5-a500-c62a76db51e3 down in Southbound
Oct 10 23:54:53 np0005480824 ovn_controller[152667]: 2025-10-11T03:54:53Z|00161|binding|INFO|Removing iface tap96788aff-c4 ovn-installed in OVS
Oct 10 23:54:53 np0005480824 nova_compute[260089]: 2025-10-11 03:54:53.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:54:53 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:53.432 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:46:26:da 10.100.0.7'], port_security=['fa:16:3e:46:26:da 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '6fc56e59-9278-4ac2-89ed-ca93f2f17d1d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5bb06f57-fdf3-4bab-b3b4-81f9264d8f31', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bc155be8024d49b0ab4279dfca944e7d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b1bdb3f3-fea2-4df6-9718-0ba3c20debac', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.243'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=72ccff5f-b852-4556-9ac1-543256a57a7a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], logical_port=96788aff-c48f-4de5-a500-c62a76db51e3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 23:54:53 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:53.434 162245 INFO neutron.agent.ovn.metadata.agent [-] Port 96788aff-c48f-4de5-a500-c62a76db51e3 in datapath 5bb06f57-fdf3-4bab-b3b4-81f9264d8f31 unbound from our chassis#033[00m
Oct 10 23:54:53 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:53.435 162245 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5bb06f57-fdf3-4bab-b3b4-81f9264d8f31, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 10 23:54:53 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:53.436 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[a0e662b0-b9b8-4ca6-b53f-3e5f53aabac5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:54:53 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:53.437 162245 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5bb06f57-fdf3-4bab-b3b4-81f9264d8f31 namespace which is not needed anymore#033[00m
Oct 10 23:54:53 np0005480824 nova_compute[260089]: 2025-10-11 03:54:53.443 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:54:53 np0005480824 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d00000010.scope: Deactivated successfully.
Oct 10 23:54:53 np0005480824 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d00000010.scope: Consumed 17.591s CPU time.
Oct 10 23:54:53 np0005480824 systemd-machined[215071]: Machine qemu-16-instance-00000010 terminated.
Oct 10 23:54:53 np0005480824 neutron-haproxy-ovnmeta-5bb06f57-fdf3-4bab-b3b4-81f9264d8f31[286835]: [NOTICE]   (286854) : haproxy version is 2.8.14-c23fe91
Oct 10 23:54:53 np0005480824 neutron-haproxy-ovnmeta-5bb06f57-fdf3-4bab-b3b4-81f9264d8f31[286835]: [NOTICE]   (286854) : path to executable is /usr/sbin/haproxy
Oct 10 23:54:53 np0005480824 neutron-haproxy-ovnmeta-5bb06f57-fdf3-4bab-b3b4-81f9264d8f31[286835]: [WARNING]  (286854) : Exiting Master process...
Oct 10 23:54:53 np0005480824 neutron-haproxy-ovnmeta-5bb06f57-fdf3-4bab-b3b4-81f9264d8f31[286835]: [WARNING]  (286854) : Exiting Master process...
Oct 10 23:54:53 np0005480824 neutron-haproxy-ovnmeta-5bb06f57-fdf3-4bab-b3b4-81f9264d8f31[286835]: [ALERT]    (286854) : Current worker (286856) exited with code 143 (Terminated)
Oct 10 23:54:53 np0005480824 neutron-haproxy-ovnmeta-5bb06f57-fdf3-4bab-b3b4-81f9264d8f31[286835]: [WARNING]  (286854) : All workers exited. Exiting... (0)
Oct 10 23:54:53 np0005480824 systemd[1]: libpod-0b14503ea8c99d19f59456ae720e14a91f5502258bc84033fe3065733d8d0290.scope: Deactivated successfully.
Oct 10 23:54:53 np0005480824 podman[287822]: 2025-10-11 03:54:53.581861481 +0000 UTC m=+0.051698633 container died 0b14503ea8c99d19f59456ae720e14a91f5502258bc84033fe3065733d8d0290 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5bb06f57-fdf3-4bab-b3b4-81f9264d8f31, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 10 23:54:53 np0005480824 nova_compute[260089]: 2025-10-11 03:54:53.597 2 INFO nova.virt.libvirt.driver [-] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Instance destroyed successfully.#033[00m
Oct 10 23:54:53 np0005480824 nova_compute[260089]: 2025-10-11 03:54:53.598 2 DEBUG nova.objects.instance [None req-41bfdf53-389b-47ea-a2b7-5c3b89221e82 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Lazy-loading 'resources' on Instance uuid 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:54:53 np0005480824 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0b14503ea8c99d19f59456ae720e14a91f5502258bc84033fe3065733d8d0290-userdata-shm.mount: Deactivated successfully.
Oct 10 23:54:53 np0005480824 nova_compute[260089]: 2025-10-11 03:54:53.615 2 DEBUG nova.virt.libvirt.vif [None req-41bfdf53-389b-47ea-a2b7-5c3b89221e82 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T03:53:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-239614500',display_name='tempest-TestEncryptedCinderVolumes-server-239614500',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-239614500',id=16,image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMwxG/zesopBoPVz/9rAVJe3A5xc8Hswv45IHelamhTcP5G1hd1+D+iZm+B8qAqlvTb69iH7x/3vOfviPjx+iwLDGXWTBUSEGeDUceEgUvv2oMFHBA+QIfr5/C1y+DQYKw==',key_name='tempest-keypair-2024620623',keypairs=<?>,launch_index=0,launched_at=2025-10-11T03:54:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bc155be8024d49b0ab4279dfca944e7d',ramdisk_id='',reservation_id='r-2b97b1ft',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestEncryptedCinderVolumes-1105996806',owner_user_name='tempest-TestEncryptedCinderVolumes-1105996806-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T03:54:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f24819cdb3ee4b1f8a4a9e811a760a2c',uuid=6fc56e59-9278-4ac2-89ed-ca93f2f17d1d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "96788aff-c48f-4de5-a500-c62a76db51e3", "address": "fa:16:3e:46:26:da", "network": {"id": "5bb06f57-fdf3-4bab-b3b4-81f9264d8f31", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1614333098-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bc155be8024d49b0ab4279dfca944e7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96788aff-c4", "ovs_interfaceid": "96788aff-c48f-4de5-a500-c62a76db51e3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 10 23:54:53 np0005480824 nova_compute[260089]: 2025-10-11 03:54:53.615 2 DEBUG nova.network.os_vif_util [None req-41bfdf53-389b-47ea-a2b7-5c3b89221e82 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Converting VIF {"id": "96788aff-c48f-4de5-a500-c62a76db51e3", "address": "fa:16:3e:46:26:da", "network": {"id": "5bb06f57-fdf3-4bab-b3b4-81f9264d8f31", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1614333098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bc155be8024d49b0ab4279dfca944e7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96788aff-c4", "ovs_interfaceid": "96788aff-c48f-4de5-a500-c62a76db51e3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 23:54:53 np0005480824 nova_compute[260089]: 2025-10-11 03:54:53.617 2 DEBUG nova.network.os_vif_util [None req-41bfdf53-389b-47ea-a2b7-5c3b89221e82 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:46:26:da,bridge_name='br-int',has_traffic_filtering=True,id=96788aff-c48f-4de5-a500-c62a76db51e3,network=Network(5bb06f57-fdf3-4bab-b3b4-81f9264d8f31),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96788aff-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 23:54:53 np0005480824 nova_compute[260089]: 2025-10-11 03:54:53.617 2 DEBUG os_vif [None req-41bfdf53-389b-47ea-a2b7-5c3b89221e82 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:46:26:da,bridge_name='br-int',has_traffic_filtering=True,id=96788aff-c48f-4de5-a500-c62a76db51e3,network=Network(5bb06f57-fdf3-4bab-b3b4-81f9264d8f31),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96788aff-c4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 10 23:54:53 np0005480824 nova_compute[260089]: 2025-10-11 03:54:53.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:54:53 np0005480824 nova_compute[260089]: 2025-10-11 03:54:53.621 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap96788aff-c4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:54:53 np0005480824 systemd[1]: var-lib-containers-storage-overlay-3b44eea4f7bff4e1f1dbd44e51c9fd598ea50608d581f0332d444852df9eb3fe-merged.mount: Deactivated successfully.
Oct 10 23:54:53 np0005480824 nova_compute[260089]: 2025-10-11 03:54:53.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:54:53 np0005480824 nova_compute[260089]: 2025-10-11 03:54:53.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 10 23:54:53 np0005480824 nova_compute[260089]: 2025-10-11 03:54:53.629 2 INFO os_vif [None req-41bfdf53-389b-47ea-a2b7-5c3b89221e82 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:46:26:da,bridge_name='br-int',has_traffic_filtering=True,id=96788aff-c48f-4de5-a500-c62a76db51e3,network=Network(5bb06f57-fdf3-4bab-b3b4-81f9264d8f31),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96788aff-c4')#033[00m
Oct 10 23:54:53 np0005480824 podman[287822]: 2025-10-11 03:54:53.633324418 +0000 UTC m=+0.103161570 container cleanup 0b14503ea8c99d19f59456ae720e14a91f5502258bc84033fe3065733d8d0290 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5bb06f57-fdf3-4bab-b3b4-81f9264d8f31, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 10 23:54:53 np0005480824 systemd[1]: libpod-conmon-0b14503ea8c99d19f59456ae720e14a91f5502258bc84033fe3065733d8d0290.scope: Deactivated successfully.
Oct 10 23:54:53 np0005480824 podman[287871]: 2025-10-11 03:54:53.73197631 +0000 UTC m=+0.065598012 container remove 0b14503ea8c99d19f59456ae720e14a91f5502258bc84033fe3065733d8d0290 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5bb06f57-fdf3-4bab-b3b4-81f9264d8f31, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 10 23:54:53 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:53.740 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[3bd20a18-c5fa-4daf-a4e7-e5e29ca50d69]: (4, ('Sat Oct 11 03:54:53 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5bb06f57-fdf3-4bab-b3b4-81f9264d8f31 (0b14503ea8c99d19f59456ae720e14a91f5502258bc84033fe3065733d8d0290)\n0b14503ea8c99d19f59456ae720e14a91f5502258bc84033fe3065733d8d0290\nSat Oct 11 03:54:53 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5bb06f57-fdf3-4bab-b3b4-81f9264d8f31 (0b14503ea8c99d19f59456ae720e14a91f5502258bc84033fe3065733d8d0290)\n0b14503ea8c99d19f59456ae720e14a91f5502258bc84033fe3065733d8d0290\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:54:53 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:53.744 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[98f22d9c-6815-4724-98e4-792908dba263]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:54:53 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:53.745 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5bb06f57-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:54:53 np0005480824 nova_compute[260089]: 2025-10-11 03:54:53.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:54:53 np0005480824 kernel: tap5bb06f57-f0: left promiscuous mode
Oct 10 23:54:53 np0005480824 nova_compute[260089]: 2025-10-11 03:54:53.764 2 DEBUG nova.compute.manager [req-8eb43e99-3a25-4be8-ad47-5168ef413062 req-b947829a-09e0-4a0b-8019-9198c585b62b 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Received event network-vif-unplugged-96788aff-c48f-4de5-a500-c62a76db51e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:54:53 np0005480824 nova_compute[260089]: 2025-10-11 03:54:53.764 2 DEBUG oslo_concurrency.lockutils [req-8eb43e99-3a25-4be8-ad47-5168ef413062 req-b947829a-09e0-4a0b-8019-9198c585b62b 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "6fc56e59-9278-4ac2-89ed-ca93f2f17d1d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:54:53 np0005480824 nova_compute[260089]: 2025-10-11 03:54:53.765 2 DEBUG oslo_concurrency.lockutils [req-8eb43e99-3a25-4be8-ad47-5168ef413062 req-b947829a-09e0-4a0b-8019-9198c585b62b 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "6fc56e59-9278-4ac2-89ed-ca93f2f17d1d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:54:53 np0005480824 nova_compute[260089]: 2025-10-11 03:54:53.765 2 DEBUG oslo_concurrency.lockutils [req-8eb43e99-3a25-4be8-ad47-5168ef413062 req-b947829a-09e0-4a0b-8019-9198c585b62b 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "6fc56e59-9278-4ac2-89ed-ca93f2f17d1d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:54:53 np0005480824 nova_compute[260089]: 2025-10-11 03:54:53.765 2 DEBUG nova.compute.manager [req-8eb43e99-3a25-4be8-ad47-5168ef413062 req-b947829a-09e0-4a0b-8019-9198c585b62b 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] No waiting events found dispatching network-vif-unplugged-96788aff-c48f-4de5-a500-c62a76db51e3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 10 23:54:53 np0005480824 nova_compute[260089]: 2025-10-11 03:54:53.766 2 DEBUG nova.compute.manager [req-8eb43e99-3a25-4be8-ad47-5168ef413062 req-b947829a-09e0-4a0b-8019-9198c585b62b 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Received event network-vif-unplugged-96788aff-c48f-4de5-a500-c62a76db51e3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 10 23:54:53 np0005480824 nova_compute[260089]: 2025-10-11 03:54:53.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:54:53 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:53.776 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[1068eb04-e3b8-4a70-9b1f-168ae5794792]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:54:53 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:53.811 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[4b4bfc6b-e0b0-454d-8142-1edee4b5a5a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:54:53 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:53.813 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[de6c170b-72d0-497e-bcc7-eb3746a12777]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:54:53 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:53.835 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[cc863de5-0755-4e36-88d9-ab0431505e19]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 431580, 'reachable_time': 17631, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287897, 'error': None, 'target': 'ovnmeta-5bb06f57-fdf3-4bab-b3b4-81f9264d8f31', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:54:53 np0005480824 systemd[1]: run-netns-ovnmeta\x2d5bb06f57\x2dfdf3\x2d4bab\x2db3b4\x2d81f9264d8f31.mount: Deactivated successfully.
Oct 10 23:54:53 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:53.845 162666 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5bb06f57-fdf3-4bab-b3b4-81f9264d8f31 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 10 23:54:53 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:53.845 162666 DEBUG oslo.privsep.daemon [-] privsep: reply[8593c1b8-0b1a-4a69-8d5a-26b7a88a807c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:54:53 np0005480824 nova_compute[260089]: 2025-10-11 03:54:53.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:54:54 np0005480824 nova_compute[260089]: 2025-10-11 03:54:54.093 2 INFO nova.virt.libvirt.driver [None req-41bfdf53-389b-47ea-a2b7-5c3b89221e82 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Deleting instance files /var/lib/nova/instances/6fc56e59-9278-4ac2-89ed-ca93f2f17d1d_del#033[00m
Oct 10 23:54:54 np0005480824 nova_compute[260089]: 2025-10-11 03:54:54.094 2 INFO nova.virt.libvirt.driver [None req-41bfdf53-389b-47ea-a2b7-5c3b89221e82 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Deletion of /var/lib/nova/instances/6fc56e59-9278-4ac2-89ed-ca93f2f17d1d_del complete#033[00m
Oct 10 23:54:54 np0005480824 nova_compute[260089]: 2025-10-11 03:54:54.187 2 INFO nova.compute.manager [None req-41bfdf53-389b-47ea-a2b7-5c3b89221e82 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Took 0.84 seconds to destroy the instance on the hypervisor.#033[00m
Oct 10 23:54:54 np0005480824 nova_compute[260089]: 2025-10-11 03:54:54.189 2 DEBUG oslo.service.loopingcall [None req-41bfdf53-389b-47ea-a2b7-5c3b89221e82 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 10 23:54:54 np0005480824 nova_compute[260089]: 2025-10-11 03:54:54.189 2 DEBUG nova.compute.manager [-] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 10 23:54:54 np0005480824 nova_compute[260089]: 2025-10-11 03:54:54.189 2 DEBUG nova.network.neutron [-] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 10 23:54:54 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1355: 321 pgs: 321 active+clean; 264 MiB data, 425 MiB used, 60 GiB / 60 GiB avail; 284 KiB/s rd, 253 KiB/s wr, 52 op/s
Oct 10 23:54:54 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:54:54 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e301 do_prune osdmap full prune enabled
Oct 10 23:54:54 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e302 e302: 3 total, 3 up, 3 in
Oct 10 23:54:54 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e302: 3 total, 3 up, 3 in
Oct 10 23:54:54 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:54:54 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2276232840' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:54:54 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:54:54 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2276232840' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:54:55 np0005480824 nova_compute[260089]: 2025-10-11 03:54:55.306 2 DEBUG nova.network.neutron [-] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:54:55 np0005480824 nova_compute[260089]: 2025-10-11 03:54:55.341 2 INFO nova.compute.manager [-] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Took 1.15 seconds to deallocate network for instance.#033[00m
Oct 10 23:54:55 np0005480824 nova_compute[260089]: 2025-10-11 03:54:55.409 2 DEBUG oslo_concurrency.lockutils [None req-41bfdf53-389b-47ea-a2b7-5c3b89221e82 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:54:55 np0005480824 nova_compute[260089]: 2025-10-11 03:54:55.410 2 DEBUG oslo_concurrency.lockutils [None req-41bfdf53-389b-47ea-a2b7-5c3b89221e82 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:54:55 np0005480824 nova_compute[260089]: 2025-10-11 03:54:55.450 2 DEBUG oslo_concurrency.lockutils [None req-5e9a0c31-d319-4a90-aa65-1bbe21fc6a8c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Acquiring lock "ba9c01f8-cb0e-4564-879e-fb3102e2e76a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:54:55 np0005480824 nova_compute[260089]: 2025-10-11 03:54:55.451 2 DEBUG oslo_concurrency.lockutils [None req-5e9a0c31-d319-4a90-aa65-1bbe21fc6a8c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "ba9c01f8-cb0e-4564-879e-fb3102e2e76a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:54:55 np0005480824 nova_compute[260089]: 2025-10-11 03:54:55.452 2 DEBUG oslo_concurrency.lockutils [None req-5e9a0c31-d319-4a90-aa65-1bbe21fc6a8c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Acquiring lock "ba9c01f8-cb0e-4564-879e-fb3102e2e76a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:54:55 np0005480824 nova_compute[260089]: 2025-10-11 03:54:55.452 2 DEBUG oslo_concurrency.lockutils [None req-5e9a0c31-d319-4a90-aa65-1bbe21fc6a8c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "ba9c01f8-cb0e-4564-879e-fb3102e2e76a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:54:55 np0005480824 nova_compute[260089]: 2025-10-11 03:54:55.453 2 DEBUG oslo_concurrency.lockutils [None req-5e9a0c31-d319-4a90-aa65-1bbe21fc6a8c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "ba9c01f8-cb0e-4564-879e-fb3102e2e76a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:54:55 np0005480824 nova_compute[260089]: 2025-10-11 03:54:55.455 2 INFO nova.compute.manager [None req-5e9a0c31-d319-4a90-aa65-1bbe21fc6a8c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Terminating instance#033[00m
Oct 10 23:54:55 np0005480824 nova_compute[260089]: 2025-10-11 03:54:55.457 2 DEBUG nova.compute.manager [None req-5e9a0c31-d319-4a90-aa65-1bbe21fc6a8c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 10 23:54:55 np0005480824 nova_compute[260089]: 2025-10-11 03:54:55.496 2 DEBUG oslo_concurrency.processutils [None req-41bfdf53-389b-47ea-a2b7-5c3b89221e82 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:54:55 np0005480824 kernel: tap16c1f566-62 (unregistering): left promiscuous mode
Oct 10 23:54:55 np0005480824 NetworkManager[44969]: <info>  [1760154895.5470] device (tap16c1f566-62): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 10 23:54:55 np0005480824 nova_compute[260089]: 2025-10-11 03:54:55.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:54:55 np0005480824 ovn_controller[152667]: 2025-10-11T03:54:55Z|00162|binding|INFO|Releasing lport 16c1f566-62ec-4bf8-ae0e-225e1fad3288 from this chassis (sb_readonly=0)
Oct 10 23:54:55 np0005480824 ovn_controller[152667]: 2025-10-11T03:54:55Z|00163|binding|INFO|Setting lport 16c1f566-62ec-4bf8-ae0e-225e1fad3288 down in Southbound
Oct 10 23:54:55 np0005480824 ovn_controller[152667]: 2025-10-11T03:54:55Z|00164|binding|INFO|Removing iface tap16c1f566-62 ovn-installed in OVS
Oct 10 23:54:55 np0005480824 nova_compute[260089]: 2025-10-11 03:54:55.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:54:55 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:55.576 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2e:c5:07 10.100.0.8'], port_security=['fa:16:3e:2e:c5:07 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'ba9c01f8-cb0e-4564-879e-fb3102e2e76a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-359720eb-a957-4bcd-b9b2-3cf7dad947e4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '55d21391a321476eb133317b3402b0f0', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b740a105-f534-494b-b496-8cac5be77a8c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.217'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d98e64fb-092d-4777-b741-426f3e849bc3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], logical_port=16c1f566-62ec-4bf8-ae0e-225e1fad3288) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 23:54:55 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:55.578 162245 INFO neutron.agent.ovn.metadata.agent [-] Port 16c1f566-62ec-4bf8-ae0e-225e1fad3288 in datapath 359720eb-a957-4bcd-b9b2-3cf7dad947e4 unbound from our chassis#033[00m
Oct 10 23:54:55 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:55.579 162245 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 359720eb-a957-4bcd-b9b2-3cf7dad947e4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 10 23:54:55 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:55.580 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[0b5ad669-8abe-4be5-ac1e-2761505b789d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:54:55 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:55.582 162245 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4 namespace which is not needed anymore#033[00m
Oct 10 23:54:55 np0005480824 nova_compute[260089]: 2025-10-11 03:54:55.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:54:55 np0005480824 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Oct 10 23:54:55 np0005480824 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000f.scope: Consumed 17.013s CPU time.
Oct 10 23:54:55 np0005480824 systemd-machined[215071]: Machine qemu-15-instance-0000000f terminated.
Oct 10 23:54:55 np0005480824 NetworkManager[44969]: <info>  [1760154895.6805] manager: (tap16c1f566-62): new Tun device (/org/freedesktop/NetworkManager/Devices/92)
Oct 10 23:54:55 np0005480824 nova_compute[260089]: 2025-10-11 03:54:55.711 2 INFO nova.virt.libvirt.driver [-] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Instance destroyed successfully.#033[00m
Oct 10 23:54:55 np0005480824 nova_compute[260089]: 2025-10-11 03:54:55.711 2 DEBUG nova.objects.instance [None req-5e9a0c31-d319-4a90-aa65-1bbe21fc6a8c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lazy-loading 'resources' on Instance uuid ba9c01f8-cb0e-4564-879e-fb3102e2e76a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:54:55 np0005480824 nova_compute[260089]: 2025-10-11 03:54:55.729 2 DEBUG nova.virt.libvirt.vif [None req-5e9a0c31-d319-4a90-aa65-1bbe21fc6a8c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T03:53:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-volume-backed-server-389630532',display_name='tempest-TestVolumeBootPattern-volume-backed-server-389630532',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-volume-backed-server-389630532',id=15,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNdsq0q6B8LLSTQOpXwgXtrUk68A/EZelLgWiyuKR8TpW9qyzq4tTNFzxDWNQ+8A+Y3cKPBcyFdStuqUeSJbmXMELun344mij5AlgaCiQijig8YhYJfvn1letXvyUQf2SA==',key_name='tempest-keypair-1172500857',keypairs=<?>,launch_index=0,launched_at=2025-10-11T03:53:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='55d21391a321476eb133317b3402b0f0',ramdisk_id='',reservation_id='r-r3bv8vh4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-739984652',owner_user_name='tempest-TestVolumeBootPattern-739984652-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T03:53:34Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='38ebc503771e417aaf1f3aea0c835994',uuid=ba9c01f8-cb0e-4564-879e-fb3102e2e76a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "16c1f566-62ec-4bf8-ae0e-225e1fad3288", "address": "fa:16:3e:2e:c5:07", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, 
"meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap16c1f566-62", "ovs_interfaceid": "16c1f566-62ec-4bf8-ae0e-225e1fad3288", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 10 23:54:55 np0005480824 nova_compute[260089]: 2025-10-11 03:54:55.730 2 DEBUG nova.network.os_vif_util [None req-5e9a0c31-d319-4a90-aa65-1bbe21fc6a8c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Converting VIF {"id": "16c1f566-62ec-4bf8-ae0e-225e1fad3288", "address": "fa:16:3e:2e:c5:07", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap16c1f566-62", "ovs_interfaceid": "16c1f566-62ec-4bf8-ae0e-225e1fad3288", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 23:54:55 np0005480824 nova_compute[260089]: 2025-10-11 03:54:55.731 2 DEBUG nova.network.os_vif_util [None req-5e9a0c31-d319-4a90-aa65-1bbe21fc6a8c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2e:c5:07,bridge_name='br-int',has_traffic_filtering=True,id=16c1f566-62ec-4bf8-ae0e-225e1fad3288,network=Network(359720eb-a957-4bcd-b9b2-3cf7dad947e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap16c1f566-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 23:54:55 np0005480824 nova_compute[260089]: 2025-10-11 03:54:55.735 2 DEBUG os_vif [None req-5e9a0c31-d319-4a90-aa65-1bbe21fc6a8c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2e:c5:07,bridge_name='br-int',has_traffic_filtering=True,id=16c1f566-62ec-4bf8-ae0e-225e1fad3288,network=Network(359720eb-a957-4bcd-b9b2-3cf7dad947e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap16c1f566-62') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 10 23:54:55 np0005480824 nova_compute[260089]: 2025-10-11 03:54:55.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:54:55 np0005480824 nova_compute[260089]: 2025-10-11 03:54:55.738 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap16c1f566-62, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:54:55 np0005480824 nova_compute[260089]: 2025-10-11 03:54:55.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:54:55 np0005480824 nova_compute[260089]: 2025-10-11 03:54:55.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:54:55 np0005480824 nova_compute[260089]: 2025-10-11 03:54:55.745 2 INFO os_vif [None req-5e9a0c31-d319-4a90-aa65-1bbe21fc6a8c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2e:c5:07,bridge_name='br-int',has_traffic_filtering=True,id=16c1f566-62ec-4bf8-ae0e-225e1fad3288,network=Network(359720eb-a957-4bcd-b9b2-3cf7dad947e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap16c1f566-62')#033[00m
Oct 10 23:54:55 np0005480824 neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4[285827]: [NOTICE]   (285845) : haproxy version is 2.8.14-c23fe91
Oct 10 23:54:55 np0005480824 neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4[285827]: [NOTICE]   (285845) : path to executable is /usr/sbin/haproxy
Oct 10 23:54:55 np0005480824 neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4[285827]: [ALERT]    (285845) : Current worker (285853) exited with code 143 (Terminated)
Oct 10 23:54:55 np0005480824 neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4[285827]: [WARNING]  (285845) : All workers exited. Exiting... (0)
Oct 10 23:54:55 np0005480824 systemd[1]: libpod-608e6ef85b1f6976b361ecde87ef91721673e5300f5cff68574bd66926f93d83.scope: Deactivated successfully.
Oct 10 23:54:55 np0005480824 podman[287942]: 2025-10-11 03:54:55.761058465 +0000 UTC m=+0.056922496 container died 608e6ef85b1f6976b361ecde87ef91721673e5300f5cff68574bd66926f93d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 10 23:54:55 np0005480824 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-608e6ef85b1f6976b361ecde87ef91721673e5300f5cff68574bd66926f93d83-userdata-shm.mount: Deactivated successfully.
Oct 10 23:54:55 np0005480824 systemd[1]: var-lib-containers-storage-overlay-68228903e1eedf2a283aebf45c9d1ef49e682cc67396be3c8ddafbc2f2570ea4-merged.mount: Deactivated successfully.
Oct 10 23:54:55 np0005480824 podman[287942]: 2025-10-11 03:54:55.804285037 +0000 UTC m=+0.100149068 container cleanup 608e6ef85b1f6976b361ecde87ef91721673e5300f5cff68574bd66926f93d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 10 23:54:55 np0005480824 systemd[1]: libpod-conmon-608e6ef85b1f6976b361ecde87ef91721673e5300f5cff68574bd66926f93d83.scope: Deactivated successfully.
Oct 10 23:54:55 np0005480824 podman[287996]: 2025-10-11 03:54:55.865629168 +0000 UTC m=+0.042778813 container remove 608e6ef85b1f6976b361ecde87ef91721673e5300f5cff68574bd66926f93d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Oct 10 23:54:55 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:55.871 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[efe5fc5a-8e81-4af6-99ba-284e9b7a81d6]: (4, ('Sat Oct 11 03:54:55 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4 (608e6ef85b1f6976b361ecde87ef91721673e5300f5cff68574bd66926f93d83)\n608e6ef85b1f6976b361ecde87ef91721673e5300f5cff68574bd66926f93d83\nSat Oct 11 03:54:55 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4 (608e6ef85b1f6976b361ecde87ef91721673e5300f5cff68574bd66926f93d83)\n608e6ef85b1f6976b361ecde87ef91721673e5300f5cff68574bd66926f93d83\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:54:55 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:55.873 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[d08022be-c2dc-47b8-b2ee-75dff79a5180]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:54:55 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:55.874 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap359720eb-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:54:55 np0005480824 nova_compute[260089]: 2025-10-11 03:54:55.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:54:55 np0005480824 nova_compute[260089]: 2025-10-11 03:54:55.888 2 DEBUG nova.compute.manager [req-ab64e3f6-df9f-4867-92ec-7fb4cac11fdb req-7b56fecb-81f9-4391-bcbe-496bb2bbdb6c 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Received event network-vif-plugged-96788aff-c48f-4de5-a500-c62a76db51e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:54:55 np0005480824 nova_compute[260089]: 2025-10-11 03:54:55.888 2 DEBUG oslo_concurrency.lockutils [req-ab64e3f6-df9f-4867-92ec-7fb4cac11fdb req-7b56fecb-81f9-4391-bcbe-496bb2bbdb6c 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "6fc56e59-9278-4ac2-89ed-ca93f2f17d1d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:54:55 np0005480824 nova_compute[260089]: 2025-10-11 03:54:55.889 2 DEBUG oslo_concurrency.lockutils [req-ab64e3f6-df9f-4867-92ec-7fb4cac11fdb req-7b56fecb-81f9-4391-bcbe-496bb2bbdb6c 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "6fc56e59-9278-4ac2-89ed-ca93f2f17d1d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:54:55 np0005480824 nova_compute[260089]: 2025-10-11 03:54:55.889 2 DEBUG oslo_concurrency.lockutils [req-ab64e3f6-df9f-4867-92ec-7fb4cac11fdb req-7b56fecb-81f9-4391-bcbe-496bb2bbdb6c 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "6fc56e59-9278-4ac2-89ed-ca93f2f17d1d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:54:55 np0005480824 nova_compute[260089]: 2025-10-11 03:54:55.889 2 DEBUG nova.compute.manager [req-ab64e3f6-df9f-4867-92ec-7fb4cac11fdb req-7b56fecb-81f9-4391-bcbe-496bb2bbdb6c 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] No waiting events found dispatching network-vif-plugged-96788aff-c48f-4de5-a500-c62a76db51e3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 10 23:54:55 np0005480824 nova_compute[260089]: 2025-10-11 03:54:55.889 2 WARNING nova.compute.manager [req-ab64e3f6-df9f-4867-92ec-7fb4cac11fdb req-7b56fecb-81f9-4391-bcbe-496bb2bbdb6c 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Received unexpected event network-vif-plugged-96788aff-c48f-4de5-a500-c62a76db51e3 for instance with vm_state deleted and task_state None.#033[00m
Oct 10 23:54:55 np0005480824 nova_compute[260089]: 2025-10-11 03:54:55.889 2 DEBUG nova.compute.manager [req-ab64e3f6-df9f-4867-92ec-7fb4cac11fdb req-7b56fecb-81f9-4391-bcbe-496bb2bbdb6c 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Received event network-vif-deleted-96788aff-c48f-4de5-a500-c62a76db51e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:54:55 np0005480824 nova_compute[260089]: 2025-10-11 03:54:55.890 2 DEBUG nova.compute.manager [req-ab64e3f6-df9f-4867-92ec-7fb4cac11fdb req-7b56fecb-81f9-4391-bcbe-496bb2bbdb6c 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Received event network-vif-unplugged-16c1f566-62ec-4bf8-ae0e-225e1fad3288 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:54:55 np0005480824 nova_compute[260089]: 2025-10-11 03:54:55.890 2 DEBUG oslo_concurrency.lockutils [req-ab64e3f6-df9f-4867-92ec-7fb4cac11fdb req-7b56fecb-81f9-4391-bcbe-496bb2bbdb6c 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "ba9c01f8-cb0e-4564-879e-fb3102e2e76a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:54:55 np0005480824 nova_compute[260089]: 2025-10-11 03:54:55.890 2 DEBUG oslo_concurrency.lockutils [req-ab64e3f6-df9f-4867-92ec-7fb4cac11fdb req-7b56fecb-81f9-4391-bcbe-496bb2bbdb6c 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "ba9c01f8-cb0e-4564-879e-fb3102e2e76a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:54:55 np0005480824 nova_compute[260089]: 2025-10-11 03:54:55.890 2 DEBUG oslo_concurrency.lockutils [req-ab64e3f6-df9f-4867-92ec-7fb4cac11fdb req-7b56fecb-81f9-4391-bcbe-496bb2bbdb6c 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "ba9c01f8-cb0e-4564-879e-fb3102e2e76a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:54:55 np0005480824 nova_compute[260089]: 2025-10-11 03:54:55.890 2 DEBUG nova.compute.manager [req-ab64e3f6-df9f-4867-92ec-7fb4cac11fdb req-7b56fecb-81f9-4391-bcbe-496bb2bbdb6c 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] No waiting events found dispatching network-vif-unplugged-16c1f566-62ec-4bf8-ae0e-225e1fad3288 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 10 23:54:55 np0005480824 nova_compute[260089]: 2025-10-11 03:54:55.891 2 DEBUG nova.compute.manager [req-ab64e3f6-df9f-4867-92ec-7fb4cac11fdb req-7b56fecb-81f9-4391-bcbe-496bb2bbdb6c 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Received event network-vif-unplugged-16c1f566-62ec-4bf8-ae0e-225e1fad3288 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 10 23:54:55 np0005480824 kernel: tap359720eb-a0: left promiscuous mode
Oct 10 23:54:55 np0005480824 nova_compute[260089]: 2025-10-11 03:54:55.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:54:55 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:55.895 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[c258fb9d-7c6a-4728-a632-97af726f1dec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:54:55 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:55.918 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[862d23e6-ad4f-476b-aac8-b93da4d3f828]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:54:55 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:55.920 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[e23730de-adbf-4ec9-895d-758e51c1592a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:54:55 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:55.935 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[141f7c35-4ff4-47db-ab6b-4f6cba62c044]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 428053, 'reachable_time': 33274, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288011, 'error': None, 'target': 'ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:54:55 np0005480824 systemd[1]: run-netns-ovnmeta\x2d359720eb\x2da957\x2d4bcd\x2db9b2\x2d3cf7dad947e4.mount: Deactivated successfully.
Oct 10 23:54:55 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:55.938 162666 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 10 23:54:55 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:54:55.939 162666 DEBUG oslo.privsep.daemon [-] privsep: reply[f70d64c2-0e5b-4c40-8014-c21d74b2c91f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:54:55 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e302 do_prune osdmap full prune enabled
Oct 10 23:54:55 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e303 e303: 3 total, 3 up, 3 in
Oct 10 23:54:55 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e303: 3 total, 3 up, 3 in
Oct 10 23:54:55 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:54:55 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/246344619' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:54:56 np0005480824 nova_compute[260089]: 2025-10-11 03:54:56.017 2 INFO nova.virt.libvirt.driver [None req-5e9a0c31-d319-4a90-aa65-1bbe21fc6a8c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Deleting instance files /var/lib/nova/instances/ba9c01f8-cb0e-4564-879e-fb3102e2e76a_del#033[00m
Oct 10 23:54:56 np0005480824 nova_compute[260089]: 2025-10-11 03:54:56.019 2 INFO nova.virt.libvirt.driver [None req-5e9a0c31-d319-4a90-aa65-1bbe21fc6a8c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Deletion of /var/lib/nova/instances/ba9c01f8-cb0e-4564-879e-fb3102e2e76a_del complete#033[00m
Oct 10 23:54:56 np0005480824 nova_compute[260089]: 2025-10-11 03:54:56.023 2 DEBUG oslo_concurrency.processutils [None req-41bfdf53-389b-47ea-a2b7-5c3b89221e82 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:54:56 np0005480824 nova_compute[260089]: 2025-10-11 03:54:56.029 2 DEBUG nova.compute.provider_tree [None req-41bfdf53-389b-47ea-a2b7-5c3b89221e82 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 23:54:56 np0005480824 nova_compute[260089]: 2025-10-11 03:54:56.057 2 DEBUG nova.scheduler.client.report [None req-41bfdf53-389b-47ea-a2b7-5c3b89221e82 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 23:54:56 np0005480824 nova_compute[260089]: 2025-10-11 03:54:56.095 2 DEBUG oslo_concurrency.lockutils [None req-41bfdf53-389b-47ea-a2b7-5c3b89221e82 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.685s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:54:56 np0005480824 nova_compute[260089]: 2025-10-11 03:54:56.104 2 INFO nova.compute.manager [None req-5e9a0c31-d319-4a90-aa65-1bbe21fc6a8c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Took 0.65 seconds to destroy the instance on the hypervisor.#033[00m
Oct 10 23:54:56 np0005480824 nova_compute[260089]: 2025-10-11 03:54:56.105 2 DEBUG oslo.service.loopingcall [None req-5e9a0c31-d319-4a90-aa65-1bbe21fc6a8c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 10 23:54:56 np0005480824 nova_compute[260089]: 2025-10-11 03:54:56.105 2 DEBUG nova.compute.manager [-] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 10 23:54:56 np0005480824 nova_compute[260089]: 2025-10-11 03:54:56.106 2 DEBUG nova.network.neutron [-] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 10 23:54:56 np0005480824 nova_compute[260089]: 2025-10-11 03:54:56.138 2 INFO nova.scheduler.client.report [None req-41bfdf53-389b-47ea-a2b7-5c3b89221e82 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Deleted allocations for instance 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d#033[00m
Oct 10 23:54:56 np0005480824 nova_compute[260089]: 2025-10-11 03:54:56.212 2 DEBUG oslo_concurrency.lockutils [None req-41bfdf53-389b-47ea-a2b7-5c3b89221e82 f24819cdb3ee4b1f8a4a9e811a760a2c bc155be8024d49b0ab4279dfca944e7d - - default default] Lock "6fc56e59-9278-4ac2-89ed-ca93f2f17d1d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.867s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:54:56 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1358: 321 pgs: 321 active+clean; 222 MiB data, 405 MiB used, 60 GiB / 60 GiB avail; 108 KiB/s rd, 24 KiB/s wr, 102 op/s
Oct 10 23:54:57 np0005480824 nova_compute[260089]: 2025-10-11 03:54:57.154 2 DEBUG nova.network.neutron [-] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:54:57 np0005480824 nova_compute[260089]: 2025-10-11 03:54:57.181 2 INFO nova.compute.manager [-] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Took 1.08 seconds to deallocate network for instance.#033[00m
Oct 10 23:54:57 np0005480824 nova_compute[260089]: 2025-10-11 03:54:57.639 2 INFO nova.compute.manager [None req-5e9a0c31-d319-4a90-aa65-1bbe21fc6a8c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Took 0.46 seconds to detach 1 volumes for instance.#033[00m
Oct 10 23:54:57 np0005480824 nova_compute[260089]: 2025-10-11 03:54:57.641 2 DEBUG nova.compute.manager [None req-5e9a0c31-d319-4a90-aa65-1bbe21fc6a8c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Deleting volume: be5dc6c3-9ee3-45f1-9e6c-1fecc35321b7 _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217#033[00m
Oct 10 23:54:57 np0005480824 nova_compute[260089]: 2025-10-11 03:54:57.873 2 DEBUG oslo_concurrency.lockutils [None req-5e9a0c31-d319-4a90-aa65-1bbe21fc6a8c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:54:57 np0005480824 nova_compute[260089]: 2025-10-11 03:54:57.874 2 DEBUG oslo_concurrency.lockutils [None req-5e9a0c31-d319-4a90-aa65-1bbe21fc6a8c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:54:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:54:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:54:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:54:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:54:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:54:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:54:57 np0005480824 nova_compute[260089]: 2025-10-11 03:54:57.943 2 DEBUG oslo_concurrency.processutils [None req-5e9a0c31-d319-4a90-aa65-1bbe21fc6a8c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:54:58 np0005480824 nova_compute[260089]: 2025-10-11 03:54:58.093 2 DEBUG nova.compute.manager [req-91672cf4-d4bd-44f7-bdda-79c8a6111277 req-809c5e3c-0735-482c-8738-62ded7e3dba2 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Received event network-vif-plugged-16c1f566-62ec-4bf8-ae0e-225e1fad3288 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:54:58 np0005480824 nova_compute[260089]: 2025-10-11 03:54:58.094 2 DEBUG oslo_concurrency.lockutils [req-91672cf4-d4bd-44f7-bdda-79c8a6111277 req-809c5e3c-0735-482c-8738-62ded7e3dba2 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "ba9c01f8-cb0e-4564-879e-fb3102e2e76a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:54:58 np0005480824 nova_compute[260089]: 2025-10-11 03:54:58.095 2 DEBUG oslo_concurrency.lockutils [req-91672cf4-d4bd-44f7-bdda-79c8a6111277 req-809c5e3c-0735-482c-8738-62ded7e3dba2 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "ba9c01f8-cb0e-4564-879e-fb3102e2e76a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:54:58 np0005480824 nova_compute[260089]: 2025-10-11 03:54:58.095 2 DEBUG oslo_concurrency.lockutils [req-91672cf4-d4bd-44f7-bdda-79c8a6111277 req-809c5e3c-0735-482c-8738-62ded7e3dba2 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "ba9c01f8-cb0e-4564-879e-fb3102e2e76a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:54:58 np0005480824 nova_compute[260089]: 2025-10-11 03:54:58.096 2 DEBUG nova.compute.manager [req-91672cf4-d4bd-44f7-bdda-79c8a6111277 req-809c5e3c-0735-482c-8738-62ded7e3dba2 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] No waiting events found dispatching network-vif-plugged-16c1f566-62ec-4bf8-ae0e-225e1fad3288 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 10 23:54:58 np0005480824 nova_compute[260089]: 2025-10-11 03:54:58.096 2 WARNING nova.compute.manager [req-91672cf4-d4bd-44f7-bdda-79c8a6111277 req-809c5e3c-0735-482c-8738-62ded7e3dba2 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Received unexpected event network-vif-plugged-16c1f566-62ec-4bf8-ae0e-225e1fad3288 for instance with vm_state deleted and task_state None.#033[00m
Oct 10 23:54:58 np0005480824 nova_compute[260089]: 2025-10-11 03:54:58.096 2 DEBUG nova.compute.manager [req-91672cf4-d4bd-44f7-bdda-79c8a6111277 req-809c5e3c-0735-482c-8738-62ded7e3dba2 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Received event network-vif-deleted-16c1f566-62ec-4bf8-ae0e-225e1fad3288 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:54:58 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:54:58 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1199193747' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:54:58 np0005480824 nova_compute[260089]: 2025-10-11 03:54:58.465 2 DEBUG oslo_concurrency.processutils [None req-5e9a0c31-d319-4a90-aa65-1bbe21fc6a8c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:54:58 np0005480824 nova_compute[260089]: 2025-10-11 03:54:58.474 2 DEBUG nova.compute.provider_tree [None req-5e9a0c31-d319-4a90-aa65-1bbe21fc6a8c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 23:54:58 np0005480824 nova_compute[260089]: 2025-10-11 03:54:58.494 2 DEBUG nova.scheduler.client.report [None req-5e9a0c31-d319-4a90-aa65-1bbe21fc6a8c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 23:54:58 np0005480824 nova_compute[260089]: 2025-10-11 03:54:58.518 2 DEBUG oslo_concurrency.lockutils [None req-5e9a0c31-d319-4a90-aa65-1bbe21fc6a8c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:54:58 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:54:58 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3323310508' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:54:58 np0005480824 nova_compute[260089]: 2025-10-11 03:54:58.549 2 INFO nova.scheduler.client.report [None req-5e9a0c31-d319-4a90-aa65-1bbe21fc6a8c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Deleted allocations for instance ba9c01f8-cb0e-4564-879e-fb3102e2e76a#033[00m
Oct 10 23:54:58 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:54:58 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3323310508' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:54:58 np0005480824 nova_compute[260089]: 2025-10-11 03:54:58.664 2 DEBUG oslo_concurrency.lockutils [None req-5e9a0c31-d319-4a90-aa65-1bbe21fc6a8c 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "ba9c01f8-cb0e-4564-879e-fb3102e2e76a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.212s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:54:58 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:54:58 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3684309923' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:54:58 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:54:58 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3684309923' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:54:58 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1359: 321 pgs: 321 active+clean; 167 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 208 KiB/s rd, 27 KiB/s wr, 242 op/s
Oct 10 23:54:58 np0005480824 nova_compute[260089]: 2025-10-11 03:54:58.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:54:59 np0005480824 podman[288038]: 2025-10-11 03:54:59.004806781 +0000 UTC m=+0.060812269 container health_status f2b19cad22d0fbdd185b264c3c8b6443d09a02319dc9fb0585dc69a0f24a758d (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 10 23:54:59 np0005480824 podman[288037]: 2025-10-11 03:54:59.034082883 +0000 UTC m=+0.091407152 container health_status 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 10 23:54:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:54:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e303 do_prune osdmap full prune enabled
Oct 10 23:54:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e304 e304: 3 total, 3 up, 3 in
Oct 10 23:54:59 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e304: 3 total, 3 up, 3 in
Oct 10 23:55:00 np0005480824 nova_compute[260089]: 2025-10-11 03:55:00.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:55:00 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1361: 321 pgs: 321 active+clean; 167 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 180 KiB/s rd, 11 KiB/s wr, 247 op/s
Oct 10 23:55:02 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e304 do_prune osdmap full prune enabled
Oct 10 23:55:02 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e305 e305: 3 total, 3 up, 3 in
Oct 10 23:55:02 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e305: 3 total, 3 up, 3 in
Oct 10 23:55:02 np0005480824 nova_compute[260089]: 2025-10-11 03:55:02.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:55:02 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1363: 321 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 310 active+clean; 88 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 223 KiB/s rd, 14 KiB/s wr, 307 op/s
Oct 10 23:55:02 np0005480824 nova_compute[260089]: 2025-10-11 03:55:02.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:55:03 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:55:03 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4197501873' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:55:03 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:55:03 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4197501873' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:55:03 np0005480824 nova_compute[260089]: 2025-10-11 03:55:03.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:55:04 np0005480824 nova_compute[260089]: 2025-10-11 03:55:04.489 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760154889.4883645, 17e293fc-58db-41da-a59c-d4a11dcbe09e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:55:04 np0005480824 nova_compute[260089]: 2025-10-11 03:55:04.490 2 INFO nova.compute.manager [-] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] VM Stopped (Lifecycle Event)#033[00m
Oct 10 23:55:04 np0005480824 nova_compute[260089]: 2025-10-11 03:55:04.511 2 DEBUG nova.compute.manager [None req-4a7fe607-c717-4fbb-a6be-2e060eefeea2 - - - - - -] [instance: 17e293fc-58db-41da-a59c-d4a11dcbe09e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:55:04 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1364: 321 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 310 active+clean; 88 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 170 KiB/s rd, 9.7 KiB/s wr, 229 op/s
Oct 10 23:55:04 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:55:05 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:55:05 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3965436827' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:55:05 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:55:05 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3965436827' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:55:05 np0005480824 nova_compute[260089]: 2025-10-11 03:55:05.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:55:06 np0005480824 podman[288076]: 2025-10-11 03:55:06.058508095 +0000 UTC m=+0.112376117 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 10 23:55:06 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:55:06 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2670535304' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:55:06 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:55:06 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2670535304' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:55:06 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1365: 321 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 310 active+clean; 88 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 5.5 KiB/s wr, 88 op/s
Oct 10 23:55:07 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:55:07 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1093820176' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:55:07 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:55:07 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1093820176' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:55:08 np0005480824 nova_compute[260089]: 2025-10-11 03:55:08.594 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760154893.593585, 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:55:08 np0005480824 nova_compute[260089]: 2025-10-11 03:55:08.595 2 INFO nova.compute.manager [-] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] VM Stopped (Lifecycle Event)#033[00m
Oct 10 23:55:08 np0005480824 nova_compute[260089]: 2025-10-11 03:55:08.619 2 DEBUG nova.compute.manager [None req-bbe8ec95-f9af-4201-a02b-d006389b3e93 - - - - - -] [instance: 6fc56e59-9278-4ac2-89ed-ca93f2f17d1d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:55:08 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1366: 321 pgs: 321 active+clean; 88 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 93 KiB/s rd, 5.8 KiB/s wr, 129 op/s
Oct 10 23:55:08 np0005480824 nova_compute[260089]: 2025-10-11 03:55:08.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:55:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:55:09 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3256893411' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:55:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:55:09 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3256893411' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:55:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:55:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e305 do_prune osdmap full prune enabled
Oct 10 23:55:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e306 e306: 3 total, 3 up, 3 in
Oct 10 23:55:09 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e306: 3 total, 3 up, 3 in
Oct 10 23:55:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:55:10.500 162245 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:55:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:55:10.500 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:55:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:55:10.500 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:55:10 np0005480824 nova_compute[260089]: 2025-10-11 03:55:10.703 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760154895.7006721, ba9c01f8-cb0e-4564-879e-fb3102e2e76a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:55:10 np0005480824 nova_compute[260089]: 2025-10-11 03:55:10.704 2 INFO nova.compute.manager [-] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] VM Stopped (Lifecycle Event)#033[00m
Oct 10 23:55:10 np0005480824 nova_compute[260089]: 2025-10-11 03:55:10.727 2 DEBUG nova.compute.manager [None req-279e551a-6c73-4191-8198-074cfa0152e1 - - - - - -] [instance: ba9c01f8-cb0e-4564-879e-fb3102e2e76a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:55:10 np0005480824 nova_compute[260089]: 2025-10-11 03:55:10.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:55:10 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1368: 321 pgs: 321 active+clean; 88 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 79 KiB/s rd, 4.8 KiB/s wr, 110 op/s
Oct 10 23:55:12 np0005480824 podman[288104]: 2025-10-11 03:55:12.009944261 +0000 UTC m=+0.067848995 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 10 23:55:12 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1369: 321 pgs: 321 active+clean; 88 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.7 KiB/s wr, 97 op/s
Oct 10 23:55:13 np0005480824 nova_compute[260089]: 2025-10-11 03:55:13.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:55:14 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1370: 321 pgs: 321 active+clean; 88 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.7 KiB/s wr, 97 op/s
Oct 10 23:55:14 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:55:15 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e306 do_prune osdmap full prune enabled
Oct 10 23:55:15 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e307 e307: 3 total, 3 up, 3 in
Oct 10 23:55:15 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e307: 3 total, 3 up, 3 in
Oct 10 23:55:15 np0005480824 nova_compute[260089]: 2025-10-11 03:55:15.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:55:16 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1372: 321 pgs: 321 active+clean; 88 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.4 KiB/s wr, 52 op/s
Oct 10 23:55:17 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:55:17 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:55:17 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 10 23:55:17 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:55:17 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 10 23:55:17 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:55:17 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 8a4eaa18-187f-451a-8f4d-f71ad9764151 does not exist
Oct 10 23:55:17 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 3b608278-e573-4d04-bed5-cf250335a37e does not exist
Oct 10 23:55:17 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev ea3fc8ff-2da6-4409-af4d-02bad3fdff13 does not exist
Oct 10 23:55:17 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 10 23:55:17 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 23:55:17 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 10 23:55:17 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:55:17 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:55:17 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:55:17 np0005480824 nova_compute[260089]: 2025-10-11 03:55:17.874 2 DEBUG oslo_concurrency.lockutils [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Acquiring lock "ee0ba1fa-8740-4670-9f6d-b658f89f7f21" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:55:17 np0005480824 nova_compute[260089]: 2025-10-11 03:55:17.875 2 DEBUG oslo_concurrency.lockutils [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "ee0ba1fa-8740-4670-9f6d-b658f89f7f21" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:55:17 np0005480824 nova_compute[260089]: 2025-10-11 03:55:17.896 2 DEBUG nova.compute.manager [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 10 23:55:17 np0005480824 nova_compute[260089]: 2025-10-11 03:55:17.995 2 DEBUG oslo_concurrency.lockutils [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:55:17 np0005480824 nova_compute[260089]: 2025-10-11 03:55:17.996 2 DEBUG oslo_concurrency.lockutils [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:55:18 np0005480824 nova_compute[260089]: 2025-10-11 03:55:18.005 2 DEBUG nova.virt.hardware [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 10 23:55:18 np0005480824 nova_compute[260089]: 2025-10-11 03:55:18.005 2 INFO nova.compute.claims [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 10 23:55:18 np0005480824 nova_compute[260089]: 2025-10-11 03:55:18.136 2 DEBUG oslo_concurrency.processutils [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:55:18 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:55:18 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:55:18 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:55:18 np0005480824 podman[288417]: 2025-10-11 03:55:18.468102916 +0000 UTC m=+0.066958264 container create 6536a83d9a46a573207d0b9a6f0642a089b2ee65b2432db75e2be0943d3bb3f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_allen, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 10 23:55:18 np0005480824 podman[288417]: 2025-10-11 03:55:18.434371719 +0000 UTC m=+0.033227007 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:55:18 np0005480824 systemd[1]: Started libpod-conmon-6536a83d9a46a573207d0b9a6f0642a089b2ee65b2432db75e2be0943d3bb3f7.scope.
Oct 10 23:55:18 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:55:18 np0005480824 podman[288417]: 2025-10-11 03:55:18.595378435 +0000 UTC m=+0.194233713 container init 6536a83d9a46a573207d0b9a6f0642a089b2ee65b2432db75e2be0943d3bb3f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_allen, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:55:18 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:55:18 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3830144886' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:55:18 np0005480824 podman[288417]: 2025-10-11 03:55:18.606947959 +0000 UTC m=+0.205803177 container start 6536a83d9a46a573207d0b9a6f0642a089b2ee65b2432db75e2be0943d3bb3f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_allen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 10 23:55:18 np0005480824 podman[288417]: 2025-10-11 03:55:18.612842408 +0000 UTC m=+0.211697696 container attach 6536a83d9a46a573207d0b9a6f0642a089b2ee65b2432db75e2be0943d3bb3f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_allen, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:55:18 np0005480824 amazing_allen[288433]: 167 167
Oct 10 23:55:18 np0005480824 podman[288417]: 2025-10-11 03:55:18.616892664 +0000 UTC m=+0.215747882 container died 6536a83d9a46a573207d0b9a6f0642a089b2ee65b2432db75e2be0943d3bb3f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_allen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 10 23:55:18 np0005480824 systemd[1]: libpod-6536a83d9a46a573207d0b9a6f0642a089b2ee65b2432db75e2be0943d3bb3f7.scope: Deactivated successfully.
Oct 10 23:55:18 np0005480824 nova_compute[260089]: 2025-10-11 03:55:18.632 2 DEBUG oslo_concurrency.processutils [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:55:18 np0005480824 nova_compute[260089]: 2025-10-11 03:55:18.641 2 DEBUG nova.compute.provider_tree [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 23:55:18 np0005480824 systemd[1]: var-lib-containers-storage-overlay-fa8231aa63e5bcd8426e6cbc17fe9f23fe4f30cdd10522acfe258409a4af9923-merged.mount: Deactivated successfully.
Oct 10 23:55:18 np0005480824 nova_compute[260089]: 2025-10-11 03:55:18.657 2 DEBUG nova.scheduler.client.report [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 23:55:18 np0005480824 podman[288417]: 2025-10-11 03:55:18.666973188 +0000 UTC m=+0.265828386 container remove 6536a83d9a46a573207d0b9a6f0642a089b2ee65b2432db75e2be0943d3bb3f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_allen, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 10 23:55:18 np0005480824 nova_compute[260089]: 2025-10-11 03:55:18.677 2 DEBUG oslo_concurrency.lockutils [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:55:18 np0005480824 nova_compute[260089]: 2025-10-11 03:55:18.679 2 DEBUG nova.compute.manager [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 10 23:55:18 np0005480824 systemd[1]: libpod-conmon-6536a83d9a46a573207d0b9a6f0642a089b2ee65b2432db75e2be0943d3bb3f7.scope: Deactivated successfully.
Oct 10 23:55:18 np0005480824 nova_compute[260089]: 2025-10-11 03:55:18.735 2 DEBUG nova.compute.manager [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 10 23:55:18 np0005480824 nova_compute[260089]: 2025-10-11 03:55:18.736 2 DEBUG nova.network.neutron [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 10 23:55:18 np0005480824 nova_compute[260089]: 2025-10-11 03:55:18.756 2 INFO nova.virt.libvirt.driver [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 10 23:55:18 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1373: 321 pgs: 321 active+clean; 134 MiB data, 366 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.4 MiB/s wr, 112 op/s
Oct 10 23:55:18 np0005480824 nova_compute[260089]: 2025-10-11 03:55:18.783 2 DEBUG nova.compute.manager [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 10 23:55:18 np0005480824 nova_compute[260089]: 2025-10-11 03:55:18.823 2 INFO nova.virt.block_device [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] Booting with volume df022fd8-30bb-4c20-bf5c-0866de956c6d at /dev/vda#033[00m
Oct 10 23:55:18 np0005480824 podman[288457]: 2025-10-11 03:55:18.886024627 +0000 UTC m=+0.050966676 container create 73c5825ea2875871334a71e64b4f82beab5da531930948b3c590379630c26653 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_cannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:55:18 np0005480824 systemd[1]: Started libpod-conmon-73c5825ea2875871334a71e64b4f82beab5da531930948b3c590379630c26653.scope.
Oct 10 23:55:18 np0005480824 nova_compute[260089]: 2025-10-11 03:55:18.949 2 DEBUG nova.policy [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '38ebc503771e417aaf1f3aea0c835994', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '55d21391a321476eb133317b3402b0f0', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 10 23:55:18 np0005480824 nova_compute[260089]: 2025-10-11 03:55:18.959 2 DEBUG os_brick.utils [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct 10 23:55:18 np0005480824 nova_compute[260089]: 2025-10-11 03:55:18.962 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:55:18 np0005480824 podman[288457]: 2025-10-11 03:55:18.86244956 +0000 UTC m=+0.027391689 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:55:18 np0005480824 nova_compute[260089]: 2025-10-11 03:55:18.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:55:18 np0005480824 nova_compute[260089]: 2025-10-11 03:55:18.980 676 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:55:18 np0005480824 nova_compute[260089]: 2025-10-11 03:55:18.982 676 DEBUG oslo.privsep.daemon [-] privsep: reply[e73e9854-1b3a-44d3-9f37-112c018aeffd]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:55:18 np0005480824 nova_compute[260089]: 2025-10-11 03:55:18.984 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:55:18 np0005480824 nova_compute[260089]: 2025-10-11 03:55:18.996 676 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:55:18 np0005480824 nova_compute[260089]: 2025-10-11 03:55:18.997 676 DEBUG oslo.privsep.daemon [-] privsep: reply[a9914ca0-4530-4995-8a48-5dd34b79ef5b]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d5d671ddab5a', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:55:18 np0005480824 nova_compute[260089]: 2025-10-11 03:55:18.999 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:55:19 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:55:19 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cfd0560b9fe6f946569da52fcc9f0e48717402e716e92b5f0eb3bec30d8ea23/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:55:19 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cfd0560b9fe6f946569da52fcc9f0e48717402e716e92b5f0eb3bec30d8ea23/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:55:19 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cfd0560b9fe6f946569da52fcc9f0e48717402e716e92b5f0eb3bec30d8ea23/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:55:19 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cfd0560b9fe6f946569da52fcc9f0e48717402e716e92b5f0eb3bec30d8ea23/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:55:19 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cfd0560b9fe6f946569da52fcc9f0e48717402e716e92b5f0eb3bec30d8ea23/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 23:55:19 np0005480824 nova_compute[260089]: 2025-10-11 03:55:19.016 676 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:55:19 np0005480824 nova_compute[260089]: 2025-10-11 03:55:19.016 676 DEBUG oslo.privsep.daemon [-] privsep: reply[33fedd78-fc79-461f-8999-af13e5187f97]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:55:19 np0005480824 nova_compute[260089]: 2025-10-11 03:55:19.018 676 DEBUG oslo.privsep.daemon [-] privsep: reply[c11be09a-fba9-4417-b7f8-81f1baaa8613]: (4, 'fb3a2fb1-9efa-43f0-a057-bf422ac6b8d7') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:55:19 np0005480824 nova_compute[260089]: 2025-10-11 03:55:19.019 2 DEBUG oslo_concurrency.processutils [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:55:19 np0005480824 podman[288457]: 2025-10-11 03:55:19.030689848 +0000 UTC m=+0.195632047 container init 73c5825ea2875871334a71e64b4f82beab5da531930948b3c590379630c26653 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:55:19 np0005480824 podman[288457]: 2025-10-11 03:55:19.047236569 +0000 UTC m=+0.212178618 container start 73c5825ea2875871334a71e64b4f82beab5da531930948b3c590379630c26653 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:55:19 np0005480824 podman[288457]: 2025-10-11 03:55:19.051148531 +0000 UTC m=+0.216090610 container attach 73c5825ea2875871334a71e64b4f82beab5da531930948b3c590379630c26653 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_cannon, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:55:19 np0005480824 nova_compute[260089]: 2025-10-11 03:55:19.054 2 DEBUG oslo_concurrency.processutils [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] CMD "nvme version" returned: 0 in 0.035s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:55:19 np0005480824 nova_compute[260089]: 2025-10-11 03:55:19.058 2 DEBUG os_brick.initiator.connectors.lightos [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct 10 23:55:19 np0005480824 nova_compute[260089]: 2025-10-11 03:55:19.058 2 DEBUG os_brick.initiator.connectors.lightos [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct 10 23:55:19 np0005480824 nova_compute[260089]: 2025-10-11 03:55:19.058 2 DEBUG os_brick.initiator.connectors.lightos [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct 10 23:55:19 np0005480824 nova_compute[260089]: 2025-10-11 03:55:19.059 2 DEBUG os_brick.utils [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] <== get_connector_properties: return (98ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d5d671ddab5a', 'do_local_attach': False, 'nvme_hostid': '83042a20-0f72-4c47-8453-e72ead378624', 'system uuid': 'fb3a2fb1-9efa-43f0-a057-bf422ac6b8d7', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct 10 23:55:19 np0005480824 nova_compute[260089]: 2025-10-11 03:55:19.059 2 DEBUG nova.virt.block_device [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] Updating existing volume attachment record: 24e1393e-1635-4bb3-9cdc-cccddc2aa239 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct 10 23:55:19 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e307 do_prune osdmap full prune enabled
Oct 10 23:55:19 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e308 e308: 3 total, 3 up, 3 in
Oct 10 23:55:19 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e308: 3 total, 3 up, 3 in
Oct 10 23:55:19 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:55:19 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3366273214' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:55:19 np0005480824 nova_compute[260089]: 2025-10-11 03:55:19.741 2 DEBUG nova.network.neutron [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] Successfully created port: eb363ce6-15fe-4b2a-a35e-06b06bbf4252 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 10 23:55:19 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:55:20 np0005480824 nova_compute[260089]: 2025-10-11 03:55:20.015 2 DEBUG nova.compute.manager [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 10 23:55:20 np0005480824 nova_compute[260089]: 2025-10-11 03:55:20.017 2 DEBUG nova.virt.libvirt.driver [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 10 23:55:20 np0005480824 nova_compute[260089]: 2025-10-11 03:55:20.017 2 INFO nova.virt.libvirt.driver [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] Creating image(s)#033[00m
Oct 10 23:55:20 np0005480824 nova_compute[260089]: 2025-10-11 03:55:20.018 2 DEBUG nova.virt.libvirt.driver [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Oct 10 23:55:20 np0005480824 nova_compute[260089]: 2025-10-11 03:55:20.018 2 DEBUG nova.virt.libvirt.driver [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] Ensure instance console log exists: /var/lib/nova/instances/ee0ba1fa-8740-4670-9f6d-b658f89f7f21/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 10 23:55:20 np0005480824 nova_compute[260089]: 2025-10-11 03:55:20.018 2 DEBUG oslo_concurrency.lockutils [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:55:20 np0005480824 nova_compute[260089]: 2025-10-11 03:55:20.018 2 DEBUG oslo_concurrency.lockutils [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:55:20 np0005480824 nova_compute[260089]: 2025-10-11 03:55:20.019 2 DEBUG oslo_concurrency.lockutils [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:55:20 np0005480824 objective_cannon[288473]: --> passed data devices: 0 physical, 3 LVM
Oct 10 23:55:20 np0005480824 objective_cannon[288473]: --> relative data size: 1.0
Oct 10 23:55:20 np0005480824 objective_cannon[288473]: --> All data devices are unavailable
Oct 10 23:55:20 np0005480824 systemd[1]: libpod-73c5825ea2875871334a71e64b4f82beab5da531930948b3c590379630c26653.scope: Deactivated successfully.
Oct 10 23:55:20 np0005480824 systemd[1]: libpod-73c5825ea2875871334a71e64b4f82beab5da531930948b3c590379630c26653.scope: Consumed 1.088s CPU time.
Oct 10 23:55:20 np0005480824 podman[288457]: 2025-10-11 03:55:20.200416154 +0000 UTC m=+1.365358213 container died 73c5825ea2875871334a71e64b4f82beab5da531930948b3c590379630c26653 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True)
Oct 10 23:55:20 np0005480824 systemd[1]: var-lib-containers-storage-overlay-2cfd0560b9fe6f946569da52fcc9f0e48717402e716e92b5f0eb3bec30d8ea23-merged.mount: Deactivated successfully.
Oct 10 23:55:20 np0005480824 podman[288457]: 2025-10-11 03:55:20.265407991 +0000 UTC m=+1.430350030 container remove 73c5825ea2875871334a71e64b4f82beab5da531930948b3c590379630c26653 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:55:20 np0005480824 systemd[1]: libpod-conmon-73c5825ea2875871334a71e64b4f82beab5da531930948b3c590379630c26653.scope: Deactivated successfully.
Oct 10 23:55:20 np0005480824 nova_compute[260089]: 2025-10-11 03:55:20.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:55:20 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1375: 321 pgs: 321 active+clean; 134 MiB data, 366 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 2.7 MiB/s wr, 73 op/s
Oct 10 23:55:21 np0005480824 nova_compute[260089]: 2025-10-11 03:55:21.089 2 DEBUG nova.network.neutron [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] Successfully updated port: eb363ce6-15fe-4b2a-a35e-06b06bbf4252 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 10 23:55:21 np0005480824 podman[288661]: 2025-10-11 03:55:21.091235666 +0000 UTC m=+0.073245352 container create 70875503c8adc7e5976cac6bc0e3d53afa907f1a989c37c37eae88d8940b0563 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bardeen, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 10 23:55:21 np0005480824 nova_compute[260089]: 2025-10-11 03:55:21.110 2 DEBUG oslo_concurrency.lockutils [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Acquiring lock "refresh_cache-ee0ba1fa-8740-4670-9f6d-b658f89f7f21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:55:21 np0005480824 nova_compute[260089]: 2025-10-11 03:55:21.111 2 DEBUG oslo_concurrency.lockutils [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Acquired lock "refresh_cache-ee0ba1fa-8740-4670-9f6d-b658f89f7f21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:55:21 np0005480824 nova_compute[260089]: 2025-10-11 03:55:21.111 2 DEBUG nova.network.neutron [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 10 23:55:21 np0005480824 systemd[1]: Started libpod-conmon-70875503c8adc7e5976cac6bc0e3d53afa907f1a989c37c37eae88d8940b0563.scope.
Oct 10 23:55:21 np0005480824 podman[288661]: 2025-10-11 03:55:21.063936331 +0000 UTC m=+0.045946057 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:55:21 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:55:21 np0005480824 nova_compute[260089]: 2025-10-11 03:55:21.192 2 DEBUG nova.compute.manager [req-64915f5a-4de1-4d1c-8639-0443c5a53dbf req-0ef6a76f-7d15-4dee-ae1b-f972cf5f9e97 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] Received event network-changed-eb363ce6-15fe-4b2a-a35e-06b06bbf4252 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:55:21 np0005480824 nova_compute[260089]: 2025-10-11 03:55:21.192 2 DEBUG nova.compute.manager [req-64915f5a-4de1-4d1c-8639-0443c5a53dbf req-0ef6a76f-7d15-4dee-ae1b-f972cf5f9e97 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] Refreshing instance network info cache due to event network-changed-eb363ce6-15fe-4b2a-a35e-06b06bbf4252. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 10 23:55:21 np0005480824 nova_compute[260089]: 2025-10-11 03:55:21.193 2 DEBUG oslo_concurrency.lockutils [req-64915f5a-4de1-4d1c-8639-0443c5a53dbf req-0ef6a76f-7d15-4dee-ae1b-f972cf5f9e97 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "refresh_cache-ee0ba1fa-8740-4670-9f6d-b658f89f7f21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:55:21 np0005480824 podman[288661]: 2025-10-11 03:55:21.203400079 +0000 UTC m=+0.185409825 container init 70875503c8adc7e5976cac6bc0e3d53afa907f1a989c37c37eae88d8940b0563 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 10 23:55:21 np0005480824 podman[288661]: 2025-10-11 03:55:21.212986075 +0000 UTC m=+0.194995771 container start 70875503c8adc7e5976cac6bc0e3d53afa907f1a989c37c37eae88d8940b0563 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bardeen, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:55:21 np0005480824 podman[288661]: 2025-10-11 03:55:21.218079556 +0000 UTC m=+0.200089292 container attach 70875503c8adc7e5976cac6bc0e3d53afa907f1a989c37c37eae88d8940b0563 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bardeen, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 10 23:55:21 np0005480824 strange_bardeen[288677]: 167 167
Oct 10 23:55:21 np0005480824 systemd[1]: libpod-70875503c8adc7e5976cac6bc0e3d53afa907f1a989c37c37eae88d8940b0563.scope: Deactivated successfully.
Oct 10 23:55:21 np0005480824 podman[288661]: 2025-10-11 03:55:21.228147314 +0000 UTC m=+0.210157000 container died 70875503c8adc7e5976cac6bc0e3d53afa907f1a989c37c37eae88d8940b0563 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Oct 10 23:55:21 np0005480824 nova_compute[260089]: 2025-10-11 03:55:21.250 2 DEBUG nova.network.neutron [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 10 23:55:21 np0005480824 systemd[1]: var-lib-containers-storage-overlay-f11a88d60af19905ee3f03bb6b621e63b3b5aeea15fb9fbc778c2ed408771f24-merged.mount: Deactivated successfully.
Oct 10 23:55:21 np0005480824 podman[288661]: 2025-10-11 03:55:21.282282764 +0000 UTC m=+0.264292450 container remove 70875503c8adc7e5976cac6bc0e3d53afa907f1a989c37c37eae88d8940b0563 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 10 23:55:21 np0005480824 systemd[1]: libpod-conmon-70875503c8adc7e5976cac6bc0e3d53afa907f1a989c37c37eae88d8940b0563.scope: Deactivated successfully.
Oct 10 23:55:21 np0005480824 podman[288701]: 2025-10-11 03:55:21.485649432 +0000 UTC m=+0.058566536 container create e66a7f236e3aacfd8997f3b3063d10525544a0a57a4cfd4adc5bf6abfd35892e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_driscoll, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 10 23:55:21 np0005480824 systemd[1]: Started libpod-conmon-e66a7f236e3aacfd8997f3b3063d10525544a0a57a4cfd4adc5bf6abfd35892e.scope.
Oct 10 23:55:21 np0005480824 podman[288701]: 2025-10-11 03:55:21.459301999 +0000 UTC m=+0.032219193 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:55:21 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:55:21 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1817da2154ed52367e83686fc0c66a6986f1d8a224bd4088b4f50d61107b872f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:55:21 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1817da2154ed52367e83686fc0c66a6986f1d8a224bd4088b4f50d61107b872f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:55:21 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1817da2154ed52367e83686fc0c66a6986f1d8a224bd4088b4f50d61107b872f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:55:21 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1817da2154ed52367e83686fc0c66a6986f1d8a224bd4088b4f50d61107b872f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:55:21 np0005480824 podman[288701]: 2025-10-11 03:55:21.601978012 +0000 UTC m=+0.174895226 container init e66a7f236e3aacfd8997f3b3063d10525544a0a57a4cfd4adc5bf6abfd35892e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_driscoll, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 10 23:55:21 np0005480824 podman[288701]: 2025-10-11 03:55:21.614218892 +0000 UTC m=+0.187136036 container start e66a7f236e3aacfd8997f3b3063d10525544a0a57a4cfd4adc5bf6abfd35892e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:55:21 np0005480824 podman[288701]: 2025-10-11 03:55:21.619074596 +0000 UTC m=+0.191991810 container attach e66a7f236e3aacfd8997f3b3063d10525544a0a57a4cfd4adc5bf6abfd35892e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 10 23:55:22 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e308 do_prune osdmap full prune enabled
Oct 10 23:55:22 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e309 e309: 3 total, 3 up, 3 in
Oct 10 23:55:22 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e309: 3 total, 3 up, 3 in
Oct 10 23:55:22 np0005480824 nova_compute[260089]: 2025-10-11 03:55:22.248 2 DEBUG nova.network.neutron [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] Updating instance_info_cache with network_info: [{"id": "eb363ce6-15fe-4b2a-a35e-06b06bbf4252", "address": "fa:16:3e:f7:bc:e9", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb363ce6-15", "ovs_interfaceid": "eb363ce6-15fe-4b2a-a35e-06b06bbf4252", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:55:22 np0005480824 nova_compute[260089]: 2025-10-11 03:55:22.267 2 DEBUG oslo_concurrency.lockutils [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Releasing lock "refresh_cache-ee0ba1fa-8740-4670-9f6d-b658f89f7f21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:55:22 np0005480824 nova_compute[260089]: 2025-10-11 03:55:22.267 2 DEBUG nova.compute.manager [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] Instance network_info: |[{"id": "eb363ce6-15fe-4b2a-a35e-06b06bbf4252", "address": "fa:16:3e:f7:bc:e9", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb363ce6-15", "ovs_interfaceid": "eb363ce6-15fe-4b2a-a35e-06b06bbf4252", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 10 23:55:22 np0005480824 nova_compute[260089]: 2025-10-11 03:55:22.268 2 DEBUG oslo_concurrency.lockutils [req-64915f5a-4de1-4d1c-8639-0443c5a53dbf req-0ef6a76f-7d15-4dee-ae1b-f972cf5f9e97 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquired lock "refresh_cache-ee0ba1fa-8740-4670-9f6d-b658f89f7f21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:55:22 np0005480824 nova_compute[260089]: 2025-10-11 03:55:22.271 2 DEBUG nova.network.neutron [req-64915f5a-4de1-4d1c-8639-0443c5a53dbf req-0ef6a76f-7d15-4dee-ae1b-f972cf5f9e97 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] Refreshing network info cache for port eb363ce6-15fe-4b2a-a35e-06b06bbf4252 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 10 23:55:22 np0005480824 nova_compute[260089]: 2025-10-11 03:55:22.277 2 DEBUG nova.virt.libvirt.driver [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] Start _get_guest_xml network_info=[{"id": "eb363ce6-15fe-4b2a-a35e-06b06bbf4252", "address": "fa:16:3e:f7:bc:e9", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb363ce6-15", "ovs_interfaceid": "eb363ce6-15fe-4b2a-a35e-06b06bbf4252", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'attachment_id': '24e1393e-1635-4bb3-9cdc-cccddc2aa239', 'mount_device': '/dev/vda', 'delete_on_termination': False, 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-df022fd8-30bb-4c20-bf5c-0866de956c6d', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'df022fd8-30bb-4c20-bf5c-0866de956c6d', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'ee0ba1fa-8740-4670-9f6d-b658f89f7f21', 'attached_at': '', 'detached_at': '', 'volume_id': 'df022fd8-30bb-4c20-bf5c-0866de956c6d', 'serial': 'df022fd8-30bb-4c20-bf5c-0866de956c6d'}, 'device_type': 'disk', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 10 23:55:22 np0005480824 nova_compute[260089]: 2025-10-11 03:55:22.289 2 WARNING nova.virt.libvirt.driver [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 10 23:55:22 np0005480824 nova_compute[260089]: 2025-10-11 03:55:22.297 2 DEBUG nova.virt.libvirt.host [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 10 23:55:22 np0005480824 nova_compute[260089]: 2025-10-11 03:55:22.299 2 DEBUG nova.virt.libvirt.host [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 10 23:55:22 np0005480824 nova_compute[260089]: 2025-10-11 03:55:22.311 2 DEBUG nova.virt.libvirt.host [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 10 23:55:22 np0005480824 nova_compute[260089]: 2025-10-11 03:55:22.312 2 DEBUG nova.virt.libvirt.host [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 10 23:55:22 np0005480824 nova_compute[260089]: 2025-10-11 03:55:22.314 2 DEBUG nova.virt.libvirt.driver [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 10 23:55:22 np0005480824 nova_compute[260089]: 2025-10-11 03:55:22.314 2 DEBUG nova.virt.hardware [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T03:44:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6707ecae-2ae2-4c2d-86dc-409bac38f6a5',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 10 23:55:22 np0005480824 nova_compute[260089]: 2025-10-11 03:55:22.315 2 DEBUG nova.virt.hardware [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 10 23:55:22 np0005480824 nova_compute[260089]: 2025-10-11 03:55:22.316 2 DEBUG nova.virt.hardware [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 10 23:55:22 np0005480824 nova_compute[260089]: 2025-10-11 03:55:22.317 2 DEBUG nova.virt.hardware [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 10 23:55:22 np0005480824 nova_compute[260089]: 2025-10-11 03:55:22.317 2 DEBUG nova.virt.hardware [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 10 23:55:22 np0005480824 nova_compute[260089]: 2025-10-11 03:55:22.318 2 DEBUG nova.virt.hardware [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 10 23:55:22 np0005480824 nova_compute[260089]: 2025-10-11 03:55:22.318 2 DEBUG nova.virt.hardware [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 10 23:55:22 np0005480824 nova_compute[260089]: 2025-10-11 03:55:22.319 2 DEBUG nova.virt.hardware [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 10 23:55:22 np0005480824 nova_compute[260089]: 2025-10-11 03:55:22.320 2 DEBUG nova.virt.hardware [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 10 23:55:22 np0005480824 nova_compute[260089]: 2025-10-11 03:55:22.321 2 DEBUG nova.virt.hardware [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 10 23:55:22 np0005480824 nova_compute[260089]: 2025-10-11 03:55:22.322 2 DEBUG nova.virt.hardware [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 10 23:55:22 np0005480824 nova_compute[260089]: 2025-10-11 03:55:22.358 2 DEBUG nova.storage.rbd_utils [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] rbd image ee0ba1fa-8740-4670-9f6d-b658f89f7f21_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:55:22 np0005480824 nova_compute[260089]: 2025-10-11 03:55:22.370 2 DEBUG oslo_concurrency.processutils [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]: {
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:    "0": [
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:        {
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:            "devices": [
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:                "/dev/loop3"
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:            ],
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:            "lv_name": "ceph_lv0",
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:            "lv_size": "21470642176",
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0d82ce-20ea-470d-959e-f67202028a60,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:            "lv_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:            "name": "ceph_lv0",
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:            "tags": {
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:                "ceph.block_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:                "ceph.cluster_name": "ceph",
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:                "ceph.crush_device_class": "",
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:                "ceph.encrypted": "0",
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:                "ceph.osd_fsid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:                "ceph.osd_id": "0",
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:                "ceph.type": "block",
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:                "ceph.vdo": "0"
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:            },
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:            "type": "block",
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:            "vg_name": "ceph_vg0"
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:        }
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:    ],
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:    "1": [
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:        {
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:            "devices": [
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:                "/dev/loop4"
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:            ],
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:            "lv_name": "ceph_lv1",
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:            "lv_size": "21470642176",
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6875119e-c210-4ad1-aca9-6a8084a5ecc8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:            "lv_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:            "name": "ceph_lv1",
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:            "tags": {
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:                "ceph.block_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:                "ceph.cluster_name": "ceph",
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:                "ceph.crush_device_class": "",
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:                "ceph.encrypted": "0",
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:                "ceph.osd_fsid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:                "ceph.osd_id": "1",
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:                "ceph.type": "block",
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:                "ceph.vdo": "0"
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:            },
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:            "type": "block",
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:            "vg_name": "ceph_vg1"
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:        }
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:    ],
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:    "2": [
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:        {
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:            "devices": [
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:                "/dev/loop5"
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:            ],
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:            "lv_name": "ceph_lv2",
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:            "lv_size": "21470642176",
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e86945e8-6909-4584-9098-cee0dfe9add4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:            "lv_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:            "name": "ceph_lv2",
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:            "tags": {
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:                "ceph.block_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:                "ceph.cluster_name": "ceph",
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:                "ceph.crush_device_class": "",
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:                "ceph.encrypted": "0",
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:                "ceph.osd_fsid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:                "ceph.osd_id": "2",
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:                "ceph.type": "block",
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:                "ceph.vdo": "0"
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:            },
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:            "type": "block",
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:            "vg_name": "ceph_vg2"
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:        }
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]:    ]
Oct 10 23:55:22 np0005480824 eager_driscoll[288718]: }
Oct 10 23:55:22 np0005480824 systemd[1]: libpod-e66a7f236e3aacfd8997f3b3063d10525544a0a57a4cfd4adc5bf6abfd35892e.scope: Deactivated successfully.
Oct 10 23:55:22 np0005480824 conmon[288718]: conmon e66a7f236e3aacfd8997 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e66a7f236e3aacfd8997f3b3063d10525544a0a57a4cfd4adc5bf6abfd35892e.scope/container/memory.events
Oct 10 23:55:22 np0005480824 podman[288701]: 2025-10-11 03:55:22.546574006 +0000 UTC m=+1.119491120 container died e66a7f236e3aacfd8997f3b3063d10525544a0a57a4cfd4adc5bf6abfd35892e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_driscoll, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 10 23:55:22 np0005480824 systemd[1]: var-lib-containers-storage-overlay-1817da2154ed52367e83686fc0c66a6986f1d8a224bd4088b4f50d61107b872f-merged.mount: Deactivated successfully.
Oct 10 23:55:22 np0005480824 podman[288701]: 2025-10-11 03:55:22.612365332 +0000 UTC m=+1.185282456 container remove e66a7f236e3aacfd8997f3b3063d10525544a0a57a4cfd4adc5bf6abfd35892e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_driscoll, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 10 23:55:22 np0005480824 systemd[1]: libpod-conmon-e66a7f236e3aacfd8997f3b3063d10525544a0a57a4cfd4adc5bf6abfd35892e.scope: Deactivated successfully.
Oct 10 23:55:22 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1377: 321 pgs: 321 active+clean; 134 MiB data, 366 MiB used, 60 GiB / 60 GiB avail; 96 KiB/s rd, 2.8 MiB/s wr, 133 op/s
Oct 10 23:55:22 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:55:22 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3433365923' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:55:22 np0005480824 nova_compute[260089]: 2025-10-11 03:55:22.896 2 DEBUG oslo_concurrency.processutils [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:55:22 np0005480824 nova_compute[260089]: 2025-10-11 03:55:22.926 2 DEBUG nova.virt.libvirt.vif [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T03:55:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-864862648',display_name='tempest-TestVolumeBootPattern-server-864862648',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-864862648',id=18,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEHKqCtFesGlIN9DGdSuPEGCilj3bKmCIyQ2Hx4tQRLuRoOqWjhIRAgPC71aK1tfMSZbOh/7KRfo7uhOOwgBdYVdW77mjMG+sfmvlDoQnrLmEWQMeSschoC2XBAsdgkOOQ==',key_name='tempest-TestVolumeBootPattern-1691748970',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='55d21391a321476eb133317b3402b0f0',ramdisk_id='',reservation_id='r-rni1kdob',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-739984652',owner_user_name='tempest-TestVolumeBootPattern-739984652-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T03:55:18Z,user_data=None,user_id='38ebc503771e417aaf1f3aea0c835994',uuid=ee0ba1fa-8740-4670-9f6d-b658f89f7f21,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "eb363ce6-15fe-4b2a-a35e-06b06bbf4252", "address": "fa:16:3e:f7:bc:e9", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb363ce6-15", "ovs_interfaceid": "eb363ce6-15fe-4b2a-a35e-06b06bbf4252", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 10 23:55:22 np0005480824 nova_compute[260089]: 2025-10-11 03:55:22.927 2 DEBUG nova.network.os_vif_util [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Converting VIF {"id": "eb363ce6-15fe-4b2a-a35e-06b06bbf4252", "address": "fa:16:3e:f7:bc:e9", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb363ce6-15", "ovs_interfaceid": "eb363ce6-15fe-4b2a-a35e-06b06bbf4252", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 23:55:22 np0005480824 nova_compute[260089]: 2025-10-11 03:55:22.928 2 DEBUG nova.network.os_vif_util [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f7:bc:e9,bridge_name='br-int',has_traffic_filtering=True,id=eb363ce6-15fe-4b2a-a35e-06b06bbf4252,network=Network(359720eb-a957-4bcd-b9b2-3cf7dad947e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeb363ce6-15') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 23:55:22 np0005480824 nova_compute[260089]: 2025-10-11 03:55:22.930 2 DEBUG nova.objects.instance [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lazy-loading 'pci_devices' on Instance uuid ee0ba1fa-8740-4670-9f6d-b658f89f7f21 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:55:22 np0005480824 nova_compute[260089]: 2025-10-11 03:55:22.945 2 DEBUG nova.virt.libvirt.driver [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] End _get_guest_xml xml=<domain type="kvm">
Oct 10 23:55:22 np0005480824 nova_compute[260089]:  <uuid>ee0ba1fa-8740-4670-9f6d-b658f89f7f21</uuid>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:  <name>instance-00000012</name>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:  <memory>131072</memory>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:  <vcpu>1</vcpu>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:  <metadata>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 10 23:55:22 np0005480824 nova_compute[260089]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:      <nova:name>tempest-TestVolumeBootPattern-server-864862648</nova:name>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:      <nova:creationTime>2025-10-11 03:55:22</nova:creationTime>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:      <nova:flavor name="m1.nano">
Oct 10 23:55:22 np0005480824 nova_compute[260089]:        <nova:memory>128</nova:memory>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:        <nova:disk>1</nova:disk>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:        <nova:swap>0</nova:swap>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:        <nova:ephemeral>0</nova:ephemeral>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:        <nova:vcpus>1</nova:vcpus>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:      </nova:flavor>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:      <nova:owner>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:        <nova:user uuid="38ebc503771e417aaf1f3aea0c835994">tempest-TestVolumeBootPattern-739984652-project-member</nova:user>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:        <nova:project uuid="55d21391a321476eb133317b3402b0f0">tempest-TestVolumeBootPattern-739984652</nova:project>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:      </nova:owner>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:      <nova:ports>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:        <nova:port uuid="eb363ce6-15fe-4b2a-a35e-06b06bbf4252">
Oct 10 23:55:22 np0005480824 nova_compute[260089]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:        </nova:port>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:      </nova:ports>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:    </nova:instance>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:  </metadata>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:  <sysinfo type="smbios">
Oct 10 23:55:22 np0005480824 nova_compute[260089]:    <system>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:      <entry name="manufacturer">RDO</entry>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:      <entry name="product">OpenStack Compute</entry>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:      <entry name="serial">ee0ba1fa-8740-4670-9f6d-b658f89f7f21</entry>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:      <entry name="uuid">ee0ba1fa-8740-4670-9f6d-b658f89f7f21</entry>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:      <entry name="family">Virtual Machine</entry>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:    </system>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:  </sysinfo>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:  <os>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:    <boot dev="hd"/>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:    <smbios mode="sysinfo"/>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:  </os>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:  <features>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:    <acpi/>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:    <apic/>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:    <vmcoreinfo/>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:  </features>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:  <clock offset="utc">
Oct 10 23:55:22 np0005480824 nova_compute[260089]:    <timer name="pit" tickpolicy="delay"/>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:    <timer name="hpet" present="no"/>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:  </clock>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:  <cpu mode="host-model" match="exact">
Oct 10 23:55:22 np0005480824 nova_compute[260089]:    <topology sockets="1" cores="1" threads="1"/>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:  </cpu>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:  <devices>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:    <disk type="network" device="cdrom">
Oct 10 23:55:22 np0005480824 nova_compute[260089]:      <driver type="raw" cache="none"/>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:      <source protocol="rbd" name="vms/ee0ba1fa-8740-4670-9f6d-b658f89f7f21_disk.config">
Oct 10 23:55:22 np0005480824 nova_compute[260089]:        <host name="192.168.122.100" port="6789"/>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:      </source>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:      <auth username="openstack">
Oct 10 23:55:22 np0005480824 nova_compute[260089]:        <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:      </auth>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:      <target dev="sda" bus="sata"/>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:    </disk>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:    <disk type="network" device="disk">
Oct 10 23:55:22 np0005480824 nova_compute[260089]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:      <source protocol="rbd" name="volumes/volume-df022fd8-30bb-4c20-bf5c-0866de956c6d">
Oct 10 23:55:22 np0005480824 nova_compute[260089]:        <host name="192.168.122.100" port="6789"/>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:      </source>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:      <auth username="openstack">
Oct 10 23:55:22 np0005480824 nova_compute[260089]:        <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:      </auth>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:      <target dev="vda" bus="virtio"/>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:      <serial>df022fd8-30bb-4c20-bf5c-0866de956c6d</serial>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:    </disk>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:    <interface type="ethernet">
Oct 10 23:55:22 np0005480824 nova_compute[260089]:      <mac address="fa:16:3e:f7:bc:e9"/>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:      <model type="virtio"/>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:      <driver name="vhost" rx_queue_size="512"/>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:      <mtu size="1442"/>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:      <target dev="tapeb363ce6-15"/>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:    </interface>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:    <serial type="pty">
Oct 10 23:55:22 np0005480824 nova_compute[260089]:      <log file="/var/lib/nova/instances/ee0ba1fa-8740-4670-9f6d-b658f89f7f21/console.log" append="off"/>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:    </serial>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:    <video>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:      <model type="virtio"/>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:    </video>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:    <input type="tablet" bus="usb"/>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:    <rng model="virtio">
Oct 10 23:55:22 np0005480824 nova_compute[260089]:      <backend model="random">/dev/urandom</backend>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:    </rng>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root"/>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:    <controller type="usb" index="0"/>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:    <memballoon model="virtio">
Oct 10 23:55:22 np0005480824 nova_compute[260089]:      <stats period="10"/>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:    </memballoon>
Oct 10 23:55:22 np0005480824 nova_compute[260089]:  </devices>
Oct 10 23:55:22 np0005480824 nova_compute[260089]: </domain>
Oct 10 23:55:22 np0005480824 nova_compute[260089]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 10 23:55:22 np0005480824 nova_compute[260089]: 2025-10-11 03:55:22.946 2 DEBUG nova.compute.manager [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] Preparing to wait for external event network-vif-plugged-eb363ce6-15fe-4b2a-a35e-06b06bbf4252 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 10 23:55:22 np0005480824 nova_compute[260089]: 2025-10-11 03:55:22.946 2 DEBUG oslo_concurrency.lockutils [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Acquiring lock "ee0ba1fa-8740-4670-9f6d-b658f89f7f21-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:55:22 np0005480824 nova_compute[260089]: 2025-10-11 03:55:22.947 2 DEBUG oslo_concurrency.lockutils [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "ee0ba1fa-8740-4670-9f6d-b658f89f7f21-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:55:22 np0005480824 nova_compute[260089]: 2025-10-11 03:55:22.947 2 DEBUG oslo_concurrency.lockutils [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "ee0ba1fa-8740-4670-9f6d-b658f89f7f21-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:55:22 np0005480824 nova_compute[260089]: 2025-10-11 03:55:22.948 2 DEBUG nova.virt.libvirt.vif [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T03:55:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-864862648',display_name='tempest-TestVolumeBootPattern-server-864862648',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-864862648',id=18,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEHKqCtFesGlIN9DGdSuPEGCilj3bKmCIyQ2Hx4tQRLuRoOqWjhIRAgPC71aK1tfMSZbOh/7KRfo7uhOOwgBdYVdW77mjMG+sfmvlDoQnrLmEWQMeSschoC2XBAsdgkOOQ==',key_name='tempest-TestVolumeBootPattern-1691748970',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='55d21391a321476eb133317b3402b0f0',ramdisk_id='',reservation_id='r-rni1kdob',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-739984652',owner_user_name='tempest-TestVolumeBootPattern-739984652-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T03:55:18Z,user_data=None,user_id='38ebc503771e417aaf1f3aea0c835994',uuid=ee0ba1fa-8740-4670-9f6d-b658f89f7f21,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "eb363ce6-15fe-4b2a-a35e-06b06bbf4252", "address": "fa:16:3e:f7:bc:e9", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb363ce6-15", "ovs_interfaceid": "eb363ce6-15fe-4b2a-a35e-06b06bbf4252", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 10 23:55:22 np0005480824 nova_compute[260089]: 2025-10-11 03:55:22.948 2 DEBUG nova.network.os_vif_util [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Converting VIF {"id": "eb363ce6-15fe-4b2a-a35e-06b06bbf4252", "address": "fa:16:3e:f7:bc:e9", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb363ce6-15", "ovs_interfaceid": "eb363ce6-15fe-4b2a-a35e-06b06bbf4252", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 23:55:22 np0005480824 nova_compute[260089]: 2025-10-11 03:55:22.949 2 DEBUG nova.network.os_vif_util [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f7:bc:e9,bridge_name='br-int',has_traffic_filtering=True,id=eb363ce6-15fe-4b2a-a35e-06b06bbf4252,network=Network(359720eb-a957-4bcd-b9b2-3cf7dad947e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeb363ce6-15') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 23:55:22 np0005480824 nova_compute[260089]: 2025-10-11 03:55:22.949 2 DEBUG os_vif [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f7:bc:e9,bridge_name='br-int',has_traffic_filtering=True,id=eb363ce6-15fe-4b2a-a35e-06b06bbf4252,network=Network(359720eb-a957-4bcd-b9b2-3cf7dad947e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeb363ce6-15') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 10 23:55:22 np0005480824 nova_compute[260089]: 2025-10-11 03:55:22.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:55:22 np0005480824 nova_compute[260089]: 2025-10-11 03:55:22.950 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:55:22 np0005480824 nova_compute[260089]: 2025-10-11 03:55:22.951 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 10 23:55:22 np0005480824 nova_compute[260089]: 2025-10-11 03:55:22.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:55:22 np0005480824 nova_compute[260089]: 2025-10-11 03:55:22.955 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapeb363ce6-15, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:55:22 np0005480824 nova_compute[260089]: 2025-10-11 03:55:22.956 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapeb363ce6-15, col_values=(('external_ids', {'iface-id': 'eb363ce6-15fe-4b2a-a35e-06b06bbf4252', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f7:bc:e9', 'vm-uuid': 'ee0ba1fa-8740-4670-9f6d-b658f89f7f21'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:55:22 np0005480824 nova_compute[260089]: 2025-10-11 03:55:22.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:55:23 np0005480824 nova_compute[260089]: 2025-10-11 03:55:23.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 10 23:55:23 np0005480824 NetworkManager[44969]: <info>  [1760154923.0026] manager: (tapeb363ce6-15): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/93)
Oct 10 23:55:23 np0005480824 nova_compute[260089]: 2025-10-11 03:55:23.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:55:23 np0005480824 nova_compute[260089]: 2025-10-11 03:55:23.012 2 INFO os_vif [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f7:bc:e9,bridge_name='br-int',has_traffic_filtering=True,id=eb363ce6-15fe-4b2a-a35e-06b06bbf4252,network=Network(359720eb-a957-4bcd-b9b2-3cf7dad947e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeb363ce6-15')#033[00m
Oct 10 23:55:23 np0005480824 nova_compute[260089]: 2025-10-11 03:55:23.091 2 DEBUG nova.virt.libvirt.driver [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 23:55:23 np0005480824 nova_compute[260089]: 2025-10-11 03:55:23.092 2 DEBUG nova.virt.libvirt.driver [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 23:55:23 np0005480824 nova_compute[260089]: 2025-10-11 03:55:23.092 2 DEBUG nova.virt.libvirt.driver [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] No VIF found with MAC fa:16:3e:f7:bc:e9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 10 23:55:23 np0005480824 nova_compute[260089]: 2025-10-11 03:55:23.092 2 INFO nova.virt.libvirt.driver [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] Using config drive#033[00m
Oct 10 23:55:23 np0005480824 nova_compute[260089]: 2025-10-11 03:55:23.122 2 DEBUG nova.storage.rbd_utils [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] rbd image ee0ba1fa-8740-4670-9f6d-b658f89f7f21_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:55:23 np0005480824 nova_compute[260089]: 2025-10-11 03:55:23.506 2 DEBUG nova.network.neutron [req-64915f5a-4de1-4d1c-8639-0443c5a53dbf req-0ef6a76f-7d15-4dee-ae1b-f972cf5f9e97 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] Updated VIF entry in instance network info cache for port eb363ce6-15fe-4b2a-a35e-06b06bbf4252. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 10 23:55:23 np0005480824 nova_compute[260089]: 2025-10-11 03:55:23.507 2 DEBUG nova.network.neutron [req-64915f5a-4de1-4d1c-8639-0443c5a53dbf req-0ef6a76f-7d15-4dee-ae1b-f972cf5f9e97 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] Updating instance_info_cache with network_info: [{"id": "eb363ce6-15fe-4b2a-a35e-06b06bbf4252", "address": "fa:16:3e:f7:bc:e9", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb363ce6-15", "ovs_interfaceid": "eb363ce6-15fe-4b2a-a35e-06b06bbf4252", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:55:23 np0005480824 nova_compute[260089]: 2025-10-11 03:55:23.520 2 INFO nova.virt.libvirt.driver [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] Creating config drive at /var/lib/nova/instances/ee0ba1fa-8740-4670-9f6d-b658f89f7f21/disk.config#033[00m
Oct 10 23:55:23 np0005480824 nova_compute[260089]: 2025-10-11 03:55:23.532 2 DEBUG oslo_concurrency.processutils [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ee0ba1fa-8740-4670-9f6d-b658f89f7f21/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpm2mdyaa9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:55:23 np0005480824 nova_compute[260089]: 2025-10-11 03:55:23.567 2 DEBUG oslo_concurrency.lockutils [req-64915f5a-4de1-4d1c-8639-0443c5a53dbf req-0ef6a76f-7d15-4dee-ae1b-f972cf5f9e97 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Releasing lock "refresh_cache-ee0ba1fa-8740-4670-9f6d-b658f89f7f21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:55:23 np0005480824 podman[288942]: 2025-10-11 03:55:23.621462611 +0000 UTC m=+0.064881846 container create 8387b2fb7ebcb01e57dd2d2110c0c85304d0177d7bbef307bc00fb6f39329be2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_bouman, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:55:23 np0005480824 nova_compute[260089]: 2025-10-11 03:55:23.675 2 DEBUG oslo_concurrency.processutils [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ee0ba1fa-8740-4670-9f6d-b658f89f7f21/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpm2mdyaa9" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:55:23 np0005480824 podman[288942]: 2025-10-11 03:55:23.589742131 +0000 UTC m=+0.033161456 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:55:23 np0005480824 systemd[1]: Started libpod-conmon-8387b2fb7ebcb01e57dd2d2110c0c85304d0177d7bbef307bc00fb6f39329be2.scope.
Oct 10 23:55:23 np0005480824 nova_compute[260089]: 2025-10-11 03:55:23.716 2 DEBUG nova.storage.rbd_utils [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] rbd image ee0ba1fa-8740-4670-9f6d-b658f89f7f21_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:55:23 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:55:23 np0005480824 nova_compute[260089]: 2025-10-11 03:55:23.721 2 DEBUG oslo_concurrency.processutils [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ee0ba1fa-8740-4670-9f6d-b658f89f7f21/disk.config ee0ba1fa-8740-4670-9f6d-b658f89f7f21_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:55:23 np0005480824 podman[288942]: 2025-10-11 03:55:23.743946887 +0000 UTC m=+0.187366192 container init 8387b2fb7ebcb01e57dd2d2110c0c85304d0177d7bbef307bc00fb6f39329be2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_bouman, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:55:23 np0005480824 podman[288942]: 2025-10-11 03:55:23.760773074 +0000 UTC m=+0.204192339 container start 8387b2fb7ebcb01e57dd2d2110c0c85304d0177d7bbef307bc00fb6f39329be2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 10 23:55:23 np0005480824 podman[288942]: 2025-10-11 03:55:23.766462819 +0000 UTC m=+0.209882134 container attach 8387b2fb7ebcb01e57dd2d2110c0c85304d0177d7bbef307bc00fb6f39329be2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_bouman, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:55:23 np0005480824 determined_bouman[288962]: 167 167
Oct 10 23:55:23 np0005480824 systemd[1]: libpod-8387b2fb7ebcb01e57dd2d2110c0c85304d0177d7bbef307bc00fb6f39329be2.scope: Deactivated successfully.
Oct 10 23:55:23 np0005480824 podman[288942]: 2025-10-11 03:55:23.772079062 +0000 UTC m=+0.215498327 container died 8387b2fb7ebcb01e57dd2d2110c0c85304d0177d7bbef307bc00fb6f39329be2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_bouman, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 10 23:55:23 np0005480824 systemd[1]: var-lib-containers-storage-overlay-a4627af4bee1187030a22f34fdef9c09f42d0bbb23088700be7b171230db630a-merged.mount: Deactivated successfully.
Oct 10 23:55:23 np0005480824 podman[288942]: 2025-10-11 03:55:23.825154877 +0000 UTC m=+0.268574112 container remove 8387b2fb7ebcb01e57dd2d2110c0c85304d0177d7bbef307bc00fb6f39329be2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 10 23:55:23 np0005480824 systemd[1]: libpod-conmon-8387b2fb7ebcb01e57dd2d2110c0c85304d0177d7bbef307bc00fb6f39329be2.scope: Deactivated successfully.
Oct 10 23:55:23 np0005480824 nova_compute[260089]: 2025-10-11 03:55:23.953 2 DEBUG oslo_concurrency.processutils [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ee0ba1fa-8740-4670-9f6d-b658f89f7f21/disk.config ee0ba1fa-8740-4670-9f6d-b658f89f7f21_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.232s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:55:23 np0005480824 nova_compute[260089]: 2025-10-11 03:55:23.954 2 INFO nova.virt.libvirt.driver [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] Deleting local config drive /var/lib/nova/instances/ee0ba1fa-8740-4670-9f6d-b658f89f7f21/disk.config because it was imported into RBD.#033[00m
Oct 10 23:55:23 np0005480824 nova_compute[260089]: 2025-10-11 03:55:23.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:55:24 np0005480824 kernel: tapeb363ce6-15: entered promiscuous mode
Oct 10 23:55:24 np0005480824 NetworkManager[44969]: <info>  [1760154924.0510] manager: (tapeb363ce6-15): new Tun device (/org/freedesktop/NetworkManager/Devices/94)
Oct 10 23:55:24 np0005480824 ovn_controller[152667]: 2025-10-11T03:55:24Z|00165|binding|INFO|Claiming lport eb363ce6-15fe-4b2a-a35e-06b06bbf4252 for this chassis.
Oct 10 23:55:24 np0005480824 ovn_controller[152667]: 2025-10-11T03:55:24Z|00166|binding|INFO|eb363ce6-15fe-4b2a-a35e-06b06bbf4252: Claiming fa:16:3e:f7:bc:e9 10.100.0.14
Oct 10 23:55:24 np0005480824 nova_compute[260089]: 2025-10-11 03:55:24.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:55:24 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:55:24.113 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f7:bc:e9 10.100.0.14'], port_security=['fa:16:3e:f7:bc:e9 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'ee0ba1fa-8740-4670-9f6d-b658f89f7f21', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-359720eb-a957-4bcd-b9b2-3cf7dad947e4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '55d21391a321476eb133317b3402b0f0', 'neutron:revision_number': '2', 'neutron:security_group_ids': '48328b99-2dfb-4da6-bd97-8cd4f810b350', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d98e64fb-092d-4777-b741-426f3e849bc3, chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], logical_port=eb363ce6-15fe-4b2a-a35e-06b06bbf4252) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 23:55:24 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:55:24.116 162245 INFO neutron.agent.ovn.metadata.agent [-] Port eb363ce6-15fe-4b2a-a35e-06b06bbf4252 in datapath 359720eb-a957-4bcd-b9b2-3cf7dad947e4 bound to our chassis#033[00m
Oct 10 23:55:24 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:55:24.119 162245 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 359720eb-a957-4bcd-b9b2-3cf7dad947e4#033[00m
Oct 10 23:55:24 np0005480824 systemd-udevd[289046]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 23:55:24 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:55:24.140 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[92b83dc7-1d71-4ca0-9f13-ff95ff37b464]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:55:24 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:55:24.141 162245 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap359720eb-a1 in ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 10 23:55:24 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:55:24.144 267859 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap359720eb-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 10 23:55:24 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:55:24.145 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[9a062c45-caf7-443c-a491-3b8624e1d0e8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:55:24 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:55:24.146 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[7954948f-d5ec-4d7a-a16b-96341e6548cf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:55:24 np0005480824 podman[289026]: 2025-10-11 03:55:24.154085764 +0000 UTC m=+0.129980554 container create 20e82571fef358888331f6852275dd851842c5dc595623d3592734170665da38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_dewdney, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:55:24 np0005480824 NetworkManager[44969]: <info>  [1760154924.1550] device (tapeb363ce6-15): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 10 23:55:24 np0005480824 NetworkManager[44969]: <info>  [1760154924.1561] device (tapeb363ce6-15): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 10 23:55:24 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:55:24.167 162666 DEBUG oslo.privsep.daemon [-] privsep: reply[32b0e81d-0941-4e71-828e-5600fe448b80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:55:24 np0005480824 systemd-machined[215071]: New machine qemu-18-instance-00000012.
Oct 10 23:55:24 np0005480824 nova_compute[260089]: 2025-10-11 03:55:24.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:55:24 np0005480824 systemd[1]: Started Virtual Machine qemu-18-instance-00000012.
Oct 10 23:55:24 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:55:24.185 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[7f519978-a315-4130-9051-c0f1d4b2785b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:55:24 np0005480824 ovn_controller[152667]: 2025-10-11T03:55:24Z|00167|binding|INFO|Setting lport eb363ce6-15fe-4b2a-a35e-06b06bbf4252 ovn-installed in OVS
Oct 10 23:55:24 np0005480824 ovn_controller[152667]: 2025-10-11T03:55:24Z|00168|binding|INFO|Setting lport eb363ce6-15fe-4b2a-a35e-06b06bbf4252 up in Southbound
Oct 10 23:55:24 np0005480824 nova_compute[260089]: 2025-10-11 03:55:24.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:55:24 np0005480824 systemd[1]: Started libpod-conmon-20e82571fef358888331f6852275dd851842c5dc595623d3592734170665da38.scope.
Oct 10 23:55:24 np0005480824 podman[289026]: 2025-10-11 03:55:24.124708499 +0000 UTC m=+0.100603299 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:55:24 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:55:24.230 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[a73295bf-473f-4398-b363-8779690724f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:55:24 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:55:24 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e41822a0687877848e0a2cdd84a262456241f50a77deaa82d770fcef6fb5c08a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:55:24 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:55:24.240 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[c84a3c99-3bf0-4b50-9a20-9b843e98ab84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:55:24 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e41822a0687877848e0a2cdd84a262456241f50a77deaa82d770fcef6fb5c08a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:55:24 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e41822a0687877848e0a2cdd84a262456241f50a77deaa82d770fcef6fb5c08a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:55:24 np0005480824 systemd-udevd[289052]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 23:55:24 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e41822a0687877848e0a2cdd84a262456241f50a77deaa82d770fcef6fb5c08a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:55:24 np0005480824 podman[289026]: 2025-10-11 03:55:24.258024711 +0000 UTC m=+0.233919541 container init 20e82571fef358888331f6852275dd851842c5dc595623d3592734170665da38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_dewdney, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:55:24 np0005480824 NetworkManager[44969]: <info>  [1760154924.2686] manager: (tap359720eb-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/95)
Oct 10 23:55:24 np0005480824 podman[289026]: 2025-10-11 03:55:24.281260291 +0000 UTC m=+0.257155081 container start 20e82571fef358888331f6852275dd851842c5dc595623d3592734170665da38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_dewdney, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 10 23:55:24 np0005480824 podman[289026]: 2025-10-11 03:55:24.285778848 +0000 UTC m=+0.261673688 container attach 20e82571fef358888331f6852275dd851842c5dc595623d3592734170665da38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_dewdney, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:55:24 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:55:24.299 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[3364acc9-b5aa-47f1-ae4d-d0dbbde15743]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:55:24 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:55:24.303 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[69cda33d-5a94-46ae-95e6-31d89eb2a0fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:55:24 np0005480824 NetworkManager[44969]: <info>  [1760154924.3347] device (tap359720eb-a0): carrier: link connected
Oct 10 23:55:24 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:55:24.339 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[57a19d32-e08a-42b0-8e80-6429c61ab820]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:55:24 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:55:24.361 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[89e43bf1-670c-4c1c-b296-62bdd8c95955]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap359720eb-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:22:90:b3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 61], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 439199, 'reachable_time': 19526, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289089, 'error': None, 'target': 'ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:55:24 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:55:24.391 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[af195c1a-c884-47d8-8130-9aefb08e9c68]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe22:90b3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 439199, 'tstamp': 439199}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289091, 'error': None, 'target': 'ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:55:24 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:55:24.422 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[294ba425-44af-429f-a888-fab88929d69a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap359720eb-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:22:90:b3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 61], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 439199, 'reachable_time': 19526, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 289109, 'error': None, 'target': 'ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:55:24 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:55:24.465 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[cf7b4be9-7bdf-40d5-abe3-89b3aa9d1fa2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:55:24 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:55:24.544 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[a5ea784d-a037-4d33-b253-463e2bdfb6fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:55:24 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:55:24.547 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap359720eb-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:55:24 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:55:24.547 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 10 23:55:24 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:55:24.548 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap359720eb-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:55:24 np0005480824 nova_compute[260089]: 2025-10-11 03:55:24.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:55:24 np0005480824 NetworkManager[44969]: <info>  [1760154924.5516] manager: (tap359720eb-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/96)
Oct 10 23:55:24 np0005480824 kernel: tap359720eb-a0: entered promiscuous mode
Oct 10 23:55:24 np0005480824 nova_compute[260089]: 2025-10-11 03:55:24.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:55:24 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:55:24.555 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap359720eb-a0, col_values=(('external_ids', {'iface-id': '039c7668-0b85-4466-9c66-62531405028d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:55:24 np0005480824 nova_compute[260089]: 2025-10-11 03:55:24.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:55:24 np0005480824 ovn_controller[152667]: 2025-10-11T03:55:24Z|00169|binding|INFO|Releasing lport 039c7668-0b85-4466-9c66-62531405028d from this chassis (sb_readonly=0)
Oct 10 23:55:24 np0005480824 nova_compute[260089]: 2025-10-11 03:55:24.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:55:24 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:55:24.582 162245 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/359720eb-a957-4bcd-b9b2-3cf7dad947e4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/359720eb-a957-4bcd-b9b2-3cf7dad947e4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 10 23:55:24 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:55:24.583 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[8e0d96c6-1a3d-407d-9b7e-ee6a1fde95ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:55:24 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:55:24.588 162245 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 10 23:55:24 np0005480824 ovn_metadata_agent[162240]: global
Oct 10 23:55:24 np0005480824 ovn_metadata_agent[162240]:    log         /dev/log local0 debug
Oct 10 23:55:24 np0005480824 ovn_metadata_agent[162240]:    log-tag     haproxy-metadata-proxy-359720eb-a957-4bcd-b9b2-3cf7dad947e4
Oct 10 23:55:24 np0005480824 ovn_metadata_agent[162240]:    user        root
Oct 10 23:55:24 np0005480824 ovn_metadata_agent[162240]:    group       root
Oct 10 23:55:24 np0005480824 ovn_metadata_agent[162240]:    maxconn     1024
Oct 10 23:55:24 np0005480824 ovn_metadata_agent[162240]:    pidfile     /var/lib/neutron/external/pids/359720eb-a957-4bcd-b9b2-3cf7dad947e4.pid.haproxy
Oct 10 23:55:24 np0005480824 ovn_metadata_agent[162240]:    daemon
Oct 10 23:55:24 np0005480824 ovn_metadata_agent[162240]: 
Oct 10 23:55:24 np0005480824 ovn_metadata_agent[162240]: defaults
Oct 10 23:55:24 np0005480824 ovn_metadata_agent[162240]:    log global
Oct 10 23:55:24 np0005480824 ovn_metadata_agent[162240]:    mode http
Oct 10 23:55:24 np0005480824 ovn_metadata_agent[162240]:    option httplog
Oct 10 23:55:24 np0005480824 ovn_metadata_agent[162240]:    option dontlognull
Oct 10 23:55:24 np0005480824 ovn_metadata_agent[162240]:    option http-server-close
Oct 10 23:55:24 np0005480824 ovn_metadata_agent[162240]:    option forwardfor
Oct 10 23:55:24 np0005480824 ovn_metadata_agent[162240]:    retries                 3
Oct 10 23:55:24 np0005480824 ovn_metadata_agent[162240]:    timeout http-request    30s
Oct 10 23:55:24 np0005480824 ovn_metadata_agent[162240]:    timeout connect         30s
Oct 10 23:55:24 np0005480824 ovn_metadata_agent[162240]:    timeout client          32s
Oct 10 23:55:24 np0005480824 ovn_metadata_agent[162240]:    timeout server          32s
Oct 10 23:55:24 np0005480824 ovn_metadata_agent[162240]:    timeout http-keep-alive 30s
Oct 10 23:55:24 np0005480824 ovn_metadata_agent[162240]: 
Oct 10 23:55:24 np0005480824 ovn_metadata_agent[162240]: 
Oct 10 23:55:24 np0005480824 ovn_metadata_agent[162240]: listen listener
Oct 10 23:55:24 np0005480824 ovn_metadata_agent[162240]:    bind 169.254.169.254:80
Oct 10 23:55:24 np0005480824 ovn_metadata_agent[162240]:    server metadata /var/lib/neutron/metadata_proxy
Oct 10 23:55:24 np0005480824 ovn_metadata_agent[162240]:    http-request add-header X-OVN-Network-ID 359720eb-a957-4bcd-b9b2-3cf7dad947e4
Oct 10 23:55:24 np0005480824 ovn_metadata_agent[162240]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 10 23:55:24 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:55:24.589 162245 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4', 'env', 'PROCESS_TAG=haproxy-359720eb-a957-4bcd-b9b2-3cf7dad947e4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/359720eb-a957-4bcd-b9b2-3cf7dad947e4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 10 23:55:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:55:24 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2780161109' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:55:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:55:24 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2780161109' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:55:24 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1378: 321 pgs: 321 active+clean; 134 MiB data, 366 MiB used, 60 GiB / 60 GiB avail; 91 KiB/s rd, 2.7 MiB/s wr, 126 op/s
Oct 10 23:55:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:55:24 np0005480824 nova_compute[260089]: 2025-10-11 03:55:24.904 2 DEBUG nova.compute.manager [req-a8b322e1-b2fc-4e9f-816c-2f00e1c8b514 req-4a0761c0-3dd3-4999-950b-fdacb1b21674 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] Received event network-vif-plugged-eb363ce6-15fe-4b2a-a35e-06b06bbf4252 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:55:24 np0005480824 nova_compute[260089]: 2025-10-11 03:55:24.904 2 DEBUG oslo_concurrency.lockutils [req-a8b322e1-b2fc-4e9f-816c-2f00e1c8b514 req-4a0761c0-3dd3-4999-950b-fdacb1b21674 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "ee0ba1fa-8740-4670-9f6d-b658f89f7f21-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:55:24 np0005480824 nova_compute[260089]: 2025-10-11 03:55:24.904 2 DEBUG oslo_concurrency.lockutils [req-a8b322e1-b2fc-4e9f-816c-2f00e1c8b514 req-4a0761c0-3dd3-4999-950b-fdacb1b21674 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "ee0ba1fa-8740-4670-9f6d-b658f89f7f21-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:55:24 np0005480824 nova_compute[260089]: 2025-10-11 03:55:24.905 2 DEBUG oslo_concurrency.lockutils [req-a8b322e1-b2fc-4e9f-816c-2f00e1c8b514 req-4a0761c0-3dd3-4999-950b-fdacb1b21674 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "ee0ba1fa-8740-4670-9f6d-b658f89f7f21-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:55:24 np0005480824 nova_compute[260089]: 2025-10-11 03:55:24.905 2 DEBUG nova.compute.manager [req-a8b322e1-b2fc-4e9f-816c-2f00e1c8b514 req-4a0761c0-3dd3-4999-950b-fdacb1b21674 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] Processing event network-vif-plugged-eb363ce6-15fe-4b2a-a35e-06b06bbf4252 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 10 23:55:24 np0005480824 podman[289165]: 2025-10-11 03:55:24.998954009 +0000 UTC m=+0.067004154 container create 529ef65ecb09e3de63fd5d15f3cabe1b5b6395dbe514a8bbd588a7caa074468e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009)
Oct 10 23:55:25 np0005480824 nova_compute[260089]: 2025-10-11 03:55:25.019 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760154925.018154, ee0ba1fa-8740-4670-9f6d-b658f89f7f21 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 10 23:55:25 np0005480824 nova_compute[260089]: 2025-10-11 03:55:25.020 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] VM Started (Lifecycle Event)
Oct 10 23:55:25 np0005480824 nova_compute[260089]: 2025-10-11 03:55:25.023 2 DEBUG nova.compute.manager [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 10 23:55:25 np0005480824 nova_compute[260089]: 2025-10-11 03:55:25.028 2 DEBUG nova.virt.libvirt.driver [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 10 23:55:25 np0005480824 nova_compute[260089]: 2025-10-11 03:55:25.034 2 INFO nova.virt.libvirt.driver [-] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] Instance spawned successfully.
Oct 10 23:55:25 np0005480824 nova_compute[260089]: 2025-10-11 03:55:25.035 2 DEBUG nova.virt.libvirt.driver [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 10 23:55:25 np0005480824 nova_compute[260089]: 2025-10-11 03:55:25.039 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 23:55:25 np0005480824 podman[289165]: 2025-10-11 03:55:24.961515885 +0000 UTC m=+0.029566040 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 10 23:55:25 np0005480824 systemd[1]: Started libpod-conmon-529ef65ecb09e3de63fd5d15f3cabe1b5b6395dbe514a8bbd588a7caa074468e.scope.
Oct 10 23:55:25 np0005480824 nova_compute[260089]: 2025-10-11 03:55:25.058 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 10 23:55:25 np0005480824 nova_compute[260089]: 2025-10-11 03:55:25.063 2 DEBUG nova.virt.libvirt.driver [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 23:55:25 np0005480824 nova_compute[260089]: 2025-10-11 03:55:25.063 2 DEBUG nova.virt.libvirt.driver [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 23:55:25 np0005480824 nova_compute[260089]: 2025-10-11 03:55:25.064 2 DEBUG nova.virt.libvirt.driver [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 23:55:25 np0005480824 nova_compute[260089]: 2025-10-11 03:55:25.064 2 DEBUG nova.virt.libvirt.driver [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 23:55:25 np0005480824 nova_compute[260089]: 2025-10-11 03:55:25.065 2 DEBUG nova.virt.libvirt.driver [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 23:55:25 np0005480824 nova_compute[260089]: 2025-10-11 03:55:25.065 2 DEBUG nova.virt.libvirt.driver [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 23:55:25 np0005480824 nova_compute[260089]: 2025-10-11 03:55:25.086 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 10 23:55:25 np0005480824 nova_compute[260089]: 2025-10-11 03:55:25.088 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760154925.0184963, ee0ba1fa-8740-4670-9f6d-b658f89f7f21 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 10 23:55:25 np0005480824 nova_compute[260089]: 2025-10-11 03:55:25.088 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] VM Paused (Lifecycle Event)
Oct 10 23:55:25 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:55:25 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4382972424299ebfa7b43754369e444883c481e7f606c3e85ba442760976a8b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 10 23:55:25 np0005480824 nova_compute[260089]: 2025-10-11 03:55:25.112 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 23:55:25 np0005480824 nova_compute[260089]: 2025-10-11 03:55:25.120 2 INFO nova.compute.manager [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] Took 5.10 seconds to spawn the instance on the hypervisor.
Oct 10 23:55:25 np0005480824 nova_compute[260089]: 2025-10-11 03:55:25.121 2 DEBUG nova.compute.manager [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 23:55:25 np0005480824 nova_compute[260089]: 2025-10-11 03:55:25.124 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760154925.0264223, ee0ba1fa-8740-4670-9f6d-b658f89f7f21 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 10 23:55:25 np0005480824 nova_compute[260089]: 2025-10-11 03:55:25.124 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] VM Resumed (Lifecycle Event)
Oct 10 23:55:25 np0005480824 podman[289165]: 2025-10-11 03:55:25.129341563 +0000 UTC m=+0.197391718 container init 529ef65ecb09e3de63fd5d15f3cabe1b5b6395dbe514a8bbd588a7caa074468e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 10 23:55:25 np0005480824 podman[289165]: 2025-10-11 03:55:25.135186691 +0000 UTC m=+0.203236836 container start 529ef65ecb09e3de63fd5d15f3cabe1b5b6395dbe514a8bbd588a7caa074468e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 10 23:55:25 np0005480824 nova_compute[260089]: 2025-10-11 03:55:25.143 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 23:55:25 np0005480824 nova_compute[260089]: 2025-10-11 03:55:25.147 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 10 23:55:25 np0005480824 neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4[289191]: [NOTICE]   (289201) : New worker (289205) forked
Oct 10 23:55:25 np0005480824 neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4[289191]: [NOTICE]   (289201) : Loading success.
Oct 10 23:55:25 np0005480824 nova_compute[260089]: 2025-10-11 03:55:25.179 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 10 23:55:25 np0005480824 nova_compute[260089]: 2025-10-11 03:55:25.196 2 INFO nova.compute.manager [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] Took 7.24 seconds to build instance.
Oct 10 23:55:25 np0005480824 nova_compute[260089]: 2025-10-11 03:55:25.216 2 DEBUG oslo_concurrency.lockutils [None req-dbd77ec2-c584-4d59-960f-fc62d3acad20 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "ee0ba1fa-8740-4670-9f6d-b658f89f7f21" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.342s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 23:55:25 np0005480824 elated_dewdney[289059]: {
Oct 10 23:55:25 np0005480824 elated_dewdney[289059]:    "1d0d82ce-20ea-470d-959e-f67202028a60": {
Oct 10 23:55:25 np0005480824 elated_dewdney[289059]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:55:25 np0005480824 elated_dewdney[289059]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 10 23:55:25 np0005480824 elated_dewdney[289059]:        "osd_id": 0,
Oct 10 23:55:25 np0005480824 elated_dewdney[289059]:        "osd_uuid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:55:25 np0005480824 elated_dewdney[289059]:        "type": "bluestore"
Oct 10 23:55:25 np0005480824 elated_dewdney[289059]:    },
Oct 10 23:55:25 np0005480824 elated_dewdney[289059]:    "6875119e-c210-4ad1-aca9-6a8084a5ecc8": {
Oct 10 23:55:25 np0005480824 elated_dewdney[289059]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:55:25 np0005480824 elated_dewdney[289059]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 10 23:55:25 np0005480824 elated_dewdney[289059]:        "osd_id": 1,
Oct 10 23:55:25 np0005480824 elated_dewdney[289059]:        "osd_uuid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:55:25 np0005480824 elated_dewdney[289059]:        "type": "bluestore"
Oct 10 23:55:25 np0005480824 elated_dewdney[289059]:    },
Oct 10 23:55:25 np0005480824 elated_dewdney[289059]:    "e86945e8-6909-4584-9098-cee0dfe9add4": {
Oct 10 23:55:25 np0005480824 elated_dewdney[289059]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:55:25 np0005480824 elated_dewdney[289059]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 10 23:55:25 np0005480824 elated_dewdney[289059]:        "osd_id": 2,
Oct 10 23:55:25 np0005480824 elated_dewdney[289059]:        "osd_uuid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:55:25 np0005480824 elated_dewdney[289059]:        "type": "bluestore"
Oct 10 23:55:25 np0005480824 elated_dewdney[289059]:    }
Oct 10 23:55:25 np0005480824 elated_dewdney[289059]: }
Oct 10 23:55:25 np0005480824 systemd[1]: libpod-20e82571fef358888331f6852275dd851842c5dc595623d3592734170665da38.scope: Deactivated successfully.
Oct 10 23:55:25 np0005480824 systemd[1]: libpod-20e82571fef358888331f6852275dd851842c5dc595623d3592734170665da38.scope: Consumed 1.023s CPU time.
Oct 10 23:55:25 np0005480824 podman[289026]: 2025-10-11 03:55:25.321012534 +0000 UTC m=+1.296907334 container died 20e82571fef358888331f6852275dd851842c5dc595623d3592734170665da38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:55:25 np0005480824 systemd[1]: var-lib-containers-storage-overlay-e41822a0687877848e0a2cdd84a262456241f50a77deaa82d770fcef6fb5c08a-merged.mount: Deactivated successfully.
Oct 10 23:55:25 np0005480824 podman[289026]: 2025-10-11 03:55:25.376939667 +0000 UTC m=+1.352834447 container remove 20e82571fef358888331f6852275dd851842c5dc595623d3592734170665da38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:55:25 np0005480824 systemd[1]: libpod-conmon-20e82571fef358888331f6852275dd851842c5dc595623d3592734170665da38.scope: Deactivated successfully.
Oct 10 23:55:25 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 10 23:55:25 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:55:25 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 10 23:55:25 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:55:25 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 93265bb9-eb10-41ee-ba1c-f5da0bcf540a does not exist
Oct 10 23:55:25 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev a008d2a1-688e-4689-ad40-eb4040504663 does not exist
Oct 10 23:55:26 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:55:26 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:55:26 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:55:26 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/310948344' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:55:26 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:55:26 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/310948344' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:55:26 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1379: 321 pgs: 321 active+clean; 134 MiB data, 366 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 2.2 KiB/s wr, 53 op/s
Oct 10 23:55:27 np0005480824 nova_compute[260089]: 2025-10-11 03:55:27.000 2 DEBUG nova.compute.manager [req-dc54d80e-b1d5-457b-b0a9-706e5f0cadb7 req-65d82fb8-6f30-4b2c-8ac8-71b623750026 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] Received event network-vif-plugged-eb363ce6-15fe-4b2a-a35e-06b06bbf4252 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 23:55:27 np0005480824 nova_compute[260089]: 2025-10-11 03:55:27.002 2 DEBUG oslo_concurrency.lockutils [req-dc54d80e-b1d5-457b-b0a9-706e5f0cadb7 req-65d82fb8-6f30-4b2c-8ac8-71b623750026 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "ee0ba1fa-8740-4670-9f6d-b658f89f7f21-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 23:55:27 np0005480824 nova_compute[260089]: 2025-10-11 03:55:27.004 2 DEBUG oslo_concurrency.lockutils [req-dc54d80e-b1d5-457b-b0a9-706e5f0cadb7 req-65d82fb8-6f30-4b2c-8ac8-71b623750026 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "ee0ba1fa-8740-4670-9f6d-b658f89f7f21-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 23:55:27 np0005480824 nova_compute[260089]: 2025-10-11 03:55:27.004 2 DEBUG oslo_concurrency.lockutils [req-dc54d80e-b1d5-457b-b0a9-706e5f0cadb7 req-65d82fb8-6f30-4b2c-8ac8-71b623750026 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "ee0ba1fa-8740-4670-9f6d-b658f89f7f21-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 23:55:27 np0005480824 nova_compute[260089]: 2025-10-11 03:55:27.005 2 DEBUG nova.compute.manager [req-dc54d80e-b1d5-457b-b0a9-706e5f0cadb7 req-65d82fb8-6f30-4b2c-8ac8-71b623750026 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] No waiting events found dispatching network-vif-plugged-eb363ce6-15fe-4b2a-a35e-06b06bbf4252 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 10 23:55:27 np0005480824 nova_compute[260089]: 2025-10-11 03:55:27.006 2 WARNING nova.compute.manager [req-dc54d80e-b1d5-457b-b0a9-706e5f0cadb7 req-65d82fb8-6f30-4b2c-8ac8-71b623750026 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] Received unexpected event network-vif-plugged-eb363ce6-15fe-4b2a-a35e-06b06bbf4252 for instance with vm_state active and task_state None.
Oct 10 23:55:27 np0005480824 NetworkManager[44969]: <info>  [1760154927.8043] manager: (patch-br-int-to-provnet-e62e0ad0-b027-41f2-91f0-70373ec97251): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/97)
Oct 10 23:55:27 np0005480824 NetworkManager[44969]: <info>  [1760154927.8061] manager: (patch-provnet-e62e0ad0-b027-41f2-91f0-70373ec97251-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/98)
Oct 10 23:55:27 np0005480824 nova_compute[260089]: 2025-10-11 03:55:27.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 23:55:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:55:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:55:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:55:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:55:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:55:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:55:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Optimize plan auto_2025-10-11_03:55:27
Oct 10 23:55:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 23:55:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] do_upmap
Oct 10 23:55:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'backups', 'images', '.rgw.root', 'default.rgw.log', 'default.rgw.meta', 'vms', 'cephfs.cephfs.data', 'default.rgw.control', 'volumes']
Oct 10 23:55:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] prepared 0/10 changes
Oct 10 23:55:27 np0005480824 nova_compute[260089]: 2025-10-11 03:55:27.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 23:55:27 np0005480824 ovn_controller[152667]: 2025-10-11T03:55:27Z|00170|binding|INFO|Releasing lport 039c7668-0b85-4466-9c66-62531405028d from this chassis (sb_readonly=0)
Oct 10 23:55:27 np0005480824 nova_compute[260089]: 2025-10-11 03:55:27.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 23:55:27 np0005480824 nova_compute[260089]: 2025-10-11 03:55:27.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 23:55:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 23:55:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:55:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 23:55:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:55:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:55:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:55:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:55:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:55:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:55:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:55:28 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e309 do_prune osdmap full prune enabled
Oct 10 23:55:28 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e310 e310: 3 total, 3 up, 3 in
Oct 10 23:55:28 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e310: 3 total, 3 up, 3 in
Oct 10 23:55:28 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1381: 321 pgs: 321 active+clean; 134 MiB data, 366 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 24 KiB/s wr, 280 op/s
Oct 10 23:55:28 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:55:28 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1943079549' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:55:28 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:55:28 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1943079549' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:55:29 np0005480824 nova_compute[260089]: 2025-10-11 03:55:29.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 23:55:29 np0005480824 nova_compute[260089]: 2025-10-11 03:55:29.078 2 DEBUG nova.compute.manager [req-e16dd050-2952-46bc-adb4-3bd69016f57b req-558e75ff-3c4b-4156-a9d7-3b7ded9c0a93 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] Received event network-changed-eb363ce6-15fe-4b2a-a35e-06b06bbf4252 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 23:55:29 np0005480824 nova_compute[260089]: 2025-10-11 03:55:29.078 2 DEBUG nova.compute.manager [req-e16dd050-2952-46bc-adb4-3bd69016f57b req-558e75ff-3c4b-4156-a9d7-3b7ded9c0a93 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] Refreshing instance network info cache due to event network-changed-eb363ce6-15fe-4b2a-a35e-06b06bbf4252. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 10 23:55:29 np0005480824 nova_compute[260089]: 2025-10-11 03:55:29.079 2 DEBUG oslo_concurrency.lockutils [req-e16dd050-2952-46bc-adb4-3bd69016f57b req-558e75ff-3c4b-4156-a9d7-3b7ded9c0a93 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "refresh_cache-ee0ba1fa-8740-4670-9f6d-b658f89f7f21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 10 23:55:29 np0005480824 nova_compute[260089]: 2025-10-11 03:55:29.079 2 DEBUG oslo_concurrency.lockutils [req-e16dd050-2952-46bc-adb4-3bd69016f57b req-558e75ff-3c4b-4156-a9d7-3b7ded9c0a93 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquired lock "refresh_cache-ee0ba1fa-8740-4670-9f6d-b658f89f7f21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 10 23:55:29 np0005480824 nova_compute[260089]: 2025-10-11 03:55:29.079 2 DEBUG nova.network.neutron [req-e16dd050-2952-46bc-adb4-3bd69016f57b req-558e75ff-3c4b-4156-a9d7-3b7ded9c0a93 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] Refreshing network info cache for port eb363ce6-15fe-4b2a-a35e-06b06bbf4252 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 10 23:55:29 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:55:30 np0005480824 podman[289288]: 2025-10-11 03:55:30.04973837 +0000 UTC m=+0.100736113 container health_status f2b19cad22d0fbdd185b264c3c8b6443d09a02319dc9fb0585dc69a0f24a758d (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, managed_by=edpm_ansible, tcib_managed=true)
Oct 10 23:55:30 np0005480824 podman[289287]: 2025-10-11 03:55:30.04974881 +0000 UTC m=+0.087903500 container health_status 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 10 23:55:30 np0005480824 nova_compute[260089]: 2025-10-11 03:55:30.093 2 DEBUG nova.network.neutron [req-e16dd050-2952-46bc-adb4-3bd69016f57b req-558e75ff-3c4b-4156-a9d7-3b7ded9c0a93 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] Updated VIF entry in instance network info cache for port eb363ce6-15fe-4b2a-a35e-06b06bbf4252. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 10 23:55:30 np0005480824 nova_compute[260089]: 2025-10-11 03:55:30.093 2 DEBUG nova.network.neutron [req-e16dd050-2952-46bc-adb4-3bd69016f57b req-558e75ff-3c4b-4156-a9d7-3b7ded9c0a93 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] Updating instance_info_cache with network_info: [{"id": "eb363ce6-15fe-4b2a-a35e-06b06bbf4252", "address": "fa:16:3e:f7:bc:e9", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb363ce6-15", "ovs_interfaceid": "eb363ce6-15fe-4b2a-a35e-06b06bbf4252", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:55:30 np0005480824 nova_compute[260089]: 2025-10-11 03:55:30.121 2 DEBUG oslo_concurrency.lockutils [req-e16dd050-2952-46bc-adb4-3bd69016f57b req-558e75ff-3c4b-4156-a9d7-3b7ded9c0a93 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Releasing lock "refresh_cache-ee0ba1fa-8740-4670-9f6d-b658f89f7f21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:55:30 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e310 do_prune osdmap full prune enabled
Oct 10 23:55:30 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e311 e311: 3 total, 3 up, 3 in
Oct 10 23:55:30 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e311: 3 total, 3 up, 3 in
Oct 10 23:55:30 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1383: 321 pgs: 321 active+clean; 134 MiB data, 366 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 22 KiB/s wr, 227 op/s
Oct 10 23:55:30 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:55:30 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2248633228' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:55:30 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:55:30 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2248633228' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:55:31 np0005480824 nova_compute[260089]: 2025-10-11 03:55:31.309 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:55:32 np0005480824 nova_compute[260089]: 2025-10-11 03:55:32.296 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:55:32 np0005480824 nova_compute[260089]: 2025-10-11 03:55:32.297 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:55:32 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:55:32 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3134252554' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:55:32 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:55:32 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3134252554' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:55:32 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1384: 321 pgs: 321 active+clean; 134 MiB data, 366 MiB used, 60 GiB / 60 GiB avail; 5.6 MiB/s rd, 24 KiB/s wr, 354 op/s
Oct 10 23:55:33 np0005480824 nova_compute[260089]: 2025-10-11 03:55:33.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:55:33 np0005480824 nova_compute[260089]: 2025-10-11 03:55:33.292 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:55:34 np0005480824 nova_compute[260089]: 2025-10-11 03:55:34.024 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:55:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:55:34 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/525397728' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:55:34 np0005480824 nova_compute[260089]: 2025-10-11 03:55:34.315 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:55:34 np0005480824 nova_compute[260089]: 2025-10-11 03:55:34.315 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 10 23:55:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e311 do_prune osdmap full prune enabled
Oct 10 23:55:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e312 e312: 3 total, 3 up, 3 in
Oct 10 23:55:34 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e312: 3 total, 3 up, 3 in
Oct 10 23:55:34 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1386: 321 pgs: 321 active+clean; 134 MiB data, 366 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.1 KiB/s wr, 157 op/s
Oct 10 23:55:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:55:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e312 do_prune osdmap full prune enabled
Oct 10 23:55:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e313 e313: 3 total, 3 up, 3 in
Oct 10 23:55:34 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e313: 3 total, 3 up, 3 in
Oct 10 23:55:35 np0005480824 nova_compute[260089]: 2025-10-11 03:55:35.296 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:55:35 np0005480824 nova_compute[260089]: 2025-10-11 03:55:35.297 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 10 23:55:35 np0005480824 nova_compute[260089]: 2025-10-11 03:55:35.376 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 10 23:55:35 np0005480824 nova_compute[260089]: 2025-10-11 03:55:35.377 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:55:35 np0005480824 nova_compute[260089]: 2025-10-11 03:55:35.406 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:55:35 np0005480824 nova_compute[260089]: 2025-10-11 03:55:35.406 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:55:35 np0005480824 nova_compute[260089]: 2025-10-11 03:55:35.407 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:55:35 np0005480824 nova_compute[260089]: 2025-10-11 03:55:35.407 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 10 23:55:35 np0005480824 nova_compute[260089]: 2025-10-11 03:55:35.407 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:55:35 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:55:35 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1108313895' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:55:35 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e313 do_prune osdmap full prune enabled
Oct 10 23:55:35 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e314 e314: 3 total, 3 up, 3 in
Oct 10 23:55:35 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e314: 3 total, 3 up, 3 in
Oct 10 23:55:35 np0005480824 nova_compute[260089]: 2025-10-11 03:55:35.884 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:55:35 np0005480824 nova_compute[260089]: 2025-10-11 03:55:35.958 2 DEBUG nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 10 23:55:35 np0005480824 nova_compute[260089]: 2025-10-11 03:55:35.959 2 DEBUG nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 10 23:55:36 np0005480824 nova_compute[260089]: 2025-10-11 03:55:36.164 2 WARNING nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 10 23:55:36 np0005480824 nova_compute[260089]: 2025-10-11 03:55:36.165 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4283MB free_disk=59.98813247680664GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 10 23:55:36 np0005480824 nova_compute[260089]: 2025-10-11 03:55:36.165 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:55:36 np0005480824 nova_compute[260089]: 2025-10-11 03:55:36.166 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:55:36 np0005480824 nova_compute[260089]: 2025-10-11 03:55:36.239 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Instance ee0ba1fa-8740-4670-9f6d-b658f89f7f21 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 10 23:55:36 np0005480824 nova_compute[260089]: 2025-10-11 03:55:36.240 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 10 23:55:36 np0005480824 nova_compute[260089]: 2025-10-11 03:55:36.240 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 10 23:55:36 np0005480824 nova_compute[260089]: 2025-10-11 03:55:36.278 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:55:36 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:55:36 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1319295462' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:55:36 np0005480824 nova_compute[260089]: 2025-10-11 03:55:36.751 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:55:36 np0005480824 nova_compute[260089]: 2025-10-11 03:55:36.758 2 DEBUG nova.compute.provider_tree [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 23:55:36 np0005480824 ceph-osd[89401]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Oct 10 23:55:36 np0005480824 nova_compute[260089]: 2025-10-11 03:55:36.778 2 DEBUG nova.scheduler.client.report [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 23:55:36 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1389: 321 pgs: 321 active+clean; 134 MiB data, 366 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 3.3 KiB/s wr, 169 op/s
Oct 10 23:55:36 np0005480824 nova_compute[260089]: 2025-10-11 03:55:36.797 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 10 23:55:36 np0005480824 nova_compute[260089]: 2025-10-11 03:55:36.798 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.632s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:55:37 np0005480824 podman[289369]: 2025-10-11 03:55:37.080538723 +0000 UTC m=+0.142199702 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 10 23:55:37 np0005480824 nova_compute[260089]: 2025-10-11 03:55:37.718 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:55:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 23:55:38 np0005480824 nova_compute[260089]: 2025-10-11 03:55:38.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:55:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:55:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 23:55:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:55:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.480037605000977e-06 of space, bias 1.0, pg target 0.0007440112815002931 quantized to 32 (current 32)
Oct 10 23:55:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:55:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0006929479431204011 of space, bias 1.0, pg target 0.20788438293612033 quantized to 32 (current 32)
Oct 10 23:55:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:55:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 23:55:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:55:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 10 23:55:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:55:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 23:55:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:55:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:55:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:55:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 10 23:55:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:55:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 23:55:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:55:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:55:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:55:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 10 23:55:38 np0005480824 nova_compute[260089]: 2025-10-11 03:55:38.298 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:55:38 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1390: 321 pgs: 321 active+clean; 235 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 9.2 MiB/s wr, 220 op/s
Oct 10 23:55:39 np0005480824 nova_compute[260089]: 2025-10-11 03:55:39.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:55:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e314 do_prune osdmap full prune enabled
Oct 10 23:55:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e315 e315: 3 total, 3 up, 3 in
Oct 10 23:55:39 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e315: 3 total, 3 up, 3 in
Oct 10 23:55:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:55:39 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2561756092' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:55:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:55:40 np0005480824 ovn_controller[152667]: 2025-10-11T03:55:40Z|00034|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f7:bc:e9 10.100.0.14
Oct 10 23:55:40 np0005480824 ovn_controller[152667]: 2025-10-11T03:55:40Z|00035|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f7:bc:e9 10.100.0.14
Oct 10 23:55:40 np0005480824 nova_compute[260089]: 2025-10-11 03:55:40.293 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:55:40 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1392: 321 pgs: 321 active+clean; 235 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 9.2 MiB/s wr, 220 op/s
Oct 10 23:55:41 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e315 do_prune osdmap full prune enabled
Oct 10 23:55:41 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e316 e316: 3 total, 3 up, 3 in
Oct 10 23:55:41 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e316: 3 total, 3 up, 3 in
Oct 10 23:55:42 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1394: 321 pgs: 321 active+clean; 306 MiB data, 473 MiB used, 60 GiB / 60 GiB avail; 6.8 MiB/s rd, 13 MiB/s wr, 377 op/s
Oct 10 23:55:43 np0005480824 podman[289395]: 2025-10-11 03:55:43.009720921 +0000 UTC m=+0.062262853 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Oct 10 23:55:43 np0005480824 nova_compute[260089]: 2025-10-11 03:55:43.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:55:43 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:55:43.460 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:30:f4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'fe:89:7c:57:3f:71'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 23:55:43 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:55:43.461 162245 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 10 23:55:43 np0005480824 nova_compute[260089]: 2025-10-11 03:55:43.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:55:44 np0005480824 nova_compute[260089]: 2025-10-11 03:55:44.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:55:44 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e316 do_prune osdmap full prune enabled
Oct 10 23:55:44 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e317 e317: 3 total, 3 up, 3 in
Oct 10 23:55:44 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e317: 3 total, 3 up, 3 in
Oct 10 23:55:44 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1396: 321 pgs: 321 active+clean; 306 MiB data, 473 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 5.7 MiB/s wr, 214 op/s
Oct 10 23:55:44 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:55:45 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:55:45 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3874814313' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:55:45 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:55:45 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3874814313' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:55:46 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:55:46.464 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=14b06507-d00b-4e27-a47d-46a5c2644635, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:55:46 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e317 do_prune osdmap full prune enabled
Oct 10 23:55:46 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e318 e318: 3 total, 3 up, 3 in
Oct 10 23:55:46 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e318: 3 total, 3 up, 3 in
Oct 10 23:55:46 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1398: 321 pgs: 321 active+clean; 306 MiB data, 473 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 5.7 MiB/s wr, 214 op/s
Oct 10 23:55:47 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:55:47 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3793882031' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:55:47 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:55:47 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3793882031' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:55:47 np0005480824 nova_compute[260089]: 2025-10-11 03:55:47.435 2 DEBUG oslo_concurrency.lockutils [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] Acquiring lock "0403d8e6-23d4-4765-a41f-eed96752c52e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:55:47 np0005480824 nova_compute[260089]: 2025-10-11 03:55:47.436 2 DEBUG oslo_concurrency.lockutils [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] Lock "0403d8e6-23d4-4765-a41f-eed96752c52e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:55:47 np0005480824 nova_compute[260089]: 2025-10-11 03:55:47.454 2 DEBUG nova.compute.manager [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 10 23:55:47 np0005480824 nova_compute[260089]: 2025-10-11 03:55:47.545 2 DEBUG oslo_concurrency.lockutils [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:55:47 np0005480824 nova_compute[260089]: 2025-10-11 03:55:47.546 2 DEBUG oslo_concurrency.lockutils [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:55:47 np0005480824 nova_compute[260089]: 2025-10-11 03:55:47.557 2 DEBUG nova.virt.hardware [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 10 23:55:47 np0005480824 nova_compute[260089]: 2025-10-11 03:55:47.557 2 INFO nova.compute.claims [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 10 23:55:47 np0005480824 nova_compute[260089]: 2025-10-11 03:55:47.708 2 DEBUG oslo_concurrency.processutils [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:55:48 np0005480824 nova_compute[260089]: 2025-10-11 03:55:48.036 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:55:48 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:55:48 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1362903611' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:55:48 np0005480824 nova_compute[260089]: 2025-10-11 03:55:48.212 2 DEBUG oslo_concurrency.processutils [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:55:48 np0005480824 nova_compute[260089]: 2025-10-11 03:55:48.219 2 DEBUG nova.compute.provider_tree [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 23:55:48 np0005480824 nova_compute[260089]: 2025-10-11 03:55:48.261 2 DEBUG nova.scheduler.client.report [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 23:55:48 np0005480824 nova_compute[260089]: 2025-10-11 03:55:48.291 2 DEBUG oslo_concurrency.lockutils [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.745s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:55:48 np0005480824 nova_compute[260089]: 2025-10-11 03:55:48.292 2 DEBUG nova.compute.manager [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 10 23:55:48 np0005480824 nova_compute[260089]: 2025-10-11 03:55:48.362 2 DEBUG nova.compute.manager [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 10 23:55:48 np0005480824 nova_compute[260089]: 2025-10-11 03:55:48.363 2 DEBUG nova.network.neutron [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 10 23:55:48 np0005480824 nova_compute[260089]: 2025-10-11 03:55:48.386 2 INFO nova.virt.libvirt.driver [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 10 23:55:48 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:55:48 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3148268112' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:55:48 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:55:48 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3148268112' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:55:48 np0005480824 nova_compute[260089]: 2025-10-11 03:55:48.453 2 DEBUG nova.compute.manager [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 10 23:55:48 np0005480824 nova_compute[260089]: 2025-10-11 03:55:48.511 2 INFO nova.virt.block_device [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Booting with volume a2eeef68-6e07-491f-ba12-26f37ef87b28 at /dev/vda#033[00m
Oct 10 23:55:48 np0005480824 nova_compute[260089]: 2025-10-11 03:55:48.616 2 DEBUG nova.policy [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ba815f7813ad434aa05e27f214de0632', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5f36ed779ede42228be9ab8544bbf9aa', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 10 23:55:48 np0005480824 nova_compute[260089]: 2025-10-11 03:55:48.654 2 DEBUG os_brick.utils [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct 10 23:55:48 np0005480824 nova_compute[260089]: 2025-10-11 03:55:48.655 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:55:48 np0005480824 nova_compute[260089]: 2025-10-11 03:55:48.676 676 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:55:48 np0005480824 nova_compute[260089]: 2025-10-11 03:55:48.676 676 DEBUG oslo.privsep.daemon [-] privsep: reply[01589db9-5803-4223-9200-3ac382e8653b]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:55:48 np0005480824 nova_compute[260089]: 2025-10-11 03:55:48.678 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:55:48 np0005480824 nova_compute[260089]: 2025-10-11 03:55:48.689 676 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:55:48 np0005480824 nova_compute[260089]: 2025-10-11 03:55:48.689 676 DEBUG oslo.privsep.daemon [-] privsep: reply[b30fee7f-2726-4e3b-bc64-a0f10db4c387]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d5d671ddab5a', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:55:48 np0005480824 nova_compute[260089]: 2025-10-11 03:55:48.690 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:55:48 np0005480824 nova_compute[260089]: 2025-10-11 03:55:48.710 676 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:55:48 np0005480824 nova_compute[260089]: 2025-10-11 03:55:48.710 676 DEBUG oslo.privsep.daemon [-] privsep: reply[81ac5de1-9b3e-4f4a-89c8-bf7e04e0b4d8]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:55:48 np0005480824 nova_compute[260089]: 2025-10-11 03:55:48.712 676 DEBUG oslo.privsep.daemon [-] privsep: reply[50f1aaf9-6622-48a4-954d-9d2325336468]: (4, 'fb3a2fb1-9efa-43f0-a057-bf422ac6b8d7') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:55:48 np0005480824 nova_compute[260089]: 2025-10-11 03:55:48.713 2 DEBUG oslo_concurrency.processutils [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:55:48 np0005480824 nova_compute[260089]: 2025-10-11 03:55:48.749 2 DEBUG oslo_concurrency.processutils [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] CMD "nvme version" returned: 0 in 0.035s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:55:48 np0005480824 nova_compute[260089]: 2025-10-11 03:55:48.752 2 DEBUG os_brick.initiator.connectors.lightos [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct 10 23:55:48 np0005480824 nova_compute[260089]: 2025-10-11 03:55:48.753 2 DEBUG os_brick.initiator.connectors.lightos [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct 10 23:55:48 np0005480824 nova_compute[260089]: 2025-10-11 03:55:48.753 2 DEBUG os_brick.initiator.connectors.lightos [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct 10 23:55:48 np0005480824 nova_compute[260089]: 2025-10-11 03:55:48.753 2 DEBUG os_brick.utils [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] <== get_connector_properties: return (99ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d5d671ddab5a', 'do_local_attach': False, 'nvme_hostid': '83042a20-0f72-4c47-8453-e72ead378624', 'system uuid': 'fb3a2fb1-9efa-43f0-a057-bf422ac6b8d7', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct 10 23:55:48 np0005480824 nova_compute[260089]: 2025-10-11 03:55:48.754 2 DEBUG nova.virt.block_device [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Updating existing volume attachment record: d6c4ec55-d21b-4fe2-adfc-ae2aa4f9c4fd _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct 10 23:55:48 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1399: 321 pgs: 4 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 308 active+clean; 306 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 4.7 MiB/s wr, 312 op/s
Oct 10 23:55:49 np0005480824 nova_compute[260089]: 2025-10-11 03:55:49.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:55:49 np0005480824 nova_compute[260089]: 2025-10-11 03:55:49.253 2 DEBUG nova.network.neutron [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Successfully created port: 7a5e381a-dccf-47ae-a39a-87d08faf2a0f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 10 23:55:49 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:55:49 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/749709021' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:55:49 np0005480824 nova_compute[260089]: 2025-10-11 03:55:49.746 2 DEBUG nova.compute.manager [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 10 23:55:49 np0005480824 nova_compute[260089]: 2025-10-11 03:55:49.748 2 DEBUG nova.virt.libvirt.driver [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 10 23:55:49 np0005480824 nova_compute[260089]: 2025-10-11 03:55:49.748 2 INFO nova.virt.libvirt.driver [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Creating image(s)#033[00m
Oct 10 23:55:49 np0005480824 nova_compute[260089]: 2025-10-11 03:55:49.749 2 DEBUG nova.virt.libvirt.driver [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Oct 10 23:55:49 np0005480824 nova_compute[260089]: 2025-10-11 03:55:49.749 2 DEBUG nova.virt.libvirt.driver [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Ensure instance console log exists: /var/lib/nova/instances/0403d8e6-23d4-4765-a41f-eed96752c52e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 10 23:55:49 np0005480824 nova_compute[260089]: 2025-10-11 03:55:49.749 2 DEBUG oslo_concurrency.lockutils [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:55:49 np0005480824 nova_compute[260089]: 2025-10-11 03:55:49.750 2 DEBUG oslo_concurrency.lockutils [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:55:49 np0005480824 nova_compute[260089]: 2025-10-11 03:55:49.750 2 DEBUG oslo_concurrency.lockutils [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:55:49 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e318 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:55:49 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e318 do_prune osdmap full prune enabled
Oct 10 23:55:49 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e319 e319: 3 total, 3 up, 3 in
Oct 10 23:55:49 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e319: 3 total, 3 up, 3 in
Oct 10 23:55:49 np0005480824 nova_compute[260089]: 2025-10-11 03:55:49.978 2 DEBUG nova.network.neutron [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Successfully updated port: 7a5e381a-dccf-47ae-a39a-87d08faf2a0f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 10 23:55:49 np0005480824 nova_compute[260089]: 2025-10-11 03:55:49.997 2 DEBUG oslo_concurrency.lockutils [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] Acquiring lock "refresh_cache-0403d8e6-23d4-4765-a41f-eed96752c52e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:55:49 np0005480824 nova_compute[260089]: 2025-10-11 03:55:49.997 2 DEBUG oslo_concurrency.lockutils [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] Acquired lock "refresh_cache-0403d8e6-23d4-4765-a41f-eed96752c52e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:55:49 np0005480824 nova_compute[260089]: 2025-10-11 03:55:49.998 2 DEBUG nova.network.neutron [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 10 23:55:50 np0005480824 nova_compute[260089]: 2025-10-11 03:55:50.097 2 DEBUG nova.compute.manager [req-0752365c-f87f-4eea-941b-afef71fa4ea8 req-c481756b-1bf4-450d-ba2f-22fdd439a56e 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Received event network-changed-7a5e381a-dccf-47ae-a39a-87d08faf2a0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:55:50 np0005480824 nova_compute[260089]: 2025-10-11 03:55:50.098 2 DEBUG nova.compute.manager [req-0752365c-f87f-4eea-941b-afef71fa4ea8 req-c481756b-1bf4-450d-ba2f-22fdd439a56e 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Refreshing instance network info cache due to event network-changed-7a5e381a-dccf-47ae-a39a-87d08faf2a0f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 10 23:55:50 np0005480824 nova_compute[260089]: 2025-10-11 03:55:50.098 2 DEBUG oslo_concurrency.lockutils [req-0752365c-f87f-4eea-941b-afef71fa4ea8 req-c481756b-1bf4-450d-ba2f-22fdd439a56e 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "refresh_cache-0403d8e6-23d4-4765-a41f-eed96752c52e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:55:50 np0005480824 nova_compute[260089]: 2025-10-11 03:55:50.170 2 DEBUG nova.network.neutron [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 10 23:55:50 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1401: 321 pgs: 4 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 308 active+clean; 306 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 126 KiB/s rd, 27 KiB/s wr, 160 op/s
Oct 10 23:55:51 np0005480824 nova_compute[260089]: 2025-10-11 03:55:51.073 2 DEBUG nova.network.neutron [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Updating instance_info_cache with network_info: [{"id": "7a5e381a-dccf-47ae-a39a-87d08faf2a0f", "address": "fa:16:3e:02:fc:94", "network": {"id": "821e091d-e4da-4318-a5fb-3fc44a19fc25", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1425502666-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f36ed779ede42228be9ab8544bbf9aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a5e381a-dc", "ovs_interfaceid": "7a5e381a-dccf-47ae-a39a-87d08faf2a0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:55:51 np0005480824 nova_compute[260089]: 2025-10-11 03:55:51.099 2 DEBUG oslo_concurrency.lockutils [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] Releasing lock "refresh_cache-0403d8e6-23d4-4765-a41f-eed96752c52e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:55:51 np0005480824 nova_compute[260089]: 2025-10-11 03:55:51.100 2 DEBUG nova.compute.manager [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Instance network_info: |[{"id": "7a5e381a-dccf-47ae-a39a-87d08faf2a0f", "address": "fa:16:3e:02:fc:94", "network": {"id": "821e091d-e4da-4318-a5fb-3fc44a19fc25", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1425502666-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f36ed779ede42228be9ab8544bbf9aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a5e381a-dc", "ovs_interfaceid": "7a5e381a-dccf-47ae-a39a-87d08faf2a0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 10 23:55:51 np0005480824 nova_compute[260089]: 2025-10-11 03:55:51.100 2 DEBUG oslo_concurrency.lockutils [req-0752365c-f87f-4eea-941b-afef71fa4ea8 req-c481756b-1bf4-450d-ba2f-22fdd439a56e 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquired lock "refresh_cache-0403d8e6-23d4-4765-a41f-eed96752c52e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:55:51 np0005480824 nova_compute[260089]: 2025-10-11 03:55:51.100 2 DEBUG nova.network.neutron [req-0752365c-f87f-4eea-941b-afef71fa4ea8 req-c481756b-1bf4-450d-ba2f-22fdd439a56e 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Refreshing network info cache for port 7a5e381a-dccf-47ae-a39a-87d08faf2a0f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 10 23:55:51 np0005480824 nova_compute[260089]: 2025-10-11 03:55:51.103 2 DEBUG nova.virt.libvirt.driver [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Start _get_guest_xml network_info=[{"id": "7a5e381a-dccf-47ae-a39a-87d08faf2a0f", "address": "fa:16:3e:02:fc:94", "network": {"id": "821e091d-e4da-4318-a5fb-3fc44a19fc25", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1425502666-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f36ed779ede42228be9ab8544bbf9aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a5e381a-dc", "ovs_interfaceid": "7a5e381a-dccf-47ae-a39a-87d08faf2a0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'attachment_id': 'd6c4ec55-d21b-4fe2-adfc-ae2aa4f9c4fd', 'mount_device': '/dev/vda', 'delete_on_termination': False, 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-a2eeef68-6e07-491f-ba12-26f37ef87b28', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'a2eeef68-6e07-491f-ba12-26f37ef87b28', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '0403d8e6-23d4-4765-a41f-eed96752c52e', 'attached_at': '', 'detached_at': '', 'volume_id': 'a2eeef68-6e07-491f-ba12-26f37ef87b28', 'serial': 'a2eeef68-6e07-491f-ba12-26f37ef87b28'}, 'device_type': 'disk', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 10 23:55:51 np0005480824 nova_compute[260089]: 2025-10-11 03:55:51.107 2 WARNING nova.virt.libvirt.driver [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 10 23:55:51 np0005480824 nova_compute[260089]: 2025-10-11 03:55:51.112 2 DEBUG nova.virt.libvirt.host [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 10 23:55:51 np0005480824 nova_compute[260089]: 2025-10-11 03:55:51.113 2 DEBUG nova.virt.libvirt.host [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 10 23:55:51 np0005480824 nova_compute[260089]: 2025-10-11 03:55:51.121 2 DEBUG nova.virt.libvirt.host [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 10 23:55:51 np0005480824 nova_compute[260089]: 2025-10-11 03:55:51.122 2 DEBUG nova.virt.libvirt.host [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 10 23:55:51 np0005480824 nova_compute[260089]: 2025-10-11 03:55:51.122 2 DEBUG nova.virt.libvirt.driver [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 10 23:55:51 np0005480824 nova_compute[260089]: 2025-10-11 03:55:51.122 2 DEBUG nova.virt.hardware [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T03:44:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6707ecae-2ae2-4c2d-86dc-409bac38f6a5',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 10 23:55:51 np0005480824 nova_compute[260089]: 2025-10-11 03:55:51.123 2 DEBUG nova.virt.hardware [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 10 23:55:51 np0005480824 nova_compute[260089]: 2025-10-11 03:55:51.123 2 DEBUG nova.virt.hardware [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 10 23:55:51 np0005480824 nova_compute[260089]: 2025-10-11 03:55:51.123 2 DEBUG nova.virt.hardware [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 10 23:55:51 np0005480824 nova_compute[260089]: 2025-10-11 03:55:51.123 2 DEBUG nova.virt.hardware [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 10 23:55:51 np0005480824 nova_compute[260089]: 2025-10-11 03:55:51.124 2 DEBUG nova.virt.hardware [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 10 23:55:51 np0005480824 nova_compute[260089]: 2025-10-11 03:55:51.124 2 DEBUG nova.virt.hardware [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 10 23:55:51 np0005480824 nova_compute[260089]: 2025-10-11 03:55:51.124 2 DEBUG nova.virt.hardware [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 10 23:55:51 np0005480824 nova_compute[260089]: 2025-10-11 03:55:51.124 2 DEBUG nova.virt.hardware [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 10 23:55:51 np0005480824 nova_compute[260089]: 2025-10-11 03:55:51.125 2 DEBUG nova.virt.hardware [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 10 23:55:51 np0005480824 nova_compute[260089]: 2025-10-11 03:55:51.125 2 DEBUG nova.virt.hardware [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 10 23:55:51 np0005480824 nova_compute[260089]: 2025-10-11 03:55:51.154 2 DEBUG nova.storage.rbd_utils [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] rbd image 0403d8e6-23d4-4765-a41f-eed96752c52e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:55:51 np0005480824 nova_compute[260089]: 2025-10-11 03:55:51.160 2 DEBUG oslo_concurrency.processutils [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:55:51 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:55:51 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2613420819' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:55:51 np0005480824 nova_compute[260089]: 2025-10-11 03:55:51.591 2 DEBUG oslo_concurrency.processutils [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:55:51 np0005480824 nova_compute[260089]: 2025-10-11 03:55:51.618 2 DEBUG nova.virt.libvirt.vif [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T03:55:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBackupRestore-server-1812578238',display_name='tempest-TestVolumeBackupRestore-server-1812578238',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebackuprestore-server-1812578238',id=19,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOcZqlWznkTSPt7YqL58kLpY6xBe8Ue8Yu9g5Fx6W9nijrGZzvH0hybC3ENKmePVhJj9AL8vstvMZEi4+ASaw20cil6ZF7IGGtP2ziwcq2zq7ghU3mbyjhm+18aIJfy/yQ==',key_name='tempest-TestVolumeBackupRestore-1282533698',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f36ed779ede42228be9ab8544bbf9aa',ramdisk_id='',reservation_id='r-03ovkgqk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBackupRestore-1363850772',owner_user_name='tempest-TestVolumeBackupRestore-1363850772-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T03:55:48Z,user_data=None,user_id='ba815f7813ad434aa05e27f214de0632',uuid=0403d8e6-23d4-4765-a41f-eed96752c52e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7a5e381a-dccf-47ae-a39a-87d08faf2a0f", "address": "fa:16:3e:02:fc:94", "network": {"id": "821e091d-e4da-4318-a5fb-3fc44a19fc25", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1425502666-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"5f36ed779ede42228be9ab8544bbf9aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a5e381a-dc", "ovs_interfaceid": "7a5e381a-dccf-47ae-a39a-87d08faf2a0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 10 23:55:51 np0005480824 nova_compute[260089]: 2025-10-11 03:55:51.619 2 DEBUG nova.network.os_vif_util [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] Converting VIF {"id": "7a5e381a-dccf-47ae-a39a-87d08faf2a0f", "address": "fa:16:3e:02:fc:94", "network": {"id": "821e091d-e4da-4318-a5fb-3fc44a19fc25", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1425502666-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f36ed779ede42228be9ab8544bbf9aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a5e381a-dc", "ovs_interfaceid": "7a5e381a-dccf-47ae-a39a-87d08faf2a0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 23:55:51 np0005480824 nova_compute[260089]: 2025-10-11 03:55:51.621 2 DEBUG nova.network.os_vif_util [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:02:fc:94,bridge_name='br-int',has_traffic_filtering=True,id=7a5e381a-dccf-47ae-a39a-87d08faf2a0f,network=Network(821e091d-e4da-4318-a5fb-3fc44a19fc25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7a5e381a-dc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 23:55:51 np0005480824 nova_compute[260089]: 2025-10-11 03:55:51.623 2 DEBUG nova.objects.instance [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] Lazy-loading 'pci_devices' on Instance uuid 0403d8e6-23d4-4765-a41f-eed96752c52e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:55:51 np0005480824 nova_compute[260089]: 2025-10-11 03:55:51.640 2 DEBUG nova.virt.libvirt.driver [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] End _get_guest_xml xml=<domain type="kvm">
Oct 10 23:55:51 np0005480824 nova_compute[260089]:  <uuid>0403d8e6-23d4-4765-a41f-eed96752c52e</uuid>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:  <name>instance-00000013</name>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:  <memory>131072</memory>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:  <vcpu>1</vcpu>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:  <metadata>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 10 23:55:51 np0005480824 nova_compute[260089]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:      <nova:name>tempest-TestVolumeBackupRestore-server-1812578238</nova:name>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:      <nova:creationTime>2025-10-11 03:55:51</nova:creationTime>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:      <nova:flavor name="m1.nano">
Oct 10 23:55:51 np0005480824 nova_compute[260089]:        <nova:memory>128</nova:memory>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:        <nova:disk>1</nova:disk>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:        <nova:swap>0</nova:swap>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:        <nova:ephemeral>0</nova:ephemeral>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:        <nova:vcpus>1</nova:vcpus>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:      </nova:flavor>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:      <nova:owner>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:        <nova:user uuid="ba815f7813ad434aa05e27f214de0632">tempest-TestVolumeBackupRestore-1363850772-project-member</nova:user>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:        <nova:project uuid="5f36ed779ede42228be9ab8544bbf9aa">tempest-TestVolumeBackupRestore-1363850772</nova:project>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:      </nova:owner>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:      <nova:ports>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:        <nova:port uuid="7a5e381a-dccf-47ae-a39a-87d08faf2a0f">
Oct 10 23:55:51 np0005480824 nova_compute[260089]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:        </nova:port>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:      </nova:ports>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:    </nova:instance>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:  </metadata>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:  <sysinfo type="smbios">
Oct 10 23:55:51 np0005480824 nova_compute[260089]:    <system>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:      <entry name="manufacturer">RDO</entry>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:      <entry name="product">OpenStack Compute</entry>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:      <entry name="serial">0403d8e6-23d4-4765-a41f-eed96752c52e</entry>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:      <entry name="uuid">0403d8e6-23d4-4765-a41f-eed96752c52e</entry>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:      <entry name="family">Virtual Machine</entry>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:    </system>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:  </sysinfo>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:  <os>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:    <boot dev="hd"/>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:    <smbios mode="sysinfo"/>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:  </os>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:  <features>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:    <acpi/>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:    <apic/>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:    <vmcoreinfo/>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:  </features>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:  <clock offset="utc">
Oct 10 23:55:51 np0005480824 nova_compute[260089]:    <timer name="pit" tickpolicy="delay"/>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:    <timer name="hpet" present="no"/>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:  </clock>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:  <cpu mode="host-model" match="exact">
Oct 10 23:55:51 np0005480824 nova_compute[260089]:    <topology sockets="1" cores="1" threads="1"/>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:  </cpu>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:  <devices>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:    <disk type="network" device="cdrom">
Oct 10 23:55:51 np0005480824 nova_compute[260089]:      <driver type="raw" cache="none"/>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:      <source protocol="rbd" name="vms/0403d8e6-23d4-4765-a41f-eed96752c52e_disk.config">
Oct 10 23:55:51 np0005480824 nova_compute[260089]:        <host name="192.168.122.100" port="6789"/>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:      </source>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:      <auth username="openstack">
Oct 10 23:55:51 np0005480824 nova_compute[260089]:        <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:      </auth>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:      <target dev="sda" bus="sata"/>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:    </disk>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:    <disk type="network" device="disk">
Oct 10 23:55:51 np0005480824 nova_compute[260089]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:      <source protocol="rbd" name="volumes/volume-a2eeef68-6e07-491f-ba12-26f37ef87b28">
Oct 10 23:55:51 np0005480824 nova_compute[260089]:        <host name="192.168.122.100" port="6789"/>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:      </source>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:      <auth username="openstack">
Oct 10 23:55:51 np0005480824 nova_compute[260089]:        <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:      </auth>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:      <target dev="vda" bus="virtio"/>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:      <serial>a2eeef68-6e07-491f-ba12-26f37ef87b28</serial>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:    </disk>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:    <interface type="ethernet">
Oct 10 23:55:51 np0005480824 nova_compute[260089]:      <mac address="fa:16:3e:02:fc:94"/>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:      <model type="virtio"/>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:      <driver name="vhost" rx_queue_size="512"/>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:      <mtu size="1442"/>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:      <target dev="tap7a5e381a-dc"/>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:    </interface>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:    <serial type="pty">
Oct 10 23:55:51 np0005480824 nova_compute[260089]:      <log file="/var/lib/nova/instances/0403d8e6-23d4-4765-a41f-eed96752c52e/console.log" append="off"/>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:    </serial>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:    <video>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:      <model type="virtio"/>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:    </video>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:    <input type="tablet" bus="usb"/>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:    <rng model="virtio">
Oct 10 23:55:51 np0005480824 nova_compute[260089]:      <backend model="random">/dev/urandom</backend>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:    </rng>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root"/>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:    <controller type="usb" index="0"/>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:    <memballoon model="virtio">
Oct 10 23:55:51 np0005480824 nova_compute[260089]:      <stats period="10"/>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:    </memballoon>
Oct 10 23:55:51 np0005480824 nova_compute[260089]:  </devices>
Oct 10 23:55:51 np0005480824 nova_compute[260089]: </domain>
Oct 10 23:55:51 np0005480824 nova_compute[260089]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 10 23:55:51 np0005480824 nova_compute[260089]: 2025-10-11 03:55:51.641 2 DEBUG nova.compute.manager [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Preparing to wait for external event network-vif-plugged-7a5e381a-dccf-47ae-a39a-87d08faf2a0f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 10 23:55:51 np0005480824 nova_compute[260089]: 2025-10-11 03:55:51.642 2 DEBUG oslo_concurrency.lockutils [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] Acquiring lock "0403d8e6-23d4-4765-a41f-eed96752c52e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:55:51 np0005480824 nova_compute[260089]: 2025-10-11 03:55:51.642 2 DEBUG oslo_concurrency.lockutils [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] Lock "0403d8e6-23d4-4765-a41f-eed96752c52e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:55:51 np0005480824 nova_compute[260089]: 2025-10-11 03:55:51.642 2 DEBUG oslo_concurrency.lockutils [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] Lock "0403d8e6-23d4-4765-a41f-eed96752c52e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:55:51 np0005480824 nova_compute[260089]: 2025-10-11 03:55:51.644 2 DEBUG nova.virt.libvirt.vif [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T03:55:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBackupRestore-server-1812578238',display_name='tempest-TestVolumeBackupRestore-server-1812578238',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebackuprestore-server-1812578238',id=19,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOcZqlWznkTSPt7YqL58kLpY6xBe8Ue8Yu9g5Fx6W9nijrGZzvH0hybC3ENKmePVhJj9AL8vstvMZEi4+ASaw20cil6ZF7IGGtP2ziwcq2zq7ghU3mbyjhm+18aIJfy/yQ==',key_name='tempest-TestVolumeBackupRestore-1282533698',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f36ed779ede42228be9ab8544bbf9aa',ramdisk_id='',reservation_id='r-03ovkgqk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBackupRestore-1363850772',owner_user_name='tempest-TestVolumeBackupRestore-1363850772-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T03:55:48Z,user_data=None,user_id='ba815f7813ad434aa05e27f214de0632',uuid=0403d8e6-23d4-4765-a41f-eed96752c52e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7a5e381a-dccf-47ae-a39a-87d08faf2a0f", "address": "fa:16:3e:02:fc:94", "network": {"id": "821e091d-e4da-4318-a5fb-3fc44a19fc25", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1425502666-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "5f36ed779ede42228be9ab8544bbf9aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a5e381a-dc", "ovs_interfaceid": "7a5e381a-dccf-47ae-a39a-87d08faf2a0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 10 23:55:51 np0005480824 nova_compute[260089]: 2025-10-11 03:55:51.645 2 DEBUG nova.network.os_vif_util [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] Converting VIF {"id": "7a5e381a-dccf-47ae-a39a-87d08faf2a0f", "address": "fa:16:3e:02:fc:94", "network": {"id": "821e091d-e4da-4318-a5fb-3fc44a19fc25", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1425502666-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f36ed779ede42228be9ab8544bbf9aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a5e381a-dc", "ovs_interfaceid": "7a5e381a-dccf-47ae-a39a-87d08faf2a0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 23:55:51 np0005480824 nova_compute[260089]: 2025-10-11 03:55:51.646 2 DEBUG nova.network.os_vif_util [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:02:fc:94,bridge_name='br-int',has_traffic_filtering=True,id=7a5e381a-dccf-47ae-a39a-87d08faf2a0f,network=Network(821e091d-e4da-4318-a5fb-3fc44a19fc25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7a5e381a-dc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 23:55:51 np0005480824 nova_compute[260089]: 2025-10-11 03:55:51.647 2 DEBUG os_vif [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:fc:94,bridge_name='br-int',has_traffic_filtering=True,id=7a5e381a-dccf-47ae-a39a-87d08faf2a0f,network=Network(821e091d-e4da-4318-a5fb-3fc44a19fc25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7a5e381a-dc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 10 23:55:51 np0005480824 nova_compute[260089]: 2025-10-11 03:55:51.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:55:51 np0005480824 nova_compute[260089]: 2025-10-11 03:55:51.649 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:55:51 np0005480824 nova_compute[260089]: 2025-10-11 03:55:51.650 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 10 23:55:51 np0005480824 nova_compute[260089]: 2025-10-11 03:55:51.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:55:51 np0005480824 nova_compute[260089]: 2025-10-11 03:55:51.657 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7a5e381a-dc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:55:51 np0005480824 nova_compute[260089]: 2025-10-11 03:55:51.658 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7a5e381a-dc, col_values=(('external_ids', {'iface-id': '7a5e381a-dccf-47ae-a39a-87d08faf2a0f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:02:fc:94', 'vm-uuid': '0403d8e6-23d4-4765-a41f-eed96752c52e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:55:51 np0005480824 nova_compute[260089]: 2025-10-11 03:55:51.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:55:51 np0005480824 NetworkManager[44969]: <info>  [1760154951.6614] manager: (tap7a5e381a-dc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/99)
Oct 10 23:55:51 np0005480824 nova_compute[260089]: 2025-10-11 03:55:51.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 10 23:55:51 np0005480824 nova_compute[260089]: 2025-10-11 03:55:51.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:55:51 np0005480824 nova_compute[260089]: 2025-10-11 03:55:51.669 2 INFO os_vif [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:fc:94,bridge_name='br-int',has_traffic_filtering=True,id=7a5e381a-dccf-47ae-a39a-87d08faf2a0f,network=Network(821e091d-e4da-4318-a5fb-3fc44a19fc25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7a5e381a-dc')#033[00m
Oct 10 23:55:51 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e319 do_prune osdmap full prune enabled
Oct 10 23:55:51 np0005480824 nova_compute[260089]: 2025-10-11 03:55:51.901 2 DEBUG nova.virt.libvirt.driver [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 23:55:51 np0005480824 nova_compute[260089]: 2025-10-11 03:55:51.901 2 DEBUG nova.virt.libvirt.driver [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 23:55:51 np0005480824 nova_compute[260089]: 2025-10-11 03:55:51.902 2 DEBUG nova.virt.libvirt.driver [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] No VIF found with MAC fa:16:3e:02:fc:94, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 10 23:55:51 np0005480824 nova_compute[260089]: 2025-10-11 03:55:51.902 2 INFO nova.virt.libvirt.driver [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Using config drive#033[00m
Oct 10 23:55:52 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e320 e320: 3 total, 3 up, 3 in
Oct 10 23:55:52 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e320: 3 total, 3 up, 3 in
Oct 10 23:55:52 np0005480824 ceph-mon[74326]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Oct 10 23:55:52 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:55:52.082033) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 23:55:52 np0005480824 ceph-mon[74326]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Oct 10 23:55:52 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154952082103, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2647, "num_deletes": 523, "total_data_size": 3357343, "memory_usage": 3424192, "flush_reason": "Manual Compaction"}
Oct 10 23:55:52 np0005480824 ceph-mon[74326]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Oct 10 23:55:52 np0005480824 nova_compute[260089]: 2025-10-11 03:55:52.090 2 DEBUG nova.storage.rbd_utils [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] rbd image 0403d8e6-23d4-4765-a41f-eed96752c52e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:55:52 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154952113774, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 3299473, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26259, "largest_seqno": 28905, "table_properties": {"data_size": 3287534, "index_size": 7409, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3461, "raw_key_size": 28931, "raw_average_key_size": 21, "raw_value_size": 3261550, "raw_average_value_size": 2384, "num_data_blocks": 321, "num_entries": 1368, "num_filter_entries": 1368, "num_deletions": 523, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760154792, "oldest_key_time": 1760154792, "file_creation_time": 1760154952, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc2c00b6-74ab-4bd1-957a-6c6a75fb61ca", "db_session_id": "RJ9TM4FJNNQ2AWDFT4YB", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Oct 10 23:55:52 np0005480824 ceph-mon[74326]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 31798 microseconds, and 13418 cpu microseconds.
Oct 10 23:55:52 np0005480824 ceph-mon[74326]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 23:55:52 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:55:52.113834) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 3299473 bytes OK
Oct 10 23:55:52 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:55:52.113858) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Oct 10 23:55:52 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:55:52.117356) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Oct 10 23:55:52 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:55:52.117380) EVENT_LOG_v1 {"time_micros": 1760154952117372, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 23:55:52 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:55:52.117404) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 23:55:52 np0005480824 ceph-mon[74326]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 3344975, prev total WAL file size 3344975, number of live WAL files 2.
Oct 10 23:55:52 np0005480824 ceph-mon[74326]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 23:55:52 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:55:52.119014) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Oct 10 23:55:52 np0005480824 ceph-mon[74326]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 23:55:52 np0005480824 ceph-mon[74326]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(3222KB)], [59(10MB)]
Oct 10 23:55:52 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154952119093, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 13888485, "oldest_snapshot_seqno": -1}
Oct 10 23:55:52 np0005480824 ceph-mon[74326]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5678 keys, 8898943 bytes, temperature: kUnknown
Oct 10 23:55:52 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154952212928, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 8898943, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8857385, "index_size": 26280, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14213, "raw_key_size": 141883, "raw_average_key_size": 24, "raw_value_size": 8751569, "raw_average_value_size": 1541, "num_data_blocks": 1068, "num_entries": 5678, "num_filter_entries": 5678, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760152715, "oldest_key_time": 0, "file_creation_time": 1760154952, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc2c00b6-74ab-4bd1-957a-6c6a75fb61ca", "db_session_id": "RJ9TM4FJNNQ2AWDFT4YB", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Oct 10 23:55:52 np0005480824 ceph-mon[74326]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 23:55:52 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:55:52.213287) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 8898943 bytes
Oct 10 23:55:52 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:55:52.215253) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 147.8 rd, 94.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 10.1 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(6.9) write-amplify(2.7) OK, records in: 6723, records dropped: 1045 output_compression: NoCompression
Oct 10 23:55:52 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:55:52.215287) EVENT_LOG_v1 {"time_micros": 1760154952215271, "job": 32, "event": "compaction_finished", "compaction_time_micros": 93949, "compaction_time_cpu_micros": 38946, "output_level": 6, "num_output_files": 1, "total_output_size": 8898943, "num_input_records": 6723, "num_output_records": 5678, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 23:55:52 np0005480824 ceph-mon[74326]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 23:55:52 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154952216485, "job": 32, "event": "table_file_deletion", "file_number": 61}
Oct 10 23:55:52 np0005480824 ceph-mon[74326]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 23:55:52 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760154952220178, "job": 32, "event": "table_file_deletion", "file_number": 59}
Oct 10 23:55:52 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:55:52.118839) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:55:52 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:55:52.220277) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:55:52 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:55:52.220285) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:55:52 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:55:52.220287) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:55:52 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:55:52.220289) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:55:52 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:55:52.220291) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:55:52 np0005480824 nova_compute[260089]: 2025-10-11 03:55:52.382 2 INFO nova.virt.libvirt.driver [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Creating config drive at /var/lib/nova/instances/0403d8e6-23d4-4765-a41f-eed96752c52e/disk.config#033[00m
Oct 10 23:55:52 np0005480824 nova_compute[260089]: 2025-10-11 03:55:52.391 2 DEBUG oslo_concurrency.processutils [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0403d8e6-23d4-4765-a41f-eed96752c52e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp01qi0ymm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:55:52 np0005480824 nova_compute[260089]: 2025-10-11 03:55:52.525 2 DEBUG oslo_concurrency.processutils [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0403d8e6-23d4-4765-a41f-eed96752c52e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp01qi0ymm" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:55:52 np0005480824 nova_compute[260089]: 2025-10-11 03:55:52.552 2 DEBUG nova.storage.rbd_utils [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] rbd image 0403d8e6-23d4-4765-a41f-eed96752c52e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:55:52 np0005480824 nova_compute[260089]: 2025-10-11 03:55:52.557 2 DEBUG oslo_concurrency.processutils [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0403d8e6-23d4-4765-a41f-eed96752c52e/disk.config 0403d8e6-23d4-4765-a41f-eed96752c52e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:55:52 np0005480824 nova_compute[260089]: 2025-10-11 03:55:52.721 2 DEBUG oslo_concurrency.processutils [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0403d8e6-23d4-4765-a41f-eed96752c52e/disk.config 0403d8e6-23d4-4765-a41f-eed96752c52e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:55:52 np0005480824 nova_compute[260089]: 2025-10-11 03:55:52.722 2 INFO nova.virt.libvirt.driver [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Deleting local config drive /var/lib/nova/instances/0403d8e6-23d4-4765-a41f-eed96752c52e/disk.config because it was imported into RBD.#033[00m
Oct 10 23:55:52 np0005480824 kernel: tap7a5e381a-dc: entered promiscuous mode
Oct 10 23:55:52 np0005480824 NetworkManager[44969]: <info>  [1760154952.7872] manager: (tap7a5e381a-dc): new Tun device (/org/freedesktop/NetworkManager/Devices/100)
Oct 10 23:55:52 np0005480824 ovn_controller[152667]: 2025-10-11T03:55:52Z|00171|binding|INFO|Claiming lport 7a5e381a-dccf-47ae-a39a-87d08faf2a0f for this chassis.
Oct 10 23:55:52 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1403: 321 pgs: 321 active+clean; 306 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 162 KiB/s rd, 30 KiB/s wr, 210 op/s
Oct 10 23:55:52 np0005480824 ovn_controller[152667]: 2025-10-11T03:55:52Z|00172|binding|INFO|7a5e381a-dccf-47ae-a39a-87d08faf2a0f: Claiming fa:16:3e:02:fc:94 10.100.0.6
Oct 10 23:55:52 np0005480824 nova_compute[260089]: 2025-10-11 03:55:52.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:55:52 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:55:52.806 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:02:fc:94 10.100.0.6'], port_security=['fa:16:3e:02:fc:94 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '0403d8e6-23d4-4765-a41f-eed96752c52e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-821e091d-e4da-4318-a5fb-3fc44a19fc25', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f36ed779ede42228be9ab8544bbf9aa', 'neutron:revision_number': '2', 'neutron:security_group_ids': '51c20c9a-eab4-4ea0-bd83-0a26e2278ce4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7d951732-205c-4b86-802d-f010498bd0dc, chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], logical_port=7a5e381a-dccf-47ae-a39a-87d08faf2a0f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 23:55:52 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:55:52.810 162245 INFO neutron.agent.ovn.metadata.agent [-] Port 7a5e381a-dccf-47ae-a39a-87d08faf2a0f in datapath 821e091d-e4da-4318-a5fb-3fc44a19fc25 bound to our chassis#033[00m
Oct 10 23:55:52 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:55:52.812 162245 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 821e091d-e4da-4318-a5fb-3fc44a19fc25#033[00m
Oct 10 23:55:52 np0005480824 systemd-udevd[289554]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 23:55:52 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:55:52.830 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[1979704d-cc04-4816-acd3-97e1dd72cc00]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:55:52 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:55:52.831 162245 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap821e091d-e1 in ovnmeta-821e091d-e4da-4318-a5fb-3fc44a19fc25 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 10 23:55:52 np0005480824 ovn_controller[152667]: 2025-10-11T03:55:52Z|00173|binding|INFO|Setting lport 7a5e381a-dccf-47ae-a39a-87d08faf2a0f ovn-installed in OVS
Oct 10 23:55:52 np0005480824 ovn_controller[152667]: 2025-10-11T03:55:52Z|00174|binding|INFO|Setting lport 7a5e381a-dccf-47ae-a39a-87d08faf2a0f up in Southbound
Oct 10 23:55:52 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:55:52.835 267859 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap821e091d-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 10 23:55:52 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:55:52.835 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[0112edd9-acf1-4c9a-91f3-e3d596b5e197]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:55:52 np0005480824 nova_compute[260089]: 2025-10-11 03:55:52.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:55:52 np0005480824 nova_compute[260089]: 2025-10-11 03:55:52.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:55:52 np0005480824 systemd-machined[215071]: New machine qemu-19-instance-00000013.
Oct 10 23:55:52 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:55:52.837 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[efba8427-2cd0-4510-8525-c4a48a02e060]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:55:52 np0005480824 NetworkManager[44969]: <info>  [1760154952.8461] device (tap7a5e381a-dc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 10 23:55:52 np0005480824 NetworkManager[44969]: <info>  [1760154952.8476] device (tap7a5e381a-dc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 10 23:55:52 np0005480824 systemd[1]: Started Virtual Machine qemu-19-instance-00000013.
Oct 10 23:55:52 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:55:52.855 162666 DEBUG oslo.privsep.daemon [-] privsep: reply[20ed1231-f1ad-442b-88a3-f1de390cb3c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:55:52 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:55:52.885 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[414caba8-f281-46bb-9c18-d434e9031b3e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:55:52 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:55:52.921 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[c94dac66-5ece-40fb-b322-6585227c7602]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:55:52 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:55:52.931 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[b16d1cb2-055f-4398-9c53-1c83d8c5429b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:55:52 np0005480824 NetworkManager[44969]: <info>  [1760154952.9327] manager: (tap821e091d-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/101)
Oct 10 23:55:52 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:55:52.961 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[61e28c42-24ca-4103-bcef-648fc6d2f154]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:55:52 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:55:52.968 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[a5408f2b-2a09-4407-b998-2da68be04bad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:55:52 np0005480824 NetworkManager[44969]: <info>  [1760154952.9957] device (tap821e091d-e0): carrier: link connected
Oct 10 23:55:53 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:55:53.004 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[5ee5031c-a9e2-445a-a2ae-c43617a74ab7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:55:53 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:55:53.028 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[281e7d14-cad2-415d-a4c8-698a7fb6120f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap821e091d-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:74:4f:03'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 63], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 442065, 'reachable_time': 22140, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289588, 'error': None, 'target': 'ovnmeta-821e091d-e4da-4318-a5fb-3fc44a19fc25', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:55:53 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:55:53.052 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[c8cf6997-9afd-4c57-865c-b5a4ea4ded7a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe74:4f03'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 442065, 'tstamp': 442065}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289589, 'error': None, 'target': 'ovnmeta-821e091d-e4da-4318-a5fb-3fc44a19fc25', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:55:53 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:55:53.076 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[863aaa4d-160e-40cf-87c3-dbfefffa0eb3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap821e091d-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:74:4f:03'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 63], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 442065, 'reachable_time': 22140, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 289590, 'error': None, 'target': 'ovnmeta-821e091d-e4da-4318-a5fb-3fc44a19fc25', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:55:53 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:55:53.125 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[8e2c0460-a946-4a21-be3d-3b8c512e6b7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:55:53 np0005480824 nova_compute[260089]: 2025-10-11 03:55:53.190 2 DEBUG nova.network.neutron [req-0752365c-f87f-4eea-941b-afef71fa4ea8 req-c481756b-1bf4-450d-ba2f-22fdd439a56e 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Updated VIF entry in instance network info cache for port 7a5e381a-dccf-47ae-a39a-87d08faf2a0f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 10 23:55:53 np0005480824 nova_compute[260089]: 2025-10-11 03:55:53.190 2 DEBUG nova.network.neutron [req-0752365c-f87f-4eea-941b-afef71fa4ea8 req-c481756b-1bf4-450d-ba2f-22fdd439a56e 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Updating instance_info_cache with network_info: [{"id": "7a5e381a-dccf-47ae-a39a-87d08faf2a0f", "address": "fa:16:3e:02:fc:94", "network": {"id": "821e091d-e4da-4318-a5fb-3fc44a19fc25", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1425502666-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f36ed779ede42228be9ab8544bbf9aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a5e381a-dc", "ovs_interfaceid": "7a5e381a-dccf-47ae-a39a-87d08faf2a0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:55:53 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:55:53.198 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[75af3049-ae98-427f-a61e-218a3f245a61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:55:53 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:55:53.201 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap821e091d-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:55:53 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:55:53.201 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 10 23:55:53 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:55:53.202 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap821e091d-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:55:53 np0005480824 NetworkManager[44969]: <info>  [1760154953.2044] manager: (tap821e091d-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/102)
Oct 10 23:55:53 np0005480824 kernel: tap821e091d-e0: entered promiscuous mode
Oct 10 23:55:53 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:55:53.207 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap821e091d-e0, col_values=(('external_ids', {'iface-id': '2ccf44c5-d4e9-4add-bd6a-5ddb77bf1038'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:55:53 np0005480824 nova_compute[260089]: 2025-10-11 03:55:53.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:55:53 np0005480824 ovn_controller[152667]: 2025-10-11T03:55:53Z|00175|binding|INFO|Releasing lport 2ccf44c5-d4e9-4add-bd6a-5ddb77bf1038 from this chassis (sb_readonly=0)
Oct 10 23:55:53 np0005480824 nova_compute[260089]: 2025-10-11 03:55:53.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:55:53 np0005480824 nova_compute[260089]: 2025-10-11 03:55:53.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:55:53 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:55:53.211 162245 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/821e091d-e4da-4318-a5fb-3fc44a19fc25.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/821e091d-e4da-4318-a5fb-3fc44a19fc25.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 10 23:55:53 np0005480824 nova_compute[260089]: 2025-10-11 03:55:53.212 2 DEBUG oslo_concurrency.lockutils [req-0752365c-f87f-4eea-941b-afef71fa4ea8 req-c481756b-1bf4-450d-ba2f-22fdd439a56e 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Releasing lock "refresh_cache-0403d8e6-23d4-4765-a41f-eed96752c52e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:55:53 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:55:53.220 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[d8ea3ede-f3bc-458f-a395-37e02f0e1b9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:55:53 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:55:53.222 162245 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 10 23:55:53 np0005480824 ovn_metadata_agent[162240]: global
Oct 10 23:55:53 np0005480824 ovn_metadata_agent[162240]:    log         /dev/log local0 debug
Oct 10 23:55:53 np0005480824 ovn_metadata_agent[162240]:    log-tag     haproxy-metadata-proxy-821e091d-e4da-4318-a5fb-3fc44a19fc25
Oct 10 23:55:53 np0005480824 ovn_metadata_agent[162240]:    user        root
Oct 10 23:55:53 np0005480824 ovn_metadata_agent[162240]:    group       root
Oct 10 23:55:53 np0005480824 ovn_metadata_agent[162240]:    maxconn     1024
Oct 10 23:55:53 np0005480824 ovn_metadata_agent[162240]:    pidfile     /var/lib/neutron/external/pids/821e091d-e4da-4318-a5fb-3fc44a19fc25.pid.haproxy
Oct 10 23:55:53 np0005480824 ovn_metadata_agent[162240]:    daemon
Oct 10 23:55:53 np0005480824 ovn_metadata_agent[162240]: 
Oct 10 23:55:53 np0005480824 ovn_metadata_agent[162240]: defaults
Oct 10 23:55:53 np0005480824 ovn_metadata_agent[162240]:    log global
Oct 10 23:55:53 np0005480824 ovn_metadata_agent[162240]:    mode http
Oct 10 23:55:53 np0005480824 ovn_metadata_agent[162240]:    option httplog
Oct 10 23:55:53 np0005480824 ovn_metadata_agent[162240]:    option dontlognull
Oct 10 23:55:53 np0005480824 ovn_metadata_agent[162240]:    option http-server-close
Oct 10 23:55:53 np0005480824 ovn_metadata_agent[162240]:    option forwardfor
Oct 10 23:55:53 np0005480824 ovn_metadata_agent[162240]:    retries                 3
Oct 10 23:55:53 np0005480824 ovn_metadata_agent[162240]:    timeout http-request    30s
Oct 10 23:55:53 np0005480824 ovn_metadata_agent[162240]:    timeout connect         30s
Oct 10 23:55:53 np0005480824 ovn_metadata_agent[162240]:    timeout client          32s
Oct 10 23:55:53 np0005480824 ovn_metadata_agent[162240]:    timeout server          32s
Oct 10 23:55:53 np0005480824 ovn_metadata_agent[162240]:    timeout http-keep-alive 30s
Oct 10 23:55:53 np0005480824 ovn_metadata_agent[162240]: 
Oct 10 23:55:53 np0005480824 ovn_metadata_agent[162240]: 
Oct 10 23:55:53 np0005480824 ovn_metadata_agent[162240]: listen listener
Oct 10 23:55:53 np0005480824 ovn_metadata_agent[162240]:    bind 169.254.169.254:80
Oct 10 23:55:53 np0005480824 ovn_metadata_agent[162240]:    server metadata /var/lib/neutron/metadata_proxy
Oct 10 23:55:53 np0005480824 ovn_metadata_agent[162240]:    http-request add-header X-OVN-Network-ID 821e091d-e4da-4318-a5fb-3fc44a19fc25
Oct 10 23:55:53 np0005480824 ovn_metadata_agent[162240]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 10 23:55:53 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:55:53.224 162245 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-821e091d-e4da-4318-a5fb-3fc44a19fc25', 'env', 'PROCESS_TAG=haproxy-821e091d-e4da-4318-a5fb-3fc44a19fc25', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/821e091d-e4da-4318-a5fb-3fc44a19fc25.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 10 23:55:53 np0005480824 nova_compute[260089]: 2025-10-11 03:55:53.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:55:53 np0005480824 nova_compute[260089]: 2025-10-11 03:55:53.382 2 DEBUG nova.compute.manager [req-e6b6daad-bb43-4a4d-bff1-62520e8bb89f req-3c40b1d2-27c3-4ff7-96de-eeaeaafad2d4 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Received event network-vif-plugged-7a5e381a-dccf-47ae-a39a-87d08faf2a0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:55:53 np0005480824 nova_compute[260089]: 2025-10-11 03:55:53.395 2 DEBUG oslo_concurrency.lockutils [req-e6b6daad-bb43-4a4d-bff1-62520e8bb89f req-3c40b1d2-27c3-4ff7-96de-eeaeaafad2d4 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "0403d8e6-23d4-4765-a41f-eed96752c52e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:55:53 np0005480824 nova_compute[260089]: 2025-10-11 03:55:53.396 2 DEBUG oslo_concurrency.lockutils [req-e6b6daad-bb43-4a4d-bff1-62520e8bb89f req-3c40b1d2-27c3-4ff7-96de-eeaeaafad2d4 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "0403d8e6-23d4-4765-a41f-eed96752c52e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:55:53 np0005480824 nova_compute[260089]: 2025-10-11 03:55:53.396 2 DEBUG oslo_concurrency.lockutils [req-e6b6daad-bb43-4a4d-bff1-62520e8bb89f req-3c40b1d2-27c3-4ff7-96de-eeaeaafad2d4 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "0403d8e6-23d4-4765-a41f-eed96752c52e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:55:53 np0005480824 nova_compute[260089]: 2025-10-11 03:55:53.397 2 DEBUG nova.compute.manager [req-e6b6daad-bb43-4a4d-bff1-62520e8bb89f req-3c40b1d2-27c3-4ff7-96de-eeaeaafad2d4 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Processing event network-vif-plugged-7a5e381a-dccf-47ae-a39a-87d08faf2a0f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 10 23:55:53 np0005480824 podman[289622]: 2025-10-11 03:55:53.645836668 +0000 UTC m=+0.029681032 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 10 23:55:54 np0005480824 nova_compute[260089]: 2025-10-11 03:55:54.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:55:54 np0005480824 podman[289622]: 2025-10-11 03:55:54.061041966 +0000 UTC m=+0.444886320 container create 251085e9bae4f21141988ab10428f8b7da218242b931bfb5deb524e692721ca9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-821e091d-e4da-4318-a5fb-3fc44a19fc25, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:55:54 np0005480824 systemd[1]: Started libpod-conmon-251085e9bae4f21141988ab10428f8b7da218242b931bfb5deb524e692721ca9.scope.
Oct 10 23:55:54 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:55:54 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baadaa790b11e77690d31b1cef87141affcae2d2cf863950a3fc300dcf869732/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 10 23:55:54 np0005480824 podman[289622]: 2025-10-11 03:55:54.202212454 +0000 UTC m=+0.586056828 container init 251085e9bae4f21141988ab10428f8b7da218242b931bfb5deb524e692721ca9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-821e091d-e4da-4318-a5fb-3fc44a19fc25, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 10 23:55:54 np0005480824 podman[289622]: 2025-10-11 03:55:54.208251196 +0000 UTC m=+0.592095530 container start 251085e9bae4f21141988ab10428f8b7da218242b931bfb5deb524e692721ca9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-821e091d-e4da-4318-a5fb-3fc44a19fc25, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 10 23:55:54 np0005480824 neutron-haproxy-ovnmeta-821e091d-e4da-4318-a5fb-3fc44a19fc25[289675]: [NOTICE]   (289683) : New worker (289685) forked
Oct 10 23:55:54 np0005480824 neutron-haproxy-ovnmeta-821e091d-e4da-4318-a5fb-3fc44a19fc25[289675]: [NOTICE]   (289683) : Loading success.
Oct 10 23:55:54 np0005480824 nova_compute[260089]: 2025-10-11 03:55:54.715 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760154954.7148547, 0403d8e6-23d4-4765-a41f-eed96752c52e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:55:54 np0005480824 nova_compute[260089]: 2025-10-11 03:55:54.716 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] VM Started (Lifecycle Event)#033[00m
Oct 10 23:55:54 np0005480824 nova_compute[260089]: 2025-10-11 03:55:54.718 2 DEBUG nova.compute.manager [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 10 23:55:54 np0005480824 nova_compute[260089]: 2025-10-11 03:55:54.722 2 DEBUG nova.virt.libvirt.driver [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 10 23:55:54 np0005480824 nova_compute[260089]: 2025-10-11 03:55:54.728 2 INFO nova.virt.libvirt.driver [-] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Instance spawned successfully.#033[00m
Oct 10 23:55:54 np0005480824 nova_compute[260089]: 2025-10-11 03:55:54.729 2 DEBUG nova.virt.libvirt.driver [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 10 23:55:54 np0005480824 nova_compute[260089]: 2025-10-11 03:55:54.749 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:55:54 np0005480824 nova_compute[260089]: 2025-10-11 03:55:54.752 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 10 23:55:54 np0005480824 nova_compute[260089]: 2025-10-11 03:55:54.762 2 DEBUG nova.virt.libvirt.driver [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:55:54 np0005480824 nova_compute[260089]: 2025-10-11 03:55:54.762 2 DEBUG nova.virt.libvirt.driver [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:55:54 np0005480824 nova_compute[260089]: 2025-10-11 03:55:54.763 2 DEBUG nova.virt.libvirt.driver [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:55:54 np0005480824 nova_compute[260089]: 2025-10-11 03:55:54.763 2 DEBUG nova.virt.libvirt.driver [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:55:54 np0005480824 nova_compute[260089]: 2025-10-11 03:55:54.764 2 DEBUG nova.virt.libvirt.driver [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:55:54 np0005480824 nova_compute[260089]: 2025-10-11 03:55:54.764 2 DEBUG nova.virt.libvirt.driver [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:55:54 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1404: 321 pgs: 321 active+clean; 306 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 122 KiB/s rd, 22 KiB/s wr, 158 op/s
Oct 10 23:55:54 np0005480824 nova_compute[260089]: 2025-10-11 03:55:54.799 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 10 23:55:54 np0005480824 nova_compute[260089]: 2025-10-11 03:55:54.799 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760154954.715103, 0403d8e6-23d4-4765-a41f-eed96752c52e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:55:54 np0005480824 nova_compute[260089]: 2025-10-11 03:55:54.800 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] VM Paused (Lifecycle Event)#033[00m
Oct 10 23:55:54 np0005480824 nova_compute[260089]: 2025-10-11 03:55:54.833 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:55:54 np0005480824 nova_compute[260089]: 2025-10-11 03:55:54.840 2 INFO nova.compute.manager [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Took 5.09 seconds to spawn the instance on the hypervisor.#033[00m
Oct 10 23:55:54 np0005480824 nova_compute[260089]: 2025-10-11 03:55:54.841 2 DEBUG nova.compute.manager [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:55:54 np0005480824 nova_compute[260089]: 2025-10-11 03:55:54.843 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760154954.7218509, 0403d8e6-23d4-4765-a41f-eed96752c52e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:55:54 np0005480824 nova_compute[260089]: 2025-10-11 03:55:54.844 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] VM Resumed (Lifecycle Event)#033[00m
Oct 10 23:55:54 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:55:54 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e320 do_prune osdmap full prune enabled
Oct 10 23:55:54 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e321 e321: 3 total, 3 up, 3 in
Oct 10 23:55:54 np0005480824 nova_compute[260089]: 2025-10-11 03:55:54.873 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:55:54 np0005480824 nova_compute[260089]: 2025-10-11 03:55:54.877 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 10 23:55:54 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e321: 3 total, 3 up, 3 in
Oct 10 23:55:54 np0005480824 nova_compute[260089]: 2025-10-11 03:55:54.902 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 10 23:55:54 np0005480824 nova_compute[260089]: 2025-10-11 03:55:54.914 2 INFO nova.compute.manager [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Took 7.40 seconds to build instance.#033[00m
Oct 10 23:55:54 np0005480824 nova_compute[260089]: 2025-10-11 03:55:54.933 2 DEBUG oslo_concurrency.lockutils [None req-86ed3fce-5e2b-491a-b2dc-178e7cce44cb ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] Lock "0403d8e6-23d4-4765-a41f-eed96752c52e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.498s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:55:55 np0005480824 nova_compute[260089]: 2025-10-11 03:55:55.489 2 DEBUG nova.compute.manager [req-4a61c1ca-e784-4e46-87be-09337d54c81f req-396518e7-bea0-432d-8dd4-4a896599bcad 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Received event network-vif-plugged-7a5e381a-dccf-47ae-a39a-87d08faf2a0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:55:55 np0005480824 nova_compute[260089]: 2025-10-11 03:55:55.490 2 DEBUG oslo_concurrency.lockutils [req-4a61c1ca-e784-4e46-87be-09337d54c81f req-396518e7-bea0-432d-8dd4-4a896599bcad 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "0403d8e6-23d4-4765-a41f-eed96752c52e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:55:55 np0005480824 nova_compute[260089]: 2025-10-11 03:55:55.490 2 DEBUG oslo_concurrency.lockutils [req-4a61c1ca-e784-4e46-87be-09337d54c81f req-396518e7-bea0-432d-8dd4-4a896599bcad 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "0403d8e6-23d4-4765-a41f-eed96752c52e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:55:55 np0005480824 nova_compute[260089]: 2025-10-11 03:55:55.491 2 DEBUG oslo_concurrency.lockutils [req-4a61c1ca-e784-4e46-87be-09337d54c81f req-396518e7-bea0-432d-8dd4-4a896599bcad 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "0403d8e6-23d4-4765-a41f-eed96752c52e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:55:55 np0005480824 nova_compute[260089]: 2025-10-11 03:55:55.491 2 DEBUG nova.compute.manager [req-4a61c1ca-e784-4e46-87be-09337d54c81f req-396518e7-bea0-432d-8dd4-4a896599bcad 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] No waiting events found dispatching network-vif-plugged-7a5e381a-dccf-47ae-a39a-87d08faf2a0f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 10 23:55:55 np0005480824 nova_compute[260089]: 2025-10-11 03:55:55.491 2 WARNING nova.compute.manager [req-4a61c1ca-e784-4e46-87be-09337d54c81f req-396518e7-bea0-432d-8dd4-4a896599bcad 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Received unexpected event network-vif-plugged-7a5e381a-dccf-47ae-a39a-87d08faf2a0f for instance with vm_state active and task_state None.#033[00m
Oct 10 23:55:56 np0005480824 nova_compute[260089]: 2025-10-11 03:55:56.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:55:56 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1406: 321 pgs: 321 active+clean; 306 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 2.2 KiB/s wr, 41 op/s
Oct 10 23:55:56 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e321 do_prune osdmap full prune enabled
Oct 10 23:55:56 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e322 e322: 3 total, 3 up, 3 in
Oct 10 23:55:56 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e322: 3 total, 3 up, 3 in
Oct 10 23:55:57 np0005480824 nova_compute[260089]: 2025-10-11 03:55:57.673 2 DEBUG nova.compute.manager [req-8bde19f5-80dc-4995-bc49-f0a9b1ede7f4 req-6f0fe340-875f-474e-9e82-c03c94d18be2 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Received event network-changed-7a5e381a-dccf-47ae-a39a-87d08faf2a0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:55:57 np0005480824 nova_compute[260089]: 2025-10-11 03:55:57.674 2 DEBUG nova.compute.manager [req-8bde19f5-80dc-4995-bc49-f0a9b1ede7f4 req-6f0fe340-875f-474e-9e82-c03c94d18be2 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Refreshing instance network info cache due to event network-changed-7a5e381a-dccf-47ae-a39a-87d08faf2a0f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 10 23:55:57 np0005480824 nova_compute[260089]: 2025-10-11 03:55:57.675 2 DEBUG oslo_concurrency.lockutils [req-8bde19f5-80dc-4995-bc49-f0a9b1ede7f4 req-6f0fe340-875f-474e-9e82-c03c94d18be2 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "refresh_cache-0403d8e6-23d4-4765-a41f-eed96752c52e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:55:57 np0005480824 nova_compute[260089]: 2025-10-11 03:55:57.675 2 DEBUG oslo_concurrency.lockutils [req-8bde19f5-80dc-4995-bc49-f0a9b1ede7f4 req-6f0fe340-875f-474e-9e82-c03c94d18be2 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquired lock "refresh_cache-0403d8e6-23d4-4765-a41f-eed96752c52e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:55:57 np0005480824 nova_compute[260089]: 2025-10-11 03:55:57.676 2 DEBUG nova.network.neutron [req-8bde19f5-80dc-4995-bc49-f0a9b1ede7f4 req-6f0fe340-875f-474e-9e82-c03c94d18be2 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Refreshing network info cache for port 7a5e381a-dccf-47ae-a39a-87d08faf2a0f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 10 23:55:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:55:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:55:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:55:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:55:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:55:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:55:58 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1408: 321 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 318 active+clean; 306 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 26 KiB/s wr, 227 op/s
Oct 10 23:55:59 np0005480824 nova_compute[260089]: 2025-10-11 03:55:59.023 2 DEBUG nova.compute.manager [req-7d9ec15b-086b-43d2-8b99-a18e71f73b13 req-1c94dd79-2885-4422-bef5-cc88f0ee8791 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Received event network-changed-7a5e381a-dccf-47ae-a39a-87d08faf2a0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:55:59 np0005480824 nova_compute[260089]: 2025-10-11 03:55:59.024 2 DEBUG nova.compute.manager [req-7d9ec15b-086b-43d2-8b99-a18e71f73b13 req-1c94dd79-2885-4422-bef5-cc88f0ee8791 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Refreshing instance network info cache due to event network-changed-7a5e381a-dccf-47ae-a39a-87d08faf2a0f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 10 23:55:59 np0005480824 nova_compute[260089]: 2025-10-11 03:55:59.025 2 DEBUG oslo_concurrency.lockutils [req-7d9ec15b-086b-43d2-8b99-a18e71f73b13 req-1c94dd79-2885-4422-bef5-cc88f0ee8791 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "refresh_cache-0403d8e6-23d4-4765-a41f-eed96752c52e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:55:59 np0005480824 nova_compute[260089]: 2025-10-11 03:55:59.036 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:55:59 np0005480824 nova_compute[260089]: 2025-10-11 03:55:59.227 2 DEBUG nova.network.neutron [req-8bde19f5-80dc-4995-bc49-f0a9b1ede7f4 req-6f0fe340-875f-474e-9e82-c03c94d18be2 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Updated VIF entry in instance network info cache for port 7a5e381a-dccf-47ae-a39a-87d08faf2a0f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 10 23:55:59 np0005480824 nova_compute[260089]: 2025-10-11 03:55:59.228 2 DEBUG nova.network.neutron [req-8bde19f5-80dc-4995-bc49-f0a9b1ede7f4 req-6f0fe340-875f-474e-9e82-c03c94d18be2 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Updating instance_info_cache with network_info: [{"id": "7a5e381a-dccf-47ae-a39a-87d08faf2a0f", "address": "fa:16:3e:02:fc:94", "network": {"id": "821e091d-e4da-4318-a5fb-3fc44a19fc25", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1425502666-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f36ed779ede42228be9ab8544bbf9aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a5e381a-dc", "ovs_interfaceid": "7a5e381a-dccf-47ae-a39a-87d08faf2a0f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:55:59 np0005480824 nova_compute[260089]: 2025-10-11 03:55:59.250 2 DEBUG oslo_concurrency.lockutils [req-8bde19f5-80dc-4995-bc49-f0a9b1ede7f4 req-6f0fe340-875f-474e-9e82-c03c94d18be2 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Releasing lock "refresh_cache-0403d8e6-23d4-4765-a41f-eed96752c52e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:55:59 np0005480824 nova_compute[260089]: 2025-10-11 03:55:59.251 2 DEBUG oslo_concurrency.lockutils [req-7d9ec15b-086b-43d2-8b99-a18e71f73b13 req-1c94dd79-2885-4422-bef5-cc88f0ee8791 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquired lock "refresh_cache-0403d8e6-23d4-4765-a41f-eed96752c52e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:55:59 np0005480824 nova_compute[260089]: 2025-10-11 03:55:59.252 2 DEBUG nova.network.neutron [req-7d9ec15b-086b-43d2-8b99-a18e71f73b13 req-1c94dd79-2885-4422-bef5-cc88f0ee8791 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Refreshing network info cache for port 7a5e381a-dccf-47ae-a39a-87d08faf2a0f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 10 23:55:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:55:59 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2047625014' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:55:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:55:59 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2047625014' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:55:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e322 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:56:00 np0005480824 nova_compute[260089]: 2025-10-11 03:56:00.718 2 DEBUG oslo_concurrency.lockutils [None req-6d66f5cc-7a44-4f92-84e0-b19ce1824275 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Acquiring lock "ee0ba1fa-8740-4670-9f6d-b658f89f7f21" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:56:00 np0005480824 nova_compute[260089]: 2025-10-11 03:56:00.719 2 DEBUG oslo_concurrency.lockutils [None req-6d66f5cc-7a44-4f92-84e0-b19ce1824275 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "ee0ba1fa-8740-4670-9f6d-b658f89f7f21" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:56:00 np0005480824 nova_compute[260089]: 2025-10-11 03:56:00.720 2 DEBUG oslo_concurrency.lockutils [None req-6d66f5cc-7a44-4f92-84e0-b19ce1824275 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Acquiring lock "ee0ba1fa-8740-4670-9f6d-b658f89f7f21-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:56:00 np0005480824 nova_compute[260089]: 2025-10-11 03:56:00.720 2 DEBUG oslo_concurrency.lockutils [None req-6d66f5cc-7a44-4f92-84e0-b19ce1824275 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "ee0ba1fa-8740-4670-9f6d-b658f89f7f21-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:56:00 np0005480824 nova_compute[260089]: 2025-10-11 03:56:00.721 2 DEBUG oslo_concurrency.lockutils [None req-6d66f5cc-7a44-4f92-84e0-b19ce1824275 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "ee0ba1fa-8740-4670-9f6d-b658f89f7f21-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:56:00 np0005480824 nova_compute[260089]: 2025-10-11 03:56:00.723 2 INFO nova.compute.manager [None req-6d66f5cc-7a44-4f92-84e0-b19ce1824275 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] Terminating instance#033[00m
Oct 10 23:56:00 np0005480824 nova_compute[260089]: 2025-10-11 03:56:00.725 2 DEBUG nova.compute.manager [None req-6d66f5cc-7a44-4f92-84e0-b19ce1824275 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 10 23:56:00 np0005480824 nova_compute[260089]: 2025-10-11 03:56:00.791 2 DEBUG nova.network.neutron [req-7d9ec15b-086b-43d2-8b99-a18e71f73b13 req-1c94dd79-2885-4422-bef5-cc88f0ee8791 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Updated VIF entry in instance network info cache for port 7a5e381a-dccf-47ae-a39a-87d08faf2a0f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 10 23:56:00 np0005480824 nova_compute[260089]: 2025-10-11 03:56:00.795 2 DEBUG nova.network.neutron [req-7d9ec15b-086b-43d2-8b99-a18e71f73b13 req-1c94dd79-2885-4422-bef5-cc88f0ee8791 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Updating instance_info_cache with network_info: [{"id": "7a5e381a-dccf-47ae-a39a-87d08faf2a0f", "address": "fa:16:3e:02:fc:94", "network": {"id": "821e091d-e4da-4318-a5fb-3fc44a19fc25", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1425502666-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f36ed779ede42228be9ab8544bbf9aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a5e381a-dc", "ovs_interfaceid": "7a5e381a-dccf-47ae-a39a-87d08faf2a0f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:56:00 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1409: 321 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 318 active+clean; 306 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 21 KiB/s wr, 179 op/s
Oct 10 23:56:00 np0005480824 kernel: tapeb363ce6-15 (unregistering): left promiscuous mode
Oct 10 23:56:00 np0005480824 NetworkManager[44969]: <info>  [1760154960.8197] device (tapeb363ce6-15): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 10 23:56:00 np0005480824 nova_compute[260089]: 2025-10-11 03:56:00.820 2 DEBUG oslo_concurrency.lockutils [req-7d9ec15b-086b-43d2-8b99-a18e71f73b13 req-1c94dd79-2885-4422-bef5-cc88f0ee8791 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Releasing lock "refresh_cache-0403d8e6-23d4-4765-a41f-eed96752c52e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:56:00 np0005480824 ovn_controller[152667]: 2025-10-11T03:56:00Z|00176|binding|INFO|Releasing lport eb363ce6-15fe-4b2a-a35e-06b06bbf4252 from this chassis (sb_readonly=0)
Oct 10 23:56:00 np0005480824 nova_compute[260089]: 2025-10-11 03:56:00.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:56:00 np0005480824 ovn_controller[152667]: 2025-10-11T03:56:00Z|00177|binding|INFO|Setting lport eb363ce6-15fe-4b2a-a35e-06b06bbf4252 down in Southbound
Oct 10 23:56:00 np0005480824 ovn_controller[152667]: 2025-10-11T03:56:00Z|00178|binding|INFO|Removing iface tapeb363ce6-15 ovn-installed in OVS
Oct 10 23:56:00 np0005480824 nova_compute[260089]: 2025-10-11 03:56:00.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:56:00 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:56:00.849 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f7:bc:e9 10.100.0.14'], port_security=['fa:16:3e:f7:bc:e9 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'ee0ba1fa-8740-4670-9f6d-b658f89f7f21', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-359720eb-a957-4bcd-b9b2-3cf7dad947e4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '55d21391a321476eb133317b3402b0f0', 'neutron:revision_number': '4', 'neutron:security_group_ids': '48328b99-2dfb-4da6-bd97-8cd4f810b350', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.236'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d98e64fb-092d-4777-b741-426f3e849bc3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], logical_port=eb363ce6-15fe-4b2a-a35e-06b06bbf4252) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 23:56:00 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:56:00.850 162245 INFO neutron.agent.ovn.metadata.agent [-] Port eb363ce6-15fe-4b2a-a35e-06b06bbf4252 in datapath 359720eb-a957-4bcd-b9b2-3cf7dad947e4 unbound from our chassis#033[00m
Oct 10 23:56:00 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:56:00.852 162245 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 359720eb-a957-4bcd-b9b2-3cf7dad947e4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 10 23:56:00 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:56:00.853 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[8262ba6f-53b8-4f42-8a5a-36239ed9847a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:56:00 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:56:00.857 162245 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4 namespace which is not needed anymore#033[00m
Oct 10 23:56:00 np0005480824 nova_compute[260089]: 2025-10-11 03:56:00.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:56:00 np0005480824 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000012.scope: Deactivated successfully.
Oct 10 23:56:00 np0005480824 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000012.scope: Consumed 14.250s CPU time.
Oct 10 23:56:00 np0005480824 systemd-machined[215071]: Machine qemu-18-instance-00000012 terminated.
Oct 10 23:56:00 np0005480824 podman[289694]: 2025-10-11 03:56:00.943051403 +0000 UTC m=+0.097575509 container health_status 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 10 23:56:00 np0005480824 podman[289697]: 2025-10-11 03:56:00.950031678 +0000 UTC m=+0.090776198 container health_status f2b19cad22d0fbdd185b264c3c8b6443d09a02319dc9fb0585dc69a0f24a758d (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=iscsid, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 10 23:56:00 np0005480824 nova_compute[260089]: 2025-10-11 03:56:00.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:56:00 np0005480824 nova_compute[260089]: 2025-10-11 03:56:00.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:56:00 np0005480824 nova_compute[260089]: 2025-10-11 03:56:00.976 2 INFO nova.virt.libvirt.driver [-] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] Instance destroyed successfully.#033[00m
Oct 10 23:56:00 np0005480824 nova_compute[260089]: 2025-10-11 03:56:00.978 2 DEBUG nova.objects.instance [None req-6d66f5cc-7a44-4f92-84e0-b19ce1824275 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lazy-loading 'resources' on Instance uuid ee0ba1fa-8740-4670-9f6d-b658f89f7f21 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:56:00 np0005480824 nova_compute[260089]: 2025-10-11 03:56:00.992 2 DEBUG nova.virt.libvirt.vif [None req-6d66f5cc-7a44-4f92-84e0-b19ce1824275 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T03:55:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-864862648',display_name='tempest-TestVolumeBootPattern-server-864862648',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-864862648',id=18,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEHKqCtFesGlIN9DGdSuPEGCilj3bKmCIyQ2Hx4tQRLuRoOqWjhIRAgPC71aK1tfMSZbOh/7KRfo7uhOOwgBdYVdW77mjMG+sfmvlDoQnrLmEWQMeSschoC2XBAsdgkOOQ==',key_name='tempest-TestVolumeBootPattern-1691748970',keypairs=<?>,launch_index=0,launched_at=2025-10-11T03:55:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='55d21391a321476eb133317b3402b0f0',ramdisk_id='',reservation_id='r-rni1kdob',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-739984652',owner_user_name='tempest-TestVolumeBootPattern-739984652-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T03:55:25Z,user_data=None,user_id='38ebc503771e417aaf1f3aea0c835994',uuid=ee0ba1fa-8740-4670-9f6d-b658f89f7f21,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "eb363ce6-15fe-4b2a-a35e-06b06bbf4252", "address": "fa:16:3e:f7:bc:e9", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": 
[{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb363ce6-15", "ovs_interfaceid": "eb363ce6-15fe-4b2a-a35e-06b06bbf4252", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 10 23:56:00 np0005480824 nova_compute[260089]: 2025-10-11 03:56:00.993 2 DEBUG nova.network.os_vif_util [None req-6d66f5cc-7a44-4f92-84e0-b19ce1824275 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Converting VIF {"id": "eb363ce6-15fe-4b2a-a35e-06b06bbf4252", "address": "fa:16:3e:f7:bc:e9", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb363ce6-15", "ovs_interfaceid": "eb363ce6-15fe-4b2a-a35e-06b06bbf4252", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 23:56:00 np0005480824 nova_compute[260089]: 2025-10-11 03:56:00.994 2 DEBUG nova.network.os_vif_util [None req-6d66f5cc-7a44-4f92-84e0-b19ce1824275 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f7:bc:e9,bridge_name='br-int',has_traffic_filtering=True,id=eb363ce6-15fe-4b2a-a35e-06b06bbf4252,network=Network(359720eb-a957-4bcd-b9b2-3cf7dad947e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeb363ce6-15') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 23:56:00 np0005480824 nova_compute[260089]: 2025-10-11 03:56:00.995 2 DEBUG os_vif [None req-6d66f5cc-7a44-4f92-84e0-b19ce1824275 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f7:bc:e9,bridge_name='br-int',has_traffic_filtering=True,id=eb363ce6-15fe-4b2a-a35e-06b06bbf4252,network=Network(359720eb-a957-4bcd-b9b2-3cf7dad947e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeb363ce6-15') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 10 23:56:00 np0005480824 nova_compute[260089]: 2025-10-11 03:56:00.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:56:00 np0005480824 nova_compute[260089]: 2025-10-11 03:56:00.998 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeb363ce6-15, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:56:01 np0005480824 nova_compute[260089]: 2025-10-11 03:56:01.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 10 23:56:01 np0005480824 nova_compute[260089]: 2025-10-11 03:56:01.006 2 INFO os_vif [None req-6d66f5cc-7a44-4f92-84e0-b19ce1824275 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f7:bc:e9,bridge_name='br-int',has_traffic_filtering=True,id=eb363ce6-15fe-4b2a-a35e-06b06bbf4252,network=Network(359720eb-a957-4bcd-b9b2-3cf7dad947e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeb363ce6-15')#033[00m
Oct 10 23:56:01 np0005480824 neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4[289191]: [NOTICE]   (289201) : haproxy version is 2.8.14-c23fe91
Oct 10 23:56:01 np0005480824 neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4[289191]: [NOTICE]   (289201) : path to executable is /usr/sbin/haproxy
Oct 10 23:56:01 np0005480824 neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4[289191]: [WARNING]  (289201) : Exiting Master process...
Oct 10 23:56:01 np0005480824 neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4[289191]: [ALERT]    (289201) : Current worker (289205) exited with code 143 (Terminated)
Oct 10 23:56:01 np0005480824 neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4[289191]: [WARNING]  (289201) : All workers exited. Exiting... (0)
Oct 10 23:56:01 np0005480824 systemd[1]: libpod-529ef65ecb09e3de63fd5d15f3cabe1b5b6395dbe514a8bbd588a7caa074468e.scope: Deactivated successfully.
Oct 10 23:56:01 np0005480824 podman[289757]: 2025-10-11 03:56:01.032234472 +0000 UTC m=+0.065170963 container died 529ef65ecb09e3de63fd5d15f3cabe1b5b6395dbe514a8bbd588a7caa074468e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 10 23:56:01 np0005480824 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-529ef65ecb09e3de63fd5d15f3cabe1b5b6395dbe514a8bbd588a7caa074468e-userdata-shm.mount: Deactivated successfully.
Oct 10 23:56:01 np0005480824 systemd[1]: var-lib-containers-storage-overlay-c4382972424299ebfa7b43754369e444883c481e7f606c3e85ba442760976a8b-merged.mount: Deactivated successfully.
Oct 10 23:56:01 np0005480824 nova_compute[260089]: 2025-10-11 03:56:01.096 2 DEBUG nova.compute.manager [req-9c11c222-91c6-45fb-aaee-d69f9a83f5de req-d60759f1-afb3-4d74-b523-8b52af6b5a9b 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Received event network-changed-7a5e381a-dccf-47ae-a39a-87d08faf2a0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:56:01 np0005480824 nova_compute[260089]: 2025-10-11 03:56:01.097 2 DEBUG nova.compute.manager [req-9c11c222-91c6-45fb-aaee-d69f9a83f5de req-d60759f1-afb3-4d74-b523-8b52af6b5a9b 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Refreshing instance network info cache due to event network-changed-7a5e381a-dccf-47ae-a39a-87d08faf2a0f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 10 23:56:01 np0005480824 nova_compute[260089]: 2025-10-11 03:56:01.097 2 DEBUG oslo_concurrency.lockutils [req-9c11c222-91c6-45fb-aaee-d69f9a83f5de req-d60759f1-afb3-4d74-b523-8b52af6b5a9b 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "refresh_cache-0403d8e6-23d4-4765-a41f-eed96752c52e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:56:01 np0005480824 nova_compute[260089]: 2025-10-11 03:56:01.097 2 DEBUG oslo_concurrency.lockutils [req-9c11c222-91c6-45fb-aaee-d69f9a83f5de req-d60759f1-afb3-4d74-b523-8b52af6b5a9b 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquired lock "refresh_cache-0403d8e6-23d4-4765-a41f-eed96752c52e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:56:01 np0005480824 nova_compute[260089]: 2025-10-11 03:56:01.097 2 DEBUG nova.network.neutron [req-9c11c222-91c6-45fb-aaee-d69f9a83f5de req-d60759f1-afb3-4d74-b523-8b52af6b5a9b 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Refreshing network info cache for port 7a5e381a-dccf-47ae-a39a-87d08faf2a0f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 10 23:56:01 np0005480824 podman[289757]: 2025-10-11 03:56:01.100046685 +0000 UTC m=+0.132983176 container cleanup 529ef65ecb09e3de63fd5d15f3cabe1b5b6395dbe514a8bbd588a7caa074468e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 10 23:56:01 np0005480824 systemd[1]: libpod-conmon-529ef65ecb09e3de63fd5d15f3cabe1b5b6395dbe514a8bbd588a7caa074468e.scope: Deactivated successfully.
Oct 10 23:56:01 np0005480824 podman[289810]: 2025-10-11 03:56:01.186901158 +0000 UTC m=+0.062500888 container remove 529ef65ecb09e3de63fd5d15f3cabe1b5b6395dbe514a8bbd588a7caa074468e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 10 23:56:01 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:56:01.195 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[f35edff1-bc43-4fe3-95b0-d0716b9c56ea]: (4, ('Sat Oct 11 03:56:00 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4 (529ef65ecb09e3de63fd5d15f3cabe1b5b6395dbe514a8bbd588a7caa074468e)\n529ef65ecb09e3de63fd5d15f3cabe1b5b6395dbe514a8bbd588a7caa074468e\nSat Oct 11 03:56:01 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4 (529ef65ecb09e3de63fd5d15f3cabe1b5b6395dbe514a8bbd588a7caa074468e)\n529ef65ecb09e3de63fd5d15f3cabe1b5b6395dbe514a8bbd588a7caa074468e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:56:01 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:56:01.197 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[c7ba47b3-576a-4f9c-a649-b1e4a459b45a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:56:01 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:56:01.199 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap359720eb-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:56:01 np0005480824 nova_compute[260089]: 2025-10-11 03:56:01.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:56:01 np0005480824 kernel: tap359720eb-a0: left promiscuous mode
Oct 10 23:56:01 np0005480824 nova_compute[260089]: 2025-10-11 03:56:01.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:56:01 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:56:01.218 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[4a0ad5ac-61a5-4584-9951-6be68c0f2316]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:56:01 np0005480824 nova_compute[260089]: 2025-10-11 03:56:01.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:56:01 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:56:01.253 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[7cdcee16-6363-4ab6-a64d-447461cc50af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:56:01 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:56:01.255 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[914f78da-ca9f-45d6-8b9d-4fe5e1551235]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:56:01 np0005480824 nova_compute[260089]: 2025-10-11 03:56:01.272 2 INFO nova.virt.libvirt.driver [None req-6d66f5cc-7a44-4f92-84e0-b19ce1824275 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] Deleting instance files /var/lib/nova/instances/ee0ba1fa-8740-4670-9f6d-b658f89f7f21_del#033[00m
Oct 10 23:56:01 np0005480824 nova_compute[260089]: 2025-10-11 03:56:01.273 2 INFO nova.virt.libvirt.driver [None req-6d66f5cc-7a44-4f92-84e0-b19ce1824275 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] Deletion of /var/lib/nova/instances/ee0ba1fa-8740-4670-9f6d-b658f89f7f21_del complete#033[00m
Oct 10 23:56:01 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:56:01.275 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[4e31c7a1-e2b0-4c11-a2e9-5f0cab7d677e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 439188, 'reachable_time': 23755, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289827, 'error': None, 'target': 'ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:56:01 np0005480824 systemd[1]: run-netns-ovnmeta\x2d359720eb\x2da957\x2d4bcd\x2db9b2\x2d3cf7dad947e4.mount: Deactivated successfully.
Oct 10 23:56:01 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:56:01.281 162666 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 10 23:56:01 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:56:01.281 162666 DEBUG oslo.privsep.daemon [-] privsep: reply[ab74f840-9742-4cda-a64c-561add5517ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:56:01 np0005480824 nova_compute[260089]: 2025-10-11 03:56:01.317 2 INFO nova.compute.manager [None req-6d66f5cc-7a44-4f92-84e0-b19ce1824275 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] Took 0.59 seconds to destroy the instance on the hypervisor.#033[00m
Oct 10 23:56:01 np0005480824 nova_compute[260089]: 2025-10-11 03:56:01.318 2 DEBUG oslo.service.loopingcall [None req-6d66f5cc-7a44-4f92-84e0-b19ce1824275 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 10 23:56:01 np0005480824 nova_compute[260089]: 2025-10-11 03:56:01.318 2 DEBUG nova.compute.manager [-] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 10 23:56:01 np0005480824 nova_compute[260089]: 2025-10-11 03:56:01.319 2 DEBUG nova.network.neutron [-] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 10 23:56:01 np0005480824 nova_compute[260089]: 2025-10-11 03:56:01.665 2 DEBUG nova.compute.manager [req-42e30e09-84e5-4451-b7a1-9afada6197df req-8c465f7d-cb06-4234-8d74-560137a0b050 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] Received event network-vif-unplugged-eb363ce6-15fe-4b2a-a35e-06b06bbf4252 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:56:01 np0005480824 nova_compute[260089]: 2025-10-11 03:56:01.666 2 DEBUG oslo_concurrency.lockutils [req-42e30e09-84e5-4451-b7a1-9afada6197df req-8c465f7d-cb06-4234-8d74-560137a0b050 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "ee0ba1fa-8740-4670-9f6d-b658f89f7f21-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:56:01 np0005480824 nova_compute[260089]: 2025-10-11 03:56:01.666 2 DEBUG oslo_concurrency.lockutils [req-42e30e09-84e5-4451-b7a1-9afada6197df req-8c465f7d-cb06-4234-8d74-560137a0b050 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "ee0ba1fa-8740-4670-9f6d-b658f89f7f21-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 23:56:01 np0005480824 nova_compute[260089]: 2025-10-11 03:56:01.666 2 DEBUG oslo_concurrency.lockutils [req-42e30e09-84e5-4451-b7a1-9afada6197df req-8c465f7d-cb06-4234-8d74-560137a0b050 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "ee0ba1fa-8740-4670-9f6d-b658f89f7f21-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 23:56:01 np0005480824 nova_compute[260089]: 2025-10-11 03:56:01.666 2 DEBUG nova.compute.manager [req-42e30e09-84e5-4451-b7a1-9afada6197df req-8c465f7d-cb06-4234-8d74-560137a0b050 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] No waiting events found dispatching network-vif-unplugged-eb363ce6-15fe-4b2a-a35e-06b06bbf4252 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 10 23:56:01 np0005480824 nova_compute[260089]: 2025-10-11 03:56:01.667 2 DEBUG nova.compute.manager [req-42e30e09-84e5-4451-b7a1-9afada6197df req-8c465f7d-cb06-4234-8d74-560137a0b050 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] Received event network-vif-unplugged-eb363ce6-15fe-4b2a-a35e-06b06bbf4252 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 10 23:56:02 np0005480824 nova_compute[260089]: 2025-10-11 03:56:02.180 2 DEBUG nova.network.neutron [-] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 23:56:02 np0005480824 nova_compute[260089]: 2025-10-11 03:56:02.201 2 INFO nova.compute.manager [-] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] Took 0.88 seconds to deallocate network for instance.
Oct 10 23:56:02 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e322 do_prune osdmap full prune enabled
Oct 10 23:56:02 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e323 e323: 3 total, 3 up, 3 in
Oct 10 23:56:02 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e323: 3 total, 3 up, 3 in
Oct 10 23:56:02 np0005480824 nova_compute[260089]: 2025-10-11 03:56:02.531 2 INFO nova.compute.manager [None req-6d66f5cc-7a44-4f92-84e0-b19ce1824275 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] Took 0.33 seconds to detach 1 volumes for instance.
Oct 10 23:56:02 np0005480824 nova_compute[260089]: 2025-10-11 03:56:02.610 2 DEBUG oslo_concurrency.lockutils [None req-6d66f5cc-7a44-4f92-84e0-b19ce1824275 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 23:56:02 np0005480824 nova_compute[260089]: 2025-10-11 03:56:02.611 2 DEBUG oslo_concurrency.lockutils [None req-6d66f5cc-7a44-4f92-84e0-b19ce1824275 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 23:56:02 np0005480824 nova_compute[260089]: 2025-10-11 03:56:02.703 2 DEBUG oslo_concurrency.processutils [None req-6d66f5cc-7a44-4f92-84e0-b19ce1824275 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 23:56:02 np0005480824 nova_compute[260089]: 2025-10-11 03:56:02.731 2 DEBUG nova.network.neutron [req-9c11c222-91c6-45fb-aaee-d69f9a83f5de req-d60759f1-afb3-4d74-b523-8b52af6b5a9b 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Updated VIF entry in instance network info cache for port 7a5e381a-dccf-47ae-a39a-87d08faf2a0f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 10 23:56:02 np0005480824 nova_compute[260089]: 2025-10-11 03:56:02.732 2 DEBUG nova.network.neutron [req-9c11c222-91c6-45fb-aaee-d69f9a83f5de req-d60759f1-afb3-4d74-b523-8b52af6b5a9b 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Updating instance_info_cache with network_info: [{"id": "7a5e381a-dccf-47ae-a39a-87d08faf2a0f", "address": "fa:16:3e:02:fc:94", "network": {"id": "821e091d-e4da-4318-a5fb-3fc44a19fc25", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1425502666-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f36ed779ede42228be9ab8544bbf9aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a5e381a-dc", "ovs_interfaceid": "7a5e381a-dccf-47ae-a39a-87d08faf2a0f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 23:56:02 np0005480824 nova_compute[260089]: 2025-10-11 03:56:02.748 2 DEBUG oslo_concurrency.lockutils [req-9c11c222-91c6-45fb-aaee-d69f9a83f5de req-d60759f1-afb3-4d74-b523-8b52af6b5a9b 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Releasing lock "refresh_cache-0403d8e6-23d4-4765-a41f-eed96752c52e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 10 23:56:02 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1411: 321 pgs: 321 active+clean; 306 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 51 KiB/s wr, 224 op/s
Oct 10 23:56:03 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:56:03 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/186386267' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:56:03 np0005480824 nova_compute[260089]: 2025-10-11 03:56:03.099 2 DEBUG oslo_concurrency.processutils [None req-6d66f5cc-7a44-4f92-84e0-b19ce1824275 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.396s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 23:56:03 np0005480824 nova_compute[260089]: 2025-10-11 03:56:03.106 2 DEBUG nova.compute.provider_tree [None req-6d66f5cc-7a44-4f92-84e0-b19ce1824275 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 23:56:03 np0005480824 nova_compute[260089]: 2025-10-11 03:56:03.134 2 DEBUG nova.scheduler.client.report [None req-6d66f5cc-7a44-4f92-84e0-b19ce1824275 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 23:56:03 np0005480824 nova_compute[260089]: 2025-10-11 03:56:03.161 2 DEBUG oslo_concurrency.lockutils [None req-6d66f5cc-7a44-4f92-84e0-b19ce1824275 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.550s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 23:56:03 np0005480824 nova_compute[260089]: 2025-10-11 03:56:03.185 2 INFO nova.scheduler.client.report [None req-6d66f5cc-7a44-4f92-84e0-b19ce1824275 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Deleted allocations for instance ee0ba1fa-8740-4670-9f6d-b658f89f7f21
Oct 10 23:56:03 np0005480824 nova_compute[260089]: 2025-10-11 03:56:03.207 2 DEBUG nova.compute.manager [req-861796fd-4f07-4fbe-a553-8f3e7ba776dc req-d2c8bc98-18f6-42bf-9d8b-dde40102001f 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] Received event network-vif-deleted-eb363ce6-15fe-4b2a-a35e-06b06bbf4252 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 23:56:03 np0005480824 nova_compute[260089]: 2025-10-11 03:56:03.261 2 DEBUG oslo_concurrency.lockutils [None req-6d66f5cc-7a44-4f92-84e0-b19ce1824275 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "ee0ba1fa-8740-4670-9f6d-b658f89f7f21" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.542s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 23:56:03 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:56:03 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/691319074' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:56:03 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:56:03 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/691319074' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:56:03 np0005480824 nova_compute[260089]: 2025-10-11 03:56:03.782 2 DEBUG nova.compute.manager [req-83bc4054-d1a3-4922-9b7c-cddc05baeb7e req-8c8712ba-b5aa-4eb0-a12c-3a21ce32ed48 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] Received event network-vif-plugged-eb363ce6-15fe-4b2a-a35e-06b06bbf4252 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 23:56:03 np0005480824 nova_compute[260089]: 2025-10-11 03:56:03.783 2 DEBUG oslo_concurrency.lockutils [req-83bc4054-d1a3-4922-9b7c-cddc05baeb7e req-8c8712ba-b5aa-4eb0-a12c-3a21ce32ed48 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "ee0ba1fa-8740-4670-9f6d-b658f89f7f21-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 23:56:03 np0005480824 nova_compute[260089]: 2025-10-11 03:56:03.783 2 DEBUG oslo_concurrency.lockutils [req-83bc4054-d1a3-4922-9b7c-cddc05baeb7e req-8c8712ba-b5aa-4eb0-a12c-3a21ce32ed48 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "ee0ba1fa-8740-4670-9f6d-b658f89f7f21-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 23:56:03 np0005480824 nova_compute[260089]: 2025-10-11 03:56:03.783 2 DEBUG oslo_concurrency.lockutils [req-83bc4054-d1a3-4922-9b7c-cddc05baeb7e req-8c8712ba-b5aa-4eb0-a12c-3a21ce32ed48 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "ee0ba1fa-8740-4670-9f6d-b658f89f7f21-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 23:56:03 np0005480824 nova_compute[260089]: 2025-10-11 03:56:03.784 2 DEBUG nova.compute.manager [req-83bc4054-d1a3-4922-9b7c-cddc05baeb7e req-8c8712ba-b5aa-4eb0-a12c-3a21ce32ed48 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] No waiting events found dispatching network-vif-plugged-eb363ce6-15fe-4b2a-a35e-06b06bbf4252 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 10 23:56:03 np0005480824 nova_compute[260089]: 2025-10-11 03:56:03.784 2 WARNING nova.compute.manager [req-83bc4054-d1a3-4922-9b7c-cddc05baeb7e req-8c8712ba-b5aa-4eb0-a12c-3a21ce32ed48 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] Received unexpected event network-vif-plugged-eb363ce6-15fe-4b2a-a35e-06b06bbf4252 for instance with vm_state deleted and task_state None.
Oct 10 23:56:04 np0005480824 nova_compute[260089]: 2025-10-11 03:56:04.042 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 23:56:04 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1412: 321 pgs: 321 active+clean; 306 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 50 KiB/s wr, 222 op/s
Oct 10 23:56:04 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:56:04 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e323 do_prune osdmap full prune enabled
Oct 10 23:56:04 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e324 e324: 3 total, 3 up, 3 in
Oct 10 23:56:04 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e324: 3 total, 3 up, 3 in
Oct 10 23:56:05 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:56:05 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2962347229' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:56:05 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:56:05 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2962347229' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:56:06 np0005480824 nova_compute[260089]: 2025-10-11 03:56:06.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 23:56:06 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1414: 321 pgs: 321 active+clean; 321 MiB data, 484 MiB used, 60 GiB / 60 GiB avail; 129 KiB/s rd, 1.6 MiB/s wr, 89 op/s
Oct 10 23:56:06 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:56:06 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3015887161' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:56:06 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:56:06 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3015887161' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:56:07 np0005480824 nova_compute[260089]: 2025-10-11 03:56:07.388 2 DEBUG oslo_concurrency.lockutils [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Acquiring lock "3ccdaa3b-882a-432f-b619-002ded45ac60" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 23:56:07 np0005480824 nova_compute[260089]: 2025-10-11 03:56:07.389 2 DEBUG oslo_concurrency.lockutils [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "3ccdaa3b-882a-432f-b619-002ded45ac60" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 23:56:07 np0005480824 nova_compute[260089]: 2025-10-11 03:56:07.406 2 DEBUG nova.compute.manager [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 10 23:56:07 np0005480824 nova_compute[260089]: 2025-10-11 03:56:07.471 2 DEBUG oslo_concurrency.lockutils [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 23:56:07 np0005480824 nova_compute[260089]: 2025-10-11 03:56:07.472 2 DEBUG oslo_concurrency.lockutils [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 23:56:07 np0005480824 nova_compute[260089]: 2025-10-11 03:56:07.479 2 DEBUG nova.virt.hardware [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 10 23:56:07 np0005480824 nova_compute[260089]: 2025-10-11 03:56:07.479 2 INFO nova.compute.claims [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Claim successful on node compute-0.ctlplane.example.com
Oct 10 23:56:07 np0005480824 ovn_controller[152667]: 2025-10-11T03:56:07Z|00036|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:02:fc:94 10.100.0.6
Oct 10 23:56:07 np0005480824 ovn_controller[152667]: 2025-10-11T03:56:07Z|00037|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:02:fc:94 10.100.0.6
Oct 10 23:56:07 np0005480824 nova_compute[260089]: 2025-10-11 03:56:07.588 2 DEBUG oslo_concurrency.processutils [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 23:56:08 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:56:08 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/740908377' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:56:08 np0005480824 nova_compute[260089]: 2025-10-11 03:56:08.043 2 DEBUG oslo_concurrency.processutils [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 23:56:08 np0005480824 nova_compute[260089]: 2025-10-11 03:56:08.055 2 DEBUG nova.compute.provider_tree [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 23:56:08 np0005480824 podman[289872]: 2025-10-11 03:56:08.06400477 +0000 UTC m=+0.106648303 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 10 23:56:08 np0005480824 nova_compute[260089]: 2025-10-11 03:56:08.078 2 DEBUG nova.scheduler.client.report [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 23:56:08 np0005480824 nova_compute[260089]: 2025-10-11 03:56:08.100 2 DEBUG oslo_concurrency.lockutils [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 23:56:08 np0005480824 nova_compute[260089]: 2025-10-11 03:56:08.100 2 DEBUG nova.compute.manager [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 10 23:56:08 np0005480824 nova_compute[260089]: 2025-10-11 03:56:08.139 2 DEBUG nova.compute.manager [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 10 23:56:08 np0005480824 nova_compute[260089]: 2025-10-11 03:56:08.140 2 DEBUG nova.network.neutron [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 10 23:56:08 np0005480824 nova_compute[260089]: 2025-10-11 03:56:08.157 2 INFO nova.virt.libvirt.driver [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 10 23:56:08 np0005480824 nova_compute[260089]: 2025-10-11 03:56:08.176 2 DEBUG nova.compute.manager [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 10 23:56:08 np0005480824 nova_compute[260089]: 2025-10-11 03:56:08.220 2 INFO nova.virt.block_device [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Booting with volume df022fd8-30bb-4c20-bf5c-0866de956c6d at /dev/vda
Oct 10 23:56:08 np0005480824 nova_compute[260089]: 2025-10-11 03:56:08.315 2 DEBUG nova.policy [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '38ebc503771e417aaf1f3aea0c835994', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '55d21391a321476eb133317b3402b0f0', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 10 23:56:08 np0005480824 nova_compute[260089]: 2025-10-11 03:56:08.357 2 DEBUG os_brick.utils [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 10 23:56:08 np0005480824 nova_compute[260089]: 2025-10-11 03:56:08.358 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 23:56:08 np0005480824 nova_compute[260089]: 2025-10-11 03:56:08.368 676 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 23:56:08 np0005480824 nova_compute[260089]: 2025-10-11 03:56:08.368 676 DEBUG oslo.privsep.daemon [-] privsep: reply[912ce358-0b9e-40a9-b3ae-ba75c471c4f0]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 23:56:08 np0005480824 nova_compute[260089]: 2025-10-11 03:56:08.369 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 23:56:08 np0005480824 nova_compute[260089]: 2025-10-11 03:56:08.376 676 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 23:56:08 np0005480824 nova_compute[260089]: 2025-10-11 03:56:08.377 676 DEBUG oslo.privsep.daemon [-] privsep: reply[ac5029fb-891e-4333-b482-df564c0c279a]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d5d671ddab5a', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 23:56:08 np0005480824 nova_compute[260089]: 2025-10-11 03:56:08.378 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 23:56:08 np0005480824 nova_compute[260089]: 2025-10-11 03:56:08.386 676 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 23:56:08 np0005480824 nova_compute[260089]: 2025-10-11 03:56:08.386 676 DEBUG oslo.privsep.daemon [-] privsep: reply[f4361c3a-2017-4ab1-b124-c07ac9ab2f17]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 23:56:08 np0005480824 nova_compute[260089]: 2025-10-11 03:56:08.388 676 DEBUG oslo.privsep.daemon [-] privsep: reply[acd118ae-5de2-459a-bb91-56d09d924c9e]: (4, 'fb3a2fb1-9efa-43f0-a057-bf422ac6b8d7') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 23:56:08 np0005480824 nova_compute[260089]: 2025-10-11 03:56:08.389 2 DEBUG oslo_concurrency.processutils [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 23:56:08 np0005480824 nova_compute[260089]: 2025-10-11 03:56:08.416 2 DEBUG oslo_concurrency.processutils [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] CMD "nvme version" returned: 0 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 23:56:08 np0005480824 nova_compute[260089]: 2025-10-11 03:56:08.421 2 DEBUG os_brick.initiator.connectors.lightos [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 10 23:56:08 np0005480824 nova_compute[260089]: 2025-10-11 03:56:08.422 2 DEBUG os_brick.initiator.connectors.lightos [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 10 23:56:08 np0005480824 nova_compute[260089]: 2025-10-11 03:56:08.423 2 DEBUG os_brick.initiator.connectors.lightos [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 10 23:56:08 np0005480824 nova_compute[260089]: 2025-10-11 03:56:08.423 2 DEBUG os_brick.utils [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] <== get_connector_properties: return (65ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d5d671ddab5a', 'do_local_attach': False, 'nvme_hostid': '83042a20-0f72-4c47-8453-e72ead378624', 'system uuid': 'fb3a2fb1-9efa-43f0-a057-bf422ac6b8d7', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 10 23:56:08 np0005480824 nova_compute[260089]: 2025-10-11 03:56:08.424 2 DEBUG nova.virt.block_device [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Updating existing volume attachment record: 53d7a19e-3bd1-486a-aac7-5367e03580e0 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 10 23:56:08 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1415: 321 pgs: 321 active+clean; 336 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 231 KiB/s rd, 3.1 MiB/s wr, 177 op/s
Oct 10 23:56:08 np0005480824 nova_compute[260089]: 2025-10-11 03:56:08.965 2 DEBUG nova.network.neutron [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Successfully created port: 3d1404de-38bf-4d1c-960e-bcc14817fcc6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 10 23:56:09 np0005480824 nova_compute[260089]: 2025-10-11 03:56:09.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:56:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:56:09 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1214031922' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:56:09 np0005480824 nova_compute[260089]: 2025-10-11 03:56:09.542 2 DEBUG nova.compute.manager [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 10 23:56:09 np0005480824 nova_compute[260089]: 2025-10-11 03:56:09.544 2 DEBUG nova.virt.libvirt.driver [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 10 23:56:09 np0005480824 nova_compute[260089]: 2025-10-11 03:56:09.544 2 INFO nova.virt.libvirt.driver [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Creating image(s)#033[00m
Oct 10 23:56:09 np0005480824 nova_compute[260089]: 2025-10-11 03:56:09.545 2 DEBUG nova.virt.libvirt.driver [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Oct 10 23:56:09 np0005480824 nova_compute[260089]: 2025-10-11 03:56:09.545 2 DEBUG nova.virt.libvirt.driver [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Ensure instance console log exists: /var/lib/nova/instances/3ccdaa3b-882a-432f-b619-002ded45ac60/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 10 23:56:09 np0005480824 nova_compute[260089]: 2025-10-11 03:56:09.545 2 DEBUG oslo_concurrency.lockutils [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:56:09 np0005480824 nova_compute[260089]: 2025-10-11 03:56:09.546 2 DEBUG oslo_concurrency.lockutils [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:56:09 np0005480824 nova_compute[260089]: 2025-10-11 03:56:09.546 2 DEBUG oslo_concurrency.lockutils [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:56:09 np0005480824 nova_compute[260089]: 2025-10-11 03:56:09.675 2 DEBUG nova.network.neutron [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Successfully updated port: 3d1404de-38bf-4d1c-960e-bcc14817fcc6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 10 23:56:09 np0005480824 nova_compute[260089]: 2025-10-11 03:56:09.694 2 DEBUG oslo_concurrency.lockutils [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Acquiring lock "refresh_cache-3ccdaa3b-882a-432f-b619-002ded45ac60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:56:09 np0005480824 nova_compute[260089]: 2025-10-11 03:56:09.694 2 DEBUG oslo_concurrency.lockutils [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Acquired lock "refresh_cache-3ccdaa3b-882a-432f-b619-002ded45ac60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:56:09 np0005480824 nova_compute[260089]: 2025-10-11 03:56:09.695 2 DEBUG nova.network.neutron [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 10 23:56:09 np0005480824 nova_compute[260089]: 2025-10-11 03:56:09.792 2 DEBUG nova.compute.manager [req-da4c33cd-7bad-488c-8b59-116a655ad560 req-5aac7be4-752e-47e7-946a-b1acf5b90e36 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Received event network-changed-3d1404de-38bf-4d1c-960e-bcc14817fcc6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:56:09 np0005480824 nova_compute[260089]: 2025-10-11 03:56:09.792 2 DEBUG nova.compute.manager [req-da4c33cd-7bad-488c-8b59-116a655ad560 req-5aac7be4-752e-47e7-946a-b1acf5b90e36 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Refreshing instance network info cache due to event network-changed-3d1404de-38bf-4d1c-960e-bcc14817fcc6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 10 23:56:09 np0005480824 nova_compute[260089]: 2025-10-11 03:56:09.793 2 DEBUG oslo_concurrency.lockutils [req-da4c33cd-7bad-488c-8b59-116a655ad560 req-5aac7be4-752e-47e7-946a-b1acf5b90e36 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "refresh_cache-3ccdaa3b-882a-432f-b619-002ded45ac60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:56:09 np0005480824 nova_compute[260089]: 2025-10-11 03:56:09.858 2 DEBUG nova.network.neutron [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 10 23:56:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:56:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:56:10.500 162245 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:56:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:56:10.502 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:56:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:56:10.503 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:56:10 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1416: 321 pgs: 321 active+clean; 336 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 193 KiB/s rd, 2.9 MiB/s wr, 129 op/s
Oct 10 23:56:11 np0005480824 nova_compute[260089]: 2025-10-11 03:56:11.011 2 DEBUG nova.network.neutron [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Updating instance_info_cache with network_info: [{"id": "3d1404de-38bf-4d1c-960e-bcc14817fcc6", "address": "fa:16:3e:0f:58:4c", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d1404de-38", "ovs_interfaceid": "3d1404de-38bf-4d1c-960e-bcc14817fcc6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:56:11 np0005480824 nova_compute[260089]: 2025-10-11 03:56:11.038 2 DEBUG oslo_concurrency.lockutils [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Releasing lock "refresh_cache-3ccdaa3b-882a-432f-b619-002ded45ac60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:56:11 np0005480824 nova_compute[260089]: 2025-10-11 03:56:11.038 2 DEBUG nova.compute.manager [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Instance network_info: |[{"id": "3d1404de-38bf-4d1c-960e-bcc14817fcc6", "address": "fa:16:3e:0f:58:4c", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d1404de-38", "ovs_interfaceid": "3d1404de-38bf-4d1c-960e-bcc14817fcc6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 10 23:56:11 np0005480824 nova_compute[260089]: 2025-10-11 03:56:11.038 2 DEBUG oslo_concurrency.lockutils [req-da4c33cd-7bad-488c-8b59-116a655ad560 req-5aac7be4-752e-47e7-946a-b1acf5b90e36 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquired lock "refresh_cache-3ccdaa3b-882a-432f-b619-002ded45ac60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:56:11 np0005480824 nova_compute[260089]: 2025-10-11 03:56:11.039 2 DEBUG nova.network.neutron [req-da4c33cd-7bad-488c-8b59-116a655ad560 req-5aac7be4-752e-47e7-946a-b1acf5b90e36 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Refreshing network info cache for port 3d1404de-38bf-4d1c-960e-bcc14817fcc6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 10 23:56:11 np0005480824 nova_compute[260089]: 2025-10-11 03:56:11.042 2 DEBUG nova.virt.libvirt.driver [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Start _get_guest_xml network_info=[{"id": "3d1404de-38bf-4d1c-960e-bcc14817fcc6", "address": "fa:16:3e:0f:58:4c", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d1404de-38", "ovs_interfaceid": "3d1404de-38bf-4d1c-960e-bcc14817fcc6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'attachment_id': '53d7a19e-3bd1-486a-aac7-5367e03580e0', 'mount_device': '/dev/vda', 'delete_on_termination': False, 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-df022fd8-30bb-4c20-bf5c-0866de956c6d', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'df022fd8-30bb-4c20-bf5c-0866de956c6d', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '3ccdaa3b-882a-432f-b619-002ded45ac60', 'attached_at': '', 'detached_at': '', 'volume_id': 'df022fd8-30bb-4c20-bf5c-0866de956c6d', 'serial': 'df022fd8-30bb-4c20-bf5c-0866de956c6d'}, 'device_type': 'disk', 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 10 23:56:11 np0005480824 nova_compute[260089]: 2025-10-11 03:56:11.048 2 WARNING nova.virt.libvirt.driver [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 10 23:56:11 np0005480824 nova_compute[260089]: 2025-10-11 03:56:11.052 2 DEBUG nova.virt.libvirt.host [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 10 23:56:11 np0005480824 nova_compute[260089]: 2025-10-11 03:56:11.053 2 DEBUG nova.virt.libvirt.host [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 10 23:56:11 np0005480824 nova_compute[260089]: 2025-10-11 03:56:11.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:56:11 np0005480824 nova_compute[260089]: 2025-10-11 03:56:11.064 2 DEBUG nova.virt.libvirt.host [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 10 23:56:11 np0005480824 nova_compute[260089]: 2025-10-11 03:56:11.065 2 DEBUG nova.virt.libvirt.host [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 10 23:56:11 np0005480824 nova_compute[260089]: 2025-10-11 03:56:11.065 2 DEBUG nova.virt.libvirt.driver [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 10 23:56:11 np0005480824 nova_compute[260089]: 2025-10-11 03:56:11.066 2 DEBUG nova.virt.hardware [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T03:44:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6707ecae-2ae2-4c2d-86dc-409bac38f6a5',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 10 23:56:11 np0005480824 nova_compute[260089]: 2025-10-11 03:56:11.067 2 DEBUG nova.virt.hardware [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 10 23:56:11 np0005480824 nova_compute[260089]: 2025-10-11 03:56:11.067 2 DEBUG nova.virt.hardware [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 10 23:56:11 np0005480824 nova_compute[260089]: 2025-10-11 03:56:11.068 2 DEBUG nova.virt.hardware [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 10 23:56:11 np0005480824 nova_compute[260089]: 2025-10-11 03:56:11.068 2 DEBUG nova.virt.hardware [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 10 23:56:11 np0005480824 nova_compute[260089]: 2025-10-11 03:56:11.068 2 DEBUG nova.virt.hardware [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 10 23:56:11 np0005480824 nova_compute[260089]: 2025-10-11 03:56:11.069 2 DEBUG nova.virt.hardware [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 10 23:56:11 np0005480824 nova_compute[260089]: 2025-10-11 03:56:11.069 2 DEBUG nova.virt.hardware [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 10 23:56:11 np0005480824 nova_compute[260089]: 2025-10-11 03:56:11.070 2 DEBUG nova.virt.hardware [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 10 23:56:11 np0005480824 nova_compute[260089]: 2025-10-11 03:56:11.070 2 DEBUG nova.virt.hardware [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 10 23:56:11 np0005480824 nova_compute[260089]: 2025-10-11 03:56:11.071 2 DEBUG nova.virt.hardware [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 10 23:56:11 np0005480824 nova_compute[260089]: 2025-10-11 03:56:11.107 2 DEBUG nova.storage.rbd_utils [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] rbd image 3ccdaa3b-882a-432f-b619-002ded45ac60_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:56:11 np0005480824 nova_compute[260089]: 2025-10-11 03:56:11.113 2 DEBUG oslo_concurrency.processutils [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:56:11 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:56:11 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/527994564' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:56:11 np0005480824 nova_compute[260089]: 2025-10-11 03:56:11.540 2 DEBUG oslo_concurrency.processutils [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:56:11 np0005480824 nova_compute[260089]: 2025-10-11 03:56:11.571 2 DEBUG nova.virt.libvirt.vif [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T03:56:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1815523405',display_name='tempest-TestVolumeBootPattern-server-1815523405',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1815523405',id=20,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEHKqCtFesGlIN9DGdSuPEGCilj3bKmCIyQ2Hx4tQRLuRoOqWjhIRAgPC71aK1tfMSZbOh/7KRfo7uhOOwgBdYVdW77mjMG+sfmvlDoQnrLmEWQMeSschoC2XBAsdgkOOQ==',key_name='tempest-TestVolumeBootPattern-1691748970',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='55d21391a321476eb133317b3402b0f0',ramdisk_id='',reservation_id='r-8d8xh3fp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-739984652',owner_user_name='tempest-TestVolumeBootPattern-739984652-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T03:56:08Z,user_data=None,user_id='38ebc503771e417aaf1f3aea0c835994',uuid=3ccdaa3b-882a-432f-b619-002ded45ac60,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3d1404de-38bf-4d1c-960e-bcc14817fcc6", "address": "fa:16:3e:0f:58:4c", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d1404de-38", "ovs_interfaceid": "3d1404de-38bf-4d1c-960e-bcc14817fcc6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 10 23:56:11 np0005480824 nova_compute[260089]: 2025-10-11 03:56:11.572 2 DEBUG nova.network.os_vif_util [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Converting VIF {"id": "3d1404de-38bf-4d1c-960e-bcc14817fcc6", "address": "fa:16:3e:0f:58:4c", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d1404de-38", "ovs_interfaceid": "3d1404de-38bf-4d1c-960e-bcc14817fcc6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 23:56:11 np0005480824 nova_compute[260089]: 2025-10-11 03:56:11.572 2 DEBUG nova.network.os_vif_util [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0f:58:4c,bridge_name='br-int',has_traffic_filtering=True,id=3d1404de-38bf-4d1c-960e-bcc14817fcc6,network=Network(359720eb-a957-4bcd-b9b2-3cf7dad947e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d1404de-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 23:56:11 np0005480824 nova_compute[260089]: 2025-10-11 03:56:11.574 2 DEBUG nova.objects.instance [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3ccdaa3b-882a-432f-b619-002ded45ac60 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:56:11 np0005480824 nova_compute[260089]: 2025-10-11 03:56:11.587 2 DEBUG nova.virt.libvirt.driver [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] End _get_guest_xml xml=<domain type="kvm">
Oct 10 23:56:11 np0005480824 nova_compute[260089]:  <uuid>3ccdaa3b-882a-432f-b619-002ded45ac60</uuid>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:  <name>instance-00000014</name>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:  <memory>131072</memory>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:  <vcpu>1</vcpu>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:  <metadata>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 10 23:56:11 np0005480824 nova_compute[260089]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:      <nova:name>tempest-TestVolumeBootPattern-server-1815523405</nova:name>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:      <nova:creationTime>2025-10-11 03:56:11</nova:creationTime>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:      <nova:flavor name="m1.nano">
Oct 10 23:56:11 np0005480824 nova_compute[260089]:        <nova:memory>128</nova:memory>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:        <nova:disk>1</nova:disk>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:        <nova:swap>0</nova:swap>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:        <nova:ephemeral>0</nova:ephemeral>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:        <nova:vcpus>1</nova:vcpus>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:      </nova:flavor>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:      <nova:owner>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:        <nova:user uuid="38ebc503771e417aaf1f3aea0c835994">tempest-TestVolumeBootPattern-739984652-project-member</nova:user>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:        <nova:project uuid="55d21391a321476eb133317b3402b0f0">tempest-TestVolumeBootPattern-739984652</nova:project>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:      </nova:owner>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:      <nova:ports>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:        <nova:port uuid="3d1404de-38bf-4d1c-960e-bcc14817fcc6">
Oct 10 23:56:11 np0005480824 nova_compute[260089]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:        </nova:port>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:      </nova:ports>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:    </nova:instance>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:  </metadata>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:  <sysinfo type="smbios">
Oct 10 23:56:11 np0005480824 nova_compute[260089]:    <system>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:      <entry name="manufacturer">RDO</entry>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:      <entry name="product">OpenStack Compute</entry>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:      <entry name="serial">3ccdaa3b-882a-432f-b619-002ded45ac60</entry>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:      <entry name="uuid">3ccdaa3b-882a-432f-b619-002ded45ac60</entry>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:      <entry name="family">Virtual Machine</entry>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:    </system>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:  </sysinfo>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:  <os>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:    <boot dev="hd"/>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:    <smbios mode="sysinfo"/>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:  </os>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:  <features>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:    <acpi/>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:    <apic/>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:    <vmcoreinfo/>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:  </features>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:  <clock offset="utc">
Oct 10 23:56:11 np0005480824 nova_compute[260089]:    <timer name="pit" tickpolicy="delay"/>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:    <timer name="hpet" present="no"/>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:  </clock>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:  <cpu mode="host-model" match="exact">
Oct 10 23:56:11 np0005480824 nova_compute[260089]:    <topology sockets="1" cores="1" threads="1"/>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:  </cpu>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:  <devices>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:    <disk type="network" device="cdrom">
Oct 10 23:56:11 np0005480824 nova_compute[260089]:      <driver type="raw" cache="none"/>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:      <source protocol="rbd" name="vms/3ccdaa3b-882a-432f-b619-002ded45ac60_disk.config">
Oct 10 23:56:11 np0005480824 nova_compute[260089]:        <host name="192.168.122.100" port="6789"/>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:      </source>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:      <auth username="openstack">
Oct 10 23:56:11 np0005480824 nova_compute[260089]:        <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:      </auth>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:      <target dev="sda" bus="sata"/>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:    </disk>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:    <disk type="network" device="disk">
Oct 10 23:56:11 np0005480824 nova_compute[260089]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:      <source protocol="rbd" name="volumes/volume-df022fd8-30bb-4c20-bf5c-0866de956c6d">
Oct 10 23:56:11 np0005480824 nova_compute[260089]:        <host name="192.168.122.100" port="6789"/>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:      </source>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:      <auth username="openstack">
Oct 10 23:56:11 np0005480824 nova_compute[260089]:        <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:      </auth>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:      <target dev="vda" bus="virtio"/>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:      <serial>df022fd8-30bb-4c20-bf5c-0866de956c6d</serial>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:    </disk>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:    <interface type="ethernet">
Oct 10 23:56:11 np0005480824 nova_compute[260089]:      <mac address="fa:16:3e:0f:58:4c"/>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:      <model type="virtio"/>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:      <driver name="vhost" rx_queue_size="512"/>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:      <mtu size="1442"/>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:      <target dev="tap3d1404de-38"/>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:    </interface>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:    <serial type="pty">
Oct 10 23:56:11 np0005480824 nova_compute[260089]:      <log file="/var/lib/nova/instances/3ccdaa3b-882a-432f-b619-002ded45ac60/console.log" append="off"/>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:    </serial>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:    <video>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:      <model type="virtio"/>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:    </video>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:    <input type="tablet" bus="usb"/>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:    <rng model="virtio">
Oct 10 23:56:11 np0005480824 nova_compute[260089]:      <backend model="random">/dev/urandom</backend>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:    </rng>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root"/>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:    <controller type="usb" index="0"/>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:    <memballoon model="virtio">
Oct 10 23:56:11 np0005480824 nova_compute[260089]:      <stats period="10"/>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:    </memballoon>
Oct 10 23:56:11 np0005480824 nova_compute[260089]:  </devices>
Oct 10 23:56:11 np0005480824 nova_compute[260089]: </domain>
Oct 10 23:56:11 np0005480824 nova_compute[260089]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 10 23:56:11 np0005480824 nova_compute[260089]: 2025-10-11 03:56:11.589 2 DEBUG nova.compute.manager [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Preparing to wait for external event network-vif-plugged-3d1404de-38bf-4d1c-960e-bcc14817fcc6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 10 23:56:11 np0005480824 nova_compute[260089]: 2025-10-11 03:56:11.589 2 DEBUG oslo_concurrency.lockutils [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Acquiring lock "3ccdaa3b-882a-432f-b619-002ded45ac60-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:56:11 np0005480824 nova_compute[260089]: 2025-10-11 03:56:11.590 2 DEBUG oslo_concurrency.lockutils [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "3ccdaa3b-882a-432f-b619-002ded45ac60-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:56:11 np0005480824 nova_compute[260089]: 2025-10-11 03:56:11.590 2 DEBUG oslo_concurrency.lockutils [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "3ccdaa3b-882a-432f-b619-002ded45ac60-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:56:11 np0005480824 nova_compute[260089]: 2025-10-11 03:56:11.592 2 DEBUG nova.virt.libvirt.vif [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T03:56:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1815523405',display_name='tempest-TestVolumeBootPattern-server-1815523405',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1815523405',id=20,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEHKqCtFesGlIN9DGdSuPEGCilj3bKmCIyQ2Hx4tQRLuRoOqWjhIRAgPC71aK1tfMSZbOh/7KRfo7uhOOwgBdYVdW77mjMG+sfmvlDoQnrLmEWQMeSschoC2XBAsdgkOOQ==',key_name='tempest-TestVolumeBootPattern-1691748970',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='55d21391a321476eb133317b3402b0f0',ramdisk_id='',reservation_id='r-8d8xh3fp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-739984652',owner_user_name='tempest-TestVolumeBootPattern-739984652-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T03:56:08Z,user_data=None,user_id='38ebc503771e417aaf1f3aea0c835994',uuid=3ccdaa3b-882a-432f-b619-002ded45ac60,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3d1404de-38bf-4d1c-960e-bcc14817fcc6", "address": "fa:16:3e:0f:58:4c", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d1404de-38", "ovs_interfaceid": "3d1404de-38bf-4d1c-960e-bcc14817fcc6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 10 23:56:11 np0005480824 nova_compute[260089]: 2025-10-11 03:56:11.592 2 DEBUG nova.network.os_vif_util [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Converting VIF {"id": "3d1404de-38bf-4d1c-960e-bcc14817fcc6", "address": "fa:16:3e:0f:58:4c", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d1404de-38", "ovs_interfaceid": "3d1404de-38bf-4d1c-960e-bcc14817fcc6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 23:56:11 np0005480824 nova_compute[260089]: 2025-10-11 03:56:11.593 2 DEBUG nova.network.os_vif_util [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0f:58:4c,bridge_name='br-int',has_traffic_filtering=True,id=3d1404de-38bf-4d1c-960e-bcc14817fcc6,network=Network(359720eb-a957-4bcd-b9b2-3cf7dad947e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d1404de-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 23:56:11 np0005480824 nova_compute[260089]: 2025-10-11 03:56:11.594 2 DEBUG os_vif [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0f:58:4c,bridge_name='br-int',has_traffic_filtering=True,id=3d1404de-38bf-4d1c-960e-bcc14817fcc6,network=Network(359720eb-a957-4bcd-b9b2-3cf7dad947e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d1404de-38') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 10 23:56:11 np0005480824 nova_compute[260089]: 2025-10-11 03:56:11.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:56:11 np0005480824 nova_compute[260089]: 2025-10-11 03:56:11.596 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:56:11 np0005480824 nova_compute[260089]: 2025-10-11 03:56:11.596 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 10 23:56:11 np0005480824 nova_compute[260089]: 2025-10-11 03:56:11.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:56:11 np0005480824 nova_compute[260089]: 2025-10-11 03:56:11.600 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3d1404de-38, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:56:11 np0005480824 nova_compute[260089]: 2025-10-11 03:56:11.601 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3d1404de-38, col_values=(('external_ids', {'iface-id': '3d1404de-38bf-4d1c-960e-bcc14817fcc6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0f:58:4c', 'vm-uuid': '3ccdaa3b-882a-432f-b619-002ded45ac60'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:56:11 np0005480824 NetworkManager[44969]: <info>  [1760154971.6047] manager: (tap3d1404de-38): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/103)
Oct 10 23:56:11 np0005480824 nova_compute[260089]: 2025-10-11 03:56:11.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:56:11 np0005480824 nova_compute[260089]: 2025-10-11 03:56:11.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 10 23:56:11 np0005480824 nova_compute[260089]: 2025-10-11 03:56:11.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:56:11 np0005480824 nova_compute[260089]: 2025-10-11 03:56:11.609 2 INFO os_vif [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0f:58:4c,bridge_name='br-int',has_traffic_filtering=True,id=3d1404de-38bf-4d1c-960e-bcc14817fcc6,network=Network(359720eb-a957-4bcd-b9b2-3cf7dad947e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d1404de-38')#033[00m
Oct 10 23:56:11 np0005480824 nova_compute[260089]: 2025-10-11 03:56:11.657 2 DEBUG nova.virt.libvirt.driver [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 23:56:11 np0005480824 nova_compute[260089]: 2025-10-11 03:56:11.658 2 DEBUG nova.virt.libvirt.driver [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 23:56:11 np0005480824 nova_compute[260089]: 2025-10-11 03:56:11.658 2 DEBUG nova.virt.libvirt.driver [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] No VIF found with MAC fa:16:3e:0f:58:4c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 10 23:56:11 np0005480824 nova_compute[260089]: 2025-10-11 03:56:11.658 2 INFO nova.virt.libvirt.driver [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Using config drive#033[00m
Oct 10 23:56:11 np0005480824 nova_compute[260089]: 2025-10-11 03:56:11.680 2 DEBUG nova.storage.rbd_utils [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] rbd image 3ccdaa3b-882a-432f-b619-002ded45ac60_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:56:12 np0005480824 nova_compute[260089]: 2025-10-11 03:56:12.006 2 INFO nova.virt.libvirt.driver [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Creating config drive at /var/lib/nova/instances/3ccdaa3b-882a-432f-b619-002ded45ac60/disk.config#033[00m
Oct 10 23:56:12 np0005480824 nova_compute[260089]: 2025-10-11 03:56:12.016 2 DEBUG oslo_concurrency.processutils [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3ccdaa3b-882a-432f-b619-002ded45ac60/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf2ha4429 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:56:12 np0005480824 nova_compute[260089]: 2025-10-11 03:56:12.147 2 DEBUG oslo_concurrency.processutils [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3ccdaa3b-882a-432f-b619-002ded45ac60/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf2ha4429" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:56:12 np0005480824 nova_compute[260089]: 2025-10-11 03:56:12.184 2 DEBUG nova.storage.rbd_utils [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] rbd image 3ccdaa3b-882a-432f-b619-002ded45ac60_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:56:12 np0005480824 nova_compute[260089]: 2025-10-11 03:56:12.189 2 DEBUG oslo_concurrency.processutils [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3ccdaa3b-882a-432f-b619-002ded45ac60/disk.config 3ccdaa3b-882a-432f-b619-002ded45ac60_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:56:12 np0005480824 nova_compute[260089]: 2025-10-11 03:56:12.364 2 DEBUG oslo_concurrency.processutils [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3ccdaa3b-882a-432f-b619-002ded45ac60/disk.config 3ccdaa3b-882a-432f-b619-002ded45ac60_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:56:12 np0005480824 nova_compute[260089]: 2025-10-11 03:56:12.366 2 INFO nova.virt.libvirt.driver [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Deleting local config drive /var/lib/nova/instances/3ccdaa3b-882a-432f-b619-002ded45ac60/disk.config because it was imported into RBD.#033[00m
Oct 10 23:56:12 np0005480824 NetworkManager[44969]: <info>  [1760154972.4292] manager: (tap3d1404de-38): new Tun device (/org/freedesktop/NetworkManager/Devices/104)
Oct 10 23:56:12 np0005480824 kernel: tap3d1404de-38: entered promiscuous mode
Oct 10 23:56:12 np0005480824 ovn_controller[152667]: 2025-10-11T03:56:12Z|00179|binding|INFO|Claiming lport 3d1404de-38bf-4d1c-960e-bcc14817fcc6 for this chassis.
Oct 10 23:56:12 np0005480824 nova_compute[260089]: 2025-10-11 03:56:12.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:56:12 np0005480824 ovn_controller[152667]: 2025-10-11T03:56:12Z|00180|binding|INFO|3d1404de-38bf-4d1c-960e-bcc14817fcc6: Claiming fa:16:3e:0f:58:4c 10.100.0.12
Oct 10 23:56:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:56:12.444 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0f:58:4c 10.100.0.12'], port_security=['fa:16:3e:0f:58:4c 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '3ccdaa3b-882a-432f-b619-002ded45ac60', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-359720eb-a957-4bcd-b9b2-3cf7dad947e4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '55d21391a321476eb133317b3402b0f0', 'neutron:revision_number': '2', 'neutron:security_group_ids': '48328b99-2dfb-4da6-bd97-8cd4f810b350', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d98e64fb-092d-4777-b741-426f3e849bc3, chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], logical_port=3d1404de-38bf-4d1c-960e-bcc14817fcc6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 23:56:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:56:12.445 162245 INFO neutron.agent.ovn.metadata.agent [-] Port 3d1404de-38bf-4d1c-960e-bcc14817fcc6 in datapath 359720eb-a957-4bcd-b9b2-3cf7dad947e4 bound to our chassis#033[00m
Oct 10 23:56:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:56:12.447 162245 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 359720eb-a957-4bcd-b9b2-3cf7dad947e4#033[00m
Oct 10 23:56:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:56:12.459 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[389f40e8-e60f-4e2c-b28f-b562cd1a5970]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:56:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:56:12.460 162245 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap359720eb-a1 in ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 10 23:56:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:56:12.462 267859 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap359720eb-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 10 23:56:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:56:12.463 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[1a85a447-a3d9-4723-b4a5-ea53cce4dd92]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:56:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:56:12.464 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[553d0ada-6369-45d4-9e9a-7b147f083a96]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:56:12 np0005480824 ovn_controller[152667]: 2025-10-11T03:56:12Z|00181|binding|INFO|Setting lport 3d1404de-38bf-4d1c-960e-bcc14817fcc6 ovn-installed in OVS
Oct 10 23:56:12 np0005480824 ovn_controller[152667]: 2025-10-11T03:56:12Z|00182|binding|INFO|Setting lport 3d1404de-38bf-4d1c-960e-bcc14817fcc6 up in Southbound
Oct 10 23:56:12 np0005480824 nova_compute[260089]: 2025-10-11 03:56:12.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:56:12 np0005480824 nova_compute[260089]: 2025-10-11 03:56:12.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:56:12 np0005480824 systemd-machined[215071]: New machine qemu-20-instance-00000014.
Oct 10 23:56:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:56:12.477 162666 DEBUG oslo.privsep.daemon [-] privsep: reply[a1a2357f-adb6-47f1-bfb2-564b96b5019c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:56:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:56:12.491 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[39de48d7-d94b-4e97-88bd-7c5ceb0e1f0d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:56:12 np0005480824 systemd[1]: Started Virtual Machine qemu-20-instance-00000014.
Oct 10 23:56:12 np0005480824 systemd-udevd[290023]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 23:56:12 np0005480824 NetworkManager[44969]: <info>  [1760154972.5175] device (tap3d1404de-38): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 10 23:56:12 np0005480824 NetworkManager[44969]: <info>  [1760154972.5185] device (tap3d1404de-38): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 10 23:56:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:56:12.525 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[435c21dc-a17f-4291-947c-f61fad12ff37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:56:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:56:12.530 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[ab10c5fb-74ea-4e41-aa6f-8d0c37f51bc1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:56:12 np0005480824 NetworkManager[44969]: <info>  [1760154972.5314] manager: (tap359720eb-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/105)
Oct 10 23:56:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:56:12.562 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[39e1f3f2-60f0-4ceb-a881-398b10bd7cc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:56:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:56:12.566 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[e30e9538-3805-4237-b761-dede4b4b8c75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:56:12 np0005480824 NetworkManager[44969]: <info>  [1760154972.5966] device (tap359720eb-a0): carrier: link connected
Oct 10 23:56:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:56:12.603 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[6c51a69f-1166-4c86-9979-d8eeb6f238a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:56:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:56:12.622 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[312d5005-2163-4438-8901-39c47cc15279]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap359720eb-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:22:90:b3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 66], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 444025, 'reachable_time': 32126, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290052, 'error': None, 'target': 'ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:56:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:56:12.644 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[9ea8c6f4-1949-4d36-ac7e-3b9aab9071ae]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe22:90b3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 444025, 'tstamp': 444025}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290053, 'error': None, 'target': 'ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:56:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:56:12.664 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[6db50612-5682-4b5a-8080-22f69e48ee6d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap359720eb-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:22:90:b3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 66], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 444025, 'reachable_time': 32126, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 290054, 'error': None, 'target': 'ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:56:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:56:12.705 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[e395891d-1b95-4e7a-82fb-333d3162c7e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:56:12 np0005480824 nova_compute[260089]: 2025-10-11 03:56:12.745 2 DEBUG nova.network.neutron [req-da4c33cd-7bad-488c-8b59-116a655ad560 req-5aac7be4-752e-47e7-946a-b1acf5b90e36 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Updated VIF entry in instance network info cache for port 3d1404de-38bf-4d1c-960e-bcc14817fcc6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 10 23:56:12 np0005480824 nova_compute[260089]: 2025-10-11 03:56:12.746 2 DEBUG nova.network.neutron [req-da4c33cd-7bad-488c-8b59-116a655ad560 req-5aac7be4-752e-47e7-946a-b1acf5b90e36 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Updating instance_info_cache with network_info: [{"id": "3d1404de-38bf-4d1c-960e-bcc14817fcc6", "address": "fa:16:3e:0f:58:4c", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d1404de-38", "ovs_interfaceid": "3d1404de-38bf-4d1c-960e-bcc14817fcc6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:56:12 np0005480824 nova_compute[260089]: 2025-10-11 03:56:12.769 2 DEBUG oslo_concurrency.lockutils [req-da4c33cd-7bad-488c-8b59-116a655ad560 req-5aac7be4-752e-47e7-946a-b1acf5b90e36 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Releasing lock "refresh_cache-3ccdaa3b-882a-432f-b619-002ded45ac60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:56:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:56:12.791 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[73c8f920-eae3-422b-9f46-ee22582a9c4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:56:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:56:12.793 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap359720eb-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:56:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:56:12.793 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 10 23:56:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:56:12.793 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap359720eb-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:56:12 np0005480824 NetworkManager[44969]: <info>  [1760154972.7968] manager: (tap359720eb-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/106)
Oct 10 23:56:12 np0005480824 kernel: tap359720eb-a0: entered promiscuous mode
Oct 10 23:56:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:56:12.801 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap359720eb-a0, col_values=(('external_ids', {'iface-id': '039c7668-0b85-4466-9c66-62531405028d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:56:12 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1417: 321 pgs: 321 active+clean; 348 MiB data, 503 MiB used, 59 GiB / 60 GiB avail; 292 KiB/s rd, 2.6 MiB/s wr, 142 op/s
Oct 10 23:56:12 np0005480824 ovn_controller[152667]: 2025-10-11T03:56:12Z|00183|binding|INFO|Releasing lport 039c7668-0b85-4466-9c66-62531405028d from this chassis (sb_readonly=0)
Oct 10 23:56:12 np0005480824 nova_compute[260089]: 2025-10-11 03:56:12.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:56:12 np0005480824 nova_compute[260089]: 2025-10-11 03:56:12.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:56:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:56:12.829 162245 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/359720eb-a957-4bcd-b9b2-3cf7dad947e4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/359720eb-a957-4bcd-b9b2-3cf7dad947e4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 10 23:56:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:56:12.831 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[b42aacf3-e591-45eb-9e32-ad9a41162325]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:56:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:56:12.833 162245 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 10 23:56:12 np0005480824 ovn_metadata_agent[162240]: global
Oct 10 23:56:12 np0005480824 ovn_metadata_agent[162240]:    log         /dev/log local0 debug
Oct 10 23:56:12 np0005480824 ovn_metadata_agent[162240]:    log-tag     haproxy-metadata-proxy-359720eb-a957-4bcd-b9b2-3cf7dad947e4
Oct 10 23:56:12 np0005480824 ovn_metadata_agent[162240]:    user        root
Oct 10 23:56:12 np0005480824 ovn_metadata_agent[162240]:    group       root
Oct 10 23:56:12 np0005480824 ovn_metadata_agent[162240]:    maxconn     1024
Oct 10 23:56:12 np0005480824 ovn_metadata_agent[162240]:    pidfile     /var/lib/neutron/external/pids/359720eb-a957-4bcd-b9b2-3cf7dad947e4.pid.haproxy
Oct 10 23:56:12 np0005480824 ovn_metadata_agent[162240]:    daemon
Oct 10 23:56:12 np0005480824 ovn_metadata_agent[162240]: 
Oct 10 23:56:12 np0005480824 ovn_metadata_agent[162240]: defaults
Oct 10 23:56:12 np0005480824 ovn_metadata_agent[162240]:    log global
Oct 10 23:56:12 np0005480824 ovn_metadata_agent[162240]:    mode http
Oct 10 23:56:12 np0005480824 ovn_metadata_agent[162240]:    option httplog
Oct 10 23:56:12 np0005480824 ovn_metadata_agent[162240]:    option dontlognull
Oct 10 23:56:12 np0005480824 ovn_metadata_agent[162240]:    option http-server-close
Oct 10 23:56:12 np0005480824 ovn_metadata_agent[162240]:    option forwardfor
Oct 10 23:56:12 np0005480824 ovn_metadata_agent[162240]:    retries                 3
Oct 10 23:56:12 np0005480824 ovn_metadata_agent[162240]:    timeout http-request    30s
Oct 10 23:56:12 np0005480824 ovn_metadata_agent[162240]:    timeout connect         30s
Oct 10 23:56:12 np0005480824 ovn_metadata_agent[162240]:    timeout client          32s
Oct 10 23:56:12 np0005480824 ovn_metadata_agent[162240]:    timeout server          32s
Oct 10 23:56:12 np0005480824 ovn_metadata_agent[162240]:    timeout http-keep-alive 30s
Oct 10 23:56:12 np0005480824 ovn_metadata_agent[162240]: 
Oct 10 23:56:12 np0005480824 ovn_metadata_agent[162240]: 
Oct 10 23:56:12 np0005480824 ovn_metadata_agent[162240]: listen listener
Oct 10 23:56:12 np0005480824 ovn_metadata_agent[162240]:    bind 169.254.169.254:80
Oct 10 23:56:12 np0005480824 ovn_metadata_agent[162240]:    server metadata /var/lib/neutron/metadata_proxy
Oct 10 23:56:12 np0005480824 ovn_metadata_agent[162240]:    http-request add-header X-OVN-Network-ID 359720eb-a957-4bcd-b9b2-3cf7dad947e4
Oct 10 23:56:12 np0005480824 ovn_metadata_agent[162240]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 10 23:56:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:56:12.837 162245 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4', 'env', 'PROCESS_TAG=haproxy-359720eb-a957-4bcd-b9b2-3cf7dad947e4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/359720eb-a957-4bcd-b9b2-3cf7dad947e4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 10 23:56:12 np0005480824 nova_compute[260089]: 2025-10-11 03:56:12.918 2 DEBUG nova.compute.manager [req-04ce9f31-6398-49ca-9338-49f98283e79d req-77e39a69-034b-41e2-85eb-6504779fd59a 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Received event network-vif-plugged-3d1404de-38bf-4d1c-960e-bcc14817fcc6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:56:12 np0005480824 nova_compute[260089]: 2025-10-11 03:56:12.919 2 DEBUG oslo_concurrency.lockutils [req-04ce9f31-6398-49ca-9338-49f98283e79d req-77e39a69-034b-41e2-85eb-6504779fd59a 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "3ccdaa3b-882a-432f-b619-002ded45ac60-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:56:12 np0005480824 nova_compute[260089]: 2025-10-11 03:56:12.920 2 DEBUG oslo_concurrency.lockutils [req-04ce9f31-6398-49ca-9338-49f98283e79d req-77e39a69-034b-41e2-85eb-6504779fd59a 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "3ccdaa3b-882a-432f-b619-002ded45ac60-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:56:12 np0005480824 nova_compute[260089]: 2025-10-11 03:56:12.920 2 DEBUG oslo_concurrency.lockutils [req-04ce9f31-6398-49ca-9338-49f98283e79d req-77e39a69-034b-41e2-85eb-6504779fd59a 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "3ccdaa3b-882a-432f-b619-002ded45ac60-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:56:12 np0005480824 nova_compute[260089]: 2025-10-11 03:56:12.921 2 DEBUG nova.compute.manager [req-04ce9f31-6398-49ca-9338-49f98283e79d req-77e39a69-034b-41e2-85eb-6504779fd59a 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Processing event network-vif-plugged-3d1404de-38bf-4d1c-960e-bcc14817fcc6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 10 23:56:13 np0005480824 podman[290128]: 2025-10-11 03:56:13.309639237 +0000 UTC m=+0.075490466 container create 853ef9828c2641034510aec0a7208fe69a860be7b001e804bc88f476436d7060 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 10 23:56:13 np0005480824 podman[290128]: 2025-10-11 03:56:13.268680938 +0000 UTC m=+0.034532217 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 10 23:56:13 np0005480824 systemd[1]: Started libpod-conmon-853ef9828c2641034510aec0a7208fe69a860be7b001e804bc88f476436d7060.scope.
Oct 10 23:56:13 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:56:13 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b41495cd5fd5dfe6fbbf484bdb1ef33aa2f791b391903b330db1487ba73ba735/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 10 23:56:13 np0005480824 podman[290128]: 2025-10-11 03:56:13.431360015 +0000 UTC m=+0.197211254 container init 853ef9828c2641034510aec0a7208fe69a860be7b001e804bc88f476436d7060 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 10 23:56:13 np0005480824 podman[290128]: 2025-10-11 03:56:13.441618207 +0000 UTC m=+0.207469426 container start 853ef9828c2641034510aec0a7208fe69a860be7b001e804bc88f476436d7060 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 10 23:56:13 np0005480824 podman[290141]: 2025-10-11 03:56:13.447285141 +0000 UTC m=+0.105258919 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 10 23:56:13 np0005480824 neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4[290156]: [NOTICE]   (290166) : New worker (290169) forked
Oct 10 23:56:13 np0005480824 neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4[290156]: [NOTICE]   (290166) : Loading success.
Oct 10 23:56:13 np0005480824 nova_compute[260089]: 2025-10-11 03:56:13.577 2 DEBUG nova.compute.manager [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 10 23:56:13 np0005480824 nova_compute[260089]: 2025-10-11 03:56:13.578 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760154973.5781229, 3ccdaa3b-882a-432f-b619-002ded45ac60 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:56:13 np0005480824 nova_compute[260089]: 2025-10-11 03:56:13.579 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] VM Started (Lifecycle Event)#033[00m
Oct 10 23:56:13 np0005480824 nova_compute[260089]: 2025-10-11 03:56:13.581 2 DEBUG nova.virt.libvirt.driver [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 10 23:56:13 np0005480824 nova_compute[260089]: 2025-10-11 03:56:13.585 2 INFO nova.virt.libvirt.driver [-] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Instance spawned successfully.#033[00m
Oct 10 23:56:13 np0005480824 nova_compute[260089]: 2025-10-11 03:56:13.586 2 DEBUG nova.virt.libvirt.driver [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 10 23:56:13 np0005480824 nova_compute[260089]: 2025-10-11 03:56:13.601 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:56:13 np0005480824 nova_compute[260089]: 2025-10-11 03:56:13.612 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 10 23:56:13 np0005480824 nova_compute[260089]: 2025-10-11 03:56:13.615 2 DEBUG nova.virt.libvirt.driver [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:56:13 np0005480824 nova_compute[260089]: 2025-10-11 03:56:13.616 2 DEBUG nova.virt.libvirt.driver [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:56:13 np0005480824 nova_compute[260089]: 2025-10-11 03:56:13.616 2 DEBUG nova.virt.libvirt.driver [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:56:13 np0005480824 nova_compute[260089]: 2025-10-11 03:56:13.617 2 DEBUG nova.virt.libvirt.driver [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:56:13 np0005480824 nova_compute[260089]: 2025-10-11 03:56:13.617 2 DEBUG nova.virt.libvirt.driver [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:56:13 np0005480824 nova_compute[260089]: 2025-10-11 03:56:13.617 2 DEBUG nova.virt.libvirt.driver [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:56:13 np0005480824 nova_compute[260089]: 2025-10-11 03:56:13.653 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 10 23:56:13 np0005480824 nova_compute[260089]: 2025-10-11 03:56:13.654 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760154973.579022, 3ccdaa3b-882a-432f-b619-002ded45ac60 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:56:13 np0005480824 nova_compute[260089]: 2025-10-11 03:56:13.654 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] VM Paused (Lifecycle Event)#033[00m
Oct 10 23:56:13 np0005480824 nova_compute[260089]: 2025-10-11 03:56:13.673 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:56:13 np0005480824 nova_compute[260089]: 2025-10-11 03:56:13.678 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760154973.5811086, 3ccdaa3b-882a-432f-b619-002ded45ac60 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:56:13 np0005480824 nova_compute[260089]: 2025-10-11 03:56:13.679 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] VM Resumed (Lifecycle Event)#033[00m
Oct 10 23:56:13 np0005480824 nova_compute[260089]: 2025-10-11 03:56:13.681 2 INFO nova.compute.manager [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Took 4.14 seconds to spawn the instance on the hypervisor.#033[00m
Oct 10 23:56:13 np0005480824 nova_compute[260089]: 2025-10-11 03:56:13.682 2 DEBUG nova.compute.manager [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:56:13 np0005480824 nova_compute[260089]: 2025-10-11 03:56:13.705 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:56:13 np0005480824 nova_compute[260089]: 2025-10-11 03:56:13.708 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 10 23:56:13 np0005480824 nova_compute[260089]: 2025-10-11 03:56:13.745 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 10 23:56:13 np0005480824 nova_compute[260089]: 2025-10-11 03:56:13.767 2 INFO nova.compute.manager [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Took 6.32 seconds to build instance.#033[00m
Oct 10 23:56:13 np0005480824 nova_compute[260089]: 2025-10-11 03:56:13.780 2 DEBUG oslo_concurrency.lockutils [None req-5c77d995-2c1f-4023-82a5-52545ff52a5f 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "3ccdaa3b-882a-432f-b619-002ded45ac60" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.391s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:56:13 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e324 do_prune osdmap full prune enabled
Oct 10 23:56:13 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e325 e325: 3 total, 3 up, 3 in
Oct 10 23:56:13 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e325: 3 total, 3 up, 3 in
Oct 10 23:56:14 np0005480824 nova_compute[260089]: 2025-10-11 03:56:14.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:56:14 np0005480824 nova_compute[260089]: 2025-10-11 03:56:14.578 2 DEBUG oslo_concurrency.lockutils [None req-e148a70b-1749-462c-804a-1cf14c311ff7 ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] Acquiring lock "0403d8e6-23d4-4765-a41f-eed96752c52e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:56:14 np0005480824 nova_compute[260089]: 2025-10-11 03:56:14.578 2 DEBUG oslo_concurrency.lockutils [None req-e148a70b-1749-462c-804a-1cf14c311ff7 ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] Lock "0403d8e6-23d4-4765-a41f-eed96752c52e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:56:14 np0005480824 nova_compute[260089]: 2025-10-11 03:56:14.578 2 DEBUG oslo_concurrency.lockutils [None req-e148a70b-1749-462c-804a-1cf14c311ff7 ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] Acquiring lock "0403d8e6-23d4-4765-a41f-eed96752c52e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:56:14 np0005480824 nova_compute[260089]: 2025-10-11 03:56:14.578 2 DEBUG oslo_concurrency.lockutils [None req-e148a70b-1749-462c-804a-1cf14c311ff7 ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] Lock "0403d8e6-23d4-4765-a41f-eed96752c52e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:56:14 np0005480824 nova_compute[260089]: 2025-10-11 03:56:14.579 2 DEBUG oslo_concurrency.lockutils [None req-e148a70b-1749-462c-804a-1cf14c311ff7 ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] Lock "0403d8e6-23d4-4765-a41f-eed96752c52e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:56:14 np0005480824 nova_compute[260089]: 2025-10-11 03:56:14.579 2 INFO nova.compute.manager [None req-e148a70b-1749-462c-804a-1cf14c311ff7 ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Terminating instance#033[00m
Oct 10 23:56:14 np0005480824 nova_compute[260089]: 2025-10-11 03:56:14.580 2 DEBUG nova.compute.manager [None req-e148a70b-1749-462c-804a-1cf14c311ff7 ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 10 23:56:14 np0005480824 kernel: tap7a5e381a-dc (unregistering): left promiscuous mode
Oct 10 23:56:14 np0005480824 NetworkManager[44969]: <info>  [1760154974.6227] device (tap7a5e381a-dc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 10 23:56:14 np0005480824 ovn_controller[152667]: 2025-10-11T03:56:14Z|00184|binding|INFO|Releasing lport 7a5e381a-dccf-47ae-a39a-87d08faf2a0f from this chassis (sb_readonly=0)
Oct 10 23:56:14 np0005480824 ovn_controller[152667]: 2025-10-11T03:56:14Z|00185|binding|INFO|Setting lport 7a5e381a-dccf-47ae-a39a-87d08faf2a0f down in Southbound
Oct 10 23:56:14 np0005480824 ovn_controller[152667]: 2025-10-11T03:56:14Z|00186|binding|INFO|Removing iface tap7a5e381a-dc ovn-installed in OVS
Oct 10 23:56:14 np0005480824 nova_compute[260089]: 2025-10-11 03:56:14.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:56:14 np0005480824 nova_compute[260089]: 2025-10-11 03:56:14.644 2 DEBUG nova.compute.manager [req-2323d6f0-bbdf-46f9-80c9-407d462544b9 req-48e77531-df5e-485d-a0b6-a362611908d4 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Received event network-changed-7a5e381a-dccf-47ae-a39a-87d08faf2a0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:56:14 np0005480824 nova_compute[260089]: 2025-10-11 03:56:14.644 2 DEBUG nova.compute.manager [req-2323d6f0-bbdf-46f9-80c9-407d462544b9 req-48e77531-df5e-485d-a0b6-a362611908d4 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Refreshing instance network info cache due to event network-changed-7a5e381a-dccf-47ae-a39a-87d08faf2a0f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 10 23:56:14 np0005480824 nova_compute[260089]: 2025-10-11 03:56:14.644 2 DEBUG oslo_concurrency.lockutils [req-2323d6f0-bbdf-46f9-80c9-407d462544b9 req-48e77531-df5e-485d-a0b6-a362611908d4 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "refresh_cache-0403d8e6-23d4-4765-a41f-eed96752c52e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:56:14 np0005480824 nova_compute[260089]: 2025-10-11 03:56:14.644 2 DEBUG oslo_concurrency.lockutils [req-2323d6f0-bbdf-46f9-80c9-407d462544b9 req-48e77531-df5e-485d-a0b6-a362611908d4 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquired lock "refresh_cache-0403d8e6-23d4-4765-a41f-eed96752c52e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:56:14 np0005480824 nova_compute[260089]: 2025-10-11 03:56:14.645 2 DEBUG nova.network.neutron [req-2323d6f0-bbdf-46f9-80c9-407d462544b9 req-48e77531-df5e-485d-a0b6-a362611908d4 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Refreshing network info cache for port 7a5e381a-dccf-47ae-a39a-87d08faf2a0f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 10 23:56:14 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:56:14.652 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:02:fc:94 10.100.0.6'], port_security=['fa:16:3e:02:fc:94 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '0403d8e6-23d4-4765-a41f-eed96752c52e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-821e091d-e4da-4318-a5fb-3fc44a19fc25', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f36ed779ede42228be9ab8544bbf9aa', 'neutron:revision_number': '4', 'neutron:security_group_ids': '51c20c9a-eab4-4ea0-bd83-0a26e2278ce4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7d951732-205c-4b86-802d-f010498bd0dc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], logical_port=7a5e381a-dccf-47ae-a39a-87d08faf2a0f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 23:56:14 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:56:14.656 162245 INFO neutron.agent.ovn.metadata.agent [-] Port 7a5e381a-dccf-47ae-a39a-87d08faf2a0f in datapath 821e091d-e4da-4318-a5fb-3fc44a19fc25 unbound from our chassis#033[00m
Oct 10 23:56:14 np0005480824 nova_compute[260089]: 2025-10-11 03:56:14.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:56:14 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:56:14.662 162245 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 821e091d-e4da-4318-a5fb-3fc44a19fc25, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 10 23:56:14 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:56:14.664 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[e0c7fc46-c1c4-4edd-b9c1-acf642994543]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:56:14 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:56:14.665 162245 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-821e091d-e4da-4318-a5fb-3fc44a19fc25 namespace which is not needed anymore#033[00m
Oct 10 23:56:14 np0005480824 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000013.scope: Deactivated successfully.
Oct 10 23:56:14 np0005480824 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000013.scope: Consumed 14.283s CPU time.
Oct 10 23:56:14 np0005480824 systemd-machined[215071]: Machine qemu-19-instance-00000013 terminated.
Oct 10 23:56:14 np0005480824 neutron-haproxy-ovnmeta-821e091d-e4da-4318-a5fb-3fc44a19fc25[289675]: [NOTICE]   (289683) : haproxy version is 2.8.14-c23fe91
Oct 10 23:56:14 np0005480824 neutron-haproxy-ovnmeta-821e091d-e4da-4318-a5fb-3fc44a19fc25[289675]: [NOTICE]   (289683) : path to executable is /usr/sbin/haproxy
Oct 10 23:56:14 np0005480824 neutron-haproxy-ovnmeta-821e091d-e4da-4318-a5fb-3fc44a19fc25[289675]: [WARNING]  (289683) : Exiting Master process...
Oct 10 23:56:14 np0005480824 neutron-haproxy-ovnmeta-821e091d-e4da-4318-a5fb-3fc44a19fc25[289675]: [ALERT]    (289683) : Current worker (289685) exited with code 143 (Terminated)
Oct 10 23:56:14 np0005480824 neutron-haproxy-ovnmeta-821e091d-e4da-4318-a5fb-3fc44a19fc25[289675]: [WARNING]  (289683) : All workers exited. Exiting... (0)
Oct 10 23:56:14 np0005480824 systemd[1]: libpod-251085e9bae4f21141988ab10428f8b7da218242b931bfb5deb524e692721ca9.scope: Deactivated successfully.
Oct 10 23:56:14 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1419: 321 pgs: 321 active+clean; 348 MiB data, 503 MiB used, 59 GiB / 60 GiB avail; 295 KiB/s rd, 2.6 MiB/s wr, 143 op/s
Oct 10 23:56:14 np0005480824 podman[290199]: 2025-10-11 03:56:14.806266062 +0000 UTC m=+0.048033976 container died 251085e9bae4f21141988ab10428f8b7da218242b931bfb5deb524e692721ca9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-821e091d-e4da-4318-a5fb-3fc44a19fc25, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:56:14 np0005480824 nova_compute[260089]: 2025-10-11 03:56:14.816 2 INFO nova.virt.libvirt.driver [-] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Instance destroyed successfully.#033[00m
Oct 10 23:56:14 np0005480824 nova_compute[260089]: 2025-10-11 03:56:14.817 2 DEBUG nova.objects.instance [None req-e148a70b-1749-462c-804a-1cf14c311ff7 ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] Lazy-loading 'resources' on Instance uuid 0403d8e6-23d4-4765-a41f-eed96752c52e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:56:14 np0005480824 nova_compute[260089]: 2025-10-11 03:56:14.830 2 DEBUG nova.virt.libvirt.vif [None req-e148a70b-1749-462c-804a-1cf14c311ff7 ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T03:55:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBackupRestore-server-1812578238',display_name='tempest-TestVolumeBackupRestore-server-1812578238',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebackuprestore-server-1812578238',id=19,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOcZqlWznkTSPt7YqL58kLpY6xBe8Ue8Yu9g5Fx6W9nijrGZzvH0hybC3ENKmePVhJj9AL8vstvMZEi4+ASaw20cil6ZF7IGGtP2ziwcq2zq7ghU3mbyjhm+18aIJfy/yQ==',key_name='tempest-TestVolumeBackupRestore-1282533698',keypairs=<?>,launch_index=0,launched_at=2025-10-11T03:55:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5f36ed779ede42228be9ab8544bbf9aa',ramdisk_id='',reservation_id='r-03ovkgqk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBackupRestore-1363850772',owner_user_name='tempest-TestVolumeBackupRestore-1363850772-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T03:55:54Z,user_data=None,user_id='ba815f7813ad434aa05e27f214de0632',uuid=0403d8e6-23d4-4765-a41f-eed96752c52e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7a5e381a-dccf-47ae-a39a-87d08faf2a0f", "address": "fa:16:3e:02:fc:94", "network": {"id": "821e091d-e4da-4318-a5fb-3fc44a19fc25", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1425502666-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f36ed779ede42228be9ab8544bbf9aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a5e381a-dc", "ovs_interfaceid": "7a5e381a-dccf-47ae-a39a-87d08faf2a0f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 10 23:56:14 np0005480824 nova_compute[260089]: 2025-10-11 03:56:14.831 2 DEBUG nova.network.os_vif_util [None req-e148a70b-1749-462c-804a-1cf14c311ff7 ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] Converting VIF {"id": "7a5e381a-dccf-47ae-a39a-87d08faf2a0f", "address": "fa:16:3e:02:fc:94", "network": {"id": "821e091d-e4da-4318-a5fb-3fc44a19fc25", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1425502666-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f36ed779ede42228be9ab8544bbf9aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a5e381a-dc", "ovs_interfaceid": "7a5e381a-dccf-47ae-a39a-87d08faf2a0f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 23:56:14 np0005480824 nova_compute[260089]: 2025-10-11 03:56:14.832 2 DEBUG nova.network.os_vif_util [None req-e148a70b-1749-462c-804a-1cf14c311ff7 ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:02:fc:94,bridge_name='br-int',has_traffic_filtering=True,id=7a5e381a-dccf-47ae-a39a-87d08faf2a0f,network=Network(821e091d-e4da-4318-a5fb-3fc44a19fc25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7a5e381a-dc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 23:56:14 np0005480824 nova_compute[260089]: 2025-10-11 03:56:14.832 2 DEBUG os_vif [None req-e148a70b-1749-462c-804a-1cf14c311ff7 ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:02:fc:94,bridge_name='br-int',has_traffic_filtering=True,id=7a5e381a-dccf-47ae-a39a-87d08faf2a0f,network=Network(821e091d-e4da-4318-a5fb-3fc44a19fc25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7a5e381a-dc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 10 23:56:14 np0005480824 nova_compute[260089]: 2025-10-11 03:56:14.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:56:14 np0005480824 nova_compute[260089]: 2025-10-11 03:56:14.835 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7a5e381a-dc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:56:14 np0005480824 nova_compute[260089]: 2025-10-11 03:56:14.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:56:14 np0005480824 nova_compute[260089]: 2025-10-11 03:56:14.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:56:14 np0005480824 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-251085e9bae4f21141988ab10428f8b7da218242b931bfb5deb524e692721ca9-userdata-shm.mount: Deactivated successfully.
Oct 10 23:56:14 np0005480824 nova_compute[260089]: 2025-10-11 03:56:14.849 2 INFO os_vif [None req-e148a70b-1749-462c-804a-1cf14c311ff7 ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:02:fc:94,bridge_name='br-int',has_traffic_filtering=True,id=7a5e381a-dccf-47ae-a39a-87d08faf2a0f,network=Network(821e091d-e4da-4318-a5fb-3fc44a19fc25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7a5e381a-dc')#033[00m
Oct 10 23:56:14 np0005480824 systemd[1]: var-lib-containers-storage-overlay-baadaa790b11e77690d31b1cef87141affcae2d2cf863950a3fc300dcf869732-merged.mount: Deactivated successfully.
Oct 10 23:56:14 np0005480824 podman[290199]: 2025-10-11 03:56:14.860851063 +0000 UTC m=+0.102618967 container cleanup 251085e9bae4f21141988ab10428f8b7da218242b931bfb5deb524e692721ca9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-821e091d-e4da-4318-a5fb-3fc44a19fc25, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 10 23:56:14 np0005480824 systemd[1]: libpod-conmon-251085e9bae4f21141988ab10428f8b7da218242b931bfb5deb524e692721ca9.scope: Deactivated successfully.
Oct 10 23:56:14 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:56:14 np0005480824 podman[290252]: 2025-10-11 03:56:14.93178628 +0000 UTC m=+0.046407538 container remove 251085e9bae4f21141988ab10428f8b7da218242b931bfb5deb524e692721ca9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-821e091d-e4da-4318-a5fb-3fc44a19fc25, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 10 23:56:14 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:56:14.951 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[0d6393bf-dbd2-4ae1-8b56-a00dbec75e3a]: (4, ('Sat Oct 11 03:56:14 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-821e091d-e4da-4318-a5fb-3fc44a19fc25 (251085e9bae4f21141988ab10428f8b7da218242b931bfb5deb524e692721ca9)\n251085e9bae4f21141988ab10428f8b7da218242b931bfb5deb524e692721ca9\nSat Oct 11 03:56:14 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-821e091d-e4da-4318-a5fb-3fc44a19fc25 (251085e9bae4f21141988ab10428f8b7da218242b931bfb5deb524e692721ca9)\n251085e9bae4f21141988ab10428f8b7da218242b931bfb5deb524e692721ca9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:56:14 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:56:14.954 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[1ef2c9ac-6a28-4c56-a218-c7cb55cc8952]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:56:14 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:56:14.957 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap821e091d-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:56:14 np0005480824 kernel: tap821e091d-e0: left promiscuous mode
Oct 10 23:56:14 np0005480824 nova_compute[260089]: 2025-10-11 03:56:14.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:56:14 np0005480824 nova_compute[260089]: 2025-10-11 03:56:14.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:56:14 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:56:14.990 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[bdf59052-382b-4c53-be2c-dca517f75d95]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:56:15 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:56:15.020 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[17fa0d83-47de-4ba1-a277-c58736483e5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:56:15 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:56:15.022 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[f7d8b047-08fd-4ca1-a2e2-554e2a08348b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:56:15 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:56:15.040 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[b2ac517f-f7c7-431b-89ce-d7a5a2795559]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 442057, 'reachable_time': 15519, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290271, 'error': None, 'target': 'ovnmeta-821e091d-e4da-4318-a5fb-3fc44a19fc25', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:56:15 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:56:15.043 162666 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-821e091d-e4da-4318-a5fb-3fc44a19fc25 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 10 23:56:15 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:56:15.044 162666 DEBUG oslo.privsep.daemon [-] privsep: reply[7c39aff4-4eed-43e9-93a0-1aee8704cbb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:56:15 np0005480824 systemd[1]: run-netns-ovnmeta\x2d821e091d\x2de4da\x2d4318\x2da5fb\x2d3fc44a19fc25.mount: Deactivated successfully.
Oct 10 23:56:15 np0005480824 nova_compute[260089]: 2025-10-11 03:56:15.094 2 INFO nova.virt.libvirt.driver [None req-e148a70b-1749-462c-804a-1cf14c311ff7 ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Deleting instance files /var/lib/nova/instances/0403d8e6-23d4-4765-a41f-eed96752c52e_del#033[00m
Oct 10 23:56:15 np0005480824 nova_compute[260089]: 2025-10-11 03:56:15.096 2 INFO nova.virt.libvirt.driver [None req-e148a70b-1749-462c-804a-1cf14c311ff7 ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Deletion of /var/lib/nova/instances/0403d8e6-23d4-4765-a41f-eed96752c52e_del complete#033[00m
Oct 10 23:56:15 np0005480824 nova_compute[260089]: 2025-10-11 03:56:15.145 2 INFO nova.compute.manager [None req-e148a70b-1749-462c-804a-1cf14c311ff7 ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Took 0.56 seconds to destroy the instance on the hypervisor.#033[00m
Oct 10 23:56:15 np0005480824 nova_compute[260089]: 2025-10-11 03:56:15.146 2 DEBUG oslo.service.loopingcall [None req-e148a70b-1749-462c-804a-1cf14c311ff7 ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 10 23:56:15 np0005480824 nova_compute[260089]: 2025-10-11 03:56:15.146 2 DEBUG nova.compute.manager [-] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 10 23:56:15 np0005480824 nova_compute[260089]: 2025-10-11 03:56:15.146 2 DEBUG nova.network.neutron [-] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 10 23:56:15 np0005480824 nova_compute[260089]: 2025-10-11 03:56:15.971 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760154960.9696877, ee0ba1fa-8740-4670-9f6d-b658f89f7f21 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:56:15 np0005480824 nova_compute[260089]: 2025-10-11 03:56:15.972 2 INFO nova.compute.manager [-] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] VM Stopped (Lifecycle Event)#033[00m
Oct 10 23:56:15 np0005480824 nova_compute[260089]: 2025-10-11 03:56:15.994 2 DEBUG nova.compute.manager [None req-2f2af54b-f876-4cb9-b5c5-afd8e70509e4 - - - - - -] [instance: ee0ba1fa-8740-4670-9f6d-b658f89f7f21] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:56:16 np0005480824 nova_compute[260089]: 2025-10-11 03:56:16.576 2 DEBUG nova.compute.manager [req-ab06075f-6491-47c0-b354-96ae56b7bd8a req-26cb894f-4fb8-4c9f-849a-c4a2a5b057ad 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Received event network-vif-plugged-3d1404de-38bf-4d1c-960e-bcc14817fcc6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:56:16 np0005480824 nova_compute[260089]: 2025-10-11 03:56:16.576 2 DEBUG oslo_concurrency.lockutils [req-ab06075f-6491-47c0-b354-96ae56b7bd8a req-26cb894f-4fb8-4c9f-849a-c4a2a5b057ad 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "3ccdaa3b-882a-432f-b619-002ded45ac60-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:56:16 np0005480824 nova_compute[260089]: 2025-10-11 03:56:16.577 2 DEBUG oslo_concurrency.lockutils [req-ab06075f-6491-47c0-b354-96ae56b7bd8a req-26cb894f-4fb8-4c9f-849a-c4a2a5b057ad 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "3ccdaa3b-882a-432f-b619-002ded45ac60-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:56:16 np0005480824 nova_compute[260089]: 2025-10-11 03:56:16.577 2 DEBUG oslo_concurrency.lockutils [req-ab06075f-6491-47c0-b354-96ae56b7bd8a req-26cb894f-4fb8-4c9f-849a-c4a2a5b057ad 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "3ccdaa3b-882a-432f-b619-002ded45ac60-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:56:16 np0005480824 nova_compute[260089]: 2025-10-11 03:56:16.577 2 DEBUG nova.compute.manager [req-ab06075f-6491-47c0-b354-96ae56b7bd8a req-26cb894f-4fb8-4c9f-849a-c4a2a5b057ad 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] No waiting events found dispatching network-vif-plugged-3d1404de-38bf-4d1c-960e-bcc14817fcc6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 10 23:56:16 np0005480824 nova_compute[260089]: 2025-10-11 03:56:16.577 2 WARNING nova.compute.manager [req-ab06075f-6491-47c0-b354-96ae56b7bd8a req-26cb894f-4fb8-4c9f-849a-c4a2a5b057ad 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Received unexpected event network-vif-plugged-3d1404de-38bf-4d1c-960e-bcc14817fcc6 for instance with vm_state active and task_state None.#033[00m
Oct 10 23:56:16 np0005480824 nova_compute[260089]: 2025-10-11 03:56:16.745 2 DEBUG nova.compute.manager [req-20699746-3ad6-48c2-9118-d6bf30cc56d7 req-ca4851c6-d5f5-4ac6-96f7-34f6c3acc23e 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Received event network-vif-unplugged-7a5e381a-dccf-47ae-a39a-87d08faf2a0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:56:16 np0005480824 nova_compute[260089]: 2025-10-11 03:56:16.746 2 DEBUG oslo_concurrency.lockutils [req-20699746-3ad6-48c2-9118-d6bf30cc56d7 req-ca4851c6-d5f5-4ac6-96f7-34f6c3acc23e 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "0403d8e6-23d4-4765-a41f-eed96752c52e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:56:16 np0005480824 nova_compute[260089]: 2025-10-11 03:56:16.746 2 DEBUG oslo_concurrency.lockutils [req-20699746-3ad6-48c2-9118-d6bf30cc56d7 req-ca4851c6-d5f5-4ac6-96f7-34f6c3acc23e 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "0403d8e6-23d4-4765-a41f-eed96752c52e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:56:16 np0005480824 nova_compute[260089]: 2025-10-11 03:56:16.746 2 DEBUG oslo_concurrency.lockutils [req-20699746-3ad6-48c2-9118-d6bf30cc56d7 req-ca4851c6-d5f5-4ac6-96f7-34f6c3acc23e 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "0403d8e6-23d4-4765-a41f-eed96752c52e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:56:16 np0005480824 nova_compute[260089]: 2025-10-11 03:56:16.746 2 DEBUG nova.compute.manager [req-20699746-3ad6-48c2-9118-d6bf30cc56d7 req-ca4851c6-d5f5-4ac6-96f7-34f6c3acc23e 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] No waiting events found dispatching network-vif-unplugged-7a5e381a-dccf-47ae-a39a-87d08faf2a0f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 10 23:56:16 np0005480824 nova_compute[260089]: 2025-10-11 03:56:16.746 2 DEBUG nova.compute.manager [req-20699746-3ad6-48c2-9118-d6bf30cc56d7 req-ca4851c6-d5f5-4ac6-96f7-34f6c3acc23e 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Received event network-vif-unplugged-7a5e381a-dccf-47ae-a39a-87d08faf2a0f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 10 23:56:16 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1420: 321 pgs: 321 active+clean; 348 MiB data, 503 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.4 MiB/s wr, 159 op/s
Oct 10 23:56:16 np0005480824 nova_compute[260089]: 2025-10-11 03:56:16.817 2 DEBUG nova.network.neutron [-] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:56:16 np0005480824 nova_compute[260089]: 2025-10-11 03:56:16.838 2 INFO nova.compute.manager [-] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Took 1.69 seconds to deallocate network for instance.#033[00m
Oct 10 23:56:17 np0005480824 nova_compute[260089]: 2025-10-11 03:56:17.064 2 INFO nova.compute.manager [None req-e148a70b-1749-462c-804a-1cf14c311ff7 ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Took 0.23 seconds to detach 1 volumes for instance.#033[00m
Oct 10 23:56:17 np0005480824 nova_compute[260089]: 2025-10-11 03:56:17.119 2 DEBUG oslo_concurrency.lockutils [None req-e148a70b-1749-462c-804a-1cf14c311ff7 ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:56:17 np0005480824 nova_compute[260089]: 2025-10-11 03:56:17.120 2 DEBUG oslo_concurrency.lockutils [None req-e148a70b-1749-462c-804a-1cf14c311ff7 ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:56:17 np0005480824 nova_compute[260089]: 2025-10-11 03:56:17.198 2 DEBUG oslo_concurrency.processutils [None req-e148a70b-1749-462c-804a-1cf14c311ff7 ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:56:17 np0005480824 nova_compute[260089]: 2025-10-11 03:56:17.464 2 DEBUG nova.network.neutron [req-2323d6f0-bbdf-46f9-80c9-407d462544b9 req-48e77531-df5e-485d-a0b6-a362611908d4 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Updated VIF entry in instance network info cache for port 7a5e381a-dccf-47ae-a39a-87d08faf2a0f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 10 23:56:17 np0005480824 nova_compute[260089]: 2025-10-11 03:56:17.465 2 DEBUG nova.network.neutron [req-2323d6f0-bbdf-46f9-80c9-407d462544b9 req-48e77531-df5e-485d-a0b6-a362611908d4 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Updating instance_info_cache with network_info: [{"id": "7a5e381a-dccf-47ae-a39a-87d08faf2a0f", "address": "fa:16:3e:02:fc:94", "network": {"id": "821e091d-e4da-4318-a5fb-3fc44a19fc25", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1425502666-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f36ed779ede42228be9ab8544bbf9aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a5e381a-dc", "ovs_interfaceid": "7a5e381a-dccf-47ae-a39a-87d08faf2a0f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:56:17 np0005480824 nova_compute[260089]: 2025-10-11 03:56:17.485 2 DEBUG oslo_concurrency.lockutils [req-2323d6f0-bbdf-46f9-80c9-407d462544b9 req-48e77531-df5e-485d-a0b6-a362611908d4 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Releasing lock "refresh_cache-0403d8e6-23d4-4765-a41f-eed96752c52e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:56:17 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:56:17 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2452962034' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:56:17 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:56:17 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2452962034' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:56:17 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:56:17 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3276323773' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:56:17 np0005480824 nova_compute[260089]: 2025-10-11 03:56:17.654 2 DEBUG oslo_concurrency.processutils [None req-e148a70b-1749-462c-804a-1cf14c311ff7 ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:56:17 np0005480824 nova_compute[260089]: 2025-10-11 03:56:17.662 2 DEBUG nova.compute.provider_tree [None req-e148a70b-1749-462c-804a-1cf14c311ff7 ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 23:56:17 np0005480824 nova_compute[260089]: 2025-10-11 03:56:17.681 2 DEBUG nova.scheduler.client.report [None req-e148a70b-1749-462c-804a-1cf14c311ff7 ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 23:56:17 np0005480824 nova_compute[260089]: 2025-10-11 03:56:17.706 2 DEBUG oslo_concurrency.lockutils [None req-e148a70b-1749-462c-804a-1cf14c311ff7 ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.587s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:56:17 np0005480824 nova_compute[260089]: 2025-10-11 03:56:17.745 2 INFO nova.scheduler.client.report [None req-e148a70b-1749-462c-804a-1cf14c311ff7 ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] Deleted allocations for instance 0403d8e6-23d4-4765-a41f-eed96752c52e#033[00m
Oct 10 23:56:17 np0005480824 nova_compute[260089]: 2025-10-11 03:56:17.872 2 DEBUG oslo_concurrency.lockutils [None req-e148a70b-1749-462c-804a-1cf14c311ff7 ba815f7813ad434aa05e27f214de0632 5f36ed779ede42228be9ab8544bbf9aa - - default default] Lock "0403d8e6-23d4-4765-a41f-eed96752c52e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.294s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:56:18 np0005480824 nova_compute[260089]: 2025-10-11 03:56:18.668 2 DEBUG nova.compute.manager [req-7716a3a0-04d9-4a89-8469-010e5a267d85 req-39d062ec-519c-4f10-a578-b94d67b20065 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Received event network-vif-deleted-7a5e381a-dccf-47ae-a39a-87d08faf2a0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:56:18 np0005480824 nova_compute[260089]: 2025-10-11 03:56:18.669 2 INFO nova.compute.manager [req-7716a3a0-04d9-4a89-8469-010e5a267d85 req-39d062ec-519c-4f10-a578-b94d67b20065 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Neutron deleted interface 7a5e381a-dccf-47ae-a39a-87d08faf2a0f; detaching it from the instance and deleting it from the info cache#033[00m
Oct 10 23:56:18 np0005480824 nova_compute[260089]: 2025-10-11 03:56:18.670 2 DEBUG nova.network.neutron [req-7716a3a0-04d9-4a89-8469-010e5a267d85 req-39d062ec-519c-4f10-a578-b94d67b20065 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106#033[00m
Oct 10 23:56:18 np0005480824 nova_compute[260089]: 2025-10-11 03:56:18.672 2 DEBUG nova.compute.manager [req-7716a3a0-04d9-4a89-8469-010e5a267d85 req-39d062ec-519c-4f10-a578-b94d67b20065 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Detach interface failed, port_id=7a5e381a-dccf-47ae-a39a-87d08faf2a0f, reason: Instance 0403d8e6-23d4-4765-a41f-eed96752c52e could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct 10 23:56:18 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1421: 321 pgs: 321 active+clean; 348 MiB data, 503 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 148 KiB/s wr, 188 op/s
Oct 10 23:56:18 np0005480824 nova_compute[260089]: 2025-10-11 03:56:18.845 2 DEBUG nova.compute.manager [req-b1a6397f-6bf7-4c7b-9868-7f9494dfe7eb req-5e8ac42a-7181-4ff1-8a71-9de5dc9d8048 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Received event network-vif-plugged-7a5e381a-dccf-47ae-a39a-87d08faf2a0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:56:18 np0005480824 nova_compute[260089]: 2025-10-11 03:56:18.846 2 DEBUG oslo_concurrency.lockutils [req-b1a6397f-6bf7-4c7b-9868-7f9494dfe7eb req-5e8ac42a-7181-4ff1-8a71-9de5dc9d8048 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "0403d8e6-23d4-4765-a41f-eed96752c52e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:56:18 np0005480824 nova_compute[260089]: 2025-10-11 03:56:18.847 2 DEBUG oslo_concurrency.lockutils [req-b1a6397f-6bf7-4c7b-9868-7f9494dfe7eb req-5e8ac42a-7181-4ff1-8a71-9de5dc9d8048 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "0403d8e6-23d4-4765-a41f-eed96752c52e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:56:18 np0005480824 nova_compute[260089]: 2025-10-11 03:56:18.847 2 DEBUG oslo_concurrency.lockutils [req-b1a6397f-6bf7-4c7b-9868-7f9494dfe7eb req-5e8ac42a-7181-4ff1-8a71-9de5dc9d8048 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "0403d8e6-23d4-4765-a41f-eed96752c52e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:56:18 np0005480824 nova_compute[260089]: 2025-10-11 03:56:18.847 2 DEBUG nova.compute.manager [req-b1a6397f-6bf7-4c7b-9868-7f9494dfe7eb req-5e8ac42a-7181-4ff1-8a71-9de5dc9d8048 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] No waiting events found dispatching network-vif-plugged-7a5e381a-dccf-47ae-a39a-87d08faf2a0f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 10 23:56:18 np0005480824 nova_compute[260089]: 2025-10-11 03:56:18.848 2 WARNING nova.compute.manager [req-b1a6397f-6bf7-4c7b-9868-7f9494dfe7eb req-5e8ac42a-7181-4ff1-8a71-9de5dc9d8048 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Received unexpected event network-vif-plugged-7a5e381a-dccf-47ae-a39a-87d08faf2a0f for instance with vm_state deleted and task_state None.#033[00m
Oct 10 23:56:18 np0005480824 nova_compute[260089]: 2025-10-11 03:56:18.848 2 DEBUG nova.compute.manager [req-b1a6397f-6bf7-4c7b-9868-7f9494dfe7eb req-5e8ac42a-7181-4ff1-8a71-9de5dc9d8048 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Received event network-changed-3d1404de-38bf-4d1c-960e-bcc14817fcc6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:56:18 np0005480824 nova_compute[260089]: 2025-10-11 03:56:18.848 2 DEBUG nova.compute.manager [req-b1a6397f-6bf7-4c7b-9868-7f9494dfe7eb req-5e8ac42a-7181-4ff1-8a71-9de5dc9d8048 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Refreshing instance network info cache due to event network-changed-3d1404de-38bf-4d1c-960e-bcc14817fcc6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 10 23:56:18 np0005480824 nova_compute[260089]: 2025-10-11 03:56:18.849 2 DEBUG oslo_concurrency.lockutils [req-b1a6397f-6bf7-4c7b-9868-7f9494dfe7eb req-5e8ac42a-7181-4ff1-8a71-9de5dc9d8048 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "refresh_cache-3ccdaa3b-882a-432f-b619-002ded45ac60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:56:18 np0005480824 nova_compute[260089]: 2025-10-11 03:56:18.849 2 DEBUG oslo_concurrency.lockutils [req-b1a6397f-6bf7-4c7b-9868-7f9494dfe7eb req-5e8ac42a-7181-4ff1-8a71-9de5dc9d8048 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquired lock "refresh_cache-3ccdaa3b-882a-432f-b619-002ded45ac60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:56:18 np0005480824 nova_compute[260089]: 2025-10-11 03:56:18.850 2 DEBUG nova.network.neutron [req-b1a6397f-6bf7-4c7b-9868-7f9494dfe7eb req-5e8ac42a-7181-4ff1-8a71-9de5dc9d8048 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Refreshing network info cache for port 3d1404de-38bf-4d1c-960e-bcc14817fcc6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 10 23:56:18 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e325 do_prune osdmap full prune enabled
Oct 10 23:56:18 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e326 e326: 3 total, 3 up, 3 in
Oct 10 23:56:18 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e326: 3 total, 3 up, 3 in
Oct 10 23:56:19 np0005480824 nova_compute[260089]: 2025-10-11 03:56:19.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:56:19 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e326 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:56:19 np0005480824 nova_compute[260089]: 2025-10-11 03:56:19.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:56:19 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e326 do_prune osdmap full prune enabled
Oct 10 23:56:19 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e327 e327: 3 total, 3 up, 3 in
Oct 10 23:56:19 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e327: 3 total, 3 up, 3 in
Oct 10 23:56:20 np0005480824 nova_compute[260089]: 2025-10-11 03:56:20.175 2 DEBUG nova.network.neutron [req-b1a6397f-6bf7-4c7b-9868-7f9494dfe7eb req-5e8ac42a-7181-4ff1-8a71-9de5dc9d8048 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Updated VIF entry in instance network info cache for port 3d1404de-38bf-4d1c-960e-bcc14817fcc6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 10 23:56:20 np0005480824 nova_compute[260089]: 2025-10-11 03:56:20.176 2 DEBUG nova.network.neutron [req-b1a6397f-6bf7-4c7b-9868-7f9494dfe7eb req-5e8ac42a-7181-4ff1-8a71-9de5dc9d8048 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Updating instance_info_cache with network_info: [{"id": "3d1404de-38bf-4d1c-960e-bcc14817fcc6", "address": "fa:16:3e:0f:58:4c", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d1404de-38", "ovs_interfaceid": "3d1404de-38bf-4d1c-960e-bcc14817fcc6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:56:20 np0005480824 nova_compute[260089]: 2025-10-11 03:56:20.199 2 DEBUG oslo_concurrency.lockutils [req-b1a6397f-6bf7-4c7b-9868-7f9494dfe7eb req-5e8ac42a-7181-4ff1-8a71-9de5dc9d8048 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Releasing lock "refresh_cache-3ccdaa3b-882a-432f-b619-002ded45ac60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:56:20 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:56:20 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1601963569' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:56:20 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:56:20 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1601963569' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:56:20 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:56:20 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/701002743' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:56:20 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:56:20 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/701002743' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:56:20 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1424: 321 pgs: 321 active+clean; 348 MiB data, 503 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 28 KiB/s wr, 223 op/s
Oct 10 23:56:22 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1425: 321 pgs: 321 active+clean; 167 MiB data, 418 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 33 KiB/s wr, 353 op/s
Oct 10 23:56:24 np0005480824 nova_compute[260089]: 2025-10-11 03:56:24.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:56:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e327 do_prune osdmap full prune enabled
Oct 10 23:56:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e328 e328: 3 total, 3 up, 3 in
Oct 10 23:56:24 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e328: 3 total, 3 up, 3 in
Oct 10 23:56:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:56:24 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3908683138' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:56:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:56:24 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3908683138' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:56:24 np0005480824 ovn_controller[152667]: 2025-10-11T03:56:24Z|00187|binding|INFO|Releasing lport 039c7668-0b85-4466-9c66-62531405028d from this chassis (sb_readonly=0)
Oct 10 23:56:24 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1427: 321 pgs: 321 active+clean; 167 MiB data, 418 MiB used, 60 GiB / 60 GiB avail; 151 KiB/s rd, 12 KiB/s wr, 214 op/s
Oct 10 23:56:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:56:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e328 do_prune osdmap full prune enabled
Oct 10 23:56:24 np0005480824 nova_compute[260089]: 2025-10-11 03:56:24.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:56:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e329 e329: 3 total, 3 up, 3 in
Oct 10 23:56:24 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e329: 3 total, 3 up, 3 in
Oct 10 23:56:26 np0005480824 ovn_controller[152667]: 2025-10-11T03:56:26Z|00038|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.14 does not match offer 10.100.0.12
Oct 10 23:56:26 np0005480824 ovn_controller[152667]: 2025-10-11T03:56:26Z|00039|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:0f:58:4c 10.100.0.12
Oct 10 23:56:26 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:56:26 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:56:26 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 10 23:56:26 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:56:26 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 10 23:56:26 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:56:26 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 34ffc92f-bbb6-46f9-b7fa-0c91e6fa5d2f does not exist
Oct 10 23:56:26 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 5c6f42c3-553a-486b-a8fc-8cc9eec109f8 does not exist
Oct 10 23:56:26 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev b987db83-e75f-4af2-aee2-1f15e4315395 does not exist
Oct 10 23:56:26 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 10 23:56:26 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 23:56:26 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 10 23:56:26 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:56:26 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:56:26 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:56:26 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:56:26 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:56:26 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:56:26 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:56:26 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1422889215' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:56:26 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:56:26 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1422889215' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:56:26 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1429: 321 pgs: 321 active+clean; 167 MiB data, 414 MiB used, 60 GiB / 60 GiB avail; 520 KiB/s rd, 15 KiB/s wr, 272 op/s
Oct 10 23:56:27 np0005480824 podman[290568]: 2025-10-11 03:56:27.042161836 +0000 UTC m=+0.048088569 container create 22579c743f271b118c75da85e557a663f37d3228d23499aab5ba437f5c9fcef3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:56:27 np0005480824 systemd[1]: Started libpod-conmon-22579c743f271b118c75da85e557a663f37d3228d23499aab5ba437f5c9fcef3.scope.
Oct 10 23:56:27 np0005480824 podman[290568]: 2025-10-11 03:56:27.02587019 +0000 UTC m=+0.031796943 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:56:27 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:56:27 np0005480824 podman[290568]: 2025-10-11 03:56:27.14388385 +0000 UTC m=+0.149810613 container init 22579c743f271b118c75da85e557a663f37d3228d23499aab5ba437f5c9fcef3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_faraday, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:56:27 np0005480824 podman[290568]: 2025-10-11 03:56:27.153542308 +0000 UTC m=+0.159469041 container start 22579c743f271b118c75da85e557a663f37d3228d23499aab5ba437f5c9fcef3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 10 23:56:27 np0005480824 podman[290568]: 2025-10-11 03:56:27.156994201 +0000 UTC m=+0.162920984 container attach 22579c743f271b118c75da85e557a663f37d3228d23499aab5ba437f5c9fcef3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 10 23:56:27 np0005480824 laughing_faraday[290584]: 167 167
Oct 10 23:56:27 np0005480824 systemd[1]: libpod-22579c743f271b118c75da85e557a663f37d3228d23499aab5ba437f5c9fcef3.scope: Deactivated successfully.
Oct 10 23:56:27 np0005480824 podman[290568]: 2025-10-11 03:56:27.161220451 +0000 UTC m=+0.167147204 container died 22579c743f271b118c75da85e557a663f37d3228d23499aab5ba437f5c9fcef3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_faraday, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 10 23:56:27 np0005480824 systemd[1]: var-lib-containers-storage-overlay-0bcc3a5f75e2f56138f37eeaa33c817841511f81ba899cb37119f90591759655-merged.mount: Deactivated successfully.
Oct 10 23:56:27 np0005480824 podman[290568]: 2025-10-11 03:56:27.214397548 +0000 UTC m=+0.220324281 container remove 22579c743f271b118c75da85e557a663f37d3228d23499aab5ba437f5c9fcef3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:56:27 np0005480824 systemd[1]: libpod-conmon-22579c743f271b118c75da85e557a663f37d3228d23499aab5ba437f5c9fcef3.scope: Deactivated successfully.
Oct 10 23:56:27 np0005480824 podman[290610]: 2025-10-11 03:56:27.41586571 +0000 UTC m=+0.044004541 container create 3dd5ce0c00b60c45a27944dc64d4810cb422057ca3becdd02edd58586b042820 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_fermat, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 10 23:56:27 np0005480824 systemd[1]: Started libpod-conmon-3dd5ce0c00b60c45a27944dc64d4810cb422057ca3becdd02edd58586b042820.scope.
Oct 10 23:56:27 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:56:27 np0005480824 podman[290610]: 2025-10-11 03:56:27.394476025 +0000 UTC m=+0.022614876 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:56:27 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9158ab5254d9b4559b983280dd970a575ca56dc389fb7dd4e74324877f421d96/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:56:27 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9158ab5254d9b4559b983280dd970a575ca56dc389fb7dd4e74324877f421d96/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:56:27 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9158ab5254d9b4559b983280dd970a575ca56dc389fb7dd4e74324877f421d96/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:56:27 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9158ab5254d9b4559b983280dd970a575ca56dc389fb7dd4e74324877f421d96/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:56:27 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9158ab5254d9b4559b983280dd970a575ca56dc389fb7dd4e74324877f421d96/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 23:56:27 np0005480824 podman[290610]: 2025-10-11 03:56:27.506571615 +0000 UTC m=+0.134710466 container init 3dd5ce0c00b60c45a27944dc64d4810cb422057ca3becdd02edd58586b042820 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 10 23:56:27 np0005480824 podman[290610]: 2025-10-11 03:56:27.517081024 +0000 UTC m=+0.145219865 container start 3dd5ce0c00b60c45a27944dc64d4810cb422057ca3becdd02edd58586b042820 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_fermat, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 10 23:56:27 np0005480824 podman[290610]: 2025-10-11 03:56:27.531423903 +0000 UTC m=+0.159562764 container attach 3dd5ce0c00b60c45a27944dc64d4810cb422057ca3becdd02edd58586b042820 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 10 23:56:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:56:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:56:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:56:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:56:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:56:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:56:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Optimize plan auto_2025-10-11_03:56:27
Oct 10 23:56:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 23:56:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] do_upmap
Oct 10 23:56:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] pools ['backups', 'default.rgw.control', '.rgw.root', 'vms', 'cephfs.cephfs.data', '.mgr', 'volumes', 'default.rgw.log', 'default.rgw.meta', 'images', 'cephfs.cephfs.meta']
Oct 10 23:56:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] prepared 0/10 changes
Oct 10 23:56:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 23:56:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 23:56:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:56:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:56:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:56:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:56:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:56:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:56:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:56:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:56:28 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e329 do_prune osdmap full prune enabled
Oct 10 23:56:28 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e330 e330: 3 total, 3 up, 3 in
Oct 10 23:56:28 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e330: 3 total, 3 up, 3 in
Oct 10 23:56:28 np0005480824 ovn_controller[152667]: 2025-10-11T03:56:28Z|00188|binding|INFO|Releasing lport 039c7668-0b85-4466-9c66-62531405028d from this chassis (sb_readonly=0)
Oct 10 23:56:28 np0005480824 pensive_fermat[290626]: --> passed data devices: 0 physical, 3 LVM
Oct 10 23:56:28 np0005480824 pensive_fermat[290626]: --> relative data size: 1.0
Oct 10 23:56:28 np0005480824 pensive_fermat[290626]: --> All data devices are unavailable
Oct 10 23:56:28 np0005480824 nova_compute[260089]: 2025-10-11 03:56:28.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:56:28 np0005480824 systemd[1]: libpod-3dd5ce0c00b60c45a27944dc64d4810cb422057ca3becdd02edd58586b042820.scope: Deactivated successfully.
Oct 10 23:56:28 np0005480824 systemd[1]: libpod-3dd5ce0c00b60c45a27944dc64d4810cb422057ca3becdd02edd58586b042820.scope: Consumed 1.093s CPU time.
Oct 10 23:56:28 np0005480824 podman[290610]: 2025-10-11 03:56:28.694684797 +0000 UTC m=+1.322823668 container died 3dd5ce0c00b60c45a27944dc64d4810cb422057ca3becdd02edd58586b042820 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:56:28 np0005480824 systemd[1]: var-lib-containers-storage-overlay-9158ab5254d9b4559b983280dd970a575ca56dc389fb7dd4e74324877f421d96-merged.mount: Deactivated successfully.
Oct 10 23:56:28 np0005480824 podman[290610]: 2025-10-11 03:56:28.763498714 +0000 UTC m=+1.391637565 container remove 3dd5ce0c00b60c45a27944dc64d4810cb422057ca3becdd02edd58586b042820 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:56:28 np0005480824 systemd[1]: libpod-conmon-3dd5ce0c00b60c45a27944dc64d4810cb422057ca3becdd02edd58586b042820.scope: Deactivated successfully.
Oct 10 23:56:28 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1431: 321 pgs: 321 active+clean; 167 MiB data, 414 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 30 KiB/s wr, 216 op/s
Oct 10 23:56:29 np0005480824 nova_compute[260089]: 2025-10-11 03:56:29.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:56:29 np0005480824 podman[290810]: 2025-10-11 03:56:29.565444345 +0000 UTC m=+0.052336039 container create cf0dafc7502417c0add33d33d37b83401756705059d06633488977a5b56d88eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 10 23:56:29 np0005480824 systemd[1]: Started libpod-conmon-cf0dafc7502417c0add33d33d37b83401756705059d06633488977a5b56d88eb.scope.
Oct 10 23:56:29 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:56:29 np0005480824 podman[290810]: 2025-10-11 03:56:29.540979806 +0000 UTC m=+0.027871510 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:56:29 np0005480824 podman[290810]: 2025-10-11 03:56:29.65150454 +0000 UTC m=+0.138396224 container init cf0dafc7502417c0add33d33d37b83401756705059d06633488977a5b56d88eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_lalande, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 10 23:56:29 np0005480824 podman[290810]: 2025-10-11 03:56:29.658421673 +0000 UTC m=+0.145313347 container start cf0dafc7502417c0add33d33d37b83401756705059d06633488977a5b56d88eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_lalande, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 10 23:56:29 np0005480824 podman[290810]: 2025-10-11 03:56:29.661741651 +0000 UTC m=+0.148633305 container attach cf0dafc7502417c0add33d33d37b83401756705059d06633488977a5b56d88eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_lalande, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:56:29 np0005480824 elated_lalande[290827]: 167 167
Oct 10 23:56:29 np0005480824 systemd[1]: libpod-cf0dafc7502417c0add33d33d37b83401756705059d06633488977a5b56d88eb.scope: Deactivated successfully.
Oct 10 23:56:29 np0005480824 conmon[290827]: conmon cf0dafc7502417c0add3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cf0dafc7502417c0add33d33d37b83401756705059d06633488977a5b56d88eb.scope/container/memory.events
Oct 10 23:56:29 np0005480824 podman[290832]: 2025-10-11 03:56:29.708665481 +0000 UTC m=+0.031154017 container died cf0dafc7502417c0add33d33d37b83401756705059d06633488977a5b56d88eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_lalande, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 10 23:56:29 np0005480824 systemd[1]: var-lib-containers-storage-overlay-6a86dc0d8302ae5447c719dde3e394679ff4b67b095a756c3fa72b24e6d72500-merged.mount: Deactivated successfully.
Oct 10 23:56:29 np0005480824 podman[290832]: 2025-10-11 03:56:29.754477675 +0000 UTC m=+0.076966151 container remove cf0dafc7502417c0add33d33d37b83401756705059d06633488977a5b56d88eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_lalande, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 10 23:56:29 np0005480824 systemd[1]: libpod-conmon-cf0dafc7502417c0add33d33d37b83401756705059d06633488977a5b56d88eb.scope: Deactivated successfully.
Oct 10 23:56:29 np0005480824 nova_compute[260089]: 2025-10-11 03:56:29.814 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760154974.8121395, 0403d8e6-23d4-4765-a41f-eed96752c52e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:56:29 np0005480824 nova_compute[260089]: 2025-10-11 03:56:29.815 2 INFO nova.compute.manager [-] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] VM Stopped (Lifecycle Event)#033[00m
Oct 10 23:56:29 np0005480824 nova_compute[260089]: 2025-10-11 03:56:29.835 2 DEBUG nova.compute.manager [None req-f5bdf16b-84ae-4444-a52b-1920c1b21023 - - - - - -] [instance: 0403d8e6-23d4-4765-a41f-eed96752c52e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:56:29 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:56:29 np0005480824 nova_compute[260089]: 2025-10-11 03:56:29.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:56:30 np0005480824 ovn_controller[152667]: 2025-10-11T03:56:30Z|00040|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.14 does not match offer 10.100.0.12
Oct 10 23:56:30 np0005480824 ovn_controller[152667]: 2025-10-11T03:56:30Z|00041|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:0f:58:4c 10.100.0.12
Oct 10 23:56:30 np0005480824 podman[290854]: 2025-10-11 03:56:30.05119029 +0000 UTC m=+0.089613410 container create 9314f313ba8915fdb1a4b7e3447c7ff9b6bc1ac0e85aa5b881448e752884fadf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_kapitsa, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Oct 10 23:56:30 np0005480824 podman[290854]: 2025-10-11 03:56:30.015747962 +0000 UTC m=+0.054171142 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:56:30 np0005480824 systemd[1]: Started libpod-conmon-9314f313ba8915fdb1a4b7e3447c7ff9b6bc1ac0e85aa5b881448e752884fadf.scope.
Oct 10 23:56:30 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:56:30 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3ac5e0d24006d09f924a1a37a9cee8d6896fef6ab4a562241c392ab35d59323/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:56:30 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3ac5e0d24006d09f924a1a37a9cee8d6896fef6ab4a562241c392ab35d59323/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:56:30 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3ac5e0d24006d09f924a1a37a9cee8d6896fef6ab4a562241c392ab35d59323/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:56:30 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3ac5e0d24006d09f924a1a37a9cee8d6896fef6ab4a562241c392ab35d59323/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:56:30 np0005480824 podman[290854]: 2025-10-11 03:56:30.176473442 +0000 UTC m=+0.214896542 container init 9314f313ba8915fdb1a4b7e3447c7ff9b6bc1ac0e85aa5b881448e752884fadf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_kapitsa, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 10 23:56:30 np0005480824 podman[290854]: 2025-10-11 03:56:30.193484194 +0000 UTC m=+0.231907274 container start 9314f313ba8915fdb1a4b7e3447c7ff9b6bc1ac0e85aa5b881448e752884fadf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_kapitsa, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 10 23:56:30 np0005480824 podman[290854]: 2025-10-11 03:56:30.196790693 +0000 UTC m=+0.235213823 container attach 9314f313ba8915fdb1a4b7e3447c7ff9b6bc1ac0e85aa5b881448e752884fadf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_kapitsa, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 10 23:56:30 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1432: 321 pgs: 321 active+clean; 167 MiB data, 414 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 27 KiB/s wr, 194 op/s
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]: {
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:    "0": [
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:        {
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:            "devices": [
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:                "/dev/loop3"
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:            ],
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:            "lv_name": "ceph_lv0",
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:            "lv_size": "21470642176",
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0d82ce-20ea-470d-959e-f67202028a60,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:            "lv_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:            "name": "ceph_lv0",
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:            "tags": {
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:                "ceph.block_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:                "ceph.cluster_name": "ceph",
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:                "ceph.crush_device_class": "",
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:                "ceph.encrypted": "0",
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:                "ceph.osd_fsid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:                "ceph.osd_id": "0",
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:                "ceph.type": "block",
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:                "ceph.vdo": "0"
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:            },
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:            "type": "block",
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:            "vg_name": "ceph_vg0"
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:        }
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:    ],
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:    "1": [
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:        {
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:            "devices": [
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:                "/dev/loop4"
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:            ],
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:            "lv_name": "ceph_lv1",
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:            "lv_size": "21470642176",
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6875119e-c210-4ad1-aca9-6a8084a5ecc8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:            "lv_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:            "name": "ceph_lv1",
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:            "tags": {
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:                "ceph.block_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:                "ceph.cluster_name": "ceph",
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:                "ceph.crush_device_class": "",
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:                "ceph.encrypted": "0",
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:                "ceph.osd_fsid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:                "ceph.osd_id": "1",
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:                "ceph.type": "block",
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:                "ceph.vdo": "0"
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:            },
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:            "type": "block",
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:            "vg_name": "ceph_vg1"
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:        }
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:    ],
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:    "2": [
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:        {
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:            "devices": [
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:                "/dev/loop5"
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:            ],
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:            "lv_name": "ceph_lv2",
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:            "lv_size": "21470642176",
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e86945e8-6909-4584-9098-cee0dfe9add4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:            "lv_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:            "name": "ceph_lv2",
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:            "tags": {
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:                "ceph.block_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:                "ceph.cluster_name": "ceph",
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:                "ceph.crush_device_class": "",
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:                "ceph.encrypted": "0",
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:                "ceph.osd_fsid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:                "ceph.osd_id": "2",
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:                "ceph.type": "block",
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:                "ceph.vdo": "0"
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:            },
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:            "type": "block",
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:            "vg_name": "ceph_vg2"
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:        }
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]:    ]
Oct 10 23:56:31 np0005480824 cranky_kapitsa[290871]: }
Oct 10 23:56:31 np0005480824 systemd[1]: libpod-9314f313ba8915fdb1a4b7e3447c7ff9b6bc1ac0e85aa5b881448e752884fadf.scope: Deactivated successfully.
Oct 10 23:56:31 np0005480824 podman[290880]: 2025-10-11 03:56:31.102907207 +0000 UTC m=+0.039501705 container died 9314f313ba8915fdb1a4b7e3447c7ff9b6bc1ac0e85aa5b881448e752884fadf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_kapitsa, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:56:31 np0005480824 systemd[1]: var-lib-containers-storage-overlay-d3ac5e0d24006d09f924a1a37a9cee8d6896fef6ab4a562241c392ab35d59323-merged.mount: Deactivated successfully.
Oct 10 23:56:31 np0005480824 ovn_controller[152667]: 2025-10-11T03:56:31Z|00042|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0f:58:4c 10.100.0.12
Oct 10 23:56:31 np0005480824 ovn_controller[152667]: 2025-10-11T03:56:31Z|00043|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0f:58:4c 10.100.0.12
Oct 10 23:56:31 np0005480824 podman[290880]: 2025-10-11 03:56:31.163186362 +0000 UTC m=+0.099780850 container remove 9314f313ba8915fdb1a4b7e3447c7ff9b6bc1ac0e85aa5b881448e752884fadf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 10 23:56:31 np0005480824 systemd[1]: libpod-conmon-9314f313ba8915fdb1a4b7e3447c7ff9b6bc1ac0e85aa5b881448e752884fadf.scope: Deactivated successfully.
Oct 10 23:56:31 np0005480824 podman[290887]: 2025-10-11 03:56:31.188444239 +0000 UTC m=+0.092239112 container health_status f2b19cad22d0fbdd185b264c3c8b6443d09a02319dc9fb0585dc69a0f24a758d (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 10 23:56:31 np0005480824 podman[290881]: 2025-10-11 03:56:31.219103214 +0000 UTC m=+0.127557227 container health_status 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Oct 10 23:56:31 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e330 do_prune osdmap full prune enabled
Oct 10 23:56:31 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e331 e331: 3 total, 3 up, 3 in
Oct 10 23:56:31 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e331: 3 total, 3 up, 3 in
Oct 10 23:56:31 np0005480824 podman[291073]: 2025-10-11 03:56:31.989401207 +0000 UTC m=+0.060889641 container create 3c8b015ee2bbc6919351cf97176d6f25cc938138a86d8a62befcd9a62250f4d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_lumiere, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:56:32 np0005480824 systemd[1]: Started libpod-conmon-3c8b015ee2bbc6919351cf97176d6f25cc938138a86d8a62befcd9a62250f4d2.scope.
Oct 10 23:56:32 np0005480824 podman[291073]: 2025-10-11 03:56:31.960895213 +0000 UTC m=+0.032383727 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:56:32 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:56:32 np0005480824 podman[291073]: 2025-10-11 03:56:32.078900303 +0000 UTC m=+0.150388737 container init 3c8b015ee2bbc6919351cf97176d6f25cc938138a86d8a62befcd9a62250f4d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_lumiere, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:56:32 np0005480824 podman[291073]: 2025-10-11 03:56:32.088763346 +0000 UTC m=+0.160251770 container start 3c8b015ee2bbc6919351cf97176d6f25cc938138a86d8a62befcd9a62250f4d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 10 23:56:32 np0005480824 podman[291073]: 2025-10-11 03:56:32.093835816 +0000 UTC m=+0.165324300 container attach 3c8b015ee2bbc6919351cf97176d6f25cc938138a86d8a62befcd9a62250f4d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_lumiere, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:56:32 np0005480824 stoic_lumiere[291089]: 167 167
Oct 10 23:56:32 np0005480824 systemd[1]: libpod-3c8b015ee2bbc6919351cf97176d6f25cc938138a86d8a62befcd9a62250f4d2.scope: Deactivated successfully.
Oct 10 23:56:32 np0005480824 podman[291073]: 2025-10-11 03:56:32.095536226 +0000 UTC m=+0.167024920 container died 3c8b015ee2bbc6919351cf97176d6f25cc938138a86d8a62befcd9a62250f4d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_lumiere, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:56:32 np0005480824 systemd[1]: var-lib-containers-storage-overlay-70999f377fbd57fcad0a546ba17aca00f615c4c6b04f106306ee8b09bbff7697-merged.mount: Deactivated successfully.
Oct 10 23:56:32 np0005480824 podman[291073]: 2025-10-11 03:56:32.146486261 +0000 UTC m=+0.217974705 container remove 3c8b015ee2bbc6919351cf97176d6f25cc938138a86d8a62befcd9a62250f4d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_lumiere, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 10 23:56:32 np0005480824 systemd[1]: libpod-conmon-3c8b015ee2bbc6919351cf97176d6f25cc938138a86d8a62befcd9a62250f4d2.scope: Deactivated successfully.
Oct 10 23:56:32 np0005480824 nova_compute[260089]: 2025-10-11 03:56:32.296 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:56:32 np0005480824 podman[291113]: 2025-10-11 03:56:32.310215482 +0000 UTC m=+0.046163483 container create 756b25d66735692272376d2df2efb157d6d83fee879467f217587fc19a4a5bda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mccarthy, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 10 23:56:32 np0005480824 systemd[1]: Started libpod-conmon-756b25d66735692272376d2df2efb157d6d83fee879467f217587fc19a4a5bda.scope.
Oct 10 23:56:32 np0005480824 podman[291113]: 2025-10-11 03:56:32.290836394 +0000 UTC m=+0.026784435 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:56:32 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:56:32 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4221dca31d8ddabc7bedeb8b536022265bdcb18e398693cb31a5197489c0cfa9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:56:32 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4221dca31d8ddabc7bedeb8b536022265bdcb18e398693cb31a5197489c0cfa9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:56:32 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4221dca31d8ddabc7bedeb8b536022265bdcb18e398693cb31a5197489c0cfa9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:56:32 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4221dca31d8ddabc7bedeb8b536022265bdcb18e398693cb31a5197489c0cfa9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:56:32 np0005480824 podman[291113]: 2025-10-11 03:56:32.416668399 +0000 UTC m=+0.152616490 container init 756b25d66735692272376d2df2efb157d6d83fee879467f217587fc19a4a5bda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mccarthy, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 10 23:56:32 np0005480824 podman[291113]: 2025-10-11 03:56:32.424419992 +0000 UTC m=+0.160367993 container start 756b25d66735692272376d2df2efb157d6d83fee879467f217587fc19a4a5bda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:56:32 np0005480824 podman[291113]: 2025-10-11 03:56:32.427567587 +0000 UTC m=+0.163515688 container attach 756b25d66735692272376d2df2efb157d6d83fee879467f217587fc19a4a5bda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mccarthy, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 10 23:56:32 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1434: 321 pgs: 321 active+clean; 169 MiB data, 414 MiB used, 60 GiB / 60 GiB avail; 950 KiB/s rd, 48 KiB/s wr, 252 op/s
Oct 10 23:56:33 np0005480824 nova_compute[260089]: 2025-10-11 03:56:33.296 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:56:33 np0005480824 nova_compute[260089]: 2025-10-11 03:56:33.299 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:56:33 np0005480824 kind_mccarthy[291129]: {
Oct 10 23:56:33 np0005480824 kind_mccarthy[291129]:    "1d0d82ce-20ea-470d-959e-f67202028a60": {
Oct 10 23:56:33 np0005480824 kind_mccarthy[291129]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:56:33 np0005480824 kind_mccarthy[291129]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 10 23:56:33 np0005480824 kind_mccarthy[291129]:        "osd_id": 0,
Oct 10 23:56:33 np0005480824 kind_mccarthy[291129]:        "osd_uuid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:56:33 np0005480824 kind_mccarthy[291129]:        "type": "bluestore"
Oct 10 23:56:33 np0005480824 kind_mccarthy[291129]:    },
Oct 10 23:56:33 np0005480824 kind_mccarthy[291129]:    "6875119e-c210-4ad1-aca9-6a8084a5ecc8": {
Oct 10 23:56:33 np0005480824 kind_mccarthy[291129]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:56:33 np0005480824 kind_mccarthy[291129]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 10 23:56:33 np0005480824 kind_mccarthy[291129]:        "osd_id": 1,
Oct 10 23:56:33 np0005480824 kind_mccarthy[291129]:        "osd_uuid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:56:33 np0005480824 kind_mccarthy[291129]:        "type": "bluestore"
Oct 10 23:56:33 np0005480824 kind_mccarthy[291129]:    },
Oct 10 23:56:33 np0005480824 kind_mccarthy[291129]:    "e86945e8-6909-4584-9098-cee0dfe9add4": {
Oct 10 23:56:33 np0005480824 kind_mccarthy[291129]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:56:33 np0005480824 kind_mccarthy[291129]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 10 23:56:33 np0005480824 kind_mccarthy[291129]:        "osd_id": 2,
Oct 10 23:56:33 np0005480824 kind_mccarthy[291129]:        "osd_uuid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:56:33 np0005480824 kind_mccarthy[291129]:        "type": "bluestore"
Oct 10 23:56:33 np0005480824 kind_mccarthy[291129]:    }
Oct 10 23:56:33 np0005480824 kind_mccarthy[291129]: }
Oct 10 23:56:33 np0005480824 systemd[1]: libpod-756b25d66735692272376d2df2efb157d6d83fee879467f217587fc19a4a5bda.scope: Deactivated successfully.
Oct 10 23:56:33 np0005480824 podman[291113]: 2025-10-11 03:56:33.405820266 +0000 UTC m=+1.141768307 container died 756b25d66735692272376d2df2efb157d6d83fee879467f217587fc19a4a5bda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True)
Oct 10 23:56:33 np0005480824 systemd[1]: var-lib-containers-storage-overlay-4221dca31d8ddabc7bedeb8b536022265bdcb18e398693cb31a5197489c0cfa9-merged.mount: Deactivated successfully.
Oct 10 23:56:33 np0005480824 podman[291113]: 2025-10-11 03:56:33.487357104 +0000 UTC m=+1.223305115 container remove 756b25d66735692272376d2df2efb157d6d83fee879467f217587fc19a4a5bda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 10 23:56:33 np0005480824 systemd[1]: libpod-conmon-756b25d66735692272376d2df2efb157d6d83fee879467f217587fc19a4a5bda.scope: Deactivated successfully.
Oct 10 23:56:33 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 10 23:56:33 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:56:33 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 10 23:56:33 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:56:33 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev dcc39ac9-16e9-400c-8208-b77c829db881 does not exist
Oct 10 23:56:33 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 11462d00-bce8-4c01-a0b4-edb3be81e97a does not exist
Oct 10 23:56:33 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:56:33 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:56:34 np0005480824 nova_compute[260089]: 2025-10-11 03:56:34.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:56:34 np0005480824 nova_compute[260089]: 2025-10-11 03:56:34.295 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:56:34 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1435: 321 pgs: 321 active+clean; 169 MiB data, 414 MiB used, 60 GiB / 60 GiB avail; 606 KiB/s rd, 44 KiB/s wr, 178 op/s
Oct 10 23:56:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:56:34 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3598008189' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:56:34 np0005480824 nova_compute[260089]: 2025-10-11 03:56:34.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:56:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e331 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:56:35 np0005480824 nova_compute[260089]: 2025-10-11 03:56:35.296 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:56:35 np0005480824 nova_compute[260089]: 2025-10-11 03:56:35.325 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:56:35 np0005480824 nova_compute[260089]: 2025-10-11 03:56:35.325 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:56:35 np0005480824 nova_compute[260089]: 2025-10-11 03:56:35.325 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:56:35 np0005480824 nova_compute[260089]: 2025-10-11 03:56:35.326 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 10 23:56:35 np0005480824 nova_compute[260089]: 2025-10-11 03:56:35.326 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:56:35 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:56:35 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2040467395' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:56:35 np0005480824 nova_compute[260089]: 2025-10-11 03:56:35.798 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:56:35 np0005480824 nova_compute[260089]: 2025-10-11 03:56:35.893 2 DEBUG nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 10 23:56:35 np0005480824 nova_compute[260089]: 2025-10-11 03:56:35.894 2 DEBUG nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 10 23:56:35 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e331 do_prune osdmap full prune enabled
Oct 10 23:56:35 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e332 e332: 3 total, 3 up, 3 in
Oct 10 23:56:35 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e332: 3 total, 3 up, 3 in
Oct 10 23:56:36 np0005480824 nova_compute[260089]: 2025-10-11 03:56:36.138 2 WARNING nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 10 23:56:36 np0005480824 nova_compute[260089]: 2025-10-11 03:56:36.139 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4203MB free_disk=59.98813247680664GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 10 23:56:36 np0005480824 nova_compute[260089]: 2025-10-11 03:56:36.139 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:56:36 np0005480824 nova_compute[260089]: 2025-10-11 03:56:36.140 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:56:36 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:56:36 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4066592249' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:56:36 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:56:36 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4066592249' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:56:36 np0005480824 nova_compute[260089]: 2025-10-11 03:56:36.213 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Instance 3ccdaa3b-882a-432f-b619-002ded45ac60 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 10 23:56:36 np0005480824 nova_compute[260089]: 2025-10-11 03:56:36.213 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 10 23:56:36 np0005480824 nova_compute[260089]: 2025-10-11 03:56:36.213 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 10 23:56:36 np0005480824 nova_compute[260089]: 2025-10-11 03:56:36.225 2 DEBUG nova.scheduler.client.report [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Refreshing inventories for resource provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct 10 23:56:36 np0005480824 nova_compute[260089]: 2025-10-11 03:56:36.242 2 DEBUG nova.scheduler.client.report [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Updating ProviderTree inventory for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct 10 23:56:36 np0005480824 nova_compute[260089]: 2025-10-11 03:56:36.243 2 DEBUG nova.compute.provider_tree [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Updating inventory in ProviderTree for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 10 23:56:36 np0005480824 nova_compute[260089]: 2025-10-11 03:56:36.260 2 DEBUG nova.scheduler.client.report [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Refreshing aggregate associations for resource provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct 10 23:56:36 np0005480824 nova_compute[260089]: 2025-10-11 03:56:36.277 2 DEBUG nova.scheduler.client.report [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Refreshing trait associations for resource provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72, traits: COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE,HW_CPU_X86_SSE41,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_MMX,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_DEVICE_TAGGING,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_AVX,HW_CPU_X86_SHA,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSSE3,HW_CPU_X86_BMI2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_FDC,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_CLMUL,HW_CPU_X86_AVX2,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_PCNET _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct 10 23:56:36 np0005480824 nova_compute[260089]: 2025-10-11 03:56:36.309 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:56:36 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:56:36 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2417024131' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:56:36 np0005480824 nova_compute[260089]: 2025-10-11 03:56:36.763 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:56:36 np0005480824 nova_compute[260089]: 2025-10-11 03:56:36.770 2 DEBUG nova.compute.provider_tree [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 23:56:36 np0005480824 nova_compute[260089]: 2025-10-11 03:56:36.792 2 DEBUG nova.scheduler.client.report [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 23:56:36 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1437: 321 pgs: 321 active+clean; 169 MiB data, 414 MiB used, 60 GiB / 60 GiB avail; 98 KiB/s rd, 32 KiB/s wr, 129 op/s
Oct 10 23:56:36 np0005480824 nova_compute[260089]: 2025-10-11 03:56:36.818 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 10 23:56:36 np0005480824 nova_compute[260089]: 2025-10-11 03:56:36.818 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.679s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:56:36 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e332 do_prune osdmap full prune enabled
Oct 10 23:56:36 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e333 e333: 3 total, 3 up, 3 in
Oct 10 23:56:36 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e333: 3 total, 3 up, 3 in
Oct 10 23:56:37 np0005480824 nova_compute[260089]: 2025-10-11 03:56:37.819 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:56:37 np0005480824 nova_compute[260089]: 2025-10-11 03:56:37.819 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 10 23:56:37 np0005480824 nova_compute[260089]: 2025-10-11 03:56:37.819 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 10 23:56:37 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e333 do_prune osdmap full prune enabled
Oct 10 23:56:37 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e334 e334: 3 total, 3 up, 3 in
Oct 10 23:56:37 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e334: 3 total, 3 up, 3 in
Oct 10 23:56:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 23:56:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:56:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 23:56:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:56:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.480037605000977e-06 of space, bias 1.0, pg target 0.0007440112815002931 quantized to 32 (current 32)
Oct 10 23:56:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:56:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0011062875439538972 of space, bias 1.0, pg target 0.33188626318616915 quantized to 32 (current 32)
Oct 10 23:56:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:56:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Oct 10 23:56:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:56:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 10 23:56:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:56:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 23:56:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:56:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:56:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:56:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 10 23:56:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:56:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 23:56:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:56:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:56:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:56:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 10 23:56:38 np0005480824 nova_compute[260089]: 2025-10-11 03:56:38.099 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "refresh_cache-3ccdaa3b-882a-432f-b619-002ded45ac60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:56:38 np0005480824 nova_compute[260089]: 2025-10-11 03:56:38.100 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquired lock "refresh_cache-3ccdaa3b-882a-432f-b619-002ded45ac60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:56:38 np0005480824 nova_compute[260089]: 2025-10-11 03:56:38.100 2 DEBUG nova.network.neutron [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 10 23:56:38 np0005480824 nova_compute[260089]: 2025-10-11 03:56:38.100 2 DEBUG nova.objects.instance [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3ccdaa3b-882a-432f-b619-002ded45ac60 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:56:38 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:56:38 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3599226149' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:56:38 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:56:38 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3599226149' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:56:38 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1440: 321 pgs: 321 active+clean; 169 MiB data, 418 MiB used, 60 GiB / 60 GiB avail; 143 KiB/s rd, 13 KiB/s wr, 185 op/s
Oct 10 23:56:39 np0005480824 nova_compute[260089]: 2025-10-11 03:56:39.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:56:39 np0005480824 podman[291272]: 2025-10-11 03:56:39.104421606 +0000 UTC m=+0.155381055 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Oct 10 23:56:39 np0005480824 nova_compute[260089]: 2025-10-11 03:56:39.168 2 DEBUG nova.network.neutron [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Updating instance_info_cache with network_info: [{"id": "3d1404de-38bf-4d1c-960e-bcc14817fcc6", "address": "fa:16:3e:0f:58:4c", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d1404de-38", "ovs_interfaceid": "3d1404de-38bf-4d1c-960e-bcc14817fcc6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:56:39 np0005480824 nova_compute[260089]: 2025-10-11 03:56:39.199 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Releasing lock "refresh_cache-3ccdaa3b-882a-432f-b619-002ded45ac60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:56:39 np0005480824 nova_compute[260089]: 2025-10-11 03:56:39.200 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 10 23:56:39 np0005480824 nova_compute[260089]: 2025-10-11 03:56:39.201 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:56:39 np0005480824 nova_compute[260089]: 2025-10-11 03:56:39.201 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:56:39 np0005480824 nova_compute[260089]: 2025-10-11 03:56:39.202 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 10 23:56:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:56:39 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3755968979' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:56:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:56:39 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3755968979' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:56:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e334 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:56:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e334 do_prune osdmap full prune enabled
Oct 10 23:56:39 np0005480824 nova_compute[260089]: 2025-10-11 03:56:39.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:56:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e335 e335: 3 total, 3 up, 3 in
Oct 10 23:56:39 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e335: 3 total, 3 up, 3 in
Oct 10 23:56:40 np0005480824 nova_compute[260089]: 2025-10-11 03:56:40.297 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:56:40 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1442: 321 pgs: 321 active+clean; 169 MiB data, 418 MiB used, 60 GiB / 60 GiB avail; 176 KiB/s rd, 16 KiB/s wr, 229 op/s
Oct 10 23:56:41 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:56:41 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2961469662' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:56:41 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:56:41 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2961469662' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:56:42 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1443: 321 pgs: 321 active+clean; 169 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 218 KiB/s rd, 11 KiB/s wr, 289 op/s
Oct 10 23:56:43 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:56:43 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3104822290' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:56:43 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:56:43 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3104822290' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:56:44 np0005480824 podman[291299]: 2025-10-11 03:56:44.015484081 +0000 UTC m=+0.065074339 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Oct 10 23:56:44 np0005480824 nova_compute[260089]: 2025-10-11 03:56:44.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:56:44 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1444: 321 pgs: 321 active+clean; 169 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 8.7 KiB/s wr, 220 op/s
Oct 10 23:56:44 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:56:44 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1384702493' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:56:44 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:56:44 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1384702493' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:56:44 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e335 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:56:44 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e335 do_prune osdmap full prune enabled
Oct 10 23:56:44 np0005480824 nova_compute[260089]: 2025-10-11 03:56:44.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:56:44 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e336 e336: 3 total, 3 up, 3 in
Oct 10 23:56:44 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e336: 3 total, 3 up, 3 in
Oct 10 23:56:46 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:56:46 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1939509380' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:56:46 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:56:46 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1939509380' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:56:46 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:56:46 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3205004730' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:56:46 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1446: 321 pgs: 321 active+clean; 169 MiB data, 423 MiB used, 60 GiB / 60 GiB avail; 116 KiB/s rd, 12 KiB/s wr, 154 op/s
Oct 10 23:56:47 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e336 do_prune osdmap full prune enabled
Oct 10 23:56:47 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e337 e337: 3 total, 3 up, 3 in
Oct 10 23:56:47 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e337: 3 total, 3 up, 3 in
Oct 10 23:56:47 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:56:47 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2215792622' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:56:47 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:56:47 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2215792622' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:56:48 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e337 do_prune osdmap full prune enabled
Oct 10 23:56:48 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e338 e338: 3 total, 3 up, 3 in
Oct 10 23:56:48 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e338: 3 total, 3 up, 3 in
Oct 10 23:56:48 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1449: 321 pgs: 321 active+clean; 169 MiB data, 423 MiB used, 60 GiB / 60 GiB avail; 90 KiB/s rd, 10 KiB/s wr, 122 op/s
Oct 10 23:56:49 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e338 do_prune osdmap full prune enabled
Oct 10 23:56:49 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e339 e339: 3 total, 3 up, 3 in
Oct 10 23:56:49 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e339: 3 total, 3 up, 3 in
Oct 10 23:56:49 np0005480824 nova_compute[260089]: 2025-10-11 03:56:49.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:56:49 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:56:49 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3746059705' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:56:49 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:56:49 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3746059705' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:56:49 np0005480824 nova_compute[260089]: 2025-10-11 03:56:49.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:56:49 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:56:50 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e339 do_prune osdmap full prune enabled
Oct 10 23:56:50 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e340 e340: 3 total, 3 up, 3 in
Oct 10 23:56:50 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e340: 3 total, 3 up, 3 in
Oct 10 23:56:50 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1452: 321 pgs: 321 active+clean; 169 MiB data, 423 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 2.5 KiB/s wr, 115 op/s
Oct 10 23:56:51 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e340 do_prune osdmap full prune enabled
Oct 10 23:56:51 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e341 e341: 3 total, 3 up, 3 in
Oct 10 23:56:51 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e341: 3 total, 3 up, 3 in
Oct 10 23:56:51 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:56:51.430 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:30:f4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'fe:89:7c:57:3f:71'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 23:56:51 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:56:51.431 162245 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 10 23:56:51 np0005480824 nova_compute[260089]: 2025-10-11 03:56:51.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:56:52 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e341 do_prune osdmap full prune enabled
Oct 10 23:56:52 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e342 e342: 3 total, 3 up, 3 in
Oct 10 23:56:52 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e342: 3 total, 3 up, 3 in
Oct 10 23:56:52 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:56:52 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2664500803' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:56:52 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:56:52 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2664500803' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:56:52 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1455: 321 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 312 active+clean; 169 MiB data, 423 MiB used, 60 GiB / 60 GiB avail; 760 KiB/s rd, 7.7 KiB/s wr, 192 op/s
Oct 10 23:56:54 np0005480824 nova_compute[260089]: 2025-10-11 03:56:54.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:56:54 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:56:54.434 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=14b06507-d00b-4e27-a47d-46a5c2644635, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:56:54 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1456: 321 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 312 active+clean; 169 MiB data, 423 MiB used, 60 GiB / 60 GiB avail; 531 KiB/s rd, 5.4 KiB/s wr, 134 op/s
Oct 10 23:56:54 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:56:54 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e342 do_prune osdmap full prune enabled
Oct 10 23:56:54 np0005480824 nova_compute[260089]: 2025-10-11 03:56:54.930 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:56:54 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e343 e343: 3 total, 3 up, 3 in
Oct 10 23:56:54 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e343: 3 total, 3 up, 3 in
Oct 10 23:56:56 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1458: 321 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 312 active+clean; 169 MiB data, 423 MiB used, 60 GiB / 60 GiB avail; 522 KiB/s rd, 11 KiB/s wr, 148 op/s
Oct 10 23:56:57 np0005480824 nova_compute[260089]: 2025-10-11 03:56:57.308 2 DEBUG oslo_concurrency.lockutils [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Acquiring lock "8468f5dd-633a-4b6d-a205-ba75e8e070bb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:56:57 np0005480824 nova_compute[260089]: 2025-10-11 03:56:57.308 2 DEBUG oslo_concurrency.lockutils [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "8468f5dd-633a-4b6d-a205-ba75e8e070bb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:56:57 np0005480824 nova_compute[260089]: 2025-10-11 03:56:57.325 2 DEBUG nova.compute.manager [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 10 23:56:57 np0005480824 nova_compute[260089]: 2025-10-11 03:56:57.398 2 DEBUG oslo_concurrency.lockutils [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:56:57 np0005480824 nova_compute[260089]: 2025-10-11 03:56:57.398 2 DEBUG oslo_concurrency.lockutils [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:56:57 np0005480824 nova_compute[260089]: 2025-10-11 03:56:57.406 2 DEBUG nova.virt.hardware [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 10 23:56:57 np0005480824 nova_compute[260089]: 2025-10-11 03:56:57.406 2 INFO nova.compute.claims [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 10 23:56:57 np0005480824 nova_compute[260089]: 2025-10-11 03:56:57.519 2 DEBUG oslo_concurrency.processutils [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:56:57 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:56:57 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3688240631' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:56:57 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:56:57 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3688240631' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:56:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:56:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:56:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:56:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:56:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:56:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:56:57 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:56:57 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3833107904' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:56:57 np0005480824 nova_compute[260089]: 2025-10-11 03:56:57.956 2 DEBUG oslo_concurrency.processutils [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:56:57 np0005480824 nova_compute[260089]: 2025-10-11 03:56:57.962 2 DEBUG nova.compute.provider_tree [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 23:56:57 np0005480824 nova_compute[260089]: 2025-10-11 03:56:57.983 2 DEBUG nova.scheduler.client.report [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 23:56:58 np0005480824 nova_compute[260089]: 2025-10-11 03:56:58.016 2 DEBUG oslo_concurrency.lockutils [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.618s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:56:58 np0005480824 nova_compute[260089]: 2025-10-11 03:56:58.017 2 DEBUG nova.compute.manager [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 10 23:56:58 np0005480824 nova_compute[260089]: 2025-10-11 03:56:58.106 2 DEBUG nova.compute.manager [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 10 23:56:58 np0005480824 nova_compute[260089]: 2025-10-11 03:56:58.106 2 DEBUG nova.network.neutron [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 10 23:56:58 np0005480824 nova_compute[260089]: 2025-10-11 03:56:58.136 2 INFO nova.virt.libvirt.driver [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 10 23:56:58 np0005480824 nova_compute[260089]: 2025-10-11 03:56:58.160 2 DEBUG nova.compute.manager [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 10 23:56:58 np0005480824 nova_compute[260089]: 2025-10-11 03:56:58.214 2 INFO nova.virt.block_device [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Booting with volume 8d0713f8-daa0-4bb8-b4e0-7449c6cecb8d at /dev/vda#033[00m
Oct 10 23:56:58 np0005480824 nova_compute[260089]: 2025-10-11 03:56:58.350 2 DEBUG os_brick.utils [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct 10 23:56:58 np0005480824 nova_compute[260089]: 2025-10-11 03:56:58.353 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:56:58 np0005480824 nova_compute[260089]: 2025-10-11 03:56:58.368 676 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:56:58 np0005480824 nova_compute[260089]: 2025-10-11 03:56:58.368 676 DEBUG oslo.privsep.daemon [-] privsep: reply[d632bdb2-a4e0-490e-bd94-4263a648b64f]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:56:58 np0005480824 nova_compute[260089]: 2025-10-11 03:56:58.370 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:56:58 np0005480824 nova_compute[260089]: 2025-10-11 03:56:58.381 676 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:56:58 np0005480824 nova_compute[260089]: 2025-10-11 03:56:58.382 676 DEBUG oslo.privsep.daemon [-] privsep: reply[a6c106a7-2b57-4678-ac31-b8a6a20653bc]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d5d671ddab5a', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:56:58 np0005480824 nova_compute[260089]: 2025-10-11 03:56:58.384 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:56:58 np0005480824 nova_compute[260089]: 2025-10-11 03:56:58.396 676 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:56:58 np0005480824 nova_compute[260089]: 2025-10-11 03:56:58.396 676 DEBUG oslo.privsep.daemon [-] privsep: reply[8c09b3d6-139f-42b8-bee5-5f461498e604]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:56:58 np0005480824 nova_compute[260089]: 2025-10-11 03:56:58.398 676 DEBUG oslo.privsep.daemon [-] privsep: reply[306764eb-9383-4c54-bf8e-81b335b0d0ce]: (4, 'fb3a2fb1-9efa-43f0-a057-bf422ac6b8d7') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:56:58 np0005480824 nova_compute[260089]: 2025-10-11 03:56:58.399 2 DEBUG oslo_concurrency.processutils [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:56:58 np0005480824 nova_compute[260089]: 2025-10-11 03:56:58.431 2 DEBUG oslo_concurrency.processutils [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] CMD "nvme version" returned: 0 in 0.032s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:56:58 np0005480824 nova_compute[260089]: 2025-10-11 03:56:58.435 2 DEBUG os_brick.initiator.connectors.lightos [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct 10 23:56:58 np0005480824 nova_compute[260089]: 2025-10-11 03:56:58.435 2 DEBUG os_brick.initiator.connectors.lightos [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct 10 23:56:58 np0005480824 nova_compute[260089]: 2025-10-11 03:56:58.436 2 DEBUG os_brick.initiator.connectors.lightos [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct 10 23:56:58 np0005480824 nova_compute[260089]: 2025-10-11 03:56:58.436 2 DEBUG os_brick.utils [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] <== get_connector_properties: return (85ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d5d671ddab5a', 'do_local_attach': False, 'nvme_hostid': '83042a20-0f72-4c47-8453-e72ead378624', 'system uuid': 'fb3a2fb1-9efa-43f0-a057-bf422ac6b8d7', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct 10 23:56:58 np0005480824 nova_compute[260089]: 2025-10-11 03:56:58.437 2 DEBUG nova.virt.block_device [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Updating existing volume attachment record: 8939903f-ef21-475b-b65a-640e2b436ecf _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct 10 23:56:58 np0005480824 nova_compute[260089]: 2025-10-11 03:56:58.533 2 DEBUG nova.policy [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '38ebc503771e417aaf1f3aea0c835994', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '55d21391a321476eb133317b3402b0f0', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 10 23:56:58 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1459: 321 pgs: 321 active+clean; 169 MiB data, 423 MiB used, 60 GiB / 60 GiB avail; 430 KiB/s rd, 16 KiB/s wr, 151 op/s
Oct 10 23:56:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:56:59 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1505118920' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:56:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:56:59 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1505118920' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:56:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:56:59 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1115260657' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:56:59 np0005480824 nova_compute[260089]: 2025-10-11 03:56:59.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:56:59 np0005480824 ovn_controller[152667]: 2025-10-11T03:56:59Z|00189|memory_trim|INFO|Detected inactivity (last active 30006 ms ago): trimming memory
Oct 10 23:56:59 np0005480824 nova_compute[260089]: 2025-10-11 03:56:59.264 2 DEBUG nova.network.neutron [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Successfully created port: 479fe4cd-8bd2-48dd-a7ca-e39f24e57b10 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 10 23:56:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:56:59 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2876268939' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:56:59 np0005480824 nova_compute[260089]: 2025-10-11 03:56:59.385 2 DEBUG nova.compute.manager [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 10 23:56:59 np0005480824 nova_compute[260089]: 2025-10-11 03:56:59.387 2 DEBUG nova.virt.libvirt.driver [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 10 23:56:59 np0005480824 nova_compute[260089]: 2025-10-11 03:56:59.387 2 INFO nova.virt.libvirt.driver [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Creating image(s)#033[00m
Oct 10 23:56:59 np0005480824 nova_compute[260089]: 2025-10-11 03:56:59.388 2 DEBUG nova.virt.libvirt.driver [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Oct 10 23:56:59 np0005480824 nova_compute[260089]: 2025-10-11 03:56:59.388 2 DEBUG nova.virt.libvirt.driver [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Ensure instance console log exists: /var/lib/nova/instances/8468f5dd-633a-4b6d-a205-ba75e8e070bb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 10 23:56:59 np0005480824 nova_compute[260089]: 2025-10-11 03:56:59.389 2 DEBUG oslo_concurrency.lockutils [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:56:59 np0005480824 nova_compute[260089]: 2025-10-11 03:56:59.389 2 DEBUG oslo_concurrency.lockutils [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:56:59 np0005480824 nova_compute[260089]: 2025-10-11 03:56:59.390 2 DEBUG oslo_concurrency.lockutils [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:56:59 np0005480824 nova_compute[260089]: 2025-10-11 03:56:59.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:56:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:56:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e343 do_prune osdmap full prune enabled
Oct 10 23:56:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e344 e344: 3 total, 3 up, 3 in
Oct 10 23:56:59 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e344: 3 total, 3 up, 3 in
Oct 10 23:56:59 np0005480824 nova_compute[260089]: 2025-10-11 03:56:59.994 2 DEBUG nova.network.neutron [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Successfully updated port: 479fe4cd-8bd2-48dd-a7ca-e39f24e57b10 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 10 23:57:00 np0005480824 nova_compute[260089]: 2025-10-11 03:57:00.011 2 DEBUG oslo_concurrency.lockutils [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Acquiring lock "refresh_cache-8468f5dd-633a-4b6d-a205-ba75e8e070bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:57:00 np0005480824 nova_compute[260089]: 2025-10-11 03:57:00.012 2 DEBUG oslo_concurrency.lockutils [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Acquired lock "refresh_cache-8468f5dd-633a-4b6d-a205-ba75e8e070bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:57:00 np0005480824 nova_compute[260089]: 2025-10-11 03:57:00.012 2 DEBUG nova.network.neutron [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 10 23:57:00 np0005480824 nova_compute[260089]: 2025-10-11 03:57:00.109 2 DEBUG nova.compute.manager [req-04d4e75a-0eae-46fd-af50-9de9160160e0 req-9908b627-c1cd-4772-b8c1-aaf7ce036481 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Received event network-changed-479fe4cd-8bd2-48dd-a7ca-e39f24e57b10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:57:00 np0005480824 nova_compute[260089]: 2025-10-11 03:57:00.110 2 DEBUG nova.compute.manager [req-04d4e75a-0eae-46fd-af50-9de9160160e0 req-9908b627-c1cd-4772-b8c1-aaf7ce036481 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Refreshing instance network info cache due to event network-changed-479fe4cd-8bd2-48dd-a7ca-e39f24e57b10. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 10 23:57:00 np0005480824 nova_compute[260089]: 2025-10-11 03:57:00.111 2 DEBUG oslo_concurrency.lockutils [req-04d4e75a-0eae-46fd-af50-9de9160160e0 req-9908b627-c1cd-4772-b8c1-aaf7ce036481 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "refresh_cache-8468f5dd-633a-4b6d-a205-ba75e8e070bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:57:00 np0005480824 nova_compute[260089]: 2025-10-11 03:57:00.161 2 DEBUG nova.network.neutron [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 10 23:57:00 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:57:00 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/101392575' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:57:00 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:57:00 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/101392575' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:57:00 np0005480824 nova_compute[260089]: 2025-10-11 03:57:00.669 2 DEBUG nova.network.neutron [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Updating instance_info_cache with network_info: [{"id": "479fe4cd-8bd2-48dd-a7ca-e39f24e57b10", "address": "fa:16:3e:de:c7:00", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap479fe4cd-8b", "ovs_interfaceid": "479fe4cd-8bd2-48dd-a7ca-e39f24e57b10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:57:00 np0005480824 nova_compute[260089]: 2025-10-11 03:57:00.692 2 DEBUG oslo_concurrency.lockutils [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Releasing lock "refresh_cache-8468f5dd-633a-4b6d-a205-ba75e8e070bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:57:00 np0005480824 nova_compute[260089]: 2025-10-11 03:57:00.693 2 DEBUG nova.compute.manager [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Instance network_info: |[{"id": "479fe4cd-8bd2-48dd-a7ca-e39f24e57b10", "address": "fa:16:3e:de:c7:00", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap479fe4cd-8b", "ovs_interfaceid": "479fe4cd-8bd2-48dd-a7ca-e39f24e57b10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 10 23:57:00 np0005480824 nova_compute[260089]: 2025-10-11 03:57:00.693 2 DEBUG oslo_concurrency.lockutils [req-04d4e75a-0eae-46fd-af50-9de9160160e0 req-9908b627-c1cd-4772-b8c1-aaf7ce036481 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquired lock "refresh_cache-8468f5dd-633a-4b6d-a205-ba75e8e070bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:57:00 np0005480824 nova_compute[260089]: 2025-10-11 03:57:00.694 2 DEBUG nova.network.neutron [req-04d4e75a-0eae-46fd-af50-9de9160160e0 req-9908b627-c1cd-4772-b8c1-aaf7ce036481 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Refreshing network info cache for port 479fe4cd-8bd2-48dd-a7ca-e39f24e57b10 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 10 23:57:00 np0005480824 nova_compute[260089]: 2025-10-11 03:57:00.697 2 DEBUG nova.virt.libvirt.driver [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Start _get_guest_xml network_info=[{"id": "479fe4cd-8bd2-48dd-a7ca-e39f24e57b10", "address": "fa:16:3e:de:c7:00", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap479fe4cd-8b", "ovs_interfaceid": "479fe4cd-8bd2-48dd-a7ca-e39f24e57b10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'attachment_id': '8939903f-ef21-475b-b65a-640e2b436ecf', 'mount_device': '/dev/vda', 'delete_on_termination': False, 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-8d0713f8-daa0-4bb8-b4e0-7449c6cecb8d', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '8d0713f8-daa0-4bb8-b4e0-7449c6cecb8d', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '8468f5dd-633a-4b6d-a205-ba75e8e070bb', 'attached_at': '', 'detached_at': '', 'volume_id': '8d0713f8-daa0-4bb8-b4e0-7449c6cecb8d', 'serial': '8d0713f8-daa0-4bb8-b4e0-7449c6cecb8d'}, 'device_type': 'disk', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 10 23:57:00 np0005480824 nova_compute[260089]: 2025-10-11 03:57:00.704 2 WARNING nova.virt.libvirt.driver [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 10 23:57:00 np0005480824 nova_compute[260089]: 2025-10-11 03:57:00.714 2 DEBUG nova.virt.libvirt.host [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 10 23:57:00 np0005480824 nova_compute[260089]: 2025-10-11 03:57:00.715 2 DEBUG nova.virt.libvirt.host [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 10 23:57:00 np0005480824 nova_compute[260089]: 2025-10-11 03:57:00.720 2 DEBUG nova.virt.libvirt.host [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 10 23:57:00 np0005480824 nova_compute[260089]: 2025-10-11 03:57:00.721 2 DEBUG nova.virt.libvirt.host [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 10 23:57:00 np0005480824 nova_compute[260089]: 2025-10-11 03:57:00.722 2 DEBUG nova.virt.libvirt.driver [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 10 23:57:00 np0005480824 nova_compute[260089]: 2025-10-11 03:57:00.722 2 DEBUG nova.virt.hardware [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T03:44:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6707ecae-2ae2-4c2d-86dc-409bac38f6a5',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 10 23:57:00 np0005480824 nova_compute[260089]: 2025-10-11 03:57:00.723 2 DEBUG nova.virt.hardware [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 10 23:57:00 np0005480824 nova_compute[260089]: 2025-10-11 03:57:00.723 2 DEBUG nova.virt.hardware [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 10 23:57:00 np0005480824 nova_compute[260089]: 2025-10-11 03:57:00.723 2 DEBUG nova.virt.hardware [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 10 23:57:00 np0005480824 nova_compute[260089]: 2025-10-11 03:57:00.724 2 DEBUG nova.virt.hardware [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 10 23:57:00 np0005480824 nova_compute[260089]: 2025-10-11 03:57:00.724 2 DEBUG nova.virt.hardware [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 10 23:57:00 np0005480824 nova_compute[260089]: 2025-10-11 03:57:00.724 2 DEBUG nova.virt.hardware [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 10 23:57:00 np0005480824 nova_compute[260089]: 2025-10-11 03:57:00.725 2 DEBUG nova.virt.hardware [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 10 23:57:00 np0005480824 nova_compute[260089]: 2025-10-11 03:57:00.725 2 DEBUG nova.virt.hardware [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 10 23:57:00 np0005480824 nova_compute[260089]: 2025-10-11 03:57:00.725 2 DEBUG nova.virt.hardware [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 10 23:57:00 np0005480824 nova_compute[260089]: 2025-10-11 03:57:00.726 2 DEBUG nova.virt.hardware [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 10 23:57:00 np0005480824 nova_compute[260089]: 2025-10-11 03:57:00.756 2 DEBUG nova.storage.rbd_utils [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] rbd image 8468f5dd-633a-4b6d-a205-ba75e8e070bb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:57:00 np0005480824 nova_compute[260089]: 2025-10-11 03:57:00.762 2 DEBUG oslo_concurrency.processutils [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:57:00 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1461: 321 pgs: 321 active+clean; 169 MiB data, 423 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 12 KiB/s wr, 49 op/s
Oct 10 23:57:00 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e344 do_prune osdmap full prune enabled
Oct 10 23:57:00 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e345 e345: 3 total, 3 up, 3 in
Oct 10 23:57:00 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e345: 3 total, 3 up, 3 in
Oct 10 23:57:01 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:57:01 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3169451125' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:57:01 np0005480824 nova_compute[260089]: 2025-10-11 03:57:01.267 2 DEBUG oslo_concurrency.processutils [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:57:01 np0005480824 nova_compute[260089]: 2025-10-11 03:57:01.295 2 DEBUG nova.virt.libvirt.vif [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T03:56:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-84294639',display_name='tempest-TestVolumeBootPattern-server-84294639',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-84294639',id=21,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEHKqCtFesGlIN9DGdSuPEGCilj3bKmCIyQ2Hx4tQRLuRoOqWjhIRAgPC71aK1tfMSZbOh/7KRfo7uhOOwgBdYVdW77mjMG+sfmvlDoQnrLmEWQMeSschoC2XBAsdgkOOQ==',key_name='tempest-TestVolumeBootPattern-1691748970',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='55d21391a321476eb133317b3402b0f0',ramdisk_id='',reservation_id='r-0t350jqn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-739984652',owner_user_name='tempest-TestVolumeBootPattern-739984652-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T03:56:58Z,user_data=None,user_id='38ebc503771e417aaf1f3aea0c835994',uuid=8468f5dd-633a-4b6d-a205-ba75e8e070bb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "479fe4cd-8bd2-48dd-a7ca-e39f24e57b10", "address": "fa:16:3e:de:c7:00", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap479fe4cd-8b", "ovs_interfaceid": "479fe4cd-8bd2-48dd-a7ca-e39f24e57b10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 10 23:57:01 np0005480824 nova_compute[260089]: 2025-10-11 03:57:01.296 2 DEBUG nova.network.os_vif_util [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Converting VIF {"id": "479fe4cd-8bd2-48dd-a7ca-e39f24e57b10", "address": "fa:16:3e:de:c7:00", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap479fe4cd-8b", "ovs_interfaceid": "479fe4cd-8bd2-48dd-a7ca-e39f24e57b10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 23:57:01 np0005480824 nova_compute[260089]: 2025-10-11 03:57:01.298 2 DEBUG nova.network.os_vif_util [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:c7:00,bridge_name='br-int',has_traffic_filtering=True,id=479fe4cd-8bd2-48dd-a7ca-e39f24e57b10,network=Network(359720eb-a957-4bcd-b9b2-3cf7dad947e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap479fe4cd-8b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 23:57:01 np0005480824 nova_compute[260089]: 2025-10-11 03:57:01.300 2 DEBUG nova.objects.instance [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8468f5dd-633a-4b6d-a205-ba75e8e070bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:57:01 np0005480824 nova_compute[260089]: 2025-10-11 03:57:01.317 2 DEBUG nova.virt.libvirt.driver [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] End _get_guest_xml xml=<domain type="kvm">
Oct 10 23:57:01 np0005480824 nova_compute[260089]:  <uuid>8468f5dd-633a-4b6d-a205-ba75e8e070bb</uuid>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:  <name>instance-00000015</name>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:  <memory>131072</memory>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:  <vcpu>1</vcpu>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:  <metadata>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 10 23:57:01 np0005480824 nova_compute[260089]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:      <nova:name>tempest-TestVolumeBootPattern-server-84294639</nova:name>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:      <nova:creationTime>2025-10-11 03:57:00</nova:creationTime>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:      <nova:flavor name="m1.nano">
Oct 10 23:57:01 np0005480824 nova_compute[260089]:        <nova:memory>128</nova:memory>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:        <nova:disk>1</nova:disk>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:        <nova:swap>0</nova:swap>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:        <nova:ephemeral>0</nova:ephemeral>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:        <nova:vcpus>1</nova:vcpus>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:      </nova:flavor>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:      <nova:owner>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:        <nova:user uuid="38ebc503771e417aaf1f3aea0c835994">tempest-TestVolumeBootPattern-739984652-project-member</nova:user>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:        <nova:project uuid="55d21391a321476eb133317b3402b0f0">tempest-TestVolumeBootPattern-739984652</nova:project>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:      </nova:owner>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:      <nova:ports>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:        <nova:port uuid="479fe4cd-8bd2-48dd-a7ca-e39f24e57b10">
Oct 10 23:57:01 np0005480824 nova_compute[260089]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:        </nova:port>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:      </nova:ports>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:    </nova:instance>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:  </metadata>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:  <sysinfo type="smbios">
Oct 10 23:57:01 np0005480824 nova_compute[260089]:    <system>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:      <entry name="manufacturer">RDO</entry>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:      <entry name="product">OpenStack Compute</entry>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:      <entry name="serial">8468f5dd-633a-4b6d-a205-ba75e8e070bb</entry>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:      <entry name="uuid">8468f5dd-633a-4b6d-a205-ba75e8e070bb</entry>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:      <entry name="family">Virtual Machine</entry>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:    </system>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:  </sysinfo>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:  <os>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:    <boot dev="hd"/>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:    <smbios mode="sysinfo"/>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:  </os>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:  <features>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:    <acpi/>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:    <apic/>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:    <vmcoreinfo/>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:  </features>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:  <clock offset="utc">
Oct 10 23:57:01 np0005480824 nova_compute[260089]:    <timer name="pit" tickpolicy="delay"/>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:    <timer name="hpet" present="no"/>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:  </clock>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:  <cpu mode="host-model" match="exact">
Oct 10 23:57:01 np0005480824 nova_compute[260089]:    <topology sockets="1" cores="1" threads="1"/>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:  </cpu>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:  <devices>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:    <disk type="network" device="cdrom">
Oct 10 23:57:01 np0005480824 nova_compute[260089]:      <driver type="raw" cache="none"/>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:      <source protocol="rbd" name="vms/8468f5dd-633a-4b6d-a205-ba75e8e070bb_disk.config">
Oct 10 23:57:01 np0005480824 nova_compute[260089]:        <host name="192.168.122.100" port="6789"/>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:      </source>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:      <auth username="openstack">
Oct 10 23:57:01 np0005480824 nova_compute[260089]:        <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:      </auth>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:      <target dev="sda" bus="sata"/>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:    </disk>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:    <disk type="network" device="disk">
Oct 10 23:57:01 np0005480824 nova_compute[260089]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:      <source protocol="rbd" name="volumes/volume-8d0713f8-daa0-4bb8-b4e0-7449c6cecb8d">
Oct 10 23:57:01 np0005480824 nova_compute[260089]:        <host name="192.168.122.100" port="6789"/>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:      </source>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:      <auth username="openstack">
Oct 10 23:57:01 np0005480824 nova_compute[260089]:        <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:      </auth>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:      <target dev="vda" bus="virtio"/>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:      <serial>8d0713f8-daa0-4bb8-b4e0-7449c6cecb8d</serial>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:    </disk>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:    <interface type="ethernet">
Oct 10 23:57:01 np0005480824 nova_compute[260089]:      <mac address="fa:16:3e:de:c7:00"/>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:      <model type="virtio"/>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:      <driver name="vhost" rx_queue_size="512"/>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:      <mtu size="1442"/>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:      <target dev="tap479fe4cd-8b"/>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:    </interface>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:    <serial type="pty">
Oct 10 23:57:01 np0005480824 nova_compute[260089]:      <log file="/var/lib/nova/instances/8468f5dd-633a-4b6d-a205-ba75e8e070bb/console.log" append="off"/>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:    </serial>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:    <video>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:      <model type="virtio"/>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:    </video>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:    <input type="tablet" bus="usb"/>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:    <rng model="virtio">
Oct 10 23:57:01 np0005480824 nova_compute[260089]:      <backend model="random">/dev/urandom</backend>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:    </rng>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root"/>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:    <controller type="usb" index="0"/>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:    <memballoon model="virtio">
Oct 10 23:57:01 np0005480824 nova_compute[260089]:      <stats period="10"/>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:    </memballoon>
Oct 10 23:57:01 np0005480824 nova_compute[260089]:  </devices>
Oct 10 23:57:01 np0005480824 nova_compute[260089]: </domain>
Oct 10 23:57:01 np0005480824 nova_compute[260089]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 10 23:57:01 np0005480824 nova_compute[260089]: 2025-10-11 03:57:01.320 2 DEBUG nova.compute.manager [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Preparing to wait for external event network-vif-plugged-479fe4cd-8bd2-48dd-a7ca-e39f24e57b10 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 10 23:57:01 np0005480824 nova_compute[260089]: 2025-10-11 03:57:01.321 2 DEBUG oslo_concurrency.lockutils [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Acquiring lock "8468f5dd-633a-4b6d-a205-ba75e8e070bb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:57:01 np0005480824 nova_compute[260089]: 2025-10-11 03:57:01.321 2 DEBUG oslo_concurrency.lockutils [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "8468f5dd-633a-4b6d-a205-ba75e8e070bb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:57:01 np0005480824 nova_compute[260089]: 2025-10-11 03:57:01.322 2 DEBUG oslo_concurrency.lockutils [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "8468f5dd-633a-4b6d-a205-ba75e8e070bb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:57:01 np0005480824 nova_compute[260089]: 2025-10-11 03:57:01.324 2 DEBUG nova.virt.libvirt.vif [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T03:56:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-84294639',display_name='tempest-TestVolumeBootPattern-server-84294639',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-84294639',id=21,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEHKqCtFesGlIN9DGdSuPEGCilj3bKmCIyQ2Hx4tQRLuRoOqWjhIRAgPC71aK1tfMSZbOh/7KRfo7uhOOwgBdYVdW77mjMG+sfmvlDoQnrLmEWQMeSschoC2XBAsdgkOOQ==',key_name='tempest-TestVolumeBootPattern-1691748970',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='55d21391a321476eb133317b3402b0f0',ramdisk_id='',reservation_id='r-0t350jqn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-739984652',owner_user_name='tempest-TestVolumeBootPattern-739984652-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T03:56:58Z,user_data=None,user_id='38ebc503771e417aaf1f3aea0c835994',uuid=8468f5dd-633a-4b6d-a205-ba75e8e070bb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "479fe4cd-8bd2-48dd-a7ca-e39f24e57b10", "address": "fa:16:3e:de:c7:00", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap479fe4cd-8b", "ovs_interfaceid": "479fe4cd-8bd2-48dd-a7ca-e39f24e57b10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 10 23:57:01 np0005480824 nova_compute[260089]: 2025-10-11 03:57:01.325 2 DEBUG nova.network.os_vif_util [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Converting VIF {"id": "479fe4cd-8bd2-48dd-a7ca-e39f24e57b10", "address": "fa:16:3e:de:c7:00", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap479fe4cd-8b", "ovs_interfaceid": "479fe4cd-8bd2-48dd-a7ca-e39f24e57b10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 23:57:01 np0005480824 nova_compute[260089]: 2025-10-11 03:57:01.326 2 DEBUG nova.network.os_vif_util [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:c7:00,bridge_name='br-int',has_traffic_filtering=True,id=479fe4cd-8bd2-48dd-a7ca-e39f24e57b10,network=Network(359720eb-a957-4bcd-b9b2-3cf7dad947e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap479fe4cd-8b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 23:57:01 np0005480824 nova_compute[260089]: 2025-10-11 03:57:01.327 2 DEBUG os_vif [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:c7:00,bridge_name='br-int',has_traffic_filtering=True,id=479fe4cd-8bd2-48dd-a7ca-e39f24e57b10,network=Network(359720eb-a957-4bcd-b9b2-3cf7dad947e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap479fe4cd-8b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 10 23:57:01 np0005480824 nova_compute[260089]: 2025-10-11 03:57:01.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:57:01 np0005480824 nova_compute[260089]: 2025-10-11 03:57:01.329 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:57:01 np0005480824 nova_compute[260089]: 2025-10-11 03:57:01.330 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 10 23:57:01 np0005480824 nova_compute[260089]: 2025-10-11 03:57:01.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:57:01 np0005480824 nova_compute[260089]: 2025-10-11 03:57:01.335 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap479fe4cd-8b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:57:01 np0005480824 nova_compute[260089]: 2025-10-11 03:57:01.335 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap479fe4cd-8b, col_values=(('external_ids', {'iface-id': '479fe4cd-8bd2-48dd-a7ca-e39f24e57b10', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:de:c7:00', 'vm-uuid': '8468f5dd-633a-4b6d-a205-ba75e8e070bb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:57:01 np0005480824 nova_compute[260089]: 2025-10-11 03:57:01.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:57:01 np0005480824 nova_compute[260089]: 2025-10-11 03:57:01.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 10 23:57:01 np0005480824 NetworkManager[44969]: <info>  [1760155021.3395] manager: (tap479fe4cd-8b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/107)
Oct 10 23:57:01 np0005480824 nova_compute[260089]: 2025-10-11 03:57:01.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:57:01 np0005480824 nova_compute[260089]: 2025-10-11 03:57:01.348 2 INFO os_vif [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:c7:00,bridge_name='br-int',has_traffic_filtering=True,id=479fe4cd-8bd2-48dd-a7ca-e39f24e57b10,network=Network(359720eb-a957-4bcd-b9b2-3cf7dad947e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap479fe4cd-8b')#033[00m
Oct 10 23:57:01 np0005480824 nova_compute[260089]: 2025-10-11 03:57:01.409 2 DEBUG nova.virt.libvirt.driver [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 23:57:01 np0005480824 nova_compute[260089]: 2025-10-11 03:57:01.410 2 DEBUG nova.virt.libvirt.driver [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 23:57:01 np0005480824 nova_compute[260089]: 2025-10-11 03:57:01.410 2 DEBUG nova.virt.libvirt.driver [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] No VIF found with MAC fa:16:3e:de:c7:00, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 10 23:57:01 np0005480824 nova_compute[260089]: 2025-10-11 03:57:01.411 2 INFO nova.virt.libvirt.driver [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Using config drive#033[00m
Oct 10 23:57:01 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:57:01 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/943800574' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:57:01 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:57:01 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/943800574' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:57:01 np0005480824 nova_compute[260089]: 2025-10-11 03:57:01.449 2 DEBUG nova.storage.rbd_utils [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] rbd image 8468f5dd-633a-4b6d-a205-ba75e8e070bb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:57:02 np0005480824 podman[291409]: 2025-10-11 03:57:02.029811308 +0000 UTC m=+0.089024465 container health_status 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 10 23:57:02 np0005480824 podman[291410]: 2025-10-11 03:57:02.065884011 +0000 UTC m=+0.115146853 container health_status f2b19cad22d0fbdd185b264c3c8b6443d09a02319dc9fb0585dc69a0f24a758d (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid)
Oct 10 23:57:02 np0005480824 nova_compute[260089]: 2025-10-11 03:57:02.165 2 DEBUG nova.network.neutron [req-04d4e75a-0eae-46fd-af50-9de9160160e0 req-9908b627-c1cd-4772-b8c1-aaf7ce036481 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Updated VIF entry in instance network info cache for port 479fe4cd-8bd2-48dd-a7ca-e39f24e57b10. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 10 23:57:02 np0005480824 nova_compute[260089]: 2025-10-11 03:57:02.166 2 DEBUG nova.network.neutron [req-04d4e75a-0eae-46fd-af50-9de9160160e0 req-9908b627-c1cd-4772-b8c1-aaf7ce036481 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Updating instance_info_cache with network_info: [{"id": "479fe4cd-8bd2-48dd-a7ca-e39f24e57b10", "address": "fa:16:3e:de:c7:00", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap479fe4cd-8b", "ovs_interfaceid": "479fe4cd-8bd2-48dd-a7ca-e39f24e57b10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:57:02 np0005480824 nova_compute[260089]: 2025-10-11 03:57:02.188 2 DEBUG oslo_concurrency.lockutils [req-04d4e75a-0eae-46fd-af50-9de9160160e0 req-9908b627-c1cd-4772-b8c1-aaf7ce036481 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Releasing lock "refresh_cache-8468f5dd-633a-4b6d-a205-ba75e8e070bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:57:02 np0005480824 nova_compute[260089]: 2025-10-11 03:57:02.232 2 INFO nova.virt.libvirt.driver [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Creating config drive at /var/lib/nova/instances/8468f5dd-633a-4b6d-a205-ba75e8e070bb/disk.config#033[00m
Oct 10 23:57:02 np0005480824 nova_compute[260089]: 2025-10-11 03:57:02.241 2 DEBUG oslo_concurrency.processutils [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8468f5dd-633a-4b6d-a205-ba75e8e070bb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprdzyy4d7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:57:02 np0005480824 nova_compute[260089]: 2025-10-11 03:57:02.387 2 DEBUG oslo_concurrency.processutils [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8468f5dd-633a-4b6d-a205-ba75e8e070bb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprdzyy4d7" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:57:02 np0005480824 nova_compute[260089]: 2025-10-11 03:57:02.434 2 DEBUG nova.storage.rbd_utils [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] rbd image 8468f5dd-633a-4b6d-a205-ba75e8e070bb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:57:02 np0005480824 nova_compute[260089]: 2025-10-11 03:57:02.439 2 DEBUG oslo_concurrency.processutils [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8468f5dd-633a-4b6d-a205-ba75e8e070bb/disk.config 8468f5dd-633a-4b6d-a205-ba75e8e070bb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:57:02 np0005480824 nova_compute[260089]: 2025-10-11 03:57:02.641 2 DEBUG oslo_concurrency.processutils [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8468f5dd-633a-4b6d-a205-ba75e8e070bb/disk.config 8468f5dd-633a-4b6d-a205-ba75e8e070bb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.202s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:57:02 np0005480824 nova_compute[260089]: 2025-10-11 03:57:02.642 2 INFO nova.virt.libvirt.driver [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Deleting local config drive /var/lib/nova/instances/8468f5dd-633a-4b6d-a205-ba75e8e070bb/disk.config because it was imported into RBD.#033[00m
Oct 10 23:57:02 np0005480824 NetworkManager[44969]: <info>  [1760155022.7122] manager: (tap479fe4cd-8b): new Tun device (/org/freedesktop/NetworkManager/Devices/108)
Oct 10 23:57:02 np0005480824 kernel: tap479fe4cd-8b: entered promiscuous mode
Oct 10 23:57:02 np0005480824 nova_compute[260089]: 2025-10-11 03:57:02.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:57:02 np0005480824 ovn_controller[152667]: 2025-10-11T03:57:02Z|00190|binding|INFO|Claiming lport 479fe4cd-8bd2-48dd-a7ca-e39f24e57b10 for this chassis.
Oct 10 23:57:02 np0005480824 ovn_controller[152667]: 2025-10-11T03:57:02Z|00191|binding|INFO|479fe4cd-8bd2-48dd-a7ca-e39f24e57b10: Claiming fa:16:3e:de:c7:00 10.100.0.9
Oct 10 23:57:02 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:57:02.723 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:c7:00 10.100.0.9'], port_security=['fa:16:3e:de:c7:00 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '8468f5dd-633a-4b6d-a205-ba75e8e070bb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-359720eb-a957-4bcd-b9b2-3cf7dad947e4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '55d21391a321476eb133317b3402b0f0', 'neutron:revision_number': '2', 'neutron:security_group_ids': '48328b99-2dfb-4da6-bd97-8cd4f810b350', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d98e64fb-092d-4777-b741-426f3e849bc3, chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], logical_port=479fe4cd-8bd2-48dd-a7ca-e39f24e57b10) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 23:57:02 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:57:02.724 162245 INFO neutron.agent.ovn.metadata.agent [-] Port 479fe4cd-8bd2-48dd-a7ca-e39f24e57b10 in datapath 359720eb-a957-4bcd-b9b2-3cf7dad947e4 bound to our chassis#033[00m
Oct 10 23:57:02 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:57:02.724 162245 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 359720eb-a957-4bcd-b9b2-3cf7dad947e4#033[00m
Oct 10 23:57:02 np0005480824 ovn_controller[152667]: 2025-10-11T03:57:02Z|00192|binding|INFO|Setting lport 479fe4cd-8bd2-48dd-a7ca-e39f24e57b10 ovn-installed in OVS
Oct 10 23:57:02 np0005480824 ovn_controller[152667]: 2025-10-11T03:57:02Z|00193|binding|INFO|Setting lport 479fe4cd-8bd2-48dd-a7ca-e39f24e57b10 up in Southbound
Oct 10 23:57:02 np0005480824 nova_compute[260089]: 2025-10-11 03:57:02.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:57:02 np0005480824 nova_compute[260089]: 2025-10-11 03:57:02.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:57:02 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:57:02.741 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[13dd8c34-d67a-44e0-a045-a41b4776c797]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:57:02 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:57:02 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1662227181' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:57:02 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:57:02 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1662227181' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:57:02 np0005480824 systemd-machined[215071]: New machine qemu-21-instance-00000015.
Oct 10 23:57:02 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:57:02 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2435632356' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:57:02 np0005480824 systemd[1]: Started Virtual Machine qemu-21-instance-00000015.
Oct 10 23:57:02 np0005480824 systemd-udevd[291505]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 23:57:02 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:57:02.786 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[eb1d89db-04db-4454-98ea-b5b26c564369]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:57:02 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:57:02.790 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[a879e266-d5e9-48d0-8fa7-1f394386536b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:57:02 np0005480824 NetworkManager[44969]: <info>  [1760155022.7997] device (tap479fe4cd-8b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 10 23:57:02 np0005480824 NetworkManager[44969]: <info>  [1760155022.8030] device (tap479fe4cd-8b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 10 23:57:02 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1463: 321 pgs: 321 active+clean; 169 MiB data, 423 MiB used, 60 GiB / 60 GiB avail; 112 KiB/s rd, 15 KiB/s wr, 156 op/s
Oct 10 23:57:02 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:57:02.827 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[adbbb0e5-c0f8-477f-8999-52435302eb95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:57:02 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:57:02.858 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[286c6c8e-f01b-4388-969e-3e350e3ea007]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap359720eb-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:22:90:b3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 66], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 444025, 'reachable_time': 32126, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291515, 'error': None, 'target': 'ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:57:02 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:57:02.876 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[c71ea981-e586-4aac-8aed-be577338af91]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap359720eb-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 444040, 'tstamp': 444040}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291517, 'error': None, 'target': 'ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap359720eb-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 444044, 'tstamp': 444044}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291517, 'error': None, 'target': 'ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:57:02 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:57:02.878 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap359720eb-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:57:02 np0005480824 nova_compute[260089]: 2025-10-11 03:57:02.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:57:02 np0005480824 nova_compute[260089]: 2025-10-11 03:57:02.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:57:02 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:57:02.883 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap359720eb-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:57:02 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:57:02.883 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 10 23:57:02 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:57:02.884 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap359720eb-a0, col_values=(('external_ids', {'iface-id': '039c7668-0b85-4466-9c66-62531405028d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:57:02 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:57:02.884 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 10 23:57:02 np0005480824 nova_compute[260089]: 2025-10-11 03:57:02.910 2 DEBUG nova.compute.manager [req-e66c5c50-b8dd-4dbd-8b88-155a924c37b2 req-c7a46d87-7f7f-44c0-bb90-0e4fc4fb6baa 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Received event network-vif-plugged-479fe4cd-8bd2-48dd-a7ca-e39f24e57b10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:57:02 np0005480824 nova_compute[260089]: 2025-10-11 03:57:02.910 2 DEBUG oslo_concurrency.lockutils [req-e66c5c50-b8dd-4dbd-8b88-155a924c37b2 req-c7a46d87-7f7f-44c0-bb90-0e4fc4fb6baa 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "8468f5dd-633a-4b6d-a205-ba75e8e070bb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:57:02 np0005480824 nova_compute[260089]: 2025-10-11 03:57:02.910 2 DEBUG oslo_concurrency.lockutils [req-e66c5c50-b8dd-4dbd-8b88-155a924c37b2 req-c7a46d87-7f7f-44c0-bb90-0e4fc4fb6baa 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "8468f5dd-633a-4b6d-a205-ba75e8e070bb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:57:02 np0005480824 nova_compute[260089]: 2025-10-11 03:57:02.911 2 DEBUG oslo_concurrency.lockutils [req-e66c5c50-b8dd-4dbd-8b88-155a924c37b2 req-c7a46d87-7f7f-44c0-bb90-0e4fc4fb6baa 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "8468f5dd-633a-4b6d-a205-ba75e8e070bb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:57:02 np0005480824 nova_compute[260089]: 2025-10-11 03:57:02.911 2 DEBUG nova.compute.manager [req-e66c5c50-b8dd-4dbd-8b88-155a924c37b2 req-c7a46d87-7f7f-44c0-bb90-0e4fc4fb6baa 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Processing event network-vif-plugged-479fe4cd-8bd2-48dd-a7ca-e39f24e57b10 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 10 23:57:02 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e345 do_prune osdmap full prune enabled
Oct 10 23:57:02 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e346 e346: 3 total, 3 up, 3 in
Oct 10 23:57:03 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e346: 3 total, 3 up, 3 in
Oct 10 23:57:03 np0005480824 nova_compute[260089]: 2025-10-11 03:57:03.717 2 DEBUG nova.compute.manager [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 10 23:57:03 np0005480824 nova_compute[260089]: 2025-10-11 03:57:03.719 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760155023.7167447, 8468f5dd-633a-4b6d-a205-ba75e8e070bb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:57:03 np0005480824 nova_compute[260089]: 2025-10-11 03:57:03.719 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] VM Started (Lifecycle Event)#033[00m
Oct 10 23:57:03 np0005480824 nova_compute[260089]: 2025-10-11 03:57:03.726 2 DEBUG nova.virt.libvirt.driver [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 10 23:57:03 np0005480824 nova_compute[260089]: 2025-10-11 03:57:03.732 2 INFO nova.virt.libvirt.driver [-] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Instance spawned successfully.#033[00m
Oct 10 23:57:03 np0005480824 nova_compute[260089]: 2025-10-11 03:57:03.732 2 DEBUG nova.virt.libvirt.driver [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 10 23:57:03 np0005480824 nova_compute[260089]: 2025-10-11 03:57:03.739 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:57:03 np0005480824 nova_compute[260089]: 2025-10-11 03:57:03.744 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 10 23:57:03 np0005480824 nova_compute[260089]: 2025-10-11 03:57:03.760 2 DEBUG nova.virt.libvirt.driver [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:57:03 np0005480824 nova_compute[260089]: 2025-10-11 03:57:03.761 2 DEBUG nova.virt.libvirt.driver [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:57:03 np0005480824 nova_compute[260089]: 2025-10-11 03:57:03.761 2 DEBUG nova.virt.libvirt.driver [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:57:03 np0005480824 nova_compute[260089]: 2025-10-11 03:57:03.762 2 DEBUG nova.virt.libvirt.driver [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:57:03 np0005480824 nova_compute[260089]: 2025-10-11 03:57:03.763 2 DEBUG nova.virt.libvirt.driver [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:57:03 np0005480824 nova_compute[260089]: 2025-10-11 03:57:03.764 2 DEBUG nova.virt.libvirt.driver [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:57:03 np0005480824 nova_compute[260089]: 2025-10-11 03:57:03.773 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 10 23:57:03 np0005480824 nova_compute[260089]: 2025-10-11 03:57:03.774 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760155023.716954, 8468f5dd-633a-4b6d-a205-ba75e8e070bb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:57:03 np0005480824 nova_compute[260089]: 2025-10-11 03:57:03.775 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] VM Paused (Lifecycle Event)#033[00m
Oct 10 23:57:03 np0005480824 nova_compute[260089]: 2025-10-11 03:57:03.797 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:57:03 np0005480824 nova_compute[260089]: 2025-10-11 03:57:03.802 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760155023.724517, 8468f5dd-633a-4b6d-a205-ba75e8e070bb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:57:03 np0005480824 nova_compute[260089]: 2025-10-11 03:57:03.802 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] VM Resumed (Lifecycle Event)#033[00m
Oct 10 23:57:03 np0005480824 nova_compute[260089]: 2025-10-11 03:57:03.846 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:57:03 np0005480824 nova_compute[260089]: 2025-10-11 03:57:03.853 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 10 23:57:03 np0005480824 nova_compute[260089]: 2025-10-11 03:57:03.861 2 INFO nova.compute.manager [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Took 4.48 seconds to spawn the instance on the hypervisor.#033[00m
Oct 10 23:57:03 np0005480824 nova_compute[260089]: 2025-10-11 03:57:03.862 2 DEBUG nova.compute.manager [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:57:03 np0005480824 nova_compute[260089]: 2025-10-11 03:57:03.872 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 10 23:57:03 np0005480824 nova_compute[260089]: 2025-10-11 03:57:03.920 2 INFO nova.compute.manager [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Took 6.55 seconds to build instance.#033[00m
Oct 10 23:57:03 np0005480824 nova_compute[260089]: 2025-10-11 03:57:03.937 2 DEBUG oslo_concurrency.lockutils [None req-60c214b0-0e2d-4690-80e7-ae9e5f4b8e84 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "8468f5dd-633a-4b6d-a205-ba75e8e070bb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:57:03 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:57:03 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3545288313' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:57:03 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:57:03 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3545288313' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:57:03 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e346 do_prune osdmap full prune enabled
Oct 10 23:57:04 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e347 e347: 3 total, 3 up, 3 in
Oct 10 23:57:04 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e347: 3 total, 3 up, 3 in
Oct 10 23:57:04 np0005480824 nova_compute[260089]: 2025-10-11 03:57:04.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:57:04 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1466: 321 pgs: 321 active+clean; 169 MiB data, 423 MiB used, 60 GiB / 60 GiB avail; 125 KiB/s rd, 5.7 KiB/s wr, 171 op/s
Oct 10 23:57:04 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e347 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:57:05 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e347 do_prune osdmap full prune enabled
Oct 10 23:57:05 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e348 e348: 3 total, 3 up, 3 in
Oct 10 23:57:05 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e348: 3 total, 3 up, 3 in
Oct 10 23:57:05 np0005480824 nova_compute[260089]: 2025-10-11 03:57:05.102 2 DEBUG nova.compute.manager [req-4acb5969-c210-45ff-be6d-9baaf77727fe req-688e53b6-bd78-4806-b10c-6822aa3d0ba1 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Received event network-vif-plugged-479fe4cd-8bd2-48dd-a7ca-e39f24e57b10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:57:05 np0005480824 nova_compute[260089]: 2025-10-11 03:57:05.103 2 DEBUG oslo_concurrency.lockutils [req-4acb5969-c210-45ff-be6d-9baaf77727fe req-688e53b6-bd78-4806-b10c-6822aa3d0ba1 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "8468f5dd-633a-4b6d-a205-ba75e8e070bb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:57:05 np0005480824 nova_compute[260089]: 2025-10-11 03:57:05.103 2 DEBUG oslo_concurrency.lockutils [req-4acb5969-c210-45ff-be6d-9baaf77727fe req-688e53b6-bd78-4806-b10c-6822aa3d0ba1 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "8468f5dd-633a-4b6d-a205-ba75e8e070bb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:57:05 np0005480824 nova_compute[260089]: 2025-10-11 03:57:05.104 2 DEBUG oslo_concurrency.lockutils [req-4acb5969-c210-45ff-be6d-9baaf77727fe req-688e53b6-bd78-4806-b10c-6822aa3d0ba1 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "8468f5dd-633a-4b6d-a205-ba75e8e070bb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:57:05 np0005480824 nova_compute[260089]: 2025-10-11 03:57:05.104 2 DEBUG nova.compute.manager [req-4acb5969-c210-45ff-be6d-9baaf77727fe req-688e53b6-bd78-4806-b10c-6822aa3d0ba1 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] No waiting events found dispatching network-vif-plugged-479fe4cd-8bd2-48dd-a7ca-e39f24e57b10 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 10 23:57:05 np0005480824 nova_compute[260089]: 2025-10-11 03:57:05.105 2 WARNING nova.compute.manager [req-4acb5969-c210-45ff-be6d-9baaf77727fe req-688e53b6-bd78-4806-b10c-6822aa3d0ba1 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Received unexpected event network-vif-plugged-479fe4cd-8bd2-48dd-a7ca-e39f24e57b10 for instance with vm_state active and task_state None.#033[00m
Oct 10 23:57:06 np0005480824 nova_compute[260089]: 2025-10-11 03:57:06.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:57:06 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1468: 321 pgs: 321 active+clean; 169 MiB data, 423 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 8.7 KiB/s wr, 297 op/s
Oct 10 23:57:06 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:57:06 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2085650994' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:57:08 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e348 do_prune osdmap full prune enabled
Oct 10 23:57:08 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e349 e349: 3 total, 3 up, 3 in
Oct 10 23:57:08 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e349: 3 total, 3 up, 3 in
Oct 10 23:57:08 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1470: 321 pgs: 321 active+clean; 169 MiB data, 423 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 31 KiB/s wr, 294 op/s
Oct 10 23:57:08 np0005480824 nova_compute[260089]: 2025-10-11 03:57:08.852 2 DEBUG nova.compute.manager [req-151a30c5-f045-45a4-8d29-b431e87ba110 req-66e18ee0-e781-4e4b-bc66-0f22e75beb3b 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Received event network-changed-479fe4cd-8bd2-48dd-a7ca-e39f24e57b10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:57:08 np0005480824 nova_compute[260089]: 2025-10-11 03:57:08.853 2 DEBUG nova.compute.manager [req-151a30c5-f045-45a4-8d29-b431e87ba110 req-66e18ee0-e781-4e4b-bc66-0f22e75beb3b 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Refreshing instance network info cache due to event network-changed-479fe4cd-8bd2-48dd-a7ca-e39f24e57b10. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 10 23:57:08 np0005480824 nova_compute[260089]: 2025-10-11 03:57:08.853 2 DEBUG oslo_concurrency.lockutils [req-151a30c5-f045-45a4-8d29-b431e87ba110 req-66e18ee0-e781-4e4b-bc66-0f22e75beb3b 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "refresh_cache-8468f5dd-633a-4b6d-a205-ba75e8e070bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:57:08 np0005480824 nova_compute[260089]: 2025-10-11 03:57:08.853 2 DEBUG oslo_concurrency.lockutils [req-151a30c5-f045-45a4-8d29-b431e87ba110 req-66e18ee0-e781-4e4b-bc66-0f22e75beb3b 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquired lock "refresh_cache-8468f5dd-633a-4b6d-a205-ba75e8e070bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:57:08 np0005480824 nova_compute[260089]: 2025-10-11 03:57:08.853 2 DEBUG nova.network.neutron [req-151a30c5-f045-45a4-8d29-b431e87ba110 req-66e18ee0-e781-4e4b-bc66-0f22e75beb3b 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Refreshing network info cache for port 479fe4cd-8bd2-48dd-a7ca-e39f24e57b10 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 10 23:57:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e349 do_prune osdmap full prune enabled
Oct 10 23:57:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e350 e350: 3 total, 3 up, 3 in
Oct 10 23:57:09 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e350: 3 total, 3 up, 3 in
Oct 10 23:57:09 np0005480824 nova_compute[260089]: 2025-10-11 03:57:09.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:57:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:57:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e350 do_prune osdmap full prune enabled
Oct 10 23:57:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e351 e351: 3 total, 3 up, 3 in
Oct 10 23:57:09 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e351: 3 total, 3 up, 3 in
Oct 10 23:57:10 np0005480824 nova_compute[260089]: 2025-10-11 03:57:10.036 2 DEBUG nova.network.neutron [req-151a30c5-f045-45a4-8d29-b431e87ba110 req-66e18ee0-e781-4e4b-bc66-0f22e75beb3b 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Updated VIF entry in instance network info cache for port 479fe4cd-8bd2-48dd-a7ca-e39f24e57b10. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 10 23:57:10 np0005480824 nova_compute[260089]: 2025-10-11 03:57:10.036 2 DEBUG nova.network.neutron [req-151a30c5-f045-45a4-8d29-b431e87ba110 req-66e18ee0-e781-4e4b-bc66-0f22e75beb3b 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Updating instance_info_cache with network_info: [{"id": "479fe4cd-8bd2-48dd-a7ca-e39f24e57b10", "address": "fa:16:3e:de:c7:00", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap479fe4cd-8b", "ovs_interfaceid": "479fe4cd-8bd2-48dd-a7ca-e39f24e57b10", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:57:10 np0005480824 podman[291560]: 2025-10-11 03:57:10.040168193 +0000 UTC m=+0.095801476 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller)
Oct 10 23:57:10 np0005480824 nova_compute[260089]: 2025-10-11 03:57:10.057 2 DEBUG oslo_concurrency.lockutils [req-151a30c5-f045-45a4-8d29-b431e87ba110 req-66e18ee0-e781-4e4b-bc66-0f22e75beb3b 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Releasing lock "refresh_cache-8468f5dd-633a-4b6d-a205-ba75e8e070bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:57:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:57:10.502 162245 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:57:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:57:10.503 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:57:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:57:10.504 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:57:10 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1473: 321 pgs: 321 active+clean; 169 MiB data, 423 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 31 KiB/s wr, 295 op/s
Oct 10 23:57:11 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e351 do_prune osdmap full prune enabled
Oct 10 23:57:11 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e352 e352: 3 total, 3 up, 3 in
Oct 10 23:57:11 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e352: 3 total, 3 up, 3 in
Oct 10 23:57:11 np0005480824 nova_compute[260089]: 2025-10-11 03:57:11.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:57:11 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:57:11 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4289232654' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:57:11 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:57:11 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4289232654' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:57:12 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1475: 321 pgs: 321 active+clean; 169 MiB data, 423 MiB used, 60 GiB / 60 GiB avail; 110 KiB/s rd, 6.5 KiB/s wr, 150 op/s
Oct 10 23:57:14 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e352 do_prune osdmap full prune enabled
Oct 10 23:57:14 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e353 e353: 3 total, 3 up, 3 in
Oct 10 23:57:14 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e353: 3 total, 3 up, 3 in
Oct 10 23:57:14 np0005480824 nova_compute[260089]: 2025-10-11 03:57:14.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:57:14 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:57:14 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4218673777' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:57:14 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1477: 321 pgs: 321 active+clean; 169 MiB data, 423 MiB used, 60 GiB / 60 GiB avail; 91 KiB/s rd, 5.4 KiB/s wr, 124 op/s
Oct 10 23:57:14 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:57:14 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e353 do_prune osdmap full prune enabled
Oct 10 23:57:14 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e354 e354: 3 total, 3 up, 3 in
Oct 10 23:57:14 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e354: 3 total, 3 up, 3 in
Oct 10 23:57:14 np0005480824 podman[291587]: 2025-10-11 03:57:14.999098201 +0000 UTC m=+0.061275000 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 10 23:57:15 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e354 do_prune osdmap full prune enabled
Oct 10 23:57:15 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e355 e355: 3 total, 3 up, 3 in
Oct 10 23:57:15 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e355: 3 total, 3 up, 3 in
Oct 10 23:57:16 np0005480824 ovn_controller[152667]: 2025-10-11T03:57:16Z|00044|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.12 does not match offer 10.100.0.9
Oct 10 23:57:16 np0005480824 ovn_controller[152667]: 2025-10-11T03:57:16Z|00045|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:de:c7:00 10.100.0.9
Oct 10 23:57:16 np0005480824 nova_compute[260089]: 2025-10-11 03:57:16.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:57:16 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1480: 321 pgs: 321 active+clean; 169 MiB data, 423 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 7.8 KiB/s wr, 190 op/s
Oct 10 23:57:17 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e355 do_prune osdmap full prune enabled
Oct 10 23:57:17 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e356 e356: 3 total, 3 up, 3 in
Oct 10 23:57:17 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e356: 3 total, 3 up, 3 in
Oct 10 23:57:18 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:57:18 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/187003826' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:57:18 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:57:18 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/187003826' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:57:18 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1482: 321 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 312 active+clean; 183 MiB data, 429 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.2 MiB/s wr, 215 op/s
Oct 10 23:57:19 np0005480824 nova_compute[260089]: 2025-10-11 03:57:19.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:57:19 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e356 do_prune osdmap full prune enabled
Oct 10 23:57:19 np0005480824 ovn_controller[152667]: 2025-10-11T03:57:19Z|00046|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.12 does not match offer 10.100.0.9
Oct 10 23:57:19 np0005480824 ovn_controller[152667]: 2025-10-11T03:57:19Z|00047|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:de:c7:00 10.100.0.9
Oct 10 23:57:19 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e357 e357: 3 total, 3 up, 3 in
Oct 10 23:57:19 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e357: 3 total, 3 up, 3 in
Oct 10 23:57:19 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:57:19 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3179927142' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:57:19 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:57:19 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3179927142' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:57:19 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:57:20 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e357 do_prune osdmap full prune enabled
Oct 10 23:57:20 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e358 e358: 3 total, 3 up, 3 in
Oct 10 23:57:20 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e358: 3 total, 3 up, 3 in
Oct 10 23:57:20 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1485: 321 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 312 active+clean; 183 MiB data, 429 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.2 MiB/s wr, 210 op/s
Oct 10 23:57:21 np0005480824 ovn_controller[152667]: 2025-10-11T03:57:21Z|00048|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:de:c7:00 10.100.0.9
Oct 10 23:57:21 np0005480824 ovn_controller[152667]: 2025-10-11T03:57:21Z|00049|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:de:c7:00 10.100.0.9
Oct 10 23:57:21 np0005480824 nova_compute[260089]: 2025-10-11 03:57:21.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:57:22 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:57:22 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/658628930' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:57:22 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:57:22 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/658628930' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:57:22 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:57:22 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2977095406' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:57:22 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1486: 321 pgs: 321 active+clean; 187 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.1 MiB/s wr, 229 op/s
Oct 10 23:57:23 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e358 do_prune osdmap full prune enabled
Oct 10 23:57:23 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e359 e359: 3 total, 3 up, 3 in
Oct 10 23:57:23 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e359: 3 total, 3 up, 3 in
Oct 10 23:57:24 np0005480824 nova_compute[260089]: 2025-10-11 03:57:24.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:57:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e359 do_prune osdmap full prune enabled
Oct 10 23:57:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e360 e360: 3 total, 3 up, 3 in
Oct 10 23:57:24 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e360: 3 total, 3 up, 3 in
Oct 10 23:57:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:57:24 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1470707307' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:57:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:57:24 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1470707307' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:57:24 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1489: 321 pgs: 321 active+clean; 187 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 236 KiB/s rd, 176 KiB/s wr, 130 op/s
Oct 10 23:57:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:57:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e360 do_prune osdmap full prune enabled
Oct 10 23:57:25 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e361 e361: 3 total, 3 up, 3 in
Oct 10 23:57:25 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e361: 3 total, 3 up, 3 in
Oct 10 23:57:26 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e361 do_prune osdmap full prune enabled
Oct 10 23:57:26 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e362 e362: 3 total, 3 up, 3 in
Oct 10 23:57:26 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e362: 3 total, 3 up, 3 in
Oct 10 23:57:26 np0005480824 nova_compute[260089]: 2025-10-11 03:57:26.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:57:26 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:57:26 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2566823057' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:57:26 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:57:26 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2566823057' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:57:26 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1492: 321 pgs: 321 active+clean; 187 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 8.0 KiB/s wr, 82 op/s
Oct 10 23:57:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:57:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:57:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:57:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:57:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:57:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:57:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Optimize plan auto_2025-10-11_03:57:27
Oct 10 23:57:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 23:57:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] do_upmap
Oct 10 23:57:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.meta', 'cephfs.cephfs.data', 'vms', 'images', 'default.rgw.control', 'backups', 'default.rgw.log']
Oct 10 23:57:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] prepared 0/10 changes
Oct 10 23:57:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 23:57:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:57:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 23:57:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:57:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:57:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:57:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:57:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:57:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:57:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:57:28 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1493: 321 pgs: 321 active+clean; 187 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 28 KiB/s wr, 79 op/s
Oct 10 23:57:29 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:57:29 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/694808304' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:57:29 np0005480824 nova_compute[260089]: 2025-10-11 03:57:29.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:57:29 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:57:29 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3673594757' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:57:29 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e362 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:57:30 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e362 do_prune osdmap full prune enabled
Oct 10 23:57:30 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e363 e363: 3 total, 3 up, 3 in
Oct 10 23:57:30 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e363: 3 total, 3 up, 3 in
Oct 10 23:57:30 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1495: 321 pgs: 321 active+clean; 187 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 26 KiB/s wr, 74 op/s
Oct 10 23:57:31 np0005480824 nova_compute[260089]: 2025-10-11 03:57:31.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:57:31 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e363 do_prune osdmap full prune enabled
Oct 10 23:57:31 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e364 e364: 3 total, 3 up, 3 in
Oct 10 23:57:31 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e364: 3 total, 3 up, 3 in
Oct 10 23:57:32 np0005480824 nova_compute[260089]: 2025-10-11 03:57:32.296 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:57:32 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e364 do_prune osdmap full prune enabled
Oct 10 23:57:32 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e365 e365: 3 total, 3 up, 3 in
Oct 10 23:57:32 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e365: 3 total, 3 up, 3 in
Oct 10 23:57:32 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1498: 321 pgs: 321 active+clean; 187 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 73 KiB/s rd, 25 KiB/s wr, 104 op/s
Oct 10 23:57:32 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:57:32 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2811797482' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:57:32 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:57:32 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2811797482' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:57:33 np0005480824 podman[291609]: 2025-10-11 03:57:33.033049742 +0000 UTC m=+0.085052452 container health_status 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 10 23:57:33 np0005480824 podman[291610]: 2025-10-11 03:57:33.059980069 +0000 UTC m=+0.104938553 container health_status f2b19cad22d0fbdd185b264c3c8b6443d09a02319dc9fb0585dc69a0f24a758d (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 10 23:57:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:57:34 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3029161313' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:57:34 np0005480824 nova_compute[260089]: 2025-10-11 03:57:34.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:57:34 np0005480824 nova_compute[260089]: 2025-10-11 03:57:34.294 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:57:34 np0005480824 nova_compute[260089]: 2025-10-11 03:57:34.298 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:57:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e365 do_prune osdmap full prune enabled
Oct 10 23:57:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e366 e366: 3 total, 3 up, 3 in
Oct 10 23:57:34 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e366: 3 total, 3 up, 3 in
Oct 10 23:57:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:57:34 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:57:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 10 23:57:34 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:57:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 10 23:57:34 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:57:34 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 60e86fd6-f597-4c55-959f-9b366594667b does not exist
Oct 10 23:57:34 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev a7f0d58f-d85d-4dc2-b48f-701e5b2b1c4c does not exist
Oct 10 23:57:34 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 53bac8c5-09c4-44cf-b6c0-cc9b4ebce1c9 does not exist
Oct 10 23:57:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 10 23:57:34 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 23:57:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 10 23:57:34 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:57:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:57:34 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:57:34 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1500: 321 pgs: 321 active+clean; 187 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 6.5 KiB/s wr, 114 op/s
Oct 10 23:57:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e366 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:57:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e366 do_prune osdmap full prune enabled
Oct 10 23:57:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e367 e367: 3 total, 3 up, 3 in
Oct 10 23:57:34 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e367: 3 total, 3 up, 3 in
Oct 10 23:57:35 np0005480824 nova_compute[260089]: 2025-10-11 03:57:35.296 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:57:35 np0005480824 podman[291925]: 2025-10-11 03:57:35.266975191 +0000 UTC m=+0.027693426 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:57:35 np0005480824 podman[291925]: 2025-10-11 03:57:35.408262902 +0000 UTC m=+0.168981137 container create 31f7a941fa348b5b10aacda35f35b9c006a7cfe90e7aecb30024aad584eb87b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 10 23:57:35 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:57:35 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:57:35 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:57:35 np0005480824 systemd[1]: Started libpod-conmon-31f7a941fa348b5b10aacda35f35b9c006a7cfe90e7aecb30024aad584eb87b0.scope.
Oct 10 23:57:35 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:57:35 np0005480824 podman[291925]: 2025-10-11 03:57:35.685377234 +0000 UTC m=+0.446095469 container init 31f7a941fa348b5b10aacda35f35b9c006a7cfe90e7aecb30024aad584eb87b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_elgamal, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:57:35 np0005480824 podman[291925]: 2025-10-11 03:57:35.695241577 +0000 UTC m=+0.455959802 container start 31f7a941fa348b5b10aacda35f35b9c006a7cfe90e7aecb30024aad584eb87b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_elgamal, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 10 23:57:35 np0005480824 podman[291925]: 2025-10-11 03:57:35.70047326 +0000 UTC m=+0.461191495 container attach 31f7a941fa348b5b10aacda35f35b9c006a7cfe90e7aecb30024aad584eb87b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:57:35 np0005480824 modest_elgamal[291941]: 167 167
Oct 10 23:57:35 np0005480824 systemd[1]: libpod-31f7a941fa348b5b10aacda35f35b9c006a7cfe90e7aecb30024aad584eb87b0.scope: Deactivated successfully.
Oct 10 23:57:35 np0005480824 podman[291925]: 2025-10-11 03:57:35.702238563 +0000 UTC m=+0.462956778 container died 31f7a941fa348b5b10aacda35f35b9c006a7cfe90e7aecb30024aad584eb87b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 10 23:57:35 np0005480824 systemd[1]: var-lib-containers-storage-overlay-87d810b947fd814ac98799757bcbd5ccd8b75ba1ca95057315fcc931b814b060-merged.mount: Deactivated successfully.
Oct 10 23:57:35 np0005480824 podman[291925]: 2025-10-11 03:57:35.789773072 +0000 UTC m=+0.550491327 container remove 31f7a941fa348b5b10aacda35f35b9c006a7cfe90e7aecb30024aad584eb87b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 10 23:57:35 np0005480824 systemd[1]: libpod-conmon-31f7a941fa348b5b10aacda35f35b9c006a7cfe90e7aecb30024aad584eb87b0.scope: Deactivated successfully.
Oct 10 23:57:35 np0005480824 podman[291967]: 2025-10-11 03:57:35.967461653 +0000 UTC m=+0.044747339 container create 7a5d71f673194feed3c3ddd4f4c0818ef1a2804cf5044b43a858b8602621493c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 10 23:57:36 np0005480824 systemd[1]: Started libpod-conmon-7a5d71f673194feed3c3ddd4f4c0818ef1a2804cf5044b43a858b8602621493c.scope.
Oct 10 23:57:36 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:57:36 np0005480824 podman[291967]: 2025-10-11 03:57:35.949197942 +0000 UTC m=+0.026483648 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:57:36 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbaa07d3e2e0691980a71a7a90c6babcd24f98e99db886d4d3616a555a18bae3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:57:36 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbaa07d3e2e0691980a71a7a90c6babcd24f98e99db886d4d3616a555a18bae3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:57:36 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbaa07d3e2e0691980a71a7a90c6babcd24f98e99db886d4d3616a555a18bae3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:57:36 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbaa07d3e2e0691980a71a7a90c6babcd24f98e99db886d4d3616a555a18bae3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:57:36 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbaa07d3e2e0691980a71a7a90c6babcd24f98e99db886d4d3616a555a18bae3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 23:57:36 np0005480824 podman[291967]: 2025-10-11 03:57:36.066879824 +0000 UTC m=+0.144165520 container init 7a5d71f673194feed3c3ddd4f4c0818ef1a2804cf5044b43a858b8602621493c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Oct 10 23:57:36 np0005480824 podman[291967]: 2025-10-11 03:57:36.07984225 +0000 UTC m=+0.157127936 container start 7a5d71f673194feed3c3ddd4f4c0818ef1a2804cf5044b43a858b8602621493c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_pascal, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 10 23:57:36 np0005480824 podman[291967]: 2025-10-11 03:57:36.08487998 +0000 UTC m=+0.162165686 container attach 7a5d71f673194feed3c3ddd4f4c0818ef1a2804cf5044b43a858b8602621493c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 10 23:57:36 np0005480824 nova_compute[260089]: 2025-10-11 03:57:36.297 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:57:36 np0005480824 nova_compute[260089]: 2025-10-11 03:57:36.298 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 10 23:57:36 np0005480824 nova_compute[260089]: 2025-10-11 03:57:36.298 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:57:36 np0005480824 nova_compute[260089]: 2025-10-11 03:57:36.323 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:57:36 np0005480824 nova_compute[260089]: 2025-10-11 03:57:36.323 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:57:36 np0005480824 nova_compute[260089]: 2025-10-11 03:57:36.323 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:57:36 np0005480824 nova_compute[260089]: 2025-10-11 03:57:36.324 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 10 23:57:36 np0005480824 nova_compute[260089]: 2025-10-11 03:57:36.324 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:57:36 np0005480824 nova_compute[260089]: 2025-10-11 03:57:36.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:57:36 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e367 do_prune osdmap full prune enabled
Oct 10 23:57:36 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e368 e368: 3 total, 3 up, 3 in
Oct 10 23:57:36 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e368: 3 total, 3 up, 3 in
Oct 10 23:57:36 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:57:36 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/104287535' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:57:36 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1503: 321 pgs: 2 active+clean+snaptrim, 14 active+clean+snaptrim_wait, 305 active+clean; 187 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 8.8 KiB/s rd, 2.0 KiB/s wr, 17 op/s
Oct 10 23:57:36 np0005480824 nova_compute[260089]: 2025-10-11 03:57:36.863 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:57:36 np0005480824 nova_compute[260089]: 2025-10-11 03:57:36.981 2 DEBUG nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 10 23:57:36 np0005480824 nova_compute[260089]: 2025-10-11 03:57:36.982 2 DEBUG nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 10 23:57:36 np0005480824 nova_compute[260089]: 2025-10-11 03:57:36.988 2 DEBUG nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 10 23:57:36 np0005480824 nova_compute[260089]: 2025-10-11 03:57:36.988 2 DEBUG nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 10 23:57:37 np0005480824 elegant_pascal[291984]: --> passed data devices: 0 physical, 3 LVM
Oct 10 23:57:37 np0005480824 elegant_pascal[291984]: --> relative data size: 1.0
Oct 10 23:57:37 np0005480824 elegant_pascal[291984]: --> All data devices are unavailable
Oct 10 23:57:37 np0005480824 podman[291967]: 2025-10-11 03:57:37.196231746 +0000 UTC m=+1.273517432 container died 7a5d71f673194feed3c3ddd4f4c0818ef1a2804cf5044b43a858b8602621493c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_pascal, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 10 23:57:37 np0005480824 systemd[1]: libpod-7a5d71f673194feed3c3ddd4f4c0818ef1a2804cf5044b43a858b8602621493c.scope: Deactivated successfully.
Oct 10 23:57:37 np0005480824 systemd[1]: libpod-7a5d71f673194feed3c3ddd4f4c0818ef1a2804cf5044b43a858b8602621493c.scope: Consumed 1.027s CPU time.
Oct 10 23:57:37 np0005480824 nova_compute[260089]: 2025-10-11 03:57:37.207 2 WARNING nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 10 23:57:37 np0005480824 nova_compute[260089]: 2025-10-11 03:57:37.208 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4025MB free_disk=59.98798751831055GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 10 23:57:37 np0005480824 nova_compute[260089]: 2025-10-11 03:57:37.209 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:57:37 np0005480824 nova_compute[260089]: 2025-10-11 03:57:37.209 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:57:37 np0005480824 systemd[1]: var-lib-containers-storage-overlay-dbaa07d3e2e0691980a71a7a90c6babcd24f98e99db886d4d3616a555a18bae3-merged.mount: Deactivated successfully.
Oct 10 23:57:37 np0005480824 podman[291967]: 2025-10-11 03:57:37.271996938 +0000 UTC m=+1.349282624 container remove 7a5d71f673194feed3c3ddd4f4c0818ef1a2804cf5044b43a858b8602621493c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_pascal, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:57:37 np0005480824 systemd[1]: libpod-conmon-7a5d71f673194feed3c3ddd4f4c0818ef1a2804cf5044b43a858b8602621493c.scope: Deactivated successfully.
Oct 10 23:57:37 np0005480824 nova_compute[260089]: 2025-10-11 03:57:37.329 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Instance 3ccdaa3b-882a-432f-b619-002ded45ac60 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 10 23:57:37 np0005480824 nova_compute[260089]: 2025-10-11 03:57:37.331 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Instance 8468f5dd-633a-4b6d-a205-ba75e8e070bb actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 10 23:57:37 np0005480824 nova_compute[260089]: 2025-10-11 03:57:37.331 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 10 23:57:37 np0005480824 nova_compute[260089]: 2025-10-11 03:57:37.331 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 10 23:57:37 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:57:37 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/526750688' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:57:37 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:57:37 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/526750688' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:57:37 np0005480824 nova_compute[260089]: 2025-10-11 03:57:37.392 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:57:37 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:57:37 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/435942151' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:57:37 np0005480824 nova_compute[260089]: 2025-10-11 03:57:37.833 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:57:37 np0005480824 nova_compute[260089]: 2025-10-11 03:57:37.838 2 DEBUG nova.compute.provider_tree [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 23:57:37 np0005480824 nova_compute[260089]: 2025-10-11 03:57:37.859 2 DEBUG nova.scheduler.client.report [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 23:57:37 np0005480824 nova_compute[260089]: 2025-10-11 03:57:37.878 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 10 23:57:37 np0005480824 nova_compute[260089]: 2025-10-11 03:57:37.879 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:57:37 np0005480824 podman[292212]: 2025-10-11 03:57:37.909255265 +0000 UTC m=+0.043152822 container create 1768ced4d771d9492981da4e2138f4dce5a896b305c395160aef40175e791bfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_proskuriakova, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 10 23:57:37 np0005480824 systemd[1]: Started libpod-conmon-1768ced4d771d9492981da4e2138f4dce5a896b305c395160aef40175e791bfd.scope.
Oct 10 23:57:37 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:57:37 np0005480824 podman[292212]: 2025-10-11 03:57:37.975250035 +0000 UTC m=+0.109147612 container init 1768ced4d771d9492981da4e2138f4dce5a896b305c395160aef40175e791bfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Oct 10 23:57:37 np0005480824 podman[292212]: 2025-10-11 03:57:37.981448551 +0000 UTC m=+0.115346108 container start 1768ced4d771d9492981da4e2138f4dce5a896b305c395160aef40175e791bfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 10 23:57:37 np0005480824 podman[292212]: 2025-10-11 03:57:37.889992689 +0000 UTC m=+0.023890276 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:57:37 np0005480824 sharp_proskuriakova[292228]: 167 167
Oct 10 23:57:37 np0005480824 systemd[1]: libpod-1768ced4d771d9492981da4e2138f4dce5a896b305c395160aef40175e791bfd.scope: Deactivated successfully.
Oct 10 23:57:37 np0005480824 conmon[292228]: conmon 1768ced4d771d9492981 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1768ced4d771d9492981da4e2138f4dce5a896b305c395160aef40175e791bfd.scope/container/memory.events
Oct 10 23:57:37 np0005480824 podman[292212]: 2025-10-11 03:57:37.99366932 +0000 UTC m=+0.127566917 container attach 1768ced4d771d9492981da4e2138f4dce5a896b305c395160aef40175e791bfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:57:37 np0005480824 podman[292212]: 2025-10-11 03:57:37.994203373 +0000 UTC m=+0.128100940 container died 1768ced4d771d9492981da4e2138f4dce5a896b305c395160aef40175e791bfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:57:38 np0005480824 systemd[1]: var-lib-containers-storage-overlay-21969a143d83d8a4ef13d3bb26587914fc1011ce2358083a246e75d0da93c012-merged.mount: Deactivated successfully.
Oct 10 23:57:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 23:57:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:57:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 23:57:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:57:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 4.896484502181415e-06 of space, bias 1.0, pg target 0.0014689453506544247 quantized to 32 (current 32)
Oct 10 23:57:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:57:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0012174441012241975 of space, bias 1.0, pg target 0.36523323036725924 quantized to 32 (current 32)
Oct 10 23:57:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:57:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.1446327407696817e-06 of space, bias 1.0, pg target 0.0003433898222309045 quantized to 32 (current 32)
Oct 10 23:57:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:57:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Oct 10 23:57:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:57:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 23:57:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:57:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:57:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:57:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 10 23:57:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:57:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 23:57:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:57:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:57:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:57:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 10 23:57:38 np0005480824 podman[292212]: 2025-10-11 03:57:38.077318178 +0000 UTC m=+0.211215775 container remove 1768ced4d771d9492981da4e2138f4dce5a896b305c395160aef40175e791bfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:57:38 np0005480824 systemd[1]: libpod-conmon-1768ced4d771d9492981da4e2138f4dce5a896b305c395160aef40175e791bfd.scope: Deactivated successfully.
Oct 10 23:57:38 np0005480824 podman[292252]: 2025-10-11 03:57:38.265434286 +0000 UTC m=+0.036460393 container create e21319aaad45419da2b01bd75de034f407b46e44048331badc4480d9639410d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_heyrovsky, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 10 23:57:38 np0005480824 systemd[1]: Started libpod-conmon-e21319aaad45419da2b01bd75de034f407b46e44048331badc4480d9639410d7.scope.
Oct 10 23:57:38 np0005480824 ovn_controller[152667]: 2025-10-11T03:57:38Z|00194|memory_trim|INFO|Detected inactivity (last active 30014 ms ago): trimming memory
Oct 10 23:57:38 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:57:38 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c926a160fb5fa9fa254460ef612de3e1073ba63d77b321c8c857d82441bf4471/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:57:38 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c926a160fb5fa9fa254460ef612de3e1073ba63d77b321c8c857d82441bf4471/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:57:38 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c926a160fb5fa9fa254460ef612de3e1073ba63d77b321c8c857d82441bf4471/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:57:38 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c926a160fb5fa9fa254460ef612de3e1073ba63d77b321c8c857d82441bf4471/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:57:38 np0005480824 podman[292252]: 2025-10-11 03:57:38.251568258 +0000 UTC m=+0.022594375 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:57:38 np0005480824 podman[292252]: 2025-10-11 03:57:38.351624694 +0000 UTC m=+0.122650851 container init e21319aaad45419da2b01bd75de034f407b46e44048331badc4480d9639410d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 10 23:57:38 np0005480824 podman[292252]: 2025-10-11 03:57:38.36203163 +0000 UTC m=+0.133057747 container start e21319aaad45419da2b01bd75de034f407b46e44048331badc4480d9639410d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:57:38 np0005480824 podman[292252]: 2025-10-11 03:57:38.367658503 +0000 UTC m=+0.138684640 container attach e21319aaad45419da2b01bd75de034f407b46e44048331badc4480d9639410d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_heyrovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:57:38 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e368 do_prune osdmap full prune enabled
Oct 10 23:57:38 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e369 e369: 3 total, 3 up, 3 in
Oct 10 23:57:38 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e369: 3 total, 3 up, 3 in
Oct 10 23:57:38 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1505: 321 pgs: 2 active+clean+snaptrim, 14 active+clean+snaptrim_wait, 305 active+clean; 187 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 167 KiB/s rd, 10 KiB/s wr, 222 op/s
Oct 10 23:57:38 np0005480824 nova_compute[260089]: 2025-10-11 03:57:38.879 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:57:38 np0005480824 nova_compute[260089]: 2025-10-11 03:57:38.880 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 10 23:57:38 np0005480824 nova_compute[260089]: 2025-10-11 03:57:38.880 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 10 23:57:39 np0005480824 nova_compute[260089]: 2025-10-11 03:57:39.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]: {
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:    "0": [
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:        {
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:            "devices": [
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:                "/dev/loop3"
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:            ],
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:            "lv_name": "ceph_lv0",
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:            "lv_size": "21470642176",
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0d82ce-20ea-470d-959e-f67202028a60,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:            "lv_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:            "name": "ceph_lv0",
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:            "tags": {
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:                "ceph.block_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:                "ceph.cluster_name": "ceph",
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:                "ceph.crush_device_class": "",
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:                "ceph.encrypted": "0",
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:                "ceph.osd_fsid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:                "ceph.osd_id": "0",
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:                "ceph.type": "block",
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:                "ceph.vdo": "0"
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:            },
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:            "type": "block",
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:            "vg_name": "ceph_vg0"
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:        }
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:    ],
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:    "1": [
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:        {
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:            "devices": [
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:                "/dev/loop4"
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:            ],
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:            "lv_name": "ceph_lv1",
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:            "lv_size": "21470642176",
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6875119e-c210-4ad1-aca9-6a8084a5ecc8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:            "lv_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:            "name": "ceph_lv1",
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:            "tags": {
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:                "ceph.block_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:                "ceph.cluster_name": "ceph",
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:                "ceph.crush_device_class": "",
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:                "ceph.encrypted": "0",
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:                "ceph.osd_fsid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:                "ceph.osd_id": "1",
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:                "ceph.type": "block",
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:                "ceph.vdo": "0"
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:            },
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:            "type": "block",
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:            "vg_name": "ceph_vg1"
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:        }
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:    ],
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:    "2": [
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:        {
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:            "devices": [
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:                "/dev/loop5"
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:            ],
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:            "lv_name": "ceph_lv2",
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:            "lv_size": "21470642176",
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e86945e8-6909-4584-9098-cee0dfe9add4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:            "lv_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:            "name": "ceph_lv2",
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:            "tags": {
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:                "ceph.block_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:                "ceph.cluster_name": "ceph",
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:                "ceph.crush_device_class": "",
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:                "ceph.encrypted": "0",
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:                "ceph.osd_fsid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:                "ceph.osd_id": "2",
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:                "ceph.type": "block",
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:                "ceph.vdo": "0"
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:            },
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:            "type": "block",
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:            "vg_name": "ceph_vg2"
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:        }
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]:    ]
Oct 10 23:57:39 np0005480824 focused_heyrovsky[292267]: }
Oct 10 23:57:39 np0005480824 systemd[1]: libpod-e21319aaad45419da2b01bd75de034f407b46e44048331badc4480d9639410d7.scope: Deactivated successfully.
Oct 10 23:57:39 np0005480824 podman[292252]: 2025-10-11 03:57:39.181220818 +0000 UTC m=+0.952246935 container died e21319aaad45419da2b01bd75de034f407b46e44048331badc4480d9639410d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_heyrovsky, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 10 23:57:39 np0005480824 systemd[1]: var-lib-containers-storage-overlay-c926a160fb5fa9fa254460ef612de3e1073ba63d77b321c8c857d82441bf4471-merged.mount: Deactivated successfully.
Oct 10 23:57:39 np0005480824 podman[292252]: 2025-10-11 03:57:39.265152883 +0000 UTC m=+1.036179010 container remove e21319aaad45419da2b01bd75de034f407b46e44048331badc4480d9639410d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:57:39 np0005480824 systemd[1]: libpod-conmon-e21319aaad45419da2b01bd75de034f407b46e44048331badc4480d9639410d7.scope: Deactivated successfully.
Oct 10 23:57:39 np0005480824 nova_compute[260089]: 2025-10-11 03:57:39.512 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "refresh_cache-3ccdaa3b-882a-432f-b619-002ded45ac60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:57:39 np0005480824 nova_compute[260089]: 2025-10-11 03:57:39.512 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquired lock "refresh_cache-3ccdaa3b-882a-432f-b619-002ded45ac60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:57:39 np0005480824 nova_compute[260089]: 2025-10-11 03:57:39.512 2 DEBUG nova.network.neutron [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 10 23:57:39 np0005480824 nova_compute[260089]: 2025-10-11 03:57:39.513 2 DEBUG nova.objects.instance [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3ccdaa3b-882a-432f-b619-002ded45ac60 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:57:39 np0005480824 podman[292428]: 2025-10-11 03:57:39.901262833 +0000 UTC m=+0.044384610 container create 4e23b39bb539c8cdc52293bbbda0a8390fa1d1ec0d0ac79521904480690945dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:57:39 np0005480824 systemd[1]: Started libpod-conmon-4e23b39bb539c8cdc52293bbbda0a8390fa1d1ec0d0ac79521904480690945dd.scope.
Oct 10 23:57:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e369 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:57:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e369 do_prune osdmap full prune enabled
Oct 10 23:57:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e370 e370: 3 total, 3 up, 3 in
Oct 10 23:57:39 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:57:39 np0005480824 podman[292428]: 2025-10-11 03:57:39.883211196 +0000 UTC m=+0.026333003 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:57:39 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e370: 3 total, 3 up, 3 in
Oct 10 23:57:39 np0005480824 podman[292428]: 2025-10-11 03:57:39.998028541 +0000 UTC m=+0.141150658 container init 4e23b39bb539c8cdc52293bbbda0a8390fa1d1ec0d0ac79521904480690945dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_shannon, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 10 23:57:40 np0005480824 podman[292428]: 2025-10-11 03:57:40.007748511 +0000 UTC m=+0.150870298 container start 4e23b39bb539c8cdc52293bbbda0a8390fa1d1ec0d0ac79521904480690945dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 10 23:57:40 np0005480824 podman[292428]: 2025-10-11 03:57:40.011708114 +0000 UTC m=+0.154829921 container attach 4e23b39bb539c8cdc52293bbbda0a8390fa1d1ec0d0ac79521904480690945dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 10 23:57:40 np0005480824 stoic_shannon[292445]: 167 167
Oct 10 23:57:40 np0005480824 systemd[1]: libpod-4e23b39bb539c8cdc52293bbbda0a8390fa1d1ec0d0ac79521904480690945dd.scope: Deactivated successfully.
Oct 10 23:57:40 np0005480824 podman[292428]: 2025-10-11 03:57:40.016301633 +0000 UTC m=+0.159423420 container died 4e23b39bb539c8cdc52293bbbda0a8390fa1d1ec0d0ac79521904480690945dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_shannon, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:57:40 np0005480824 systemd[1]: var-lib-containers-storage-overlay-51edc77e080b43e918e78ba59fca0c3248a86fb814fa3c8296458c1dd7131e34-merged.mount: Deactivated successfully.
Oct 10 23:57:40 np0005480824 podman[292428]: 2025-10-11 03:57:40.058637054 +0000 UTC m=+0.201758841 container remove 4e23b39bb539c8cdc52293bbbda0a8390fa1d1ec0d0ac79521904480690945dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_shannon, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:57:40 np0005480824 systemd[1]: libpod-conmon-4e23b39bb539c8cdc52293bbbda0a8390fa1d1ec0d0ac79521904480690945dd.scope: Deactivated successfully.
Oct 10 23:57:40 np0005480824 podman[292464]: 2025-10-11 03:57:40.221565396 +0000 UTC m=+0.106594761 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 10 23:57:40 np0005480824 podman[292494]: 2025-10-11 03:57:40.264714816 +0000 UTC m=+0.055065162 container create 2a91a0fdb197117cc347d586c8fdbda07f8673b3e74279c4f3b4b1e8368ee514 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_buck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:57:40 np0005480824 systemd[1]: Started libpod-conmon-2a91a0fdb197117cc347d586c8fdbda07f8673b3e74279c4f3b4b1e8368ee514.scope.
Oct 10 23:57:40 np0005480824 podman[292494]: 2025-10-11 03:57:40.24074824 +0000 UTC m=+0.031098616 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:57:40 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:57:40 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a12b3a9ca9902364e1d458c9c006662b4556361ea8706dea74dee78234ee481a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:57:40 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a12b3a9ca9902364e1d458c9c006662b4556361ea8706dea74dee78234ee481a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:57:40 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a12b3a9ca9902364e1d458c9c006662b4556361ea8706dea74dee78234ee481a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:57:40 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a12b3a9ca9902364e1d458c9c006662b4556361ea8706dea74dee78234ee481a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:57:40 np0005480824 podman[292494]: 2025-10-11 03:57:40.376411967 +0000 UTC m=+0.166762333 container init 2a91a0fdb197117cc347d586c8fdbda07f8673b3e74279c4f3b4b1e8368ee514 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_buck, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:57:40 np0005480824 podman[292494]: 2025-10-11 03:57:40.388017091 +0000 UTC m=+0.178367437 container start 2a91a0fdb197117cc347d586c8fdbda07f8673b3e74279c4f3b4b1e8368ee514 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_buck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 10 23:57:40 np0005480824 podman[292494]: 2025-10-11 03:57:40.395051318 +0000 UTC m=+0.185401684 container attach 2a91a0fdb197117cc347d586c8fdbda07f8673b3e74279c4f3b4b1e8368ee514 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_buck, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:57:40 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:57:40 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4020510179' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:57:40 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1507: 321 pgs: 2 active+clean+snaptrim, 14 active+clean+snaptrim_wait, 305 active+clean; 187 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 124 KiB/s rd, 7.5 KiB/s wr, 165 op/s
Oct 10 23:57:40 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e370 do_prune osdmap full prune enabled
Oct 10 23:57:41 np0005480824 nova_compute[260089]: 2025-10-11 03:57:41.083 2 DEBUG nova.network.neutron [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Updating instance_info_cache with network_info: [{"id": "3d1404de-38bf-4d1c-960e-bcc14817fcc6", "address": "fa:16:3e:0f:58:4c", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d1404de-38", "ovs_interfaceid": "3d1404de-38bf-4d1c-960e-bcc14817fcc6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:57:41 np0005480824 nova_compute[260089]: 2025-10-11 03:57:41.104 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Releasing lock "refresh_cache-3ccdaa3b-882a-432f-b619-002ded45ac60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:57:41 np0005480824 nova_compute[260089]: 2025-10-11 03:57:41.105 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 10 23:57:41 np0005480824 nova_compute[260089]: 2025-10-11 03:57:41.105 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:57:41 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e371 e371: 3 total, 3 up, 3 in
Oct 10 23:57:41 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e371: 3 total, 3 up, 3 in
Oct 10 23:57:41 np0005480824 nova_compute[260089]: 2025-10-11 03:57:41.296 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:57:41 np0005480824 nova_compute[260089]: 2025-10-11 03:57:41.319 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:57:41 np0005480824 nova_compute[260089]: 2025-10-11 03:57:41.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:57:41 np0005480824 nova_compute[260089]: 2025-10-11 03:57:41.409 2 DEBUG nova.compute.manager [req-38c252e9-692d-41bc-99b4-8c6f64a08bb4 req-3f8f78f9-f604-4f17-b3d8-a5ec8aa8de97 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Received event network-changed-479fe4cd-8bd2-48dd-a7ca-e39f24e57b10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:57:41 np0005480824 nova_compute[260089]: 2025-10-11 03:57:41.410 2 DEBUG nova.compute.manager [req-38c252e9-692d-41bc-99b4-8c6f64a08bb4 req-3f8f78f9-f604-4f17-b3d8-a5ec8aa8de97 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Refreshing instance network info cache due to event network-changed-479fe4cd-8bd2-48dd-a7ca-e39f24e57b10. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 10 23:57:41 np0005480824 nova_compute[260089]: 2025-10-11 03:57:41.410 2 DEBUG oslo_concurrency.lockutils [req-38c252e9-692d-41bc-99b4-8c6f64a08bb4 req-3f8f78f9-f604-4f17-b3d8-a5ec8aa8de97 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "refresh_cache-8468f5dd-633a-4b6d-a205-ba75e8e070bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:57:41 np0005480824 nova_compute[260089]: 2025-10-11 03:57:41.410 2 DEBUG oslo_concurrency.lockutils [req-38c252e9-692d-41bc-99b4-8c6f64a08bb4 req-3f8f78f9-f604-4f17-b3d8-a5ec8aa8de97 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquired lock "refresh_cache-8468f5dd-633a-4b6d-a205-ba75e8e070bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:57:41 np0005480824 nova_compute[260089]: 2025-10-11 03:57:41.411 2 DEBUG nova.network.neutron [req-38c252e9-692d-41bc-99b4-8c6f64a08bb4 req-3f8f78f9-f604-4f17-b3d8-a5ec8aa8de97 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Refreshing network info cache for port 479fe4cd-8bd2-48dd-a7ca-e39f24e57b10 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 10 23:57:41 np0005480824 nova_compute[260089]: 2025-10-11 03:57:41.460 2 DEBUG oslo_concurrency.lockutils [None req-641a2d7c-c506-4f76-aa6d-5035c8ac33c9 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Acquiring lock "8468f5dd-633a-4b6d-a205-ba75e8e070bb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:57:41 np0005480824 nova_compute[260089]: 2025-10-11 03:57:41.460 2 DEBUG oslo_concurrency.lockutils [None req-641a2d7c-c506-4f76-aa6d-5035c8ac33c9 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "8468f5dd-633a-4b6d-a205-ba75e8e070bb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:57:41 np0005480824 nova_compute[260089]: 2025-10-11 03:57:41.460 2 DEBUG oslo_concurrency.lockutils [None req-641a2d7c-c506-4f76-aa6d-5035c8ac33c9 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Acquiring lock "8468f5dd-633a-4b6d-a205-ba75e8e070bb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:57:41 np0005480824 nova_compute[260089]: 2025-10-11 03:57:41.460 2 DEBUG oslo_concurrency.lockutils [None req-641a2d7c-c506-4f76-aa6d-5035c8ac33c9 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "8468f5dd-633a-4b6d-a205-ba75e8e070bb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:57:41 np0005480824 nova_compute[260089]: 2025-10-11 03:57:41.461 2 DEBUG oslo_concurrency.lockutils [None req-641a2d7c-c506-4f76-aa6d-5035c8ac33c9 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "8468f5dd-633a-4b6d-a205-ba75e8e070bb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:57:41 np0005480824 nova_compute[260089]: 2025-10-11 03:57:41.462 2 INFO nova.compute.manager [None req-641a2d7c-c506-4f76-aa6d-5035c8ac33c9 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Terminating instance#033[00m
Oct 10 23:57:41 np0005480824 nova_compute[260089]: 2025-10-11 03:57:41.463 2 DEBUG nova.compute.manager [None req-641a2d7c-c506-4f76-aa6d-5035c8ac33c9 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 10 23:57:41 np0005480824 flamboyant_buck[292514]: {
Oct 10 23:57:41 np0005480824 flamboyant_buck[292514]:    "1d0d82ce-20ea-470d-959e-f67202028a60": {
Oct 10 23:57:41 np0005480824 flamboyant_buck[292514]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:57:41 np0005480824 flamboyant_buck[292514]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 10 23:57:41 np0005480824 flamboyant_buck[292514]:        "osd_id": 0,
Oct 10 23:57:41 np0005480824 flamboyant_buck[292514]:        "osd_uuid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:57:41 np0005480824 flamboyant_buck[292514]:        "type": "bluestore"
Oct 10 23:57:41 np0005480824 flamboyant_buck[292514]:    },
Oct 10 23:57:41 np0005480824 flamboyant_buck[292514]:    "6875119e-c210-4ad1-aca9-6a8084a5ecc8": {
Oct 10 23:57:41 np0005480824 flamboyant_buck[292514]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:57:41 np0005480824 flamboyant_buck[292514]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 10 23:57:41 np0005480824 flamboyant_buck[292514]:        "osd_id": 1,
Oct 10 23:57:41 np0005480824 flamboyant_buck[292514]:        "osd_uuid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:57:41 np0005480824 flamboyant_buck[292514]:        "type": "bluestore"
Oct 10 23:57:41 np0005480824 flamboyant_buck[292514]:    },
Oct 10 23:57:41 np0005480824 flamboyant_buck[292514]:    "e86945e8-6909-4584-9098-cee0dfe9add4": {
Oct 10 23:57:41 np0005480824 flamboyant_buck[292514]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:57:41 np0005480824 flamboyant_buck[292514]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 10 23:57:41 np0005480824 flamboyant_buck[292514]:        "osd_id": 2,
Oct 10 23:57:41 np0005480824 flamboyant_buck[292514]:        "osd_uuid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:57:41 np0005480824 flamboyant_buck[292514]:        "type": "bluestore"
Oct 10 23:57:41 np0005480824 flamboyant_buck[292514]:    }
Oct 10 23:57:41 np0005480824 flamboyant_buck[292514]: }
Oct 10 23:57:41 np0005480824 systemd[1]: libpod-2a91a0fdb197117cc347d586c8fdbda07f8673b3e74279c4f3b4b1e8368ee514.scope: Deactivated successfully.
Oct 10 23:57:41 np0005480824 systemd[1]: libpod-2a91a0fdb197117cc347d586c8fdbda07f8673b3e74279c4f3b4b1e8368ee514.scope: Consumed 1.107s CPU time.
Oct 10 23:57:41 np0005480824 podman[292548]: 2025-10-11 03:57:41.550888006 +0000 UTC m=+0.024753066 container died 2a91a0fdb197117cc347d586c8fdbda07f8673b3e74279c4f3b4b1e8368ee514 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_buck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 10 23:57:41 np0005480824 systemd[1]: var-lib-containers-storage-overlay-a12b3a9ca9902364e1d458c9c006662b4556361ea8706dea74dee78234ee481a-merged.mount: Deactivated successfully.
Oct 10 23:57:41 np0005480824 kernel: tap479fe4cd-8b (unregistering): left promiscuous mode
Oct 10 23:57:41 np0005480824 NetworkManager[44969]: <info>  [1760155061.7372] device (tap479fe4cd-8b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 10 23:57:41 np0005480824 ovn_controller[152667]: 2025-10-11T03:57:41Z|00195|binding|INFO|Releasing lport 479fe4cd-8bd2-48dd-a7ca-e39f24e57b10 from this chassis (sb_readonly=0)
Oct 10 23:57:41 np0005480824 ovn_controller[152667]: 2025-10-11T03:57:41Z|00196|binding|INFO|Setting lport 479fe4cd-8bd2-48dd-a7ca-e39f24e57b10 down in Southbound
Oct 10 23:57:41 np0005480824 nova_compute[260089]: 2025-10-11 03:57:41.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:57:41 np0005480824 ovn_controller[152667]: 2025-10-11T03:57:41Z|00197|binding|INFO|Removing iface tap479fe4cd-8b ovn-installed in OVS
Oct 10 23:57:41 np0005480824 nova_compute[260089]: 2025-10-11 03:57:41.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:57:41 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:57:41.760 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:c7:00 10.100.0.9'], port_security=['fa:16:3e:de:c7:00 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '8468f5dd-633a-4b6d-a205-ba75e8e070bb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-359720eb-a957-4bcd-b9b2-3cf7dad947e4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '55d21391a321476eb133317b3402b0f0', 'neutron:revision_number': '4', 'neutron:security_group_ids': '48328b99-2dfb-4da6-bd97-8cd4f810b350', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d98e64fb-092d-4777-b741-426f3e849bc3, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], logical_port=479fe4cd-8bd2-48dd-a7ca-e39f24e57b10) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 23:57:41 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:57:41.762 162245 INFO neutron.agent.ovn.metadata.agent [-] Port 479fe4cd-8bd2-48dd-a7ca-e39f24e57b10 in datapath 359720eb-a957-4bcd-b9b2-3cf7dad947e4 unbound from our chassis#033[00m
Oct 10 23:57:41 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:57:41.763 162245 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 359720eb-a957-4bcd-b9b2-3cf7dad947e4#033[00m
Oct 10 23:57:41 np0005480824 nova_compute[260089]: 2025-10-11 03:57:41.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:57:41 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:57:41.788 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[0cd0cd42-5c45-4fe8-bfb7-a6842701e360]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:57:41 np0005480824 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000015.scope: Deactivated successfully.
Oct 10 23:57:41 np0005480824 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000015.scope: Consumed 14.618s CPU time.
Oct 10 23:57:41 np0005480824 systemd-machined[215071]: Machine qemu-21-instance-00000015 terminated.
Oct 10 23:57:41 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:57:41.825 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[be50de7f-1954-4aa6-8c24-533d0537bd03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:57:41 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:57:41.829 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[e76fca6c-70bc-47fa-bbcf-7486a08af5db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:57:41 np0005480824 podman[292548]: 2025-10-11 03:57:41.839971671 +0000 UTC m=+0.313836731 container remove 2a91a0fdb197117cc347d586c8fdbda07f8673b3e74279c4f3b4b1e8368ee514 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Oct 10 23:57:41 np0005480824 systemd[1]: libpod-conmon-2a91a0fdb197117cc347d586c8fdbda07f8673b3e74279c4f3b4b1e8368ee514.scope: Deactivated successfully.
Oct 10 23:57:41 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:57:41.861 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[c3d4e300-73ba-49a7-887d-874d5685a072]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:57:41 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:57:41.882 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[81d25d65-f936-4b1d-8ec8-733505e8eb35]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap359720eb-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:22:90:b3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 66], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 444025, 'reachable_time': 32126, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292576, 'error': None, 'target': 'ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:57:41 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:57:41.903 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[15ee3133-d07f-480d-93cf-dbc456c4eeff]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap359720eb-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 444040, 'tstamp': 444040}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 292581, 'error': None, 'target': 'ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap359720eb-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 444044, 'tstamp': 444044}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 292581, 'error': None, 'target': 'ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:57:41 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 10 23:57:41 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:57:41.906 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap359720eb-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:57:41 np0005480824 nova_compute[260089]: 2025-10-11 03:57:41.907 2 INFO nova.virt.libvirt.driver [-] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Instance destroyed successfully.#033[00m
Oct 10 23:57:41 np0005480824 nova_compute[260089]: 2025-10-11 03:57:41.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:57:41 np0005480824 nova_compute[260089]: 2025-10-11 03:57:41.909 2 DEBUG nova.objects.instance [None req-641a2d7c-c506-4f76-aa6d-5035c8ac33c9 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lazy-loading 'resources' on Instance uuid 8468f5dd-633a-4b6d-a205-ba75e8e070bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:57:41 np0005480824 nova_compute[260089]: 2025-10-11 03:57:41.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:57:41 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:57:41.912 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap359720eb-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:57:41 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:57:41.913 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 10 23:57:41 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:57:41.913 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap359720eb-a0, col_values=(('external_ids', {'iface-id': '039c7668-0b85-4466-9c66-62531405028d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:57:41 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:57:41.913 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 10 23:57:41 np0005480824 nova_compute[260089]: 2025-10-11 03:57:41.922 2 DEBUG nova.virt.libvirt.vif [None req-641a2d7c-c506-4f76-aa6d-5035c8ac33c9 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T03:56:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-84294639',display_name='tempest-TestVolumeBootPattern-server-84294639',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-84294639',id=21,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEHKqCtFesGlIN9DGdSuPEGCilj3bKmCIyQ2Hx4tQRLuRoOqWjhIRAgPC71aK1tfMSZbOh/7KRfo7uhOOwgBdYVdW77mjMG+sfmvlDoQnrLmEWQMeSschoC2XBAsdgkOOQ==',key_name='tempest-TestVolumeBootPattern-1691748970',keypairs=<?>,launch_index=0,launched_at=2025-10-11T03:57:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='55d21391a321476eb133317b3402b0f0',ramdisk_id='',reservation_id='r-0t350jqn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-739984652',owner_user_name='tempest-TestVolumeBootPattern-739984652-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T03:57:03Z,user_data=None,user_id='38ebc503771e417aaf1f3aea0c835994',uuid=8468f5dd-633a-4b6d-a205-ba75e8e070bb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "479fe4cd-8bd2-48dd-a7ca-e39f24e57b10", "address": "fa:16:3e:de:c7:00", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": 
[{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap479fe4cd-8b", "ovs_interfaceid": "479fe4cd-8bd2-48dd-a7ca-e39f24e57b10", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 10 23:57:41 np0005480824 nova_compute[260089]: 2025-10-11 03:57:41.922 2 DEBUG nova.network.os_vif_util [None req-641a2d7c-c506-4f76-aa6d-5035c8ac33c9 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Converting VIF {"id": "479fe4cd-8bd2-48dd-a7ca-e39f24e57b10", "address": "fa:16:3e:de:c7:00", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap479fe4cd-8b", "ovs_interfaceid": "479fe4cd-8bd2-48dd-a7ca-e39f24e57b10", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 23:57:41 np0005480824 nova_compute[260089]: 2025-10-11 03:57:41.923 2 DEBUG nova.network.os_vif_util [None req-641a2d7c-c506-4f76-aa6d-5035c8ac33c9 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:de:c7:00,bridge_name='br-int',has_traffic_filtering=True,id=479fe4cd-8bd2-48dd-a7ca-e39f24e57b10,network=Network(359720eb-a957-4bcd-b9b2-3cf7dad947e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap479fe4cd-8b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 23:57:41 np0005480824 nova_compute[260089]: 2025-10-11 03:57:41.923 2 DEBUG os_vif [None req-641a2d7c-c506-4f76-aa6d-5035c8ac33c9 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:de:c7:00,bridge_name='br-int',has_traffic_filtering=True,id=479fe4cd-8bd2-48dd-a7ca-e39f24e57b10,network=Network(359720eb-a957-4bcd-b9b2-3cf7dad947e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap479fe4cd-8b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 10 23:57:41 np0005480824 nova_compute[260089]: 2025-10-11 03:57:41.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:57:41 np0005480824 nova_compute[260089]: 2025-10-11 03:57:41.925 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap479fe4cd-8b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:57:41 np0005480824 nova_compute[260089]: 2025-10-11 03:57:41.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:57:41 np0005480824 nova_compute[260089]: 2025-10-11 03:57:41.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:57:41 np0005480824 nova_compute[260089]: 2025-10-11 03:57:41.933 2 INFO os_vif [None req-641a2d7c-c506-4f76-aa6d-5035c8ac33c9 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:de:c7:00,bridge_name='br-int',has_traffic_filtering=True,id=479fe4cd-8bd2-48dd-a7ca-e39f24e57b10,network=Network(359720eb-a957-4bcd-b9b2-3cf7dad947e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap479fe4cd-8b')#033[00m
Oct 10 23:57:41 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:57:41 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 10 23:57:42 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:57:42 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev c51e09ce-d38a-4e2f-ac1c-363eea4bbec8 does not exist
Oct 10 23:57:42 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev c08a46f3-1e85-4cb4-b9ab-77e378842673 does not exist
Oct 10 23:57:42 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e371 do_prune osdmap full prune enabled
Oct 10 23:57:42 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e372 e372: 3 total, 3 up, 3 in
Oct 10 23:57:42 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e372: 3 total, 3 up, 3 in
Oct 10 23:57:42 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:57:42 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:57:42 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1510: 321 pgs: 321 active+clean; 187 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 712 KiB/s rd, 7.7 KiB/s wr, 156 op/s
Oct 10 23:57:43 np0005480824 nova_compute[260089]: 2025-10-11 03:57:43.186 2 INFO nova.virt.libvirt.driver [None req-641a2d7c-c506-4f76-aa6d-5035c8ac33c9 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Deleting instance files /var/lib/nova/instances/8468f5dd-633a-4b6d-a205-ba75e8e070bb_del#033[00m
Oct 10 23:57:43 np0005480824 nova_compute[260089]: 2025-10-11 03:57:43.188 2 INFO nova.virt.libvirt.driver [None req-641a2d7c-c506-4f76-aa6d-5035c8ac33c9 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Deletion of /var/lib/nova/instances/8468f5dd-633a-4b6d-a205-ba75e8e070bb_del complete#033[00m
Oct 10 23:57:43 np0005480824 nova_compute[260089]: 2025-10-11 03:57:43.234 2 INFO nova.compute.manager [None req-641a2d7c-c506-4f76-aa6d-5035c8ac33c9 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Took 1.77 seconds to destroy the instance on the hypervisor.#033[00m
Oct 10 23:57:43 np0005480824 nova_compute[260089]: 2025-10-11 03:57:43.234 2 DEBUG oslo.service.loopingcall [None req-641a2d7c-c506-4f76-aa6d-5035c8ac33c9 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 10 23:57:43 np0005480824 nova_compute[260089]: 2025-10-11 03:57:43.235 2 DEBUG nova.compute.manager [-] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 10 23:57:43 np0005480824 nova_compute[260089]: 2025-10-11 03:57:43.235 2 DEBUG nova.network.neutron [-] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 10 23:57:43 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:57:43 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3274839906' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:57:43 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:57:43 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3274839906' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:57:43 np0005480824 nova_compute[260089]: 2025-10-11 03:57:43.319 2 DEBUG nova.compute.manager [req-4e2b2cf5-80a4-47ef-98cb-678ac9408ce9 req-45bffb21-5e4a-4384-9dc5-6f886415968b 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Received event network-vif-unplugged-479fe4cd-8bd2-48dd-a7ca-e39f24e57b10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:57:43 np0005480824 nova_compute[260089]: 2025-10-11 03:57:43.319 2 DEBUG oslo_concurrency.lockutils [req-4e2b2cf5-80a4-47ef-98cb-678ac9408ce9 req-45bffb21-5e4a-4384-9dc5-6f886415968b 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "8468f5dd-633a-4b6d-a205-ba75e8e070bb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:57:43 np0005480824 nova_compute[260089]: 2025-10-11 03:57:43.319 2 DEBUG oslo_concurrency.lockutils [req-4e2b2cf5-80a4-47ef-98cb-678ac9408ce9 req-45bffb21-5e4a-4384-9dc5-6f886415968b 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "8468f5dd-633a-4b6d-a205-ba75e8e070bb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:57:43 np0005480824 nova_compute[260089]: 2025-10-11 03:57:43.320 2 DEBUG oslo_concurrency.lockutils [req-4e2b2cf5-80a4-47ef-98cb-678ac9408ce9 req-45bffb21-5e4a-4384-9dc5-6f886415968b 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "8468f5dd-633a-4b6d-a205-ba75e8e070bb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:57:43 np0005480824 nova_compute[260089]: 2025-10-11 03:57:43.320 2 DEBUG nova.compute.manager [req-4e2b2cf5-80a4-47ef-98cb-678ac9408ce9 req-45bffb21-5e4a-4384-9dc5-6f886415968b 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] No waiting events found dispatching network-vif-unplugged-479fe4cd-8bd2-48dd-a7ca-e39f24e57b10 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 10 23:57:43 np0005480824 nova_compute[260089]: 2025-10-11 03:57:43.320 2 DEBUG nova.compute.manager [req-4e2b2cf5-80a4-47ef-98cb-678ac9408ce9 req-45bffb21-5e4a-4384-9dc5-6f886415968b 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Received event network-vif-unplugged-479fe4cd-8bd2-48dd-a7ca-e39f24e57b10 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 10 23:57:43 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e372 do_prune osdmap full prune enabled
Oct 10 23:57:43 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e373 e373: 3 total, 3 up, 3 in
Oct 10 23:57:43 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e373: 3 total, 3 up, 3 in
Oct 10 23:57:43 np0005480824 nova_compute[260089]: 2025-10-11 03:57:43.834 2 DEBUG nova.network.neutron [-] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:57:43 np0005480824 nova_compute[260089]: 2025-10-11 03:57:43.854 2 INFO nova.compute.manager [-] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Took 0.62 seconds to deallocate network for instance.#033[00m
Oct 10 23:57:43 np0005480824 nova_compute[260089]: 2025-10-11 03:57:43.879 2 DEBUG nova.compute.manager [req-efb8b5f2-a66c-496b-aa69-43f1bc44acae req-9aab8872-84a6-43f0-af29-3ffc8cf85dc2 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Received event network-vif-deleted-479fe4cd-8bd2-48dd-a7ca-e39f24e57b10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:57:43 np0005480824 nova_compute[260089]: 2025-10-11 03:57:43.960 2 DEBUG nova.network.neutron [req-38c252e9-692d-41bc-99b4-8c6f64a08bb4 req-3f8f78f9-f604-4f17-b3d8-a5ec8aa8de97 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Updated VIF entry in instance network info cache for port 479fe4cd-8bd2-48dd-a7ca-e39f24e57b10. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 10 23:57:43 np0005480824 nova_compute[260089]: 2025-10-11 03:57:43.960 2 DEBUG nova.network.neutron [req-38c252e9-692d-41bc-99b4-8c6f64a08bb4 req-3f8f78f9-f604-4f17-b3d8-a5ec8aa8de97 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Updating instance_info_cache with network_info: [{"id": "479fe4cd-8bd2-48dd-a7ca-e39f24e57b10", "address": "fa:16:3e:de:c7:00", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap479fe4cd-8b", "ovs_interfaceid": "479fe4cd-8bd2-48dd-a7ca-e39f24e57b10", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:57:43 np0005480824 nova_compute[260089]: 2025-10-11 03:57:43.989 2 DEBUG oslo_concurrency.lockutils [req-38c252e9-692d-41bc-99b4-8c6f64a08bb4 req-3f8f78f9-f604-4f17-b3d8-a5ec8aa8de97 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Releasing lock "refresh_cache-8468f5dd-633a-4b6d-a205-ba75e8e070bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:57:44 np0005480824 nova_compute[260089]: 2025-10-11 03:57:44.035 2 INFO nova.compute.manager [None req-641a2d7c-c506-4f76-aa6d-5035c8ac33c9 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Took 0.18 seconds to detach 1 volumes for instance.#033[00m
Oct 10 23:57:44 np0005480824 nova_compute[260089]: 2025-10-11 03:57:44.077 2 DEBUG oslo_concurrency.lockutils [None req-641a2d7c-c506-4f76-aa6d-5035c8ac33c9 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:57:44 np0005480824 nova_compute[260089]: 2025-10-11 03:57:44.077 2 DEBUG oslo_concurrency.lockutils [None req-641a2d7c-c506-4f76-aa6d-5035c8ac33c9 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:57:44 np0005480824 nova_compute[260089]: 2025-10-11 03:57:44.130 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:57:44 np0005480824 nova_compute[260089]: 2025-10-11 03:57:44.135 2 DEBUG oslo_concurrency.processutils [None req-641a2d7c-c506-4f76-aa6d-5035c8ac33c9 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:57:44 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:57:44 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2154144512' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:57:44 np0005480824 nova_compute[260089]: 2025-10-11 03:57:44.574 2 DEBUG oslo_concurrency.processutils [None req-641a2d7c-c506-4f76-aa6d-5035c8ac33c9 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:57:44 np0005480824 nova_compute[260089]: 2025-10-11 03:57:44.580 2 DEBUG nova.compute.provider_tree [None req-641a2d7c-c506-4f76-aa6d-5035c8ac33c9 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 23:57:44 np0005480824 nova_compute[260089]: 2025-10-11 03:57:44.602 2 DEBUG nova.scheduler.client.report [None req-641a2d7c-c506-4f76-aa6d-5035c8ac33c9 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 23:57:44 np0005480824 nova_compute[260089]: 2025-10-11 03:57:44.632 2 DEBUG oslo_concurrency.lockutils [None req-641a2d7c-c506-4f76-aa6d-5035c8ac33c9 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.555s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:57:44 np0005480824 nova_compute[260089]: 2025-10-11 03:57:44.688 2 INFO nova.scheduler.client.report [None req-641a2d7c-c506-4f76-aa6d-5035c8ac33c9 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Deleted allocations for instance 8468f5dd-633a-4b6d-a205-ba75e8e070bb#033[00m
Oct 10 23:57:44 np0005480824 nova_compute[260089]: 2025-10-11 03:57:44.764 2 DEBUG oslo_concurrency.lockutils [None req-641a2d7c-c506-4f76-aa6d-5035c8ac33c9 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "8468f5dd-633a-4b6d-a205-ba75e8e070bb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.304s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:57:44 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1512: 321 pgs: 321 active+clean; 187 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 605 KiB/s rd, 6.6 KiB/s wr, 133 op/s
Oct 10 23:57:44 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:57:44 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e373 do_prune osdmap full prune enabled
Oct 10 23:57:45 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e374 e374: 3 total, 3 up, 3 in
Oct 10 23:57:45 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e374: 3 total, 3 up, 3 in
Oct 10 23:57:45 np0005480824 nova_compute[260089]: 2025-10-11 03:57:45.497 2 DEBUG nova.compute.manager [req-edabf6e4-a445-4bb6-92b8-165c7b33ceb9 req-b433ce09-d646-4717-9c1e-55acca2e0b7f 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Received event network-vif-plugged-479fe4cd-8bd2-48dd-a7ca-e39f24e57b10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:57:45 np0005480824 nova_compute[260089]: 2025-10-11 03:57:45.497 2 DEBUG oslo_concurrency.lockutils [req-edabf6e4-a445-4bb6-92b8-165c7b33ceb9 req-b433ce09-d646-4717-9c1e-55acca2e0b7f 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "8468f5dd-633a-4b6d-a205-ba75e8e070bb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:57:45 np0005480824 nova_compute[260089]: 2025-10-11 03:57:45.498 2 DEBUG oslo_concurrency.lockutils [req-edabf6e4-a445-4bb6-92b8-165c7b33ceb9 req-b433ce09-d646-4717-9c1e-55acca2e0b7f 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "8468f5dd-633a-4b6d-a205-ba75e8e070bb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:57:45 np0005480824 nova_compute[260089]: 2025-10-11 03:57:45.498 2 DEBUG oslo_concurrency.lockutils [req-edabf6e4-a445-4bb6-92b8-165c7b33ceb9 req-b433ce09-d646-4717-9c1e-55acca2e0b7f 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "8468f5dd-633a-4b6d-a205-ba75e8e070bb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:57:45 np0005480824 nova_compute[260089]: 2025-10-11 03:57:45.499 2 DEBUG nova.compute.manager [req-edabf6e4-a445-4bb6-92b8-165c7b33ceb9 req-b433ce09-d646-4717-9c1e-55acca2e0b7f 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] No waiting events found dispatching network-vif-plugged-479fe4cd-8bd2-48dd-a7ca-e39f24e57b10 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 10 23:57:45 np0005480824 nova_compute[260089]: 2025-10-11 03:57:45.499 2 WARNING nova.compute.manager [req-edabf6e4-a445-4bb6-92b8-165c7b33ceb9 req-b433ce09-d646-4717-9c1e-55acca2e0b7f 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Received unexpected event network-vif-plugged-479fe4cd-8bd2-48dd-a7ca-e39f24e57b10 for instance with vm_state deleted and task_state None.#033[00m
Oct 10 23:57:45 np0005480824 podman[292683]: 2025-10-11 03:57:45.997719346 +0000 UTC m=+0.051985420 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 10 23:57:46 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e374 do_prune osdmap full prune enabled
Oct 10 23:57:46 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e375 e375: 3 total, 3 up, 3 in
Oct 10 23:57:46 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e375: 3 total, 3 up, 3 in
Oct 10 23:57:46 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1515: 321 pgs: 321 active+clean; 187 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 427 KiB/s rd, 5.6 KiB/s wr, 134 op/s
Oct 10 23:57:46 np0005480824 nova_compute[260089]: 2025-10-11 03:57:46.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:57:48 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:57:48 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2703289' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:57:48 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:57:48 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2703289' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:57:48 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:57:48 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2078655404' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:57:48 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:57:48 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2078655404' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:57:48 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1516: 321 pgs: 321 active+clean; 187 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 79 KiB/s rd, 3.2 KiB/s wr, 103 op/s
Oct 10 23:57:49 np0005480824 nova_compute[260089]: 2025-10-11 03:57:49.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:57:50 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e375 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:57:50 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e375 do_prune osdmap full prune enabled
Oct 10 23:57:50 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e376 e376: 3 total, 3 up, 3 in
Oct 10 23:57:50 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e376: 3 total, 3 up, 3 in
Oct 10 23:57:50 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1518: 321 pgs: 321 active+clean; 187 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 79 KiB/s rd, 3.2 KiB/s wr, 103 op/s
Oct 10 23:57:51 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e376 do_prune osdmap full prune enabled
Oct 10 23:57:51 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e377 e377: 3 total, 3 up, 3 in
Oct 10 23:57:51 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e377: 3 total, 3 up, 3 in
Oct 10 23:57:51 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:57:51 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2793533438' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:57:51 np0005480824 nova_compute[260089]: 2025-10-11 03:57:51.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:57:52 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:57:52.520 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:30:f4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'fe:89:7c:57:3f:71'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 23:57:52 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:57:52.521 162245 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 10 23:57:52 np0005480824 nova_compute[260089]: 2025-10-11 03:57:52.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:57:52 np0005480824 nova_compute[260089]: 2025-10-11 03:57:52.835 2 DEBUG nova.compute.manager [req-5205599b-54b6-444c-8ab4-3459c0461b2e req-47ed3a06-8f2d-4098-8fca-06a828f81526 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Received event network-changed-3d1404de-38bf-4d1c-960e-bcc14817fcc6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:57:52 np0005480824 nova_compute[260089]: 2025-10-11 03:57:52.835 2 DEBUG nova.compute.manager [req-5205599b-54b6-444c-8ab4-3459c0461b2e req-47ed3a06-8f2d-4098-8fca-06a828f81526 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Refreshing instance network info cache due to event network-changed-3d1404de-38bf-4d1c-960e-bcc14817fcc6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 10 23:57:52 np0005480824 nova_compute[260089]: 2025-10-11 03:57:52.836 2 DEBUG oslo_concurrency.lockutils [req-5205599b-54b6-444c-8ab4-3459c0461b2e req-47ed3a06-8f2d-4098-8fca-06a828f81526 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "refresh_cache-3ccdaa3b-882a-432f-b619-002ded45ac60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:57:52 np0005480824 nova_compute[260089]: 2025-10-11 03:57:52.836 2 DEBUG oslo_concurrency.lockutils [req-5205599b-54b6-444c-8ab4-3459c0461b2e req-47ed3a06-8f2d-4098-8fca-06a828f81526 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquired lock "refresh_cache-3ccdaa3b-882a-432f-b619-002ded45ac60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:57:52 np0005480824 nova_compute[260089]: 2025-10-11 03:57:52.837 2 DEBUG nova.network.neutron [req-5205599b-54b6-444c-8ab4-3459c0461b2e req-47ed3a06-8f2d-4098-8fca-06a828f81526 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Refreshing network info cache for port 3d1404de-38bf-4d1c-960e-bcc14817fcc6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 10 23:57:52 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1520: 321 pgs: 321 active+clean; 169 MiB data, 424 MiB used, 60 GiB / 60 GiB avail; 116 KiB/s rd, 7.3 KiB/s wr, 157 op/s
Oct 10 23:57:52 np0005480824 nova_compute[260089]: 2025-10-11 03:57:52.926 2 DEBUG oslo_concurrency.lockutils [None req-4be191ce-55ba-4388-96f7-709c8ff0e52d 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Acquiring lock "3ccdaa3b-882a-432f-b619-002ded45ac60" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:57:52 np0005480824 nova_compute[260089]: 2025-10-11 03:57:52.927 2 DEBUG oslo_concurrency.lockutils [None req-4be191ce-55ba-4388-96f7-709c8ff0e52d 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "3ccdaa3b-882a-432f-b619-002ded45ac60" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:57:52 np0005480824 nova_compute[260089]: 2025-10-11 03:57:52.927 2 DEBUG oslo_concurrency.lockutils [None req-4be191ce-55ba-4388-96f7-709c8ff0e52d 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Acquiring lock "3ccdaa3b-882a-432f-b619-002ded45ac60-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:57:52 np0005480824 nova_compute[260089]: 2025-10-11 03:57:52.928 2 DEBUG oslo_concurrency.lockutils [None req-4be191ce-55ba-4388-96f7-709c8ff0e52d 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "3ccdaa3b-882a-432f-b619-002ded45ac60-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:57:52 np0005480824 nova_compute[260089]: 2025-10-11 03:57:52.928 2 DEBUG oslo_concurrency.lockutils [None req-4be191ce-55ba-4388-96f7-709c8ff0e52d 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "3ccdaa3b-882a-432f-b619-002ded45ac60-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:57:52 np0005480824 nova_compute[260089]: 2025-10-11 03:57:52.930 2 INFO nova.compute.manager [None req-4be191ce-55ba-4388-96f7-709c8ff0e52d 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Terminating instance#033[00m
Oct 10 23:57:52 np0005480824 nova_compute[260089]: 2025-10-11 03:57:52.932 2 DEBUG nova.compute.manager [None req-4be191ce-55ba-4388-96f7-709c8ff0e52d 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 10 23:57:53 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e377 do_prune osdmap full prune enabled
Oct 10 23:57:53 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e378 e378: 3 total, 3 up, 3 in
Oct 10 23:57:53 np0005480824 kernel: tap3d1404de-38 (unregistering): left promiscuous mode
Oct 10 23:57:53 np0005480824 NetworkManager[44969]: <info>  [1760155073.7804] device (tap3d1404de-38): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 10 23:57:53 np0005480824 ovn_controller[152667]: 2025-10-11T03:57:53Z|00198|binding|INFO|Releasing lport 3d1404de-38bf-4d1c-960e-bcc14817fcc6 from this chassis (sb_readonly=0)
Oct 10 23:57:53 np0005480824 ovn_controller[152667]: 2025-10-11T03:57:53Z|00199|binding|INFO|Setting lport 3d1404de-38bf-4d1c-960e-bcc14817fcc6 down in Southbound
Oct 10 23:57:53 np0005480824 ovn_controller[152667]: 2025-10-11T03:57:53Z|00200|binding|INFO|Removing iface tap3d1404de-38 ovn-installed in OVS
Oct 10 23:57:53 np0005480824 nova_compute[260089]: 2025-10-11 03:57:53.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:57:53 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:57:53.798 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0f:58:4c 10.100.0.12'], port_security=['fa:16:3e:0f:58:4c 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '3ccdaa3b-882a-432f-b619-002ded45ac60', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-359720eb-a957-4bcd-b9b2-3cf7dad947e4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '55d21391a321476eb133317b3402b0f0', 'neutron:revision_number': '4', 'neutron:security_group_ids': '48328b99-2dfb-4da6-bd97-8cd4f810b350', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d98e64fb-092d-4777-b741-426f3e849bc3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], logical_port=3d1404de-38bf-4d1c-960e-bcc14817fcc6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 23:57:53 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:57:53.800 162245 INFO neutron.agent.ovn.metadata.agent [-] Port 3d1404de-38bf-4d1c-960e-bcc14817fcc6 in datapath 359720eb-a957-4bcd-b9b2-3cf7dad947e4 unbound from our chassis#033[00m
Oct 10 23:57:53 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:57:53.801 162245 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 359720eb-a957-4bcd-b9b2-3cf7dad947e4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 10 23:57:53 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:57:53.802 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[72e1ef6d-4622-40a9-b57c-feecb856afa9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:57:53 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:57:53.803 162245 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4 namespace which is not needed anymore#033[00m
Oct 10 23:57:53 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e378: 3 total, 3 up, 3 in
Oct 10 23:57:53 np0005480824 nova_compute[260089]: 2025-10-11 03:57:53.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:57:53 np0005480824 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000014.scope: Deactivated successfully.
Oct 10 23:57:53 np0005480824 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000014.scope: Consumed 16.855s CPU time.
Oct 10 23:57:53 np0005480824 systemd-machined[215071]: Machine qemu-20-instance-00000014 terminated.
Oct 10 23:57:53 np0005480824 nova_compute[260089]: 2025-10-11 03:57:53.868 2 DEBUG nova.network.neutron [req-5205599b-54b6-444c-8ab4-3459c0461b2e req-47ed3a06-8f2d-4098-8fca-06a828f81526 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Updated VIF entry in instance network info cache for port 3d1404de-38bf-4d1c-960e-bcc14817fcc6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 10 23:57:53 np0005480824 nova_compute[260089]: 2025-10-11 03:57:53.869 2 DEBUG nova.network.neutron [req-5205599b-54b6-444c-8ab4-3459c0461b2e req-47ed3a06-8f2d-4098-8fca-06a828f81526 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Updating instance_info_cache with network_info: [{"id": "3d1404de-38bf-4d1c-960e-bcc14817fcc6", "address": "fa:16:3e:0f:58:4c", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d1404de-38", "ovs_interfaceid": "3d1404de-38bf-4d1c-960e-bcc14817fcc6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:57:53 np0005480824 nova_compute[260089]: 2025-10-11 03:57:53.890 2 DEBUG oslo_concurrency.lockutils [req-5205599b-54b6-444c-8ab4-3459c0461b2e req-47ed3a06-8f2d-4098-8fca-06a828f81526 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Releasing lock "refresh_cache-3ccdaa3b-882a-432f-b619-002ded45ac60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:57:53 np0005480824 nova_compute[260089]: 2025-10-11 03:57:53.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:57:53 np0005480824 nova_compute[260089]: 2025-10-11 03:57:53.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:57:53 np0005480824 nova_compute[260089]: 2025-10-11 03:57:53.975 2 INFO nova.virt.libvirt.driver [-] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Instance destroyed successfully.#033[00m
Oct 10 23:57:53 np0005480824 nova_compute[260089]: 2025-10-11 03:57:53.975 2 DEBUG nova.objects.instance [None req-4be191ce-55ba-4388-96f7-709c8ff0e52d 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lazy-loading 'resources' on Instance uuid 3ccdaa3b-882a-432f-b619-002ded45ac60 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:57:53 np0005480824 nova_compute[260089]: 2025-10-11 03:57:53.990 2 DEBUG nova.compute.manager [req-40597371-5038-4c23-a1fe-f8b641d23508 req-1f49b15c-ec74-4c8c-8ae5-cf4133b4c852 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Received event network-vif-unplugged-3d1404de-38bf-4d1c-960e-bcc14817fcc6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:57:53 np0005480824 nova_compute[260089]: 2025-10-11 03:57:53.991 2 DEBUG oslo_concurrency.lockutils [req-40597371-5038-4c23-a1fe-f8b641d23508 req-1f49b15c-ec74-4c8c-8ae5-cf4133b4c852 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "3ccdaa3b-882a-432f-b619-002ded45ac60-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:57:53 np0005480824 nova_compute[260089]: 2025-10-11 03:57:53.991 2 DEBUG oslo_concurrency.lockutils [req-40597371-5038-4c23-a1fe-f8b641d23508 req-1f49b15c-ec74-4c8c-8ae5-cf4133b4c852 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "3ccdaa3b-882a-432f-b619-002ded45ac60-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:57:53 np0005480824 nova_compute[260089]: 2025-10-11 03:57:53.991 2 DEBUG oslo_concurrency.lockutils [req-40597371-5038-4c23-a1fe-f8b641d23508 req-1f49b15c-ec74-4c8c-8ae5-cf4133b4c852 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "3ccdaa3b-882a-432f-b619-002ded45ac60-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:57:53 np0005480824 nova_compute[260089]: 2025-10-11 03:57:53.992 2 DEBUG nova.compute.manager [req-40597371-5038-4c23-a1fe-f8b641d23508 req-1f49b15c-ec74-4c8c-8ae5-cf4133b4c852 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] No waiting events found dispatching network-vif-unplugged-3d1404de-38bf-4d1c-960e-bcc14817fcc6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 10 23:57:53 np0005480824 nova_compute[260089]: 2025-10-11 03:57:53.992 2 DEBUG nova.compute.manager [req-40597371-5038-4c23-a1fe-f8b641d23508 req-1f49b15c-ec74-4c8c-8ae5-cf4133b4c852 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Received event network-vif-unplugged-3d1404de-38bf-4d1c-960e-bcc14817fcc6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 10 23:57:53 np0005480824 nova_compute[260089]: 2025-10-11 03:57:53.997 2 DEBUG nova.virt.libvirt.vif [None req-4be191ce-55ba-4388-96f7-709c8ff0e52d 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T03:56:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1815523405',display_name='tempest-TestVolumeBootPattern-server-1815523405',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1815523405',id=20,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEHKqCtFesGlIN9DGdSuPEGCilj3bKmCIyQ2Hx4tQRLuRoOqWjhIRAgPC71aK1tfMSZbOh/7KRfo7uhOOwgBdYVdW77mjMG+sfmvlDoQnrLmEWQMeSschoC2XBAsdgkOOQ==',key_name='tempest-TestVolumeBootPattern-1691748970',keypairs=<?>,launch_index=0,launched_at=2025-10-11T03:56:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='55d21391a321476eb133317b3402b0f0',ramdisk_id='',reservation_id='r-8d8xh3fp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-739984652',owner_user_name='tempest-TestVolumeBootPattern-739984652-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T03:56:13Z,user_data=None,user_id='38ebc503771e417aaf1f3aea0c835994',uuid=3ccdaa3b-882a-432f-b619-002ded45ac60,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3d1404de-38bf-4d1c-960e-bcc14817fcc6", "address": "fa:16:3e:0f:58:4c", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": 
[{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d1404de-38", "ovs_interfaceid": "3d1404de-38bf-4d1c-960e-bcc14817fcc6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 10 23:57:53 np0005480824 nova_compute[260089]: 2025-10-11 03:57:53.997 2 DEBUG nova.network.os_vif_util [None req-4be191ce-55ba-4388-96f7-709c8ff0e52d 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Converting VIF {"id": "3d1404de-38bf-4d1c-960e-bcc14817fcc6", "address": "fa:16:3e:0f:58:4c", "network": {"id": "359720eb-a957-4bcd-b9b2-3cf7dad947e4", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-146358617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d21391a321476eb133317b3402b0f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d1404de-38", "ovs_interfaceid": "3d1404de-38bf-4d1c-960e-bcc14817fcc6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 23:57:53 np0005480824 nova_compute[260089]: 2025-10-11 03:57:53.998 2 DEBUG nova.network.os_vif_util [None req-4be191ce-55ba-4388-96f7-709c8ff0e52d 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0f:58:4c,bridge_name='br-int',has_traffic_filtering=True,id=3d1404de-38bf-4d1c-960e-bcc14817fcc6,network=Network(359720eb-a957-4bcd-b9b2-3cf7dad947e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d1404de-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 23:57:53 np0005480824 nova_compute[260089]: 2025-10-11 03:57:53.998 2 DEBUG os_vif [None req-4be191ce-55ba-4388-96f7-709c8ff0e52d 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0f:58:4c,bridge_name='br-int',has_traffic_filtering=True,id=3d1404de-38bf-4d1c-960e-bcc14817fcc6,network=Network(359720eb-a957-4bcd-b9b2-3cf7dad947e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d1404de-38') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 10 23:57:53 np0005480824 nova_compute[260089]: 2025-10-11 03:57:53.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:57:54 np0005480824 nova_compute[260089]: 2025-10-11 03:57:54.000 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3d1404de-38, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:57:54 np0005480824 nova_compute[260089]: 2025-10-11 03:57:54.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:57:54 np0005480824 nova_compute[260089]: 2025-10-11 03:57:54.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:57:54 np0005480824 nova_compute[260089]: 2025-10-11 03:57:54.004 2 INFO os_vif [None req-4be191ce-55ba-4388-96f7-709c8ff0e52d 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0f:58:4c,bridge_name='br-int',has_traffic_filtering=True,id=3d1404de-38bf-4d1c-960e-bcc14817fcc6,network=Network(359720eb-a957-4bcd-b9b2-3cf7dad947e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d1404de-38')#033[00m
Oct 10 23:57:54 np0005480824 neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4[290156]: [NOTICE]   (290166) : haproxy version is 2.8.14-c23fe91
Oct 10 23:57:54 np0005480824 neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4[290156]: [NOTICE]   (290166) : path to executable is /usr/sbin/haproxy
Oct 10 23:57:54 np0005480824 neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4[290156]: [WARNING]  (290166) : Exiting Master process...
Oct 10 23:57:54 np0005480824 neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4[290156]: [ALERT]    (290166) : Current worker (290169) exited with code 143 (Terminated)
Oct 10 23:57:54 np0005480824 neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4[290156]: [WARNING]  (290166) : All workers exited. Exiting... (0)
Oct 10 23:57:54 np0005480824 systemd[1]: libpod-853ef9828c2641034510aec0a7208fe69a860be7b001e804bc88f476436d7060.scope: Deactivated successfully.
Oct 10 23:57:54 np0005480824 podman[292731]: 2025-10-11 03:57:54.109021896 +0000 UTC m=+0.210376015 container died 853ef9828c2641034510aec0a7208fe69a860be7b001e804bc88f476436d7060 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 10 23:57:54 np0005480824 nova_compute[260089]: 2025-10-11 03:57:54.134 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:57:54 np0005480824 systemd[1]: var-lib-containers-storage-overlay-b41495cd5fd5dfe6fbbf484bdb1ef33aa2f791b391903b330db1487ba73ba735-merged.mount: Deactivated successfully.
Oct 10 23:57:54 np0005480824 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-853ef9828c2641034510aec0a7208fe69a860be7b001e804bc88f476436d7060-userdata-shm.mount: Deactivated successfully.
Oct 10 23:57:54 np0005480824 podman[292731]: 2025-10-11 03:57:54.234460353 +0000 UTC m=+0.335814472 container cleanup 853ef9828c2641034510aec0a7208fe69a860be7b001e804bc88f476436d7060 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 10 23:57:54 np0005480824 systemd[1]: libpod-conmon-853ef9828c2641034510aec0a7208fe69a860be7b001e804bc88f476436d7060.scope: Deactivated successfully.
Oct 10 23:57:54 np0005480824 podman[292792]: 2025-10-11 03:57:54.4690954 +0000 UTC m=+0.208376708 container remove 853ef9828c2641034510aec0a7208fe69a860be7b001e804bc88f476436d7060 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 10 23:57:54 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:57:54.479 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[432ce4bc-79a9-4031-9b17-b7ae4ecb6d38]: (4, ('Sat Oct 11 03:57:53 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4 (853ef9828c2641034510aec0a7208fe69a860be7b001e804bc88f476436d7060)\n853ef9828c2641034510aec0a7208fe69a860be7b001e804bc88f476436d7060\nSat Oct 11 03:57:54 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4 (853ef9828c2641034510aec0a7208fe69a860be7b001e804bc88f476436d7060)\n853ef9828c2641034510aec0a7208fe69a860be7b001e804bc88f476436d7060\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:57:54 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:57:54.482 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[214c02c4-6c4a-4410-9f94-4fd7b89d9c7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:57:54 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:57:54.484 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap359720eb-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:57:54 np0005480824 kernel: tap359720eb-a0: left promiscuous mode
Oct 10 23:57:54 np0005480824 nova_compute[260089]: 2025-10-11 03:57:54.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:57:54 np0005480824 nova_compute[260089]: 2025-10-11 03:57:54.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:57:54 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:57:54.495 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[96967a0d-5d2d-4d48-bc15-990e9d51d766]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:57:54 np0005480824 nova_compute[260089]: 2025-10-11 03:57:54.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:57:54 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:57:54.525 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[7f600fd0-8644-4a83-8db3-65cb833375e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:57:54 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:57:54.528 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[b1738435-a29d-46ed-8cd5-cc5b56a59099]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:57:54 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:57:54.552 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[bfa085c5-4be9-4d58-9519-bcb0b72995ca]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 444018, 'reachable_time': 17131, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292807, 'error': None, 'target': 'ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:57:54 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:57:54.556 162666 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-359720eb-a957-4bcd-b9b2-3cf7dad947e4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 10 23:57:54 np0005480824 systemd[1]: run-netns-ovnmeta\x2d359720eb\x2da957\x2d4bcd\x2db9b2\x2d3cf7dad947e4.mount: Deactivated successfully.
Oct 10 23:57:54 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:57:54.557 162666 DEBUG oslo.privsep.daemon [-] privsep: reply[6ef22470-a19c-4d70-9dc9-1cfcdae96495]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:57:54 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e378 do_prune osdmap full prune enabled
Oct 10 23:57:54 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1522: 321 pgs: 321 active+clean; 169 MiB data, 424 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 4.8 KiB/s wr, 69 op/s
Oct 10 23:57:54 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e379 e379: 3 total, 3 up, 3 in
Oct 10 23:57:54 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e379: 3 total, 3 up, 3 in
Oct 10 23:57:55 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:57:55 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e379 do_prune osdmap full prune enabled
Oct 10 23:57:55 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e380 e380: 3 total, 3 up, 3 in
Oct 10 23:57:55 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e380: 3 total, 3 up, 3 in
Oct 10 23:57:55 np0005480824 nova_compute[260089]: 2025-10-11 03:57:55.604 2 INFO nova.virt.libvirt.driver [None req-4be191ce-55ba-4388-96f7-709c8ff0e52d 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Deleting instance files /var/lib/nova/instances/3ccdaa3b-882a-432f-b619-002ded45ac60_del#033[00m
Oct 10 23:57:55 np0005480824 nova_compute[260089]: 2025-10-11 03:57:55.605 2 INFO nova.virt.libvirt.driver [None req-4be191ce-55ba-4388-96f7-709c8ff0e52d 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Deletion of /var/lib/nova/instances/3ccdaa3b-882a-432f-b619-002ded45ac60_del complete#033[00m
Oct 10 23:57:55 np0005480824 nova_compute[260089]: 2025-10-11 03:57:55.668 2 INFO nova.compute.manager [None req-4be191ce-55ba-4388-96f7-709c8ff0e52d 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Took 2.74 seconds to destroy the instance on the hypervisor.#033[00m
Oct 10 23:57:55 np0005480824 nova_compute[260089]: 2025-10-11 03:57:55.669 2 DEBUG oslo.service.loopingcall [None req-4be191ce-55ba-4388-96f7-709c8ff0e52d 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 10 23:57:55 np0005480824 nova_compute[260089]: 2025-10-11 03:57:55.670 2 DEBUG nova.compute.manager [-] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 10 23:57:55 np0005480824 nova_compute[260089]: 2025-10-11 03:57:55.670 2 DEBUG nova.network.neutron [-] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 10 23:57:56 np0005480824 nova_compute[260089]: 2025-10-11 03:57:56.070 2 DEBUG nova.compute.manager [req-8828eb0a-9595-427b-8477-c5edf7bb8e93 req-55a9d64f-9d8b-4e85-b3aa-3581433a88fd 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Received event network-vif-plugged-3d1404de-38bf-4d1c-960e-bcc14817fcc6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:57:56 np0005480824 nova_compute[260089]: 2025-10-11 03:57:56.070 2 DEBUG oslo_concurrency.lockutils [req-8828eb0a-9595-427b-8477-c5edf7bb8e93 req-55a9d64f-9d8b-4e85-b3aa-3581433a88fd 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "3ccdaa3b-882a-432f-b619-002ded45ac60-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:57:56 np0005480824 nova_compute[260089]: 2025-10-11 03:57:56.071 2 DEBUG oslo_concurrency.lockutils [req-8828eb0a-9595-427b-8477-c5edf7bb8e93 req-55a9d64f-9d8b-4e85-b3aa-3581433a88fd 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "3ccdaa3b-882a-432f-b619-002ded45ac60-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:57:56 np0005480824 nova_compute[260089]: 2025-10-11 03:57:56.071 2 DEBUG oslo_concurrency.lockutils [req-8828eb0a-9595-427b-8477-c5edf7bb8e93 req-55a9d64f-9d8b-4e85-b3aa-3581433a88fd 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "3ccdaa3b-882a-432f-b619-002ded45ac60-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:57:56 np0005480824 nova_compute[260089]: 2025-10-11 03:57:56.072 2 DEBUG nova.compute.manager [req-8828eb0a-9595-427b-8477-c5edf7bb8e93 req-55a9d64f-9d8b-4e85-b3aa-3581433a88fd 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] No waiting events found dispatching network-vif-plugged-3d1404de-38bf-4d1c-960e-bcc14817fcc6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 10 23:57:56 np0005480824 nova_compute[260089]: 2025-10-11 03:57:56.072 2 WARNING nova.compute.manager [req-8828eb0a-9595-427b-8477-c5edf7bb8e93 req-55a9d64f-9d8b-4e85-b3aa-3581433a88fd 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Received unexpected event network-vif-plugged-3d1404de-38bf-4d1c-960e-bcc14817fcc6 for instance with vm_state active and task_state deleting.#033[00m
Oct 10 23:57:56 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e380 do_prune osdmap full prune enabled
Oct 10 23:57:56 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1525: 321 pgs: 321 active+clean; 169 MiB data, 424 MiB used, 60 GiB / 60 GiB avail; 85 KiB/s rd, 6.6 KiB/s wr, 119 op/s
Oct 10 23:57:56 np0005480824 nova_compute[260089]: 2025-10-11 03:57:56.903 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760155061.9005487, 8468f5dd-633a-4b6d-a205-ba75e8e070bb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:57:56 np0005480824 nova_compute[260089]: 2025-10-11 03:57:56.904 2 INFO nova.compute.manager [-] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] VM Stopped (Lifecycle Event)#033[00m
Oct 10 23:57:56 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e381 e381: 3 total, 3 up, 3 in
Oct 10 23:57:56 np0005480824 nova_compute[260089]: 2025-10-11 03:57:56.930 2 DEBUG nova.compute.manager [None req-d5aa09a0-e582-4ef5-a520-b343b1a32d18 - - - - - -] [instance: 8468f5dd-633a-4b6d-a205-ba75e8e070bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:57:56 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e381: 3 total, 3 up, 3 in
Oct 10 23:57:57 np0005480824 nova_compute[260089]: 2025-10-11 03:57:57.325 2 DEBUG nova.network.neutron [-] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:57:57 np0005480824 nova_compute[260089]: 2025-10-11 03:57:57.343 2 INFO nova.compute.manager [-] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Took 1.67 seconds to deallocate network for instance.#033[00m
Oct 10 23:57:57 np0005480824 nova_compute[260089]: 2025-10-11 03:57:57.389 2 DEBUG nova.compute.manager [req-e3ca367e-9ae0-4f8b-b456-aa9a682cecf5 req-0b6cfa11-d236-4e6f-9c81-f83fd65d0814 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Received event network-vif-deleted-3d1404de-38bf-4d1c-960e-bcc14817fcc6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:57:57 np0005480824 nova_compute[260089]: 2025-10-11 03:57:57.503 2 INFO nova.compute.manager [None req-4be191ce-55ba-4388-96f7-709c8ff0e52d 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Took 0.16 seconds to detach 1 volumes for instance.#033[00m
Oct 10 23:57:57 np0005480824 nova_compute[260089]: 2025-10-11 03:57:57.565 2 DEBUG oslo_concurrency.lockutils [None req-4be191ce-55ba-4388-96f7-709c8ff0e52d 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:57:57 np0005480824 nova_compute[260089]: 2025-10-11 03:57:57.566 2 DEBUG oslo_concurrency.lockutils [None req-4be191ce-55ba-4388-96f7-709c8ff0e52d 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:57:57 np0005480824 nova_compute[260089]: 2025-10-11 03:57:57.618 2 DEBUG oslo_concurrency.processutils [None req-4be191ce-55ba-4388-96f7-709c8ff0e52d 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:57:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:57:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:57:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:57:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:57:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:57:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:57:58 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e381 do_prune osdmap full prune enabled
Oct 10 23:57:58 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:57:58 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1529136008' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:57:58 np0005480824 nova_compute[260089]: 2025-10-11 03:57:58.232 2 DEBUG oslo_concurrency.processutils [None req-4be191ce-55ba-4388-96f7-709c8ff0e52d 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:57:58 np0005480824 nova_compute[260089]: 2025-10-11 03:57:58.239 2 DEBUG nova.compute.provider_tree [None req-4be191ce-55ba-4388-96f7-709c8ff0e52d 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 23:57:58 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e382 e382: 3 total, 3 up, 3 in
Oct 10 23:57:58 np0005480824 nova_compute[260089]: 2025-10-11 03:57:58.258 2 DEBUG nova.scheduler.client.report [None req-4be191ce-55ba-4388-96f7-709c8ff0e52d 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 23:57:58 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e382: 3 total, 3 up, 3 in
Oct 10 23:57:58 np0005480824 nova_compute[260089]: 2025-10-11 03:57:58.281 2 DEBUG oslo_concurrency.lockutils [None req-4be191ce-55ba-4388-96f7-709c8ff0e52d 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.716s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:57:58 np0005480824 nova_compute[260089]: 2025-10-11 03:57:58.310 2 INFO nova.scheduler.client.report [None req-4be191ce-55ba-4388-96f7-709c8ff0e52d 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Deleted allocations for instance 3ccdaa3b-882a-432f-b619-002ded45ac60#033[00m
Oct 10 23:57:58 np0005480824 nova_compute[260089]: 2025-10-11 03:57:58.390 2 DEBUG oslo_concurrency.lockutils [None req-4be191ce-55ba-4388-96f7-709c8ff0e52d 38ebc503771e417aaf1f3aea0c835994 55d21391a321476eb133317b3402b0f0 - - default default] Lock "3ccdaa3b-882a-432f-b619-002ded45ac60" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.463s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:57:58 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1528: 321 pgs: 321 active+clean; 169 MiB data, 424 MiB used, 60 GiB / 60 GiB avail; 95 KiB/s rd, 6.0 KiB/s wr, 130 op/s
Oct 10 23:57:59 np0005480824 nova_compute[260089]: 2025-10-11 03:57:59.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:57:59 np0005480824 nova_compute[260089]: 2025-10-11 03:57:59.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:58:00 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:58:00 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1529: 321 pgs: 321 active+clean; 169 MiB data, 424 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 4.0 KiB/s wr, 88 op/s
Oct 10 23:58:01 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:58:01.523 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=14b06507-d00b-4e27-a47d-46a5c2644635, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:58:02 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1530: 321 pgs: 321 active+clean; 169 MiB data, 424 MiB used, 60 GiB / 60 GiB avail; 86 KiB/s rd, 4.3 KiB/s wr, 116 op/s
Oct 10 23:58:02 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e382 do_prune osdmap full prune enabled
Oct 10 23:58:03 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e383 e383: 3 total, 3 up, 3 in
Oct 10 23:58:03 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e383: 3 total, 3 up, 3 in
Oct 10 23:58:04 np0005480824 nova_compute[260089]: 2025-10-11 03:58:04.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:58:04 np0005480824 podman[292838]: 2025-10-11 03:58:04.039758937 +0000 UTC m=+0.084809187 container health_status f2b19cad22d0fbdd185b264c3c8b6443d09a02319dc9fb0585dc69a0f24a758d (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 10 23:58:04 np0005480824 podman[292837]: 2025-10-11 03:58:04.053651125 +0000 UTC m=+0.097164649 container health_status 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 10 23:58:04 np0005480824 nova_compute[260089]: 2025-10-11 03:58:04.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:58:04 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1532: 321 pgs: 321 active+clean; 169 MiB data, 424 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 3.2 KiB/s wr, 80 op/s
Oct 10 23:58:05 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e383 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:58:05 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e383 do_prune osdmap full prune enabled
Oct 10 23:58:05 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e384 e384: 3 total, 3 up, 3 in
Oct 10 23:58:05 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e384: 3 total, 3 up, 3 in
Oct 10 23:58:05 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:58:05 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3540785280' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:58:05 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:58:05 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3540785280' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:58:06 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e384 do_prune osdmap full prune enabled
Oct 10 23:58:06 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e385 e385: 3 total, 3 up, 3 in
Oct 10 23:58:06 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e385: 3 total, 3 up, 3 in
Oct 10 23:58:06 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1535: 321 pgs: 321 active+clean; 134 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 88 KiB/s rd, 3.3 KiB/s wr, 117 op/s
Oct 10 23:58:07 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:58:07 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3297647687' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:58:07 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:58:07 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3297647687' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:58:08 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1536: 321 pgs: 321 active+clean; 88 MiB data, 377 MiB used, 60 GiB / 60 GiB avail; 71 KiB/s rd, 3.2 KiB/s wr, 100 op/s
Oct 10 23:58:08 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e385 do_prune osdmap full prune enabled
Oct 10 23:58:08 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e386 e386: 3 total, 3 up, 3 in
Oct 10 23:58:08 np0005480824 nova_compute[260089]: 2025-10-11 03:58:08.973 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760155073.9721184, 3ccdaa3b-882a-432f-b619-002ded45ac60 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:58:08 np0005480824 nova_compute[260089]: 2025-10-11 03:58:08.974 2 INFO nova.compute.manager [-] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] VM Stopped (Lifecycle Event)#033[00m
Oct 10 23:58:08 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e386: 3 total, 3 up, 3 in
Oct 10 23:58:08 np0005480824 nova_compute[260089]: 2025-10-11 03:58:08.991 2 DEBUG nova.compute.manager [None req-ef7b9227-0394-49f0-9d40-bd294b31c43d - - - - - -] [instance: 3ccdaa3b-882a-432f-b619-002ded45ac60] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:58:09 np0005480824 nova_compute[260089]: 2025-10-11 03:58:09.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:58:09 np0005480824 nova_compute[260089]: 2025-10-11 03:58:09.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:58:10 np0005480824 nova_compute[260089]: 2025-10-11 03:58:10.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:58:10 np0005480824 nova_compute[260089]: 2025-10-11 03:58:10.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:58:10 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:58:10 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e386 do_prune osdmap full prune enabled
Oct 10 23:58:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:58:10.503 162245 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:58:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:58:10.503 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:58:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:58:10.503 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:58:10 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e387 e387: 3 total, 3 up, 3 in
Oct 10 23:58:10 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e387: 3 total, 3 up, 3 in
Oct 10 23:58:10 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1539: 321 pgs: 321 active+clean; 88 MiB data, 377 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 3.4 KiB/s wr, 108 op/s
Oct 10 23:58:11 np0005480824 podman[292876]: 2025-10-11 03:58:11.035648366 +0000 UTC m=+0.097614469 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 10 23:58:12 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:58:12 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1177898056' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:58:12 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:58:12 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1177898056' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:58:12 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1540: 321 pgs: 321 active+clean; 88 MiB data, 382 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 4.5 KiB/s wr, 127 op/s
Oct 10 23:58:14 np0005480824 nova_compute[260089]: 2025-10-11 03:58:14.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:58:14 np0005480824 nova_compute[260089]: 2025-10-11 03:58:14.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:58:14 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1541: 321 pgs: 321 active+clean; 88 MiB data, 382 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 2.2 KiB/s wr, 61 op/s
Oct 10 23:58:15 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e387 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:58:15 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e387 do_prune osdmap full prune enabled
Oct 10 23:58:15 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e388 e388: 3 total, 3 up, 3 in
Oct 10 23:58:15 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e388: 3 total, 3 up, 3 in
Oct 10 23:58:16 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1543: 321 pgs: 321 active+clean; 88 MiB data, 386 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.3 KiB/s wr, 29 op/s
Oct 10 23:58:17 np0005480824 podman[292902]: 2025-10-11 03:58:17.009886222 +0000 UTC m=+0.066255537 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 10 23:58:18 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1544: 321 pgs: 321 active+clean; 88 MiB data, 386 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct 10 23:58:19 np0005480824 nova_compute[260089]: 2025-10-11 03:58:19.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:58:19 np0005480824 nova_compute[260089]: 2025-10-11 03:58:19.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:58:20 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e388 do_prune osdmap full prune enabled
Oct 10 23:58:20 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e389 e389: 3 total, 3 up, 3 in
Oct 10 23:58:20 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e389: 3 total, 3 up, 3 in
Oct 10 23:58:20 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e389 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:58:20 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1546: 321 pgs: 321 active+clean; 88 MiB data, 386 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 B/s wr, 1 op/s
Oct 10 23:58:21 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e389 do_prune osdmap full prune enabled
Oct 10 23:58:21 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e390 e390: 3 total, 3 up, 3 in
Oct 10 23:58:21 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e390: 3 total, 3 up, 3 in
Oct 10 23:58:21 np0005480824 ceph-mon[74326]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Oct 10 23:58:21 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:58:21.341949) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 23:58:21 np0005480824 ceph-mon[74326]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Oct 10 23:58:21 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155101342002, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 2790, "num_deletes": 542, "total_data_size": 3482132, "memory_usage": 3539648, "flush_reason": "Manual Compaction"}
Oct 10 23:58:21 np0005480824 ceph-mon[74326]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Oct 10 23:58:21 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155101375892, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 3419797, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28906, "largest_seqno": 31695, "table_properties": {"data_size": 3407153, "index_size": 7922, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3653, "raw_key_size": 30298, "raw_average_key_size": 21, "raw_value_size": 3379697, "raw_average_value_size": 2350, "num_data_blocks": 341, "num_entries": 1438, "num_filter_entries": 1438, "num_deletions": 542, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760154953, "oldest_key_time": 1760154953, "file_creation_time": 1760155101, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc2c00b6-74ab-4bd1-957a-6c6a75fb61ca", "db_session_id": "RJ9TM4FJNNQ2AWDFT4YB", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Oct 10 23:58:21 np0005480824 ceph-mon[74326]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 33993 microseconds, and 14932 cpu microseconds.
Oct 10 23:58:21 np0005480824 ceph-mon[74326]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 23:58:21 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:58:21.375946) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 3419797 bytes OK
Oct 10 23:58:21 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:58:21.375975) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Oct 10 23:58:21 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:58:21.378965) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Oct 10 23:58:21 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:58:21.378989) EVENT_LOG_v1 {"time_micros": 1760155101378982, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 23:58:21 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:58:21.379011) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 23:58:21 np0005480824 ceph-mon[74326]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 3468924, prev total WAL file size 3468924, number of live WAL files 2.
Oct 10 23:58:21 np0005480824 ceph-mon[74326]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 23:58:21 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:58:21.380847) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Oct 10 23:58:21 np0005480824 ceph-mon[74326]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 23:58:21 np0005480824 ceph-mon[74326]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(3339KB)], [62(8690KB)]
Oct 10 23:58:21 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155101380899, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 12318740, "oldest_snapshot_seqno": -1}
Oct 10 23:58:21 np0005480824 ceph-mon[74326]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 6045 keys, 10464762 bytes, temperature: kUnknown
Oct 10 23:58:21 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155101496237, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 10464762, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10417669, "index_size": 30855, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15173, "raw_key_size": 152486, "raw_average_key_size": 25, "raw_value_size": 10302205, "raw_average_value_size": 1704, "num_data_blocks": 1244, "num_entries": 6045, "num_filter_entries": 6045, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760152715, "oldest_key_time": 0, "file_creation_time": 1760155101, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc2c00b6-74ab-4bd1-957a-6c6a75fb61ca", "db_session_id": "RJ9TM4FJNNQ2AWDFT4YB", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Oct 10 23:58:21 np0005480824 ceph-mon[74326]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 23:58:21 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:58:21.496673) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 10464762 bytes
Oct 10 23:58:21 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:58:21.500554) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 106.7 rd, 90.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 8.5 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(6.7) write-amplify(3.1) OK, records in: 7116, records dropped: 1071 output_compression: NoCompression
Oct 10 23:58:21 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:58:21.500585) EVENT_LOG_v1 {"time_micros": 1760155101500571, "job": 34, "event": "compaction_finished", "compaction_time_micros": 115482, "compaction_time_cpu_micros": 45694, "output_level": 6, "num_output_files": 1, "total_output_size": 10464762, "num_input_records": 7116, "num_output_records": 6045, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 23:58:21 np0005480824 ceph-mon[74326]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 23:58:21 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155101501952, "job": 34, "event": "table_file_deletion", "file_number": 64}
Oct 10 23:58:21 np0005480824 ceph-mon[74326]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 23:58:21 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155101505256, "job": 34, "event": "table_file_deletion", "file_number": 62}
Oct 10 23:58:21 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:58:21.380695) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:58:21 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:58:21.505304) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:58:21 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:58:21.505309) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:58:21 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:58:21.505311) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:58:21 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:58:21.505313) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:58:21 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-03:58:21.505315) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 23:58:22 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1548: 321 pgs: 321 active+clean; 88 MiB data, 386 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 2.3 KiB/s wr, 31 op/s
Oct 10 23:58:23 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e390 do_prune osdmap full prune enabled
Oct 10 23:58:23 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e391 e391: 3 total, 3 up, 3 in
Oct 10 23:58:23 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e391: 3 total, 3 up, 3 in
Oct 10 23:58:24 np0005480824 nova_compute[260089]: 2025-10-11 03:58:24.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:58:24 np0005480824 nova_compute[260089]: 2025-10-11 03:58:24.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:58:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:58:24 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1749590577' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:58:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:58:24 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1749590577' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:58:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:58:24 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1799140547' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:58:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:58:24 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1799140547' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:58:24 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1550: 321 pgs: 321 active+clean; 88 MiB data, 386 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 2.8 KiB/s wr, 37 op/s
Oct 10 23:58:25 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e391 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:58:25 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:58:25 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4240958928' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:58:25 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:58:25 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4240958928' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:58:26 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:58:26 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/307964637' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:58:26 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:58:26 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/307964637' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:58:26 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1551: 321 pgs: 321 active+clean; 88 MiB data, 386 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s rd, 5.8 KiB/s wr, 81 op/s
Oct 10 23:58:27 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:58:27 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1457719796' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:58:27 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:58:27 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1457719796' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:58:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:58:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:58:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:58:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:58:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:58:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:58:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Optimize plan auto_2025-10-11_03:58:27
Oct 10 23:58:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 23:58:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] do_upmap
Oct 10 23:58:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.log', '.mgr', 'vms', 'cephfs.cephfs.data', 'backups', 'images', '.rgw.root', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.meta']
Oct 10 23:58:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] prepared 0/10 changes
Oct 10 23:58:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 23:58:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:58:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 23:58:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:58:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:58:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:58:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:58:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:58:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:58:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:58:28 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e391 do_prune osdmap full prune enabled
Oct 10 23:58:28 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e392 e392: 3 total, 3 up, 3 in
Oct 10 23:58:28 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e392: 3 total, 3 up, 3 in
Oct 10 23:58:28 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1553: 321 pgs: 321 active+clean; 88 MiB data, 386 MiB used, 60 GiB / 60 GiB avail; 80 KiB/s rd, 7.3 KiB/s wr, 113 op/s
Oct 10 23:58:28 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:58:28 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1685681313' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:58:28 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:58:28 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1685681313' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:58:29 np0005480824 nova_compute[260089]: 2025-10-11 03:58:29.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:58:29 np0005480824 nova_compute[260089]: 2025-10-11 03:58:29.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:58:30 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e392 do_prune osdmap full prune enabled
Oct 10 23:58:30 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e393 e393: 3 total, 3 up, 3 in
Oct 10 23:58:30 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e393: 3 total, 3 up, 3 in
Oct 10 23:58:30 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e393 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:58:30 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e393 do_prune osdmap full prune enabled
Oct 10 23:58:30 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e394 e394: 3 total, 3 up, 3 in
Oct 10 23:58:30 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e394: 3 total, 3 up, 3 in
Oct 10 23:58:30 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1556: 321 pgs: 321 active+clean; 88 MiB data, 386 MiB used, 60 GiB / 60 GiB avail; 75 KiB/s rd, 6.3 KiB/s wr, 105 op/s
Oct 10 23:58:30 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:58:30 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1564327660' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:58:30 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:58:30 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1564327660' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:58:32 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:58:32 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2077921592' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:58:32 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:58:32 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2077921592' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:58:32 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e394 do_prune osdmap full prune enabled
Oct 10 23:58:32 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e395 e395: 3 total, 3 up, 3 in
Oct 10 23:58:32 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e395: 3 total, 3 up, 3 in
Oct 10 23:58:32 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1558: 321 pgs: 321 active+clean; 88 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 147 KiB/s rd, 5.6 KiB/s wr, 198 op/s
Oct 10 23:58:34 np0005480824 nova_compute[260089]: 2025-10-11 03:58:34.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:58:34 np0005480824 nova_compute[260089]: 2025-10-11 03:58:34.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:58:34 np0005480824 nova_compute[260089]: 2025-10-11 03:58:34.296 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:58:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e395 do_prune osdmap full prune enabled
Oct 10 23:58:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e396 e396: 3 total, 3 up, 3 in
Oct 10 23:58:34 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e396: 3 total, 3 up, 3 in
Oct 10 23:58:34 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1560: 321 pgs: 321 active+clean; 88 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 148 KiB/s rd, 5.6 KiB/s wr, 198 op/s
Oct 10 23:58:35 np0005480824 podman[292924]: 2025-10-11 03:58:35.010605721 +0000 UTC m=+0.060368630 container health_status 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0)
Oct 10 23:58:35 np0005480824 podman[292925]: 2025-10-11 03:58:35.03577265 +0000 UTC m=+0.071596218 container health_status f2b19cad22d0fbdd185b264c3c8b6443d09a02319dc9fb0585dc69a0f24a758d (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:58:35 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:58:35 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3703025852' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:58:35 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:58:35 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3703025852' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:58:35 np0005480824 nova_compute[260089]: 2025-10-11 03:58:35.292 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:58:35 np0005480824 nova_compute[260089]: 2025-10-11 03:58:35.296 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:58:35 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e396 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:58:35 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e396 do_prune osdmap full prune enabled
Oct 10 23:58:35 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e397 e397: 3 total, 3 up, 3 in
Oct 10 23:58:35 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e397: 3 total, 3 up, 3 in
Oct 10 23:58:36 np0005480824 nova_compute[260089]: 2025-10-11 03:58:36.296 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:58:36 np0005480824 nova_compute[260089]: 2025-10-11 03:58:36.297 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:58:36 np0005480824 nova_compute[260089]: 2025-10-11 03:58:36.297 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 10 23:58:36 np0005480824 nova_compute[260089]: 2025-10-11 03:58:36.297 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:58:36 np0005480824 nova_compute[260089]: 2025-10-11 03:58:36.336 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:58:36 np0005480824 nova_compute[260089]: 2025-10-11 03:58:36.336 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:58:36 np0005480824 nova_compute[260089]: 2025-10-11 03:58:36.337 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:58:36 np0005480824 nova_compute[260089]: 2025-10-11 03:58:36.337 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 10 23:58:36 np0005480824 nova_compute[260089]: 2025-10-11 03:58:36.337 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:58:36 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e397 do_prune osdmap full prune enabled
Oct 10 23:58:36 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e398 e398: 3 total, 3 up, 3 in
Oct 10 23:58:36 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e398: 3 total, 3 up, 3 in
Oct 10 23:58:36 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:58:36 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2523952441' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:58:36 np0005480824 nova_compute[260089]: 2025-10-11 03:58:36.773 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:58:36 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1563: 321 pgs: 321 active+clean; 88 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 3.4 KiB/s wr, 53 op/s
Oct 10 23:58:37 np0005480824 nova_compute[260089]: 2025-10-11 03:58:37.013 2 WARNING nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 10 23:58:37 np0005480824 nova_compute[260089]: 2025-10-11 03:58:37.016 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4482MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 10 23:58:37 np0005480824 nova_compute[260089]: 2025-10-11 03:58:37.016 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:58:37 np0005480824 nova_compute[260089]: 2025-10-11 03:58:37.017 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:58:37 np0005480824 nova_compute[260089]: 2025-10-11 03:58:37.089 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 10 23:58:37 np0005480824 nova_compute[260089]: 2025-10-11 03:58:37.089 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 10 23:58:37 np0005480824 nova_compute[260089]: 2025-10-11 03:58:37.106 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:58:37 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:58:37 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3669409801' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:58:37 np0005480824 nova_compute[260089]: 2025-10-11 03:58:37.565 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:58:37 np0005480824 nova_compute[260089]: 2025-10-11 03:58:37.571 2 DEBUG nova.compute.provider_tree [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 23:58:37 np0005480824 nova_compute[260089]: 2025-10-11 03:58:37.587 2 DEBUG nova.scheduler.client.report [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 23:58:37 np0005480824 nova_compute[260089]: 2025-10-11 03:58:37.608 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 10 23:58:37 np0005480824 nova_compute[260089]: 2025-10-11 03:58:37.609 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.592s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:58:37 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:58:37 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1012224637' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:58:37 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:58:37 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1012224637' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:58:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 23:58:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:58:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 23:58:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:58:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 23:58:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:58:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00034720526470013676 of space, bias 1.0, pg target 0.10416157941004103 quantized to 32 (current 32)
Oct 10 23:58:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:58:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 23:58:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:58:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Oct 10 23:58:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:58:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 10 23:58:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:58:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:58:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:58:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 10 23:58:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:58:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 10 23:58:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:58:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:58:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:58:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 10 23:58:38 np0005480824 ceph-mon[74326]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 10 23:58:38 np0005480824 ceph-mon[74326]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.0 total, 600.0 interval#012Cumulative writes: 6965 writes, 31K keys, 6965 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s#012Cumulative WAL: 6965 writes, 6965 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2169 writes, 10K keys, 2169 commit groups, 1.0 writes per commit group, ingest: 12.55 MB, 0.02 MB/s#012Interval WAL: 2169 writes, 2169 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    117.5      0.31              0.14        17    0.018       0      0       0.0       0.0#012  L6      1/0    9.98 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.4    157.9    129.9      0.96              0.51        16    0.060     79K   9525       0.0       0.0#012 Sum      1/0    9.98 MB   0.0      0.1     0.0      0.1       0.2      0.0       0.0   4.4    119.0    126.9      1.28              0.65        33    0.039     79K   9525       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   4.7    113.1    118.8      0.51              0.25        10    0.051     31K   3730       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    157.9    129.9      0.96              0.51        16    0.060     79K   9525       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    119.0      0.31              0.14        16    0.019       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.2      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 2400.0 total, 600.0 interval#012Flush(GB): cumulative 0.036, interval 0.012#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.16 GB write, 0.07 MB/s write, 0.15 GB read, 0.06 MB/s read, 1.3 seconds#012Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.5 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5617dbc851f0#2 capacity: 304.00 MB usage: 17.21 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000162 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1162,16.55 MB,5.44397%) FilterBlock(34,231.61 KB,0.0744017%) IndexBlock(34,447.42 KB,0.143729%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct 10 23:58:38 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1564: 321 pgs: 321 active+clean; 88 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 3.3 KiB/s wr, 69 op/s
Oct 10 23:58:39 np0005480824 nova_compute[260089]: 2025-10-11 03:58:39.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:58:39 np0005480824 nova_compute[260089]: 2025-10-11 03:58:39.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:58:40 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e398 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:58:40 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e398 do_prune osdmap full prune enabled
Oct 10 23:58:40 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e399 e399: 3 total, 3 up, 3 in
Oct 10 23:58:40 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e399: 3 total, 3 up, 3 in
Oct 10 23:58:40 np0005480824 nova_compute[260089]: 2025-10-11 03:58:40.609 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:58:40 np0005480824 nova_compute[260089]: 2025-10-11 03:58:40.610 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 10 23:58:40 np0005480824 nova_compute[260089]: 2025-10-11 03:58:40.644 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 10 23:58:40 np0005480824 nova_compute[260089]: 2025-10-11 03:58:40.644 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:58:40 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1566: 321 pgs: 321 active+clean; 88 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 3.3 KiB/s wr, 69 op/s
Oct 10 23:58:42 np0005480824 podman[293008]: 2025-10-11 03:58:42.080786436 +0000 UTC m=+0.139717825 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team)
Oct 10 23:58:42 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e399 do_prune osdmap full prune enabled
Oct 10 23:58:42 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e400 e400: 3 total, 3 up, 3 in
Oct 10 23:58:42 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e400: 3 total, 3 up, 3 in
Oct 10 23:58:42 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1568: 321 pgs: 321 active+clean; 88 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 89 KiB/s rd, 5.3 KiB/s wr, 119 op/s
Oct 10 23:58:42 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:58:42 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:58:42 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 10 23:58:42 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:58:42 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 10 23:58:43 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:58:43 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev c0ecf12c-04c8-482d-83ad-4d6a6dac72af does not exist
Oct 10 23:58:43 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev eb6c9cfd-abae-411c-977b-3cdccc2cbec2 does not exist
Oct 10 23:58:43 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev b7db19f7-3f69-4ee8-9c57-3ab6c91cbd60 does not exist
Oct 10 23:58:43 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 10 23:58:43 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 23:58:43 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 10 23:58:43 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:58:43 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:58:43 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:58:43 np0005480824 nova_compute[260089]: 2025-10-11 03:58:43.297 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:58:43 np0005480824 podman[293306]: 2025-10-11 03:58:43.595505254 +0000 UTC m=+0.064989417 container create df67dea267a94df158ae3eba26dbc1cd3cbd3533e3e387391030c2c5333d1519 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_nash, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef)
Oct 10 23:58:43 np0005480824 podman[293306]: 2025-10-11 03:58:43.552156317 +0000 UTC m=+0.021640510 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:58:43 np0005480824 systemd[1]: Started libpod-conmon-df67dea267a94df158ae3eba26dbc1cd3cbd3533e3e387391030c2c5333d1519.scope.
Oct 10 23:58:43 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:58:43 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:58:43 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:58:43 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:58:43 np0005480824 podman[293306]: 2025-10-11 03:58:43.768522814 +0000 UTC m=+0.238007017 container init df67dea267a94df158ae3eba26dbc1cd3cbd3533e3e387391030c2c5333d1519 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_nash, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 10 23:58:43 np0005480824 podman[293306]: 2025-10-11 03:58:43.780984301 +0000 UTC m=+0.250468474 container start df67dea267a94df158ae3eba26dbc1cd3cbd3533e3e387391030c2c5333d1519 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_nash, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 10 23:58:43 np0005480824 adoring_nash[293322]: 167 167
Oct 10 23:58:43 np0005480824 systemd[1]: libpod-df67dea267a94df158ae3eba26dbc1cd3cbd3533e3e387391030c2c5333d1519.scope: Deactivated successfully.
Oct 10 23:58:43 np0005480824 conmon[293322]: conmon df67dea267a94df158ae <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-df67dea267a94df158ae3eba26dbc1cd3cbd3533e3e387391030c2c5333d1519.scope/container/memory.events
Oct 10 23:58:43 np0005480824 podman[293306]: 2025-10-11 03:58:43.834587635 +0000 UTC m=+0.304071828 container attach df67dea267a94df158ae3eba26dbc1cd3cbd3533e3e387391030c2c5333d1519 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_nash, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:58:43 np0005480824 podman[293306]: 2025-10-11 03:58:43.836479098 +0000 UTC m=+0.305963321 container died df67dea267a94df158ae3eba26dbc1cd3cbd3533e3e387391030c2c5333d1519 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:58:44 np0005480824 systemd[1]: var-lib-containers-storage-overlay-f9197f99960a8c47fe140fd664af23cde5dda5b3ab0840f6fb5168070878bc24-merged.mount: Deactivated successfully.
Oct 10 23:58:44 np0005480824 nova_compute[260089]: 2025-10-11 03:58:44.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:58:44 np0005480824 podman[293306]: 2025-10-11 03:58:44.043609953 +0000 UTC m=+0.513094136 container remove df67dea267a94df158ae3eba26dbc1cd3cbd3533e3e387391030c2c5333d1519 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 10 23:58:44 np0005480824 systemd[1]: libpod-conmon-df67dea267a94df158ae3eba26dbc1cd3cbd3533e3e387391030c2c5333d1519.scope: Deactivated successfully.
Oct 10 23:58:44 np0005480824 nova_compute[260089]: 2025-10-11 03:58:44.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:58:44 np0005480824 podman[293348]: 2025-10-11 03:58:44.260576805 +0000 UTC m=+0.058628120 container create 3e52a0d6a6f419f24d5d2dcf814bc0b4216e7960ad5a1cde4afeba57b215f5dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_vaughan, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 10 23:58:44 np0005480824 podman[293348]: 2025-10-11 03:58:44.229223634 +0000 UTC m=+0.027274989 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:58:44 np0005480824 systemd[1]: Started libpod-conmon-3e52a0d6a6f419f24d5d2dcf814bc0b4216e7960ad5a1cde4afeba57b215f5dd.scope.
Oct 10 23:58:44 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:58:44 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e60c7e7d660ff22254b1d91478af744786378e09e8b22c5364c8d035afc28be/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:58:44 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e60c7e7d660ff22254b1d91478af744786378e09e8b22c5364c8d035afc28be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:58:44 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e60c7e7d660ff22254b1d91478af744786378e09e8b22c5364c8d035afc28be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:58:44 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e60c7e7d660ff22254b1d91478af744786378e09e8b22c5364c8d035afc28be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:58:44 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e60c7e7d660ff22254b1d91478af744786378e09e8b22c5364c8d035afc28be/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 23:58:44 np0005480824 podman[293348]: 2025-10-11 03:58:44.429757307 +0000 UTC m=+0.227808682 container init 3e52a0d6a6f419f24d5d2dcf814bc0b4216e7960ad5a1cde4afeba57b215f5dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_vaughan, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:58:44 np0005480824 podman[293348]: 2025-10-11 03:58:44.44340843 +0000 UTC m=+0.241459745 container start 3e52a0d6a6f419f24d5d2dcf814bc0b4216e7960ad5a1cde4afeba57b215f5dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_vaughan, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Oct 10 23:58:44 np0005480824 podman[293348]: 2025-10-11 03:58:44.482799517 +0000 UTC m=+0.280850862 container attach 3e52a0d6a6f419f24d5d2dcf814bc0b4216e7960ad5a1cde4afeba57b215f5dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_vaughan, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 10 23:58:44 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1569: 321 pgs: 321 active+clean; 88 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 2.2 KiB/s wr, 64 op/s
Oct 10 23:58:45 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:58:45 np0005480824 silly_vaughan[293365]: --> passed data devices: 0 physical, 3 LVM
Oct 10 23:58:45 np0005480824 silly_vaughan[293365]: --> relative data size: 1.0
Oct 10 23:58:45 np0005480824 silly_vaughan[293365]: --> All data devices are unavailable
Oct 10 23:58:45 np0005480824 systemd[1]: libpod-3e52a0d6a6f419f24d5d2dcf814bc0b4216e7960ad5a1cde4afeba57b215f5dd.scope: Deactivated successfully.
Oct 10 23:58:45 np0005480824 systemd[1]: libpod-3e52a0d6a6f419f24d5d2dcf814bc0b4216e7960ad5a1cde4afeba57b215f5dd.scope: Consumed 1.136s CPU time.
Oct 10 23:58:45 np0005480824 podman[293348]: 2025-10-11 03:58:45.695236581 +0000 UTC m=+1.493287936 container died 3e52a0d6a6f419f24d5d2dcf814bc0b4216e7960ad5a1cde4afeba57b215f5dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_vaughan, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 10 23:58:46 np0005480824 systemd[1]: var-lib-containers-storage-overlay-2e60c7e7d660ff22254b1d91478af744786378e09e8b22c5364c8d035afc28be-merged.mount: Deactivated successfully.
Oct 10 23:58:46 np0005480824 podman[293348]: 2025-10-11 03:58:46.485717636 +0000 UTC m=+2.283768981 container remove 3e52a0d6a6f419f24d5d2dcf814bc0b4216e7960ad5a1cde4afeba57b215f5dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 10 23:58:46 np0005480824 systemd[1]: libpod-conmon-3e52a0d6a6f419f24d5d2dcf814bc0b4216e7960ad5a1cde4afeba57b215f5dd.scope: Deactivated successfully.
Oct 10 23:58:46 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1570: 321 pgs: 321 active+clean; 88 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 2.0 KiB/s wr, 46 op/s
Oct 10 23:58:46 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:58:46 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4151049048' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:58:46 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:58:46 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4151049048' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:58:47 np0005480824 podman[293546]: 2025-10-11 03:58:47.262468456 +0000 UTC m=+0.028320613 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:58:47 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:58:47 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3896195944' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:58:47 np0005480824 podman[293546]: 2025-10-11 03:58:47.478406613 +0000 UTC m=+0.244258750 container create 6dbbef34efa46760283cd1c9b9c7d9ffaf994e29240908ee15476bbb5e6cb3b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_austin, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:58:47 np0005480824 systemd[1]: Started libpod-conmon-6dbbef34efa46760283cd1c9b9c7d9ffaf994e29240908ee15476bbb5e6cb3b7.scope.
Oct 10 23:58:47 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:58:47 np0005480824 podman[293546]: 2025-10-11 03:58:47.762963089 +0000 UTC m=+0.528815306 container init 6dbbef34efa46760283cd1c9b9c7d9ffaf994e29240908ee15476bbb5e6cb3b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_austin, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 10 23:58:47 np0005480824 podman[293546]: 2025-10-11 03:58:47.775886258 +0000 UTC m=+0.541738425 container start 6dbbef34efa46760283cd1c9b9c7d9ffaf994e29240908ee15476bbb5e6cb3b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_austin, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 10 23:58:47 np0005480824 practical_austin[293568]: 167 167
Oct 10 23:58:47 np0005480824 systemd[1]: libpod-6dbbef34efa46760283cd1c9b9c7d9ffaf994e29240908ee15476bbb5e6cb3b7.scope: Deactivated successfully.
Oct 10 23:58:47 np0005480824 podman[293546]: 2025-10-11 03:58:47.880098615 +0000 UTC m=+0.645950832 container attach 6dbbef34efa46760283cd1c9b9c7d9ffaf994e29240908ee15476bbb5e6cb3b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_austin, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:58:47 np0005480824 podman[293546]: 2025-10-11 03:58:47.880818132 +0000 UTC m=+0.646670309 container died 6dbbef34efa46760283cd1c9b9c7d9ffaf994e29240908ee15476bbb5e6cb3b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_austin, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:58:48 np0005480824 systemd[1]: var-lib-containers-storage-overlay-358373cf0cf1c65b63814380152ebb3edb3db0605914e1757f9cbcf646225b58-merged.mount: Deactivated successfully.
Oct 10 23:58:48 np0005480824 podman[293546]: 2025-10-11 03:58:48.244511098 +0000 UTC m=+1.010363245 container remove 6dbbef34efa46760283cd1c9b9c7d9ffaf994e29240908ee15476bbb5e6cb3b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_austin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:58:48 np0005480824 systemd[1]: libpod-conmon-6dbbef34efa46760283cd1c9b9c7d9ffaf994e29240908ee15476bbb5e6cb3b7.scope: Deactivated successfully.
Oct 10 23:58:48 np0005480824 podman[293560]: 2025-10-11 03:58:48.344193471 +0000 UTC m=+0.819700748 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 10 23:58:48 np0005480824 podman[293606]: 2025-10-11 03:58:48.437411466 +0000 UTC m=+0.076995142 container create 811f54e680002b68086f021aed4e80666fe0efce36ff8a5ce236598c2a62dfac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 10 23:58:48 np0005480824 podman[293606]: 2025-10-11 03:58:48.385740997 +0000 UTC m=+0.025324753 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:58:48 np0005480824 systemd[1]: Started libpod-conmon-811f54e680002b68086f021aed4e80666fe0efce36ff8a5ce236598c2a62dfac.scope.
Oct 10 23:58:48 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:58:48 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e7b07d47a1a035ec5f1cebca62025dde8bf36d2867e0498430d0f730ef54793/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:58:48 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e7b07d47a1a035ec5f1cebca62025dde8bf36d2867e0498430d0f730ef54793/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:58:48 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e7b07d47a1a035ec5f1cebca62025dde8bf36d2867e0498430d0f730ef54793/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:58:48 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e7b07d47a1a035ec5f1cebca62025dde8bf36d2867e0498430d0f730ef54793/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:58:48 np0005480824 podman[293606]: 2025-10-11 03:58:48.5580078 +0000 UTC m=+0.197591476 container init 811f54e680002b68086f021aed4e80666fe0efce36ff8a5ce236598c2a62dfac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_robinson, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 10 23:58:48 np0005480824 podman[293606]: 2025-10-11 03:58:48.571957251 +0000 UTC m=+0.211540957 container start 811f54e680002b68086f021aed4e80666fe0efce36ff8a5ce236598c2a62dfac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_robinson, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 10 23:58:48 np0005480824 podman[293606]: 2025-10-11 03:58:48.590833646 +0000 UTC m=+0.230417342 container attach 811f54e680002b68086f021aed4e80666fe0efce36ff8a5ce236598c2a62dfac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 10 23:58:48 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1571: 321 pgs: 321 active+clean; 88 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 2.9 KiB/s wr, 63 op/s
Oct 10 23:58:49 np0005480824 nova_compute[260089]: 2025-10-11 03:58:49.033 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:58:49 np0005480824 nova_compute[260089]: 2025-10-11 03:58:49.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:58:49 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e400 do_prune osdmap full prune enabled
Oct 10 23:58:49 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e401 e401: 3 total, 3 up, 3 in
Oct 10 23:58:49 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e401: 3 total, 3 up, 3 in
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]: {
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:    "0": [
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:        {
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:            "devices": [
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:                "/dev/loop3"
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:            ],
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:            "lv_name": "ceph_lv0",
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:            "lv_size": "21470642176",
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0d82ce-20ea-470d-959e-f67202028a60,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:            "lv_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:            "name": "ceph_lv0",
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:            "tags": {
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:                "ceph.block_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:                "ceph.cluster_name": "ceph",
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:                "ceph.crush_device_class": "",
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:                "ceph.encrypted": "0",
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:                "ceph.osd_fsid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:                "ceph.osd_id": "0",
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:                "ceph.type": "block",
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:                "ceph.vdo": "0"
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:            },
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:            "type": "block",
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:            "vg_name": "ceph_vg0"
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:        }
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:    ],
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:    "1": [
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:        {
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:            "devices": [
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:                "/dev/loop4"
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:            ],
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:            "lv_name": "ceph_lv1",
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:            "lv_size": "21470642176",
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6875119e-c210-4ad1-aca9-6a8084a5ecc8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:            "lv_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:            "name": "ceph_lv1",
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:            "tags": {
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:                "ceph.block_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:                "ceph.cluster_name": "ceph",
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:                "ceph.crush_device_class": "",
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:                "ceph.encrypted": "0",
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:                "ceph.osd_fsid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:                "ceph.osd_id": "1",
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:                "ceph.type": "block",
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:                "ceph.vdo": "0"
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:            },
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:            "type": "block",
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:            "vg_name": "ceph_vg1"
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:        }
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:    ],
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:    "2": [
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:        {
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:            "devices": [
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:                "/dev/loop5"
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:            ],
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:            "lv_name": "ceph_lv2",
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:            "lv_size": "21470642176",
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e86945e8-6909-4584-9098-cee0dfe9add4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:            "lv_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:            "name": "ceph_lv2",
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:            "tags": {
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:                "ceph.block_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:                "ceph.cluster_name": "ceph",
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:                "ceph.crush_device_class": "",
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:                "ceph.encrypted": "0",
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:                "ceph.osd_fsid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:                "ceph.osd_id": "2",
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:                "ceph.type": "block",
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:                "ceph.vdo": "0"
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:            },
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:            "type": "block",
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:            "vg_name": "ceph_vg2"
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:        }
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]:    ]
Oct 10 23:58:49 np0005480824 priceless_robinson[293624]: }
Oct 10 23:58:49 np0005480824 podman[293606]: 2025-10-11 03:58:49.403627794 +0000 UTC m=+1.043211510 container died 811f54e680002b68086f021aed4e80666fe0efce36ff8a5ce236598c2a62dfac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_robinson, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:58:49 np0005480824 systemd[1]: libpod-811f54e680002b68086f021aed4e80666fe0efce36ff8a5ce236598c2a62dfac.scope: Deactivated successfully.
Oct 10 23:58:49 np0005480824 systemd[1]: var-lib-containers-storage-overlay-1e7b07d47a1a035ec5f1cebca62025dde8bf36d2867e0498430d0f730ef54793-merged.mount: Deactivated successfully.
Oct 10 23:58:49 np0005480824 podman[293606]: 2025-10-11 03:58:49.463337168 +0000 UTC m=+1.102920834 container remove 811f54e680002b68086f021aed4e80666fe0efce36ff8a5ce236598c2a62dfac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_robinson, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:58:49 np0005480824 systemd[1]: libpod-conmon-811f54e680002b68086f021aed4e80666fe0efce36ff8a5ce236598c2a62dfac.scope: Deactivated successfully.
Oct 10 23:58:49 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:58:49 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/705093031' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:58:49 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:58:49 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/705093031' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:58:50 np0005480824 podman[293786]: 2025-10-11 03:58:50.131822887 +0000 UTC m=+0.040192045 container create 8faf9009da4c7a356e66fe69fba39b8100c7ce9569da366798ac56560152de50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_dubinsky, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:58:50 np0005480824 systemd[1]: Started libpod-conmon-8faf9009da4c7a356e66fe69fba39b8100c7ce9569da366798ac56560152de50.scope.
Oct 10 23:58:50 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:58:50 np0005480824 podman[293786]: 2025-10-11 03:58:50.114418677 +0000 UTC m=+0.022787855 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:58:50 np0005480824 podman[293786]: 2025-10-11 03:58:50.218286126 +0000 UTC m=+0.126655314 container init 8faf9009da4c7a356e66fe69fba39b8100c7ce9569da366798ac56560152de50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 10 23:58:50 np0005480824 podman[293786]: 2025-10-11 03:58:50.225628145 +0000 UTC m=+0.133997313 container start 8faf9009da4c7a356e66fe69fba39b8100c7ce9569da366798ac56560152de50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:58:50 np0005480824 podman[293786]: 2025-10-11 03:58:50.229535425 +0000 UTC m=+0.137904613 container attach 8faf9009da4c7a356e66fe69fba39b8100c7ce9569da366798ac56560152de50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_dubinsky, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:58:50 np0005480824 pensive_dubinsky[293802]: 167 167
Oct 10 23:58:50 np0005480824 systemd[1]: libpod-8faf9009da4c7a356e66fe69fba39b8100c7ce9569da366798ac56560152de50.scope: Deactivated successfully.
Oct 10 23:58:50 np0005480824 podman[293786]: 2025-10-11 03:58:50.234683934 +0000 UTC m=+0.143053132 container died 8faf9009da4c7a356e66fe69fba39b8100c7ce9569da366798ac56560152de50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_dubinsky, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 10 23:58:50 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e401 do_prune osdmap full prune enabled
Oct 10 23:58:50 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e402 e402: 3 total, 3 up, 3 in
Oct 10 23:58:50 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e402: 3 total, 3 up, 3 in
Oct 10 23:58:50 np0005480824 systemd[1]: var-lib-containers-storage-overlay-14d3dd067a786177a4a22cdc6e9bf8813572a44406d1bce088a6c04ecca3ccb3-merged.mount: Deactivated successfully.
Oct 10 23:58:50 np0005480824 podman[293786]: 2025-10-11 03:58:50.332670348 +0000 UTC m=+0.241039506 container remove 8faf9009da4c7a356e66fe69fba39b8100c7ce9569da366798ac56560152de50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_dubinsky, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 10 23:58:50 np0005480824 systemd[1]: libpod-conmon-8faf9009da4c7a356e66fe69fba39b8100c7ce9569da366798ac56560152de50.scope: Deactivated successfully.
Oct 10 23:58:50 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:58:50 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e402 do_prune osdmap full prune enabled
Oct 10 23:58:50 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e403 e403: 3 total, 3 up, 3 in
Oct 10 23:58:50 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e403: 3 total, 3 up, 3 in
Oct 10 23:58:50 np0005480824 podman[293830]: 2025-10-11 03:58:50.502641838 +0000 UTC m=+0.041578628 container create 72a8a483b98c11ba5f861790b48a92fe5df44aadb2653b31176506a07c92e4b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 10 23:58:50 np0005480824 systemd[1]: Started libpod-conmon-72a8a483b98c11ba5f861790b48a92fe5df44aadb2653b31176506a07c92e4b5.scope.
Oct 10 23:58:50 np0005480824 podman[293830]: 2025-10-11 03:58:50.483394905 +0000 UTC m=+0.022331725 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:58:50 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:58:50 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dcc33fb15f6277ac73f3c4a7165bfb68402972b6852fb831c6a0ed06aff6512/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:58:50 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dcc33fb15f6277ac73f3c4a7165bfb68402972b6852fb831c6a0ed06aff6512/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:58:50 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dcc33fb15f6277ac73f3c4a7165bfb68402972b6852fb831c6a0ed06aff6512/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:58:50 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dcc33fb15f6277ac73f3c4a7165bfb68402972b6852fb831c6a0ed06aff6512/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:58:50 np0005480824 podman[293830]: 2025-10-11 03:58:50.598457663 +0000 UTC m=+0.137394473 container init 72a8a483b98c11ba5f861790b48a92fe5df44aadb2653b31176506a07c92e4b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 10 23:58:50 np0005480824 podman[293830]: 2025-10-11 03:58:50.611225347 +0000 UTC m=+0.150162127 container start 72a8a483b98c11ba5f861790b48a92fe5df44aadb2653b31176506a07c92e4b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_agnesi, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:58:50 np0005480824 podman[293830]: 2025-10-11 03:58:50.614476221 +0000 UTC m=+0.153413051 container attach 72a8a483b98c11ba5f861790b48a92fe5df44aadb2653b31176506a07c92e4b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:58:50 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1575: 321 pgs: 321 active+clean; 88 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 KiB/s wr, 34 op/s
Oct 10 23:58:51 np0005480824 elated_agnesi[293846]: {
Oct 10 23:58:51 np0005480824 elated_agnesi[293846]:    "1d0d82ce-20ea-470d-959e-f67202028a60": {
Oct 10 23:58:51 np0005480824 elated_agnesi[293846]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:58:51 np0005480824 elated_agnesi[293846]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 10 23:58:51 np0005480824 elated_agnesi[293846]:        "osd_id": 0,
Oct 10 23:58:51 np0005480824 elated_agnesi[293846]:        "osd_uuid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:58:51 np0005480824 elated_agnesi[293846]:        "type": "bluestore"
Oct 10 23:58:51 np0005480824 elated_agnesi[293846]:    },
Oct 10 23:58:51 np0005480824 elated_agnesi[293846]:    "6875119e-c210-4ad1-aca9-6a8084a5ecc8": {
Oct 10 23:58:51 np0005480824 elated_agnesi[293846]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:58:51 np0005480824 elated_agnesi[293846]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 10 23:58:51 np0005480824 elated_agnesi[293846]:        "osd_id": 1,
Oct 10 23:58:51 np0005480824 elated_agnesi[293846]:        "osd_uuid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:58:51 np0005480824 elated_agnesi[293846]:        "type": "bluestore"
Oct 10 23:58:51 np0005480824 elated_agnesi[293846]:    },
Oct 10 23:58:51 np0005480824 elated_agnesi[293846]:    "e86945e8-6909-4584-9098-cee0dfe9add4": {
Oct 10 23:58:51 np0005480824 elated_agnesi[293846]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:58:51 np0005480824 elated_agnesi[293846]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 10 23:58:51 np0005480824 elated_agnesi[293846]:        "osd_id": 2,
Oct 10 23:58:51 np0005480824 elated_agnesi[293846]:        "osd_uuid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:58:51 np0005480824 elated_agnesi[293846]:        "type": "bluestore"
Oct 10 23:58:51 np0005480824 elated_agnesi[293846]:    }
Oct 10 23:58:51 np0005480824 elated_agnesi[293846]: }
Oct 10 23:58:51 np0005480824 systemd[1]: libpod-72a8a483b98c11ba5f861790b48a92fe5df44aadb2653b31176506a07c92e4b5.scope: Deactivated successfully.
Oct 10 23:58:51 np0005480824 systemd[1]: libpod-72a8a483b98c11ba5f861790b48a92fe5df44aadb2653b31176506a07c92e4b5.scope: Consumed 1.011s CPU time.
Oct 10 23:58:51 np0005480824 conmon[293846]: conmon 72a8a483b98c11ba5f86 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-72a8a483b98c11ba5f861790b48a92fe5df44aadb2653b31176506a07c92e4b5.scope/container/memory.events
Oct 10 23:58:51 np0005480824 podman[293879]: 2025-10-11 03:58:51.661521449 +0000 UTC m=+0.028544208 container died 72a8a483b98c11ba5f861790b48a92fe5df44aadb2653b31176506a07c92e4b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:58:51 np0005480824 systemd[1]: var-lib-containers-storage-overlay-6dcc33fb15f6277ac73f3c4a7165bfb68402972b6852fb831c6a0ed06aff6512-merged.mount: Deactivated successfully.
Oct 10 23:58:51 np0005480824 podman[293879]: 2025-10-11 03:58:51.735160033 +0000 UTC m=+0.102182752 container remove 72a8a483b98c11ba5f861790b48a92fe5df44aadb2653b31176506a07c92e4b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_agnesi, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 10 23:58:51 np0005480824 systemd[1]: libpod-conmon-72a8a483b98c11ba5f861790b48a92fe5df44aadb2653b31176506a07c92e4b5.scope: Deactivated successfully.
Oct 10 23:58:51 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:58:51 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3215860411' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:58:51 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 10 23:58:51 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:58:51 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 10 23:58:51 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:58:51 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev d8b1530d-971d-43e8-8d07-193874e7823e does not exist
Oct 10 23:58:51 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev fbec5f08-391e-47e5-a636-508b362fb34f does not exist
Oct 10 23:58:52 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:58:52 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:58:52 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e403 do_prune osdmap full prune enabled
Oct 10 23:58:52 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1576: 321 pgs: 321 active+clean; 124 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 104 KiB/s rd, 6.0 MiB/s wr, 142 op/s
Oct 10 23:58:52 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e404 e404: 3 total, 3 up, 3 in
Oct 10 23:58:52 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e404: 3 total, 3 up, 3 in
Oct 10 23:58:53 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:58:53 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1238647235' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:58:53 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:58:53 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1238647235' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:58:54 np0005480824 nova_compute[260089]: 2025-10-11 03:58:54.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:58:54 np0005480824 nova_compute[260089]: 2025-10-11 03:58:54.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:58:54 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1578: 321 pgs: 321 active+clean; 124 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 89 KiB/s rd, 6.4 MiB/s wr, 125 op/s
Oct 10 23:58:55 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:58:55 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:58:55 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2657439416' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:58:55 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:58:55 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2657439416' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:58:55 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:58:55.572 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:30:f4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'fe:89:7c:57:3f:71'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 23:58:55 np0005480824 nova_compute[260089]: 2025-10-11 03:58:55.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:58:55 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:58:55.574 162245 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 10 23:58:56 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1579: 321 pgs: 321 active+clean; 312 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 86 KiB/s rd, 34 MiB/s wr, 126 op/s
Oct 10 23:58:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:58:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:58:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:58:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:58:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:58:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:58:58 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1580: 321 pgs: 321 active+clean; 680 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 177 KiB/s rd, 71 MiB/s wr, 280 op/s
Oct 10 23:58:59 np0005480824 nova_compute[260089]: 2025-10-11 03:58:59.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:58:59 np0005480824 nova_compute[260089]: 2025-10-11 03:58:59.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:59:00 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:59:00 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e404 do_prune osdmap full prune enabled
Oct 10 23:59:00 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e405 e405: 3 total, 3 up, 3 in
Oct 10 23:59:00 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e405: 3 total, 3 up, 3 in
Oct 10 23:59:00 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1582: 321 pgs: 321 active+clean; 680 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 123 KiB/s rd, 70 MiB/s wr, 206 op/s
Oct 10 23:59:01 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e405 do_prune osdmap full prune enabled
Oct 10 23:59:01 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e406 e406: 3 total, 3 up, 3 in
Oct 10 23:59:01 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e406: 3 total, 3 up, 3 in
Oct 10 23:59:02 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:59:02.575 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=14b06507-d00b-4e27-a47d-46a5c2644635, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:59:02 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1584: 321 pgs: 321 active+clean; 1.1 GiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 200 KiB/s rd, 124 MiB/s wr, 341 op/s
Oct 10 23:59:04 np0005480824 nova_compute[260089]: 2025-10-11 03:59:04.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:59:04 np0005480824 nova_compute[260089]: 2025-10-11 03:59:04.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:59:04 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1585: 321 pgs: 321 active+clean; 1.1 GiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 192 KiB/s rd, 100 MiB/s wr, 324 op/s
Oct 10 23:59:05 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:59:05 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4249117941' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:59:05 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e406 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:59:05 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e406 do_prune osdmap full prune enabled
Oct 10 23:59:05 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e407 e407: 3 total, 3 up, 3 in
Oct 10 23:59:05 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e407: 3 total, 3 up, 3 in
Oct 10 23:59:06 np0005480824 podman[293946]: 2025-10-11 03:59:06.023982799 +0000 UTC m=+0.071658920 container health_status 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 10 23:59:06 np0005480824 podman[293947]: 2025-10-11 03:59:06.035580436 +0000 UTC m=+0.083220596 container health_status f2b19cad22d0fbdd185b264c3c8b6443d09a02319dc9fb0585dc69a0f24a758d (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 10 23:59:06 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e407 do_prune osdmap full prune enabled
Oct 10 23:59:06 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e408 e408: 3 total, 3 up, 3 in
Oct 10 23:59:06 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e408: 3 total, 3 up, 3 in
Oct 10 23:59:06 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1588: 321 pgs: 321 active+clean; 1.1 GiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 118 KiB/s rd, 72 MiB/s wr, 201 op/s
Oct 10 23:59:07 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e408 do_prune osdmap full prune enabled
Oct 10 23:59:07 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e409 e409: 3 total, 3 up, 3 in
Oct 10 23:59:07 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e409: 3 total, 3 up, 3 in
Oct 10 23:59:08 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1590: 321 pgs: 321 active+clean; 1.1 GiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 2.3 KiB/s wr, 41 op/s
Oct 10 23:59:09 np0005480824 nova_compute[260089]: 2025-10-11 03:59:09.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:59:09 np0005480824 nova_compute[260089]: 2025-10-11 03:59:09.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:59:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:59:09 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/661651060' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:59:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:59:10.504 162245 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:59:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:59:10.504 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:59:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:59:10.504 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:59:10 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e409 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:59:10 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e409 do_prune osdmap full prune enabled
Oct 10 23:59:10 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e410 e410: 3 total, 3 up, 3 in
Oct 10 23:59:10 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e410: 3 total, 3 up, 3 in
Oct 10 23:59:10 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1592: 321 pgs: 321 active+clean; 1.1 GiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 2.6 KiB/s wr, 46 op/s
Oct 10 23:59:11 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e410 do_prune osdmap full prune enabled
Oct 10 23:59:11 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e411 e411: 3 total, 3 up, 3 in
Oct 10 23:59:11 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e411: 3 total, 3 up, 3 in
Oct 10 23:59:12 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:59:12 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2811163781' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:59:12 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1594: 321 pgs: 321 active+clean; 1.1 GiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 3.5 KiB/s wr, 65 op/s
Oct 10 23:59:12 np0005480824 ovn_controller[152667]: 2025-10-11T03:59:12Z|00201|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Oct 10 23:59:13 np0005480824 podman[293988]: 2025-10-11 03:59:13.032406683 +0000 UTC m=+0.093085922 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 10 23:59:14 np0005480824 nova_compute[260089]: 2025-10-11 03:59:14.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:59:14 np0005480824 nova_compute[260089]: 2025-10-11 03:59:14.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:59:14 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1595: 321 pgs: 321 active+clean; 1.1 GiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 1.8 KiB/s wr, 37 op/s
Oct 10 23:59:15 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:59:16 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1596: 321 pgs: 321 active+clean; 1.2 GiB data, 1.5 GiB used, 59 GiB / 60 GiB avail; 87 KiB/s rd, 16 MiB/s wr, 136 op/s
Oct 10 23:59:18 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1597: 321 pgs: 321 active+clean; 1.5 GiB data, 1.7 GiB used, 58 GiB / 60 GiB avail; 96 KiB/s rd, 51 MiB/s wr, 155 op/s
Oct 10 23:59:19 np0005480824 podman[294016]: 2025-10-11 03:59:19.01989381 +0000 UTC m=+0.075895287 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 10 23:59:19 np0005480824 nova_compute[260089]: 2025-10-11 03:59:19.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:59:19 np0005480824 nova_compute[260089]: 2025-10-11 03:59:19.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:59:20 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:59:20 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1598: 321 pgs: 321 active+clean; 1.5 GiB data, 1.7 GiB used, 58 GiB / 60 GiB avail; 80 KiB/s rd, 42 MiB/s wr, 128 op/s
Oct 10 23:59:22 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1599: 321 pgs: 321 active+clean; 1.9 GiB data, 2.1 GiB used, 58 GiB / 60 GiB avail; 116 KiB/s rd, 70 MiB/s wr, 194 op/s
Oct 10 23:59:24 np0005480824 nova_compute[260089]: 2025-10-11 03:59:24.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:59:24 np0005480824 nova_compute[260089]: 2025-10-11 03:59:24.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:59:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 10 23:59:24 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1077551571' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 23:59:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 10 23:59:24 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1077551571' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 23:59:24 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1600: 321 pgs: 321 active+clean; 1.9 GiB data, 2.1 GiB used, 58 GiB / 60 GiB avail; 92 KiB/s rd, 66 MiB/s wr, 160 op/s
Oct 10 23:59:25 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:59:25 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e411 do_prune osdmap full prune enabled
Oct 10 23:59:25 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e412 e412: 3 total, 3 up, 3 in
Oct 10 23:59:25 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e412: 3 total, 3 up, 3 in
Oct 10 23:59:26 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e412 do_prune osdmap full prune enabled
Oct 10 23:59:26 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e413 e413: 3 total, 3 up, 3 in
Oct 10 23:59:26 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e413: 3 total, 3 up, 3 in
Oct 10 23:59:26 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1603: 321 pgs: 321 active+clean; 2.0 GiB data, 2.2 GiB used, 58 GiB / 60 GiB avail; 101 KiB/s rd, 61 MiB/s wr, 177 op/s
Oct 10 23:59:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:59:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:59:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:59:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:59:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:59:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:59:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Optimize plan auto_2025-10-11_03:59:27
Oct 10 23:59:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 10 23:59:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] do_upmap
Oct 10 23:59:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.control', 'volumes', 'vms', '.mgr', 'images', 'backups', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.meta']
Oct 10 23:59:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] prepared 0/10 changes
Oct 10 23:59:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 10 23:59:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:59:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 10 23:59:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 10 23:59:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:59:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 10 23:59:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:59:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 10 23:59:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:59:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 10 23:59:28 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1604: 321 pgs: 321 active+clean; 2.1 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 130 KiB/s rd, 76 MiB/s wr, 218 op/s
Oct 10 23:59:29 np0005480824 nova_compute[260089]: 2025-10-11 03:59:29.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:59:29 np0005480824 nova_compute[260089]: 2025-10-11 03:59:29.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:59:30 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:59:30 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e413 do_prune osdmap full prune enabled
Oct 10 23:59:30 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e414 e414: 3 total, 3 up, 3 in
Oct 10 23:59:30 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e414: 3 total, 3 up, 3 in
Oct 10 23:59:30 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1606: 321 pgs: 321 active+clean; 2.1 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 88 KiB/s rd, 38 MiB/s wr, 140 op/s
Oct 10 23:59:32 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1607: 321 pgs: 321 active+clean; 2.1 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 2.9 MiB/s rd, 31 MiB/s wr, 130 op/s
Oct 10 23:59:34 np0005480824 nova_compute[260089]: 2025-10-11 03:59:34.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 10 23:59:34 np0005480824 nova_compute[260089]: 2025-10-11 03:59:34.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 10 23:59:34 np0005480824 nova_compute[260089]: 2025-10-11 03:59:34.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 10 23:59:34 np0005480824 nova_compute[260089]: 2025-10-11 03:59:34.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 10 23:59:34 np0005480824 nova_compute[260089]: 2025-10-11 03:59:34.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:59:34 np0005480824 nova_compute[260089]: 2025-10-11 03:59:34.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 10 23:59:34 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1608: 321 pgs: 321 active+clean; 2.1 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 2.6 MiB/s rd, 28 MiB/s wr, 116 op/s
Oct 10 23:59:35 np0005480824 nova_compute[260089]: 2025-10-11 03:59:35.296 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:59:35 np0005480824 nova_compute[260089]: 2025-10-11 03:59:35.297 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:59:35 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:59:36 np0005480824 nova_compute[260089]: 2025-10-11 03:59:36.297 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:59:36 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:59:36 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4013581228' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:59:36 np0005480824 nova_compute[260089]: 2025-10-11 03:59:36.330 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:59:36 np0005480824 nova_compute[260089]: 2025-10-11 03:59:36.330 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:59:36 np0005480824 nova_compute[260089]: 2025-10-11 03:59:36.330 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:59:36 np0005480824 nova_compute[260089]: 2025-10-11 03:59:36.331 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 10 23:59:36 np0005480824 nova_compute[260089]: 2025-10-11 03:59:36.331 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:59:36 np0005480824 nova_compute[260089]: 2025-10-11 03:59:36.655 2 DEBUG oslo_concurrency.lockutils [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Acquiring lock "d5aa10c6-5a8f-419f-8f0d-89bc251d13b1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:59:36 np0005480824 nova_compute[260089]: 2025-10-11 03:59:36.656 2 DEBUG oslo_concurrency.lockutils [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Lock "d5aa10c6-5a8f-419f-8f0d-89bc251d13b1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:59:36 np0005480824 nova_compute[260089]: 2025-10-11 03:59:36.686 2 DEBUG nova.compute.manager [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 10 23:59:36 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:59:36 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3678544392' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:59:36 np0005480824 nova_compute[260089]: 2025-10-11 03:59:36.731 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.400s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:59:36 np0005480824 nova_compute[260089]: 2025-10-11 03:59:36.783 2 DEBUG oslo_concurrency.lockutils [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:59:36 np0005480824 nova_compute[260089]: 2025-10-11 03:59:36.783 2 DEBUG oslo_concurrency.lockutils [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:59:36 np0005480824 nova_compute[260089]: 2025-10-11 03:59:36.790 2 DEBUG nova.virt.hardware [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 10 23:59:36 np0005480824 nova_compute[260089]: 2025-10-11 03:59:36.791 2 INFO nova.compute.claims [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 10 23:59:36 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1609: 321 pgs: 321 active+clean; 2.1 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 2.9 MiB/s rd, 13 MiB/s wr, 57 op/s
Oct 10 23:59:36 np0005480824 nova_compute[260089]: 2025-10-11 03:59:36.899 2 WARNING nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 10 23:59:36 np0005480824 nova_compute[260089]: 2025-10-11 03:59:36.901 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4472MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 10 23:59:36 np0005480824 nova_compute[260089]: 2025-10-11 03:59:36.901 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:59:36 np0005480824 nova_compute[260089]: 2025-10-11 03:59:36.905 2 DEBUG oslo_concurrency.processutils [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:59:36 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e414 do_prune osdmap full prune enabled
Oct 10 23:59:36 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e415 e415: 3 total, 3 up, 3 in
Oct 10 23:59:36 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e415: 3 total, 3 up, 3 in
Oct 10 23:59:37 np0005480824 podman[294058]: 2025-10-11 03:59:37.135567244 +0000 UTC m=+0.074657758 container health_status 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 10 23:59:37 np0005480824 podman[294059]: 2025-10-11 03:59:37.166156148 +0000 UTC m=+0.105303773 container health_status f2b19cad22d0fbdd185b264c3c8b6443d09a02319dc9fb0585dc69a0f24a758d (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 10 23:59:37 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:59:37 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4212121723' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:59:37 np0005480824 nova_compute[260089]: 2025-10-11 03:59:37.377 2 DEBUG oslo_concurrency.processutils [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:59:37 np0005480824 nova_compute[260089]: 2025-10-11 03:59:37.383 2 DEBUG nova.compute.provider_tree [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 23:59:37 np0005480824 nova_compute[260089]: 2025-10-11 03:59:37.404 2 DEBUG nova.scheduler.client.report [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 23:59:37 np0005480824 nova_compute[260089]: 2025-10-11 03:59:37.428 2 DEBUG oslo_concurrency.lockutils [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.645s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:59:37 np0005480824 nova_compute[260089]: 2025-10-11 03:59:37.429 2 DEBUG nova.compute.manager [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 10 23:59:37 np0005480824 nova_compute[260089]: 2025-10-11 03:59:37.431 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.530s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:59:37 np0005480824 nova_compute[260089]: 2025-10-11 03:59:37.490 2 DEBUG nova.compute.manager [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 10 23:59:37 np0005480824 nova_compute[260089]: 2025-10-11 03:59:37.491 2 DEBUG nova.network.neutron [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 10 23:59:37 np0005480824 nova_compute[260089]: 2025-10-11 03:59:37.510 2 INFO nova.virt.libvirt.driver [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 10 23:59:37 np0005480824 nova_compute[260089]: 2025-10-11 03:59:37.513 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Instance d5aa10c6-5a8f-419f-8f0d-89bc251d13b1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 10 23:59:37 np0005480824 nova_compute[260089]: 2025-10-11 03:59:37.513 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 10 23:59:37 np0005480824 nova_compute[260089]: 2025-10-11 03:59:37.513 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 10 23:59:37 np0005480824 nova_compute[260089]: 2025-10-11 03:59:37.530 2 DEBUG nova.compute.manager [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 10 23:59:37 np0005480824 nova_compute[260089]: 2025-10-11 03:59:37.562 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:59:37 np0005480824 nova_compute[260089]: 2025-10-11 03:59:37.629 2 DEBUG nova.compute.manager [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 10 23:59:37 np0005480824 nova_compute[260089]: 2025-10-11 03:59:37.631 2 DEBUG nova.virt.libvirt.driver [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 10 23:59:37 np0005480824 nova_compute[260089]: 2025-10-11 03:59:37.631 2 INFO nova.virt.libvirt.driver [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Creating image(s)#033[00m
Oct 10 23:59:37 np0005480824 nova_compute[260089]: 2025-10-11 03:59:37.657 2 DEBUG nova.storage.rbd_utils [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] rbd image d5aa10c6-5a8f-419f-8f0d-89bc251d13b1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:59:37 np0005480824 nova_compute[260089]: 2025-10-11 03:59:37.682 2 DEBUG nova.storage.rbd_utils [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] rbd image d5aa10c6-5a8f-419f-8f0d-89bc251d13b1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:59:37 np0005480824 nova_compute[260089]: 2025-10-11 03:59:37.706 2 DEBUG nova.storage.rbd_utils [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] rbd image d5aa10c6-5a8f-419f-8f0d-89bc251d13b1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:59:37 np0005480824 nova_compute[260089]: 2025-10-11 03:59:37.710 2 DEBUG oslo_concurrency.processutils [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cfffd1283a157d100c77a9cb8e3d536b83503a4e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:59:37 np0005480824 nova_compute[260089]: 2025-10-11 03:59:37.734 2 DEBUG nova.policy [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5d742fae0903462eaf9109fdb5176357', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4dd4975fff494ac1b725d3dfb95c6006', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 10 23:59:37 np0005480824 nova_compute[260089]: 2025-10-11 03:59:37.770 2 DEBUG oslo_concurrency.processutils [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cfffd1283a157d100c77a9cb8e3d536b83503a4e --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:59:37 np0005480824 nova_compute[260089]: 2025-10-11 03:59:37.770 2 DEBUG oslo_concurrency.lockutils [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Acquiring lock "cfffd1283a157d100c77a9cb8e3d536b83503a4e" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:59:37 np0005480824 nova_compute[260089]: 2025-10-11 03:59:37.771 2 DEBUG oslo_concurrency.lockutils [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Lock "cfffd1283a157d100c77a9cb8e3d536b83503a4e" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:59:37 np0005480824 nova_compute[260089]: 2025-10-11 03:59:37.771 2 DEBUG oslo_concurrency.lockutils [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Lock "cfffd1283a157d100c77a9cb8e3d536b83503a4e" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:59:37 np0005480824 nova_compute[260089]: 2025-10-11 03:59:37.792 2 DEBUG nova.storage.rbd_utils [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] rbd image d5aa10c6-5a8f-419f-8f0d-89bc251d13b1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:59:37 np0005480824 nova_compute[260089]: 2025-10-11 03:59:37.796 2 DEBUG oslo_concurrency.processutils [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/cfffd1283a157d100c77a9cb8e3d536b83503a4e d5aa10c6-5a8f-419f-8f0d-89bc251d13b1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:59:37 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:59:37 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3515567979' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:59:37 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e415 do_prune osdmap full prune enabled
Oct 10 23:59:37 np0005480824 nova_compute[260089]: 2025-10-11 03:59:37.986 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:59:37 np0005480824 nova_compute[260089]: 2025-10-11 03:59:37.992 2 DEBUG nova.compute.provider_tree [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 23:59:38 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e416 e416: 3 total, 3 up, 3 in
Oct 10 23:59:38 np0005480824 nova_compute[260089]: 2025-10-11 03:59:38.008 2 DEBUG nova.scheduler.client.report [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 23:59:38 np0005480824 nova_compute[260089]: 2025-10-11 03:59:38.025 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 10 23:59:38 np0005480824 nova_compute[260089]: 2025-10-11 03:59:38.026 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:59:38 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e416: 3 total, 3 up, 3 in
Oct 10 23:59:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] _maybe_adjust
Oct 10 23:59:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:59:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 10 23:59:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:59:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 10 23:59:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:59:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.03403349246272058 of space, bias 1.0, pg target 10.210047738816176 quantized to 32 (current 32)
Oct 10 23:59:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:59:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.00018549409471250896 of space, bias 1.0, pg target 0.0537932874666276 quantized to 32 (current 32)
Oct 10 23:59:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:59:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.19319111398710687 quantized to 32 (current 32)
Oct 10 23:59:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:59:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0005901217685745913 quantized to 16 (current 32)
Oct 10 23:59:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:59:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:59:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:59:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.376522107182392e-05 quantized to 32 (current 32)
Oct 10 23:59:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:59:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006270043791105033 quantized to 32 (current 32)
Oct 10 23:59:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:59:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 10 23:59:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 10 23:59:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00014753044214364783 quantized to 32 (current 32)
Oct 10 23:59:38 np0005480824 nova_compute[260089]: 2025-10-11 03:59:38.099 2 DEBUG oslo_concurrency.processutils [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/cfffd1283a157d100c77a9cb8e3d536b83503a4e d5aa10c6-5a8f-419f-8f0d-89bc251d13b1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.303s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:59:38 np0005480824 nova_compute[260089]: 2025-10-11 03:59:38.159 2 DEBUG nova.storage.rbd_utils [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] resizing rbd image d5aa10c6-5a8f-419f-8f0d-89bc251d13b1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 10 23:59:38 np0005480824 nova_compute[260089]: 2025-10-11 03:59:38.257 2 DEBUG nova.objects.instance [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Lazy-loading 'migration_context' on Instance uuid d5aa10c6-5a8f-419f-8f0d-89bc251d13b1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:59:38 np0005480824 nova_compute[260089]: 2025-10-11 03:59:38.271 2 DEBUG nova.virt.libvirt.driver [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 10 23:59:38 np0005480824 nova_compute[260089]: 2025-10-11 03:59:38.271 2 DEBUG nova.virt.libvirt.driver [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Ensure instance console log exists: /var/lib/nova/instances/d5aa10c6-5a8f-419f-8f0d-89bc251d13b1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 10 23:59:38 np0005480824 nova_compute[260089]: 2025-10-11 03:59:38.271 2 DEBUG oslo_concurrency.lockutils [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:59:38 np0005480824 nova_compute[260089]: 2025-10-11 03:59:38.272 2 DEBUG oslo_concurrency.lockutils [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:59:38 np0005480824 nova_compute[260089]: 2025-10-11 03:59:38.272 2 DEBUG oslo_concurrency.lockutils [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:59:38 np0005480824 nova_compute[260089]: 2025-10-11 03:59:38.441 2 DEBUG nova.network.neutron [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Successfully created port: bfcdfd4b-fcfe-45df-af5d-b65bf0a23633 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 10 23:59:38 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1612: 321 pgs: 321 active+clean; 2.1 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 6.4 MiB/s rd, 4.1 MiB/s wr, 108 op/s
Oct 10 23:59:39 np0005480824 nova_compute[260089]: 2025-10-11 03:59:39.021 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:59:39 np0005480824 nova_compute[260089]: 2025-10-11 03:59:39.022 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:59:39 np0005480824 nova_compute[260089]: 2025-10-11 03:59:39.022 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:59:39 np0005480824 nova_compute[260089]: 2025-10-11 03:59:39.022 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 10 23:59:39 np0005480824 nova_compute[260089]: 2025-10-11 03:59:39.077 2 DEBUG nova.network.neutron [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Successfully updated port: bfcdfd4b-fcfe-45df-af5d-b65bf0a23633 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 10 23:59:39 np0005480824 nova_compute[260089]: 2025-10-11 03:59:39.091 2 DEBUG oslo_concurrency.lockutils [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Acquiring lock "refresh_cache-d5aa10c6-5a8f-419f-8f0d-89bc251d13b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:59:39 np0005480824 nova_compute[260089]: 2025-10-11 03:59:39.092 2 DEBUG oslo_concurrency.lockutils [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Acquired lock "refresh_cache-d5aa10c6-5a8f-419f-8f0d-89bc251d13b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:59:39 np0005480824 nova_compute[260089]: 2025-10-11 03:59:39.092 2 DEBUG nova.network.neutron [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 10 23:59:39 np0005480824 nova_compute[260089]: 2025-10-11 03:59:39.180 2 DEBUG nova.compute.manager [req-8689c9f7-18b0-4d5c-8b54-ffdf77d1d370 req-7b1983dd-6e45-4874-bd5e-fefec261e1df 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Received event network-changed-bfcdfd4b-fcfe-45df-af5d-b65bf0a23633 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:59:39 np0005480824 nova_compute[260089]: 2025-10-11 03:59:39.180 2 DEBUG nova.compute.manager [req-8689c9f7-18b0-4d5c-8b54-ffdf77d1d370 req-7b1983dd-6e45-4874-bd5e-fefec261e1df 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Refreshing instance network info cache due to event network-changed-bfcdfd4b-fcfe-45df-af5d-b65bf0a23633. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 10 23:59:39 np0005480824 nova_compute[260089]: 2025-10-11 03:59:39.181 2 DEBUG oslo_concurrency.lockutils [req-8689c9f7-18b0-4d5c-8b54-ffdf77d1d370 req-7b1983dd-6e45-4874-bd5e-fefec261e1df 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "refresh_cache-d5aa10c6-5a8f-419f-8f0d-89bc251d13b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:59:39 np0005480824 nova_compute[260089]: 2025-10-11 03:59:39.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 10 23:59:39 np0005480824 nova_compute[260089]: 2025-10-11 03:59:39.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 10 23:59:39 np0005480824 nova_compute[260089]: 2025-10-11 03:59:39.248 2 DEBUG nova.network.neutron [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 10 23:59:39 np0005480824 nova_compute[260089]: 2025-10-11 03:59:39.989 2 DEBUG nova.network.neutron [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Updating instance_info_cache with network_info: [{"id": "bfcdfd4b-fcfe-45df-af5d-b65bf0a23633", "address": "fa:16:3e:91:5e:e0", "network": {"id": "b07c8c86-7240-4ba7-b1d8-b3c98c1e89bc", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-707028039-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4dd4975fff494ac1b725d3dfb95c6006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfcdfd4b-fc", "ovs_interfaceid": "bfcdfd4b-fcfe-45df-af5d-b65bf0a23633", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:59:40 np0005480824 nova_compute[260089]: 2025-10-11 03:59:40.036 2 DEBUG oslo_concurrency.lockutils [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Releasing lock "refresh_cache-d5aa10c6-5a8f-419f-8f0d-89bc251d13b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:59:40 np0005480824 nova_compute[260089]: 2025-10-11 03:59:40.036 2 DEBUG nova.compute.manager [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Instance network_info: |[{"id": "bfcdfd4b-fcfe-45df-af5d-b65bf0a23633", "address": "fa:16:3e:91:5e:e0", "network": {"id": "b07c8c86-7240-4ba7-b1d8-b3c98c1e89bc", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-707028039-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4dd4975fff494ac1b725d3dfb95c6006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfcdfd4b-fc", "ovs_interfaceid": "bfcdfd4b-fcfe-45df-af5d-b65bf0a23633", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 10 23:59:40 np0005480824 nova_compute[260089]: 2025-10-11 03:59:40.036 2 DEBUG oslo_concurrency.lockutils [req-8689c9f7-18b0-4d5c-8b54-ffdf77d1d370 req-7b1983dd-6e45-4874-bd5e-fefec261e1df 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquired lock "refresh_cache-d5aa10c6-5a8f-419f-8f0d-89bc251d13b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:59:40 np0005480824 nova_compute[260089]: 2025-10-11 03:59:40.037 2 DEBUG nova.network.neutron [req-8689c9f7-18b0-4d5c-8b54-ffdf77d1d370 req-7b1983dd-6e45-4874-bd5e-fefec261e1df 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Refreshing network info cache for port bfcdfd4b-fcfe-45df-af5d-b65bf0a23633 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 10 23:59:40 np0005480824 nova_compute[260089]: 2025-10-11 03:59:40.039 2 DEBUG nova.virt.libvirt.driver [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Start _get_guest_xml network_info=[{"id": "bfcdfd4b-fcfe-45df-af5d-b65bf0a23633", "address": "fa:16:3e:91:5e:e0", "network": {"id": "b07c8c86-7240-4ba7-b1d8-b3c98c1e89bc", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-707028039-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4dd4975fff494ac1b725d3dfb95c6006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfcdfd4b-fc", "ovs_interfaceid": "bfcdfd4b-fcfe-45df-af5d-b65bf0a23633", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T03:44:59Z,direct_url=<?>,disk_format='qcow2',id=7caca022-7dcc-40a9-8bd8-eb7d91b29390,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a9b71164a3274fcfb966194e51cb4849',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T03:45:02Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'device_type': 'disk', 'image_id': '7caca022-7dcc-40a9-8bd8-eb7d91b29390'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 10 23:59:40 np0005480824 nova_compute[260089]: 2025-10-11 03:59:40.043 2 WARNING nova.virt.libvirt.driver [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 10 23:59:40 np0005480824 nova_compute[260089]: 2025-10-11 03:59:40.047 2 DEBUG nova.virt.libvirt.host [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 10 23:59:40 np0005480824 nova_compute[260089]: 2025-10-11 03:59:40.048 2 DEBUG nova.virt.libvirt.host [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 10 23:59:40 np0005480824 nova_compute[260089]: 2025-10-11 03:59:40.052 2 DEBUG nova.virt.libvirt.host [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 10 23:59:40 np0005480824 nova_compute[260089]: 2025-10-11 03:59:40.052 2 DEBUG nova.virt.libvirt.host [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 10 23:59:40 np0005480824 nova_compute[260089]: 2025-10-11 03:59:40.053 2 DEBUG nova.virt.libvirt.driver [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 10 23:59:40 np0005480824 nova_compute[260089]: 2025-10-11 03:59:40.053 2 DEBUG nova.virt.hardware [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T03:44:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6707ecae-2ae2-4c2d-86dc-409bac38f6a5',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T03:44:59Z,direct_url=<?>,disk_format='qcow2',id=7caca022-7dcc-40a9-8bd8-eb7d91b29390,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a9b71164a3274fcfb966194e51cb4849',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T03:45:02Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 10 23:59:40 np0005480824 nova_compute[260089]: 2025-10-11 03:59:40.053 2 DEBUG nova.virt.hardware [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 10 23:59:40 np0005480824 nova_compute[260089]: 2025-10-11 03:59:40.053 2 DEBUG nova.virt.hardware [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 10 23:59:40 np0005480824 nova_compute[260089]: 2025-10-11 03:59:40.054 2 DEBUG nova.virt.hardware [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 10 23:59:40 np0005480824 nova_compute[260089]: 2025-10-11 03:59:40.054 2 DEBUG nova.virt.hardware [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 10 23:59:40 np0005480824 nova_compute[260089]: 2025-10-11 03:59:40.054 2 DEBUG nova.virt.hardware [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 10 23:59:40 np0005480824 nova_compute[260089]: 2025-10-11 03:59:40.054 2 DEBUG nova.virt.hardware [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 10 23:59:40 np0005480824 nova_compute[260089]: 2025-10-11 03:59:40.054 2 DEBUG nova.virt.hardware [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 10 23:59:40 np0005480824 nova_compute[260089]: 2025-10-11 03:59:40.055 2 DEBUG nova.virt.hardware [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 10 23:59:40 np0005480824 nova_compute[260089]: 2025-10-11 03:59:40.055 2 DEBUG nova.virt.hardware [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 10 23:59:40 np0005480824 nova_compute[260089]: 2025-10-11 03:59:40.055 2 DEBUG nova.virt.hardware [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 10 23:59:40 np0005480824 nova_compute[260089]: 2025-10-11 03:59:40.057 2 DEBUG oslo_concurrency.processutils [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:59:40 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:59:40 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2011285196' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:59:40 np0005480824 nova_compute[260089]: 2025-10-11 03:59:40.489 2 DEBUG oslo_concurrency.processutils [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:59:40 np0005480824 nova_compute[260089]: 2025-10-11 03:59:40.518 2 DEBUG nova.storage.rbd_utils [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] rbd image d5aa10c6-5a8f-419f-8f0d-89bc251d13b1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:59:40 np0005480824 nova_compute[260089]: 2025-10-11 03:59:40.522 2 DEBUG oslo_concurrency.processutils [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:59:40 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:59:40 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1613: 321 pgs: 321 active+clean; 2.1 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 3.9 MiB/s rd, 4.1 MiB/s wr, 95 op/s
Oct 10 23:59:40 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:59:40 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3155388987' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:59:40 np0005480824 nova_compute[260089]: 2025-10-11 03:59:40.940 2 DEBUG oslo_concurrency.processutils [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:59:40 np0005480824 nova_compute[260089]: 2025-10-11 03:59:40.943 2 DEBUG nova.virt.libvirt.vif [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T03:59:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SnapshotDataIntegrityTests-server-904123251',display_name='tempest-SnapshotDataIntegrityTests-server-904123251',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-snapshotdataintegritytests-server-904123251',id=22,image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKuna6dqBW7XaVzn9KR64NaVEmsQ5ulNl9/aDNcPGKoJrbjwghQAc5yJxj76ka5H3pzcoTC+gcMjG5T/OgM2nFxnE1kE2FMmYCpZF82zIpeYZgF/1YNvbKCgNcN4k8m/JQ==',key_name='tempest-keypair-1580539450',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4dd4975fff494ac1b725d3dfb95c6006',ramdisk_id='',reservation_id='r-lvzdhhy7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SnapshotDataIntegrityTests-1651128782',owner_user_name='tempest-SnapshotDataIntegrityTests-1651128782-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T03:59:37Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5d742fae0903462eaf9109fdb5176357',uuid=d5aa10c6-5a8f-419f-8f0d-89bc251d13b1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bfcdfd4b-fcfe-45df-af5d-b65bf0a23633", "address": "fa:16:3e:91:5e:e0", "network": {"id": "b07c8c86-7240-4ba7-b1d8-b3c98c1e89bc", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-707028039-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4dd4975fff494ac1b725d3dfb95c6006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfcdfd4b-fc", "ovs_interfaceid": "bfcdfd4b-fcfe-45df-af5d-b65bf0a23633", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 10 23:59:40 np0005480824 nova_compute[260089]: 2025-10-11 03:59:40.944 2 DEBUG nova.network.os_vif_util [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Converting VIF {"id": "bfcdfd4b-fcfe-45df-af5d-b65bf0a23633", "address": "fa:16:3e:91:5e:e0", "network": {"id": "b07c8c86-7240-4ba7-b1d8-b3c98c1e89bc", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-707028039-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4dd4975fff494ac1b725d3dfb95c6006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfcdfd4b-fc", "ovs_interfaceid": "bfcdfd4b-fcfe-45df-af5d-b65bf0a23633", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 23:59:40 np0005480824 nova_compute[260089]: 2025-10-11 03:59:40.945 2 DEBUG nova.network.os_vif_util [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:91:5e:e0,bridge_name='br-int',has_traffic_filtering=True,id=bfcdfd4b-fcfe-45df-af5d-b65bf0a23633,network=Network(b07c8c86-7240-4ba7-b1d8-b3c98c1e89bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbfcdfd4b-fc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 23:59:40 np0005480824 nova_compute[260089]: 2025-10-11 03:59:40.947 2 DEBUG nova.objects.instance [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Lazy-loading 'pci_devices' on Instance uuid d5aa10c6-5a8f-419f-8f0d-89bc251d13b1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:59:40 np0005480824 nova_compute[260089]: 2025-10-11 03:59:40.963 2 DEBUG nova.virt.libvirt.driver [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] End _get_guest_xml xml=<domain type="kvm">
Oct 10 23:59:40 np0005480824 nova_compute[260089]:  <uuid>d5aa10c6-5a8f-419f-8f0d-89bc251d13b1</uuid>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:  <name>instance-00000016</name>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:  <memory>131072</memory>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:  <vcpu>1</vcpu>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:  <metadata>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 10 23:59:40 np0005480824 nova_compute[260089]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:      <nova:name>tempest-SnapshotDataIntegrityTests-server-904123251</nova:name>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:      <nova:creationTime>2025-10-11 03:59:40</nova:creationTime>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:      <nova:flavor name="m1.nano">
Oct 10 23:59:40 np0005480824 nova_compute[260089]:        <nova:memory>128</nova:memory>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:        <nova:disk>1</nova:disk>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:        <nova:swap>0</nova:swap>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:        <nova:ephemeral>0</nova:ephemeral>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:        <nova:vcpus>1</nova:vcpus>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:      </nova:flavor>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:      <nova:owner>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:        <nova:user uuid="5d742fae0903462eaf9109fdb5176357">tempest-SnapshotDataIntegrityTests-1651128782-project-member</nova:user>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:        <nova:project uuid="4dd4975fff494ac1b725d3dfb95c6006">tempest-SnapshotDataIntegrityTests-1651128782</nova:project>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:      </nova:owner>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:      <nova:root type="image" uuid="7caca022-7dcc-40a9-8bd8-eb7d91b29390"/>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:      <nova:ports>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:        <nova:port uuid="bfcdfd4b-fcfe-45df-af5d-b65bf0a23633">
Oct 10 23:59:40 np0005480824 nova_compute[260089]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:        </nova:port>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:      </nova:ports>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:    </nova:instance>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:  </metadata>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:  <sysinfo type="smbios">
Oct 10 23:59:40 np0005480824 nova_compute[260089]:    <system>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:      <entry name="manufacturer">RDO</entry>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:      <entry name="product">OpenStack Compute</entry>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:      <entry name="serial">d5aa10c6-5a8f-419f-8f0d-89bc251d13b1</entry>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:      <entry name="uuid">d5aa10c6-5a8f-419f-8f0d-89bc251d13b1</entry>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:      <entry name="family">Virtual Machine</entry>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:    </system>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:  </sysinfo>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:  <os>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:    <boot dev="hd"/>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:    <smbios mode="sysinfo"/>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:  </os>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:  <features>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:    <acpi/>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:    <apic/>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:    <vmcoreinfo/>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:  </features>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:  <clock offset="utc">
Oct 10 23:59:40 np0005480824 nova_compute[260089]:    <timer name="pit" tickpolicy="delay"/>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:    <timer name="hpet" present="no"/>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:  </clock>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:  <cpu mode="host-model" match="exact">
Oct 10 23:59:40 np0005480824 nova_compute[260089]:    <topology sockets="1" cores="1" threads="1"/>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:  </cpu>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:  <devices>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:    <disk type="network" device="disk">
Oct 10 23:59:40 np0005480824 nova_compute[260089]:      <driver type="raw" cache="none"/>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:      <source protocol="rbd" name="vms/d5aa10c6-5a8f-419f-8f0d-89bc251d13b1_disk">
Oct 10 23:59:40 np0005480824 nova_compute[260089]:        <host name="192.168.122.100" port="6789"/>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:      </source>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:      <auth username="openstack">
Oct 10 23:59:40 np0005480824 nova_compute[260089]:        <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:      </auth>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:      <target dev="vda" bus="virtio"/>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:    </disk>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:    <disk type="network" device="cdrom">
Oct 10 23:59:40 np0005480824 nova_compute[260089]:      <driver type="raw" cache="none"/>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:      <source protocol="rbd" name="vms/d5aa10c6-5a8f-419f-8f0d-89bc251d13b1_disk.config">
Oct 10 23:59:40 np0005480824 nova_compute[260089]:        <host name="192.168.122.100" port="6789"/>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:      </source>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:      <auth username="openstack">
Oct 10 23:59:40 np0005480824 nova_compute[260089]:        <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:      </auth>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:      <target dev="sda" bus="sata"/>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:    </disk>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:    <interface type="ethernet">
Oct 10 23:59:40 np0005480824 nova_compute[260089]:      <mac address="fa:16:3e:91:5e:e0"/>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:      <model type="virtio"/>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:      <driver name="vhost" rx_queue_size="512"/>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:      <mtu size="1442"/>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:      <target dev="tapbfcdfd4b-fc"/>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:    </interface>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:    <serial type="pty">
Oct 10 23:59:40 np0005480824 nova_compute[260089]:      <log file="/var/lib/nova/instances/d5aa10c6-5a8f-419f-8f0d-89bc251d13b1/console.log" append="off"/>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:    </serial>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:    <video>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:      <model type="virtio"/>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:    </video>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:    <input type="tablet" bus="usb"/>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:    <rng model="virtio">
Oct 10 23:59:40 np0005480824 nova_compute[260089]:      <backend model="random">/dev/urandom</backend>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:    </rng>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root"/>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:    <controller type="usb" index="0"/>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:    <memballoon model="virtio">
Oct 10 23:59:40 np0005480824 nova_compute[260089]:      <stats period="10"/>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:    </memballoon>
Oct 10 23:59:40 np0005480824 nova_compute[260089]:  </devices>
Oct 10 23:59:40 np0005480824 nova_compute[260089]: </domain>
Oct 10 23:59:40 np0005480824 nova_compute[260089]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 10 23:59:40 np0005480824 nova_compute[260089]: 2025-10-11 03:59:40.964 2 DEBUG nova.compute.manager [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Preparing to wait for external event network-vif-plugged-bfcdfd4b-fcfe-45df-af5d-b65bf0a23633 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 10 23:59:40 np0005480824 nova_compute[260089]: 2025-10-11 03:59:40.964 2 DEBUG oslo_concurrency.lockutils [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Acquiring lock "d5aa10c6-5a8f-419f-8f0d-89bc251d13b1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:59:40 np0005480824 nova_compute[260089]: 2025-10-11 03:59:40.965 2 DEBUG oslo_concurrency.lockutils [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Lock "d5aa10c6-5a8f-419f-8f0d-89bc251d13b1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:59:40 np0005480824 nova_compute[260089]: 2025-10-11 03:59:40.965 2 DEBUG oslo_concurrency.lockutils [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Lock "d5aa10c6-5a8f-419f-8f0d-89bc251d13b1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:59:40 np0005480824 nova_compute[260089]: 2025-10-11 03:59:40.966 2 DEBUG nova.virt.libvirt.vif [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T03:59:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SnapshotDataIntegrityTests-server-904123251',display_name='tempest-SnapshotDataIntegrityTests-server-904123251',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-snapshotdataintegritytests-server-904123251',id=22,image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKuna6dqBW7XaVzn9KR64NaVEmsQ5ulNl9/aDNcPGKoJrbjwghQAc5yJxj76ka5H3pzcoTC+gcMjG5T/OgM2nFxnE1kE2FMmYCpZF82zIpeYZgF/1YNvbKCgNcN4k8m/JQ==',key_name='tempest-keypair-1580539450',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4dd4975fff494ac1b725d3dfb95c6006',ramdisk_id='',reservation_id='r-lvzdhhy7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SnapshotDataIntegrityTests-1651128782',owner_user_name='tempest-SnapshotDataIntegrityTests-1651128782-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T03:59:37Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5d742fae0903462eaf9109fdb5176357',uuid=d5aa10c6-5a8f-419f-8f0d-89bc251d13b1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bfcdfd4b-fcfe-45df-af5d-b65bf0a23633", "address": "fa:16:3e:91:5e:e0", "network": {"id": "b07c8c86-7240-4ba7-b1d8-b3c98c1e89bc", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-707028039-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4dd4975fff494ac1b725d3dfb95c6006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfcdfd4b-fc", "ovs_interfaceid": "bfcdfd4b-fcfe-45df-af5d-b65bf0a23633", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 10 23:59:40 np0005480824 nova_compute[260089]: 2025-10-11 03:59:40.967 2 DEBUG nova.network.os_vif_util [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Converting VIF {"id": "bfcdfd4b-fcfe-45df-af5d-b65bf0a23633", "address": "fa:16:3e:91:5e:e0", "network": {"id": "b07c8c86-7240-4ba7-b1d8-b3c98c1e89bc", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-707028039-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4dd4975fff494ac1b725d3dfb95c6006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfcdfd4b-fc", "ovs_interfaceid": "bfcdfd4b-fcfe-45df-af5d-b65bf0a23633", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 23:59:40 np0005480824 nova_compute[260089]: 2025-10-11 03:59:40.968 2 DEBUG nova.network.os_vif_util [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:91:5e:e0,bridge_name='br-int',has_traffic_filtering=True,id=bfcdfd4b-fcfe-45df-af5d-b65bf0a23633,network=Network(b07c8c86-7240-4ba7-b1d8-b3c98c1e89bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbfcdfd4b-fc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 23:59:40 np0005480824 nova_compute[260089]: 2025-10-11 03:59:40.968 2 DEBUG os_vif [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:91:5e:e0,bridge_name='br-int',has_traffic_filtering=True,id=bfcdfd4b-fcfe-45df-af5d-b65bf0a23633,network=Network(b07c8c86-7240-4ba7-b1d8-b3c98c1e89bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbfcdfd4b-fc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 10 23:59:40 np0005480824 nova_compute[260089]: 2025-10-11 03:59:40.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:59:40 np0005480824 nova_compute[260089]: 2025-10-11 03:59:40.970 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:59:40 np0005480824 nova_compute[260089]: 2025-10-11 03:59:40.971 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 10 23:59:40 np0005480824 nova_compute[260089]: 2025-10-11 03:59:40.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:59:40 np0005480824 nova_compute[260089]: 2025-10-11 03:59:40.976 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbfcdfd4b-fc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:59:40 np0005480824 nova_compute[260089]: 2025-10-11 03:59:40.977 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbfcdfd4b-fc, col_values=(('external_ids', {'iface-id': 'bfcdfd4b-fcfe-45df-af5d-b65bf0a23633', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:91:5e:e0', 'vm-uuid': 'd5aa10c6-5a8f-419f-8f0d-89bc251d13b1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:59:40 np0005480824 nova_compute[260089]: 2025-10-11 03:59:40.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:59:40 np0005480824 NetworkManager[44969]: <info>  [1760155180.9799] manager: (tapbfcdfd4b-fc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/109)
Oct 10 23:59:40 np0005480824 nova_compute[260089]: 2025-10-11 03:59:40.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 10 23:59:40 np0005480824 nova_compute[260089]: 2025-10-11 03:59:40.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:59:40 np0005480824 nova_compute[260089]: 2025-10-11 03:59:40.987 2 INFO os_vif [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:91:5e:e0,bridge_name='br-int',has_traffic_filtering=True,id=bfcdfd4b-fcfe-45df-af5d-b65bf0a23633,network=Network(b07c8c86-7240-4ba7-b1d8-b3c98c1e89bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbfcdfd4b-fc')#033[00m
Oct 10 23:59:41 np0005480824 nova_compute[260089]: 2025-10-11 03:59:41.037 2 DEBUG nova.virt.libvirt.driver [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 23:59:41 np0005480824 nova_compute[260089]: 2025-10-11 03:59:41.038 2 DEBUG nova.virt.libvirt.driver [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 23:59:41 np0005480824 nova_compute[260089]: 2025-10-11 03:59:41.038 2 DEBUG nova.virt.libvirt.driver [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] No VIF found with MAC fa:16:3e:91:5e:e0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 10 23:59:41 np0005480824 nova_compute[260089]: 2025-10-11 03:59:41.039 2 INFO nova.virt.libvirt.driver [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Using config drive#033[00m
Oct 10 23:59:41 np0005480824 nova_compute[260089]: 2025-10-11 03:59:41.056 2 DEBUG nova.storage.rbd_utils [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] rbd image d5aa10c6-5a8f-419f-8f0d-89bc251d13b1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:59:41 np0005480824 nova_compute[260089]: 2025-10-11 03:59:41.297 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:59:41 np0005480824 nova_compute[260089]: 2025-10-11 03:59:41.297 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 10 23:59:41 np0005480824 nova_compute[260089]: 2025-10-11 03:59:41.298 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 10 23:59:41 np0005480824 nova_compute[260089]: 2025-10-11 03:59:41.335 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct 10 23:59:41 np0005480824 nova_compute[260089]: 2025-10-11 03:59:41.336 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 10 23:59:41 np0005480824 nova_compute[260089]: 2025-10-11 03:59:41.336 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:59:42 np0005480824 nova_compute[260089]: 2025-10-11 03:59:42.297 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:59:42 np0005480824 nova_compute[260089]: 2025-10-11 03:59:42.692 2 INFO nova.virt.libvirt.driver [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Creating config drive at /var/lib/nova/instances/d5aa10c6-5a8f-419f-8f0d-89bc251d13b1/disk.config#033[00m
Oct 10 23:59:42 np0005480824 nova_compute[260089]: 2025-10-11 03:59:42.706 2 DEBUG oslo_concurrency.processutils [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d5aa10c6-5a8f-419f-8f0d-89bc251d13b1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvky1pu9a execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:59:42 np0005480824 nova_compute[260089]: 2025-10-11 03:59:42.862 2 DEBUG oslo_concurrency.processutils [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d5aa10c6-5a8f-419f-8f0d-89bc251d13b1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvky1pu9a" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:59:42 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1614: 321 pgs: 321 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 5.3 MiB/s rd, 8.0 MiB/s wr, 145 op/s
Oct 10 23:59:42 np0005480824 nova_compute[260089]: 2025-10-11 03:59:42.900 2 DEBUG nova.storage.rbd_utils [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] rbd image d5aa10c6-5a8f-419f-8f0d-89bc251d13b1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:59:42 np0005480824 nova_compute[260089]: 2025-10-11 03:59:42.904 2 DEBUG oslo_concurrency.processutils [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d5aa10c6-5a8f-419f-8f0d-89bc251d13b1/disk.config d5aa10c6-5a8f-419f-8f0d-89bc251d13b1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:59:43 np0005480824 nova_compute[260089]: 2025-10-11 03:59:43.102 2 DEBUG oslo_concurrency.processutils [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d5aa10c6-5a8f-419f-8f0d-89bc251d13b1/disk.config d5aa10c6-5a8f-419f-8f0d-89bc251d13b1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.198s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:59:43 np0005480824 nova_compute[260089]: 2025-10-11 03:59:43.104 2 INFO nova.virt.libvirt.driver [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Deleting local config drive /var/lib/nova/instances/d5aa10c6-5a8f-419f-8f0d-89bc251d13b1/disk.config because it was imported into RBD.#033[00m
Oct 10 23:59:43 np0005480824 kernel: tapbfcdfd4b-fc: entered promiscuous mode
Oct 10 23:59:43 np0005480824 NetworkManager[44969]: <info>  [1760155183.1867] manager: (tapbfcdfd4b-fc): new Tun device (/org/freedesktop/NetworkManager/Devices/110)
Oct 10 23:59:43 np0005480824 nova_compute[260089]: 2025-10-11 03:59:43.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:59:43 np0005480824 ovn_controller[152667]: 2025-10-11T03:59:43Z|00202|binding|INFO|Claiming lport bfcdfd4b-fcfe-45df-af5d-b65bf0a23633 for this chassis.
Oct 10 23:59:43 np0005480824 ovn_controller[152667]: 2025-10-11T03:59:43Z|00203|binding|INFO|bfcdfd4b-fcfe-45df-af5d-b65bf0a23633: Claiming fa:16:3e:91:5e:e0 10.100.0.3
Oct 10 23:59:43 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:59:43.204 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:91:5e:e0 10.100.0.3'], port_security=['fa:16:3e:91:5e:e0 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'd5aa10c6-5a8f-419f-8f0d-89bc251d13b1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b07c8c86-7240-4ba7-b1d8-b3c98c1e89bc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4dd4975fff494ac1b725d3dfb95c6006', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e4ef8f7d-3ac8-4d30-8829-c4ed9b98b54a e9a34696-927d-4453-87ad-83f2f968d44b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6ab2ca03-a847-453e-af7d-73f5101b8a17, chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], logical_port=bfcdfd4b-fcfe-45df-af5d-b65bf0a23633) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 23:59:43 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:59:43.205 162245 INFO neutron.agent.ovn.metadata.agent [-] Port bfcdfd4b-fcfe-45df-af5d-b65bf0a23633 in datapath b07c8c86-7240-4ba7-b1d8-b3c98c1e89bc bound to our chassis#033[00m
Oct 10 23:59:43 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:59:43.207 162245 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b07c8c86-7240-4ba7-b1d8-b3c98c1e89bc#033[00m
Oct 10 23:59:43 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:59:43.222 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[08015725-3cb7-446c-b873-5df160906d70]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:59:43 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:59:43.223 162245 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb07c8c86-71 in ovnmeta-b07c8c86-7240-4ba7-b1d8-b3c98c1e89bc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 10 23:59:43 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:59:43.226 267859 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb07c8c86-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 10 23:59:43 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:59:43.227 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[a1ef93d2-e0e1-4476-8e93-4efc795e8dfa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:59:43 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:59:43.228 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[8a7bc404-a6e6-46a5-b089-81dce9f54c3e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:59:43 np0005480824 systemd-machined[215071]: New machine qemu-22-instance-00000016.
Oct 10 23:59:43 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:59:43.248 162666 DEBUG oslo.privsep.daemon [-] privsep: reply[89a26eb7-698a-409e-809d-9cbb7cc54326]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:59:43 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:59:43.276 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[114e05d9-8835-4f19-ae77-bcd53c032da5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:59:43 np0005480824 systemd[1]: Started Virtual Machine qemu-22-instance-00000016.
Oct 10 23:59:43 np0005480824 nova_compute[260089]: 2025-10-11 03:59:43.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:59:43 np0005480824 ovn_controller[152667]: 2025-10-11T03:59:43Z|00204|binding|INFO|Setting lport bfcdfd4b-fcfe-45df-af5d-b65bf0a23633 ovn-installed in OVS
Oct 10 23:59:43 np0005480824 ovn_controller[152667]: 2025-10-11T03:59:43Z|00205|binding|INFO|Setting lport bfcdfd4b-fcfe-45df-af5d-b65bf0a23633 up in Southbound
Oct 10 23:59:43 np0005480824 nova_compute[260089]: 2025-10-11 03:59:43.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:59:43 np0005480824 nova_compute[260089]: 2025-10-11 03:59:43.309 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:59:43 np0005480824 systemd-udevd[294459]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 23:59:43 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:59:43.326 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[64b4dabc-0fec-4ed4-bef5-60a2a66bf6b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:59:43 np0005480824 NetworkManager[44969]: <info>  [1760155183.3363] manager: (tapb07c8c86-70): new Veth device (/org/freedesktop/NetworkManager/Devices/111)
Oct 10 23:59:43 np0005480824 NetworkManager[44969]: <info>  [1760155183.3388] device (tapbfcdfd4b-fc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 10 23:59:43 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:59:43.334 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[db63c032-f8a0-4677-9226-7998612b10fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:59:43 np0005480824 NetworkManager[44969]: <info>  [1760155183.3428] device (tapbfcdfd4b-fc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 10 23:59:43 np0005480824 systemd-udevd[294470]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 23:59:43 np0005480824 podman[294439]: 2025-10-11 03:59:43.37813759 +0000 UTC m=+0.146783348 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 10 23:59:43 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:59:43.384 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[0a5de55f-fd0a-4a3c-8bb5-ac8c89a6b01f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:59:43 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:59:43.388 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[0aab04c0-712e-44b6-9fbb-468a8056767f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:59:43 np0005480824 NetworkManager[44969]: <info>  [1760155183.4166] device (tapb07c8c86-70): carrier: link connected
Oct 10 23:59:43 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:59:43.423 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[e9a49dae-c57c-442e-92d3-e38ede6554b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:59:43 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:59:43.445 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[335feae7-3b86-4593-8e7f-266aa89d885d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb07c8c86-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:bf:15'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 72], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 465107, 'reachable_time': 22444, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294500, 'error': None, 'target': 'ovnmeta-b07c8c86-7240-4ba7-b1d8-b3c98c1e89bc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:59:43 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:59:43.470 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[bf87bdf8-345f-491a-b1f0-9146cc75b6ac]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe30:bf15'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 465107, 'tstamp': 465107}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294501, 'error': None, 'target': 'ovnmeta-b07c8c86-7240-4ba7-b1d8-b3c98c1e89bc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:59:43 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:59:43.491 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[c1c280b7-48b2-487f-af54-f7b9d06a79fb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb07c8c86-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:bf:15'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 72], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 465107, 'reachable_time': 22444, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 294502, 'error': None, 'target': 'ovnmeta-b07c8c86-7240-4ba7-b1d8-b3c98c1e89bc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:59:43 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:59:43.533 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[7d9bbabd-0343-4094-8866-31c2a9fec31a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:59:43 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:59:43.598 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[bfca069d-cf0e-4480-974b-85e1af4834cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:59:43 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:59:43.600 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb07c8c86-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:59:43 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:59:43.600 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 10 23:59:43 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:59:43.600 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb07c8c86-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:59:43 np0005480824 NetworkManager[44969]: <info>  [1760155183.6028] manager: (tapb07c8c86-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/112)
Oct 10 23:59:43 np0005480824 nova_compute[260089]: 2025-10-11 03:59:43.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:59:43 np0005480824 kernel: tapb07c8c86-70: entered promiscuous mode
Oct 10 23:59:43 np0005480824 nova_compute[260089]: 2025-10-11 03:59:43.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:59:43 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:59:43.608 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb07c8c86-70, col_values=(('external_ids', {'iface-id': 'adfe042f-67d6-4412-96b7-ec783ea52bcb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:59:43 np0005480824 nova_compute[260089]: 2025-10-11 03:59:43.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:59:43 np0005480824 ovn_controller[152667]: 2025-10-11T03:59:43Z|00206|binding|INFO|Releasing lport adfe042f-67d6-4412-96b7-ec783ea52bcb from this chassis (sb_readonly=0)
Oct 10 23:59:43 np0005480824 nova_compute[260089]: 2025-10-11 03:59:43.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:59:43 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:59:43.645 162245 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b07c8c86-7240-4ba7-b1d8-b3c98c1e89bc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b07c8c86-7240-4ba7-b1d8-b3c98c1e89bc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 10 23:59:43 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:59:43.647 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[084af91a-2775-48e7-9ec3-4630a31ac4f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:59:43 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:59:43.648 162245 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 10 23:59:43 np0005480824 ovn_metadata_agent[162240]: global
Oct 10 23:59:43 np0005480824 ovn_metadata_agent[162240]:    log         /dev/log local0 debug
Oct 10 23:59:43 np0005480824 ovn_metadata_agent[162240]:    log-tag     haproxy-metadata-proxy-b07c8c86-7240-4ba7-b1d8-b3c98c1e89bc
Oct 10 23:59:43 np0005480824 ovn_metadata_agent[162240]:    user        root
Oct 10 23:59:43 np0005480824 ovn_metadata_agent[162240]:    group       root
Oct 10 23:59:43 np0005480824 ovn_metadata_agent[162240]:    maxconn     1024
Oct 10 23:59:43 np0005480824 ovn_metadata_agent[162240]:    pidfile     /var/lib/neutron/external/pids/b07c8c86-7240-4ba7-b1d8-b3c98c1e89bc.pid.haproxy
Oct 10 23:59:43 np0005480824 ovn_metadata_agent[162240]:    daemon
Oct 10 23:59:43 np0005480824 ovn_metadata_agent[162240]: 
Oct 10 23:59:43 np0005480824 ovn_metadata_agent[162240]: defaults
Oct 10 23:59:43 np0005480824 ovn_metadata_agent[162240]:    log global
Oct 10 23:59:43 np0005480824 ovn_metadata_agent[162240]:    mode http
Oct 10 23:59:43 np0005480824 ovn_metadata_agent[162240]:    option httplog
Oct 10 23:59:43 np0005480824 ovn_metadata_agent[162240]:    option dontlognull
Oct 10 23:59:43 np0005480824 ovn_metadata_agent[162240]:    option http-server-close
Oct 10 23:59:43 np0005480824 ovn_metadata_agent[162240]:    option forwardfor
Oct 10 23:59:43 np0005480824 ovn_metadata_agent[162240]:    retries                 3
Oct 10 23:59:43 np0005480824 ovn_metadata_agent[162240]:    timeout http-request    30s
Oct 10 23:59:43 np0005480824 ovn_metadata_agent[162240]:    timeout connect         30s
Oct 10 23:59:43 np0005480824 ovn_metadata_agent[162240]:    timeout client          32s
Oct 10 23:59:43 np0005480824 ovn_metadata_agent[162240]:    timeout server          32s
Oct 10 23:59:43 np0005480824 ovn_metadata_agent[162240]:    timeout http-keep-alive 30s
Oct 10 23:59:43 np0005480824 ovn_metadata_agent[162240]: 
Oct 10 23:59:43 np0005480824 ovn_metadata_agent[162240]: 
Oct 10 23:59:43 np0005480824 ovn_metadata_agent[162240]: listen listener
Oct 10 23:59:43 np0005480824 ovn_metadata_agent[162240]:    bind 169.254.169.254:80
Oct 10 23:59:43 np0005480824 ovn_metadata_agent[162240]:    server metadata /var/lib/neutron/metadata_proxy
Oct 10 23:59:43 np0005480824 ovn_metadata_agent[162240]:    http-request add-header X-OVN-Network-ID b07c8c86-7240-4ba7-b1d8-b3c98c1e89bc
Oct 10 23:59:43 np0005480824 ovn_metadata_agent[162240]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 10 23:59:43 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:59:43.648 162245 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b07c8c86-7240-4ba7-b1d8-b3c98c1e89bc', 'env', 'PROCESS_TAG=haproxy-b07c8c86-7240-4ba7-b1d8-b3c98c1e89bc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b07c8c86-7240-4ba7-b1d8-b3c98c1e89bc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 10 23:59:43 np0005480824 nova_compute[260089]: 2025-10-11 03:59:43.795 2 DEBUG nova.compute.manager [req-e30f0d00-9e8e-468e-abc7-d66ee62660e9 req-67d2afbd-d449-4d82-b423-0a350ce878f6 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Received event network-vif-plugged-bfcdfd4b-fcfe-45df-af5d-b65bf0a23633 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:59:43 np0005480824 nova_compute[260089]: 2025-10-11 03:59:43.796 2 DEBUG oslo_concurrency.lockutils [req-e30f0d00-9e8e-468e-abc7-d66ee62660e9 req-67d2afbd-d449-4d82-b423-0a350ce878f6 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "d5aa10c6-5a8f-419f-8f0d-89bc251d13b1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:59:43 np0005480824 nova_compute[260089]: 2025-10-11 03:59:43.796 2 DEBUG oslo_concurrency.lockutils [req-e30f0d00-9e8e-468e-abc7-d66ee62660e9 req-67d2afbd-d449-4d82-b423-0a350ce878f6 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "d5aa10c6-5a8f-419f-8f0d-89bc251d13b1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:59:43 np0005480824 nova_compute[260089]: 2025-10-11 03:59:43.797 2 DEBUG oslo_concurrency.lockutils [req-e30f0d00-9e8e-468e-abc7-d66ee62660e9 req-67d2afbd-d449-4d82-b423-0a350ce878f6 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "d5aa10c6-5a8f-419f-8f0d-89bc251d13b1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:59:43 np0005480824 nova_compute[260089]: 2025-10-11 03:59:43.797 2 DEBUG nova.compute.manager [req-e30f0d00-9e8e-468e-abc7-d66ee62660e9 req-67d2afbd-d449-4d82-b423-0a350ce878f6 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Processing event network-vif-plugged-bfcdfd4b-fcfe-45df-af5d-b65bf0a23633 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 10 23:59:43 np0005480824 nova_compute[260089]: 2025-10-11 03:59:43.825 2 DEBUG nova.network.neutron [req-8689c9f7-18b0-4d5c-8b54-ffdf77d1d370 req-7b1983dd-6e45-4874-bd5e-fefec261e1df 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Updated VIF entry in instance network info cache for port bfcdfd4b-fcfe-45df-af5d-b65bf0a23633. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 10 23:59:43 np0005480824 nova_compute[260089]: 2025-10-11 03:59:43.825 2 DEBUG nova.network.neutron [req-8689c9f7-18b0-4d5c-8b54-ffdf77d1d370 req-7b1983dd-6e45-4874-bd5e-fefec261e1df 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Updating instance_info_cache with network_info: [{"id": "bfcdfd4b-fcfe-45df-af5d-b65bf0a23633", "address": "fa:16:3e:91:5e:e0", "network": {"id": "b07c8c86-7240-4ba7-b1d8-b3c98c1e89bc", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-707028039-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4dd4975fff494ac1b725d3dfb95c6006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfcdfd4b-fc", "ovs_interfaceid": "bfcdfd4b-fcfe-45df-af5d-b65bf0a23633", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:59:43 np0005480824 nova_compute[260089]: 2025-10-11 03:59:43.838 2 DEBUG oslo_concurrency.lockutils [req-8689c9f7-18b0-4d5c-8b54-ffdf77d1d370 req-7b1983dd-6e45-4874-bd5e-fefec261e1df 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Releasing lock "refresh_cache-d5aa10c6-5a8f-419f-8f0d-89bc251d13b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:59:44 np0005480824 podman[294574]: 2025-10-11 03:59:44.017915148 +0000 UTC m=+0.058968937 container create c5d22a16e97784e2e9126b49ef61b89dd818f55a9ede383bfb904f350896e027 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b07c8c86-7240-4ba7-b1d8-b3c98c1e89bc, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:59:44 np0005480824 systemd[1]: Started libpod-conmon-c5d22a16e97784e2e9126b49ef61b89dd818f55a9ede383bfb904f350896e027.scope.
Oct 10 23:59:44 np0005480824 podman[294574]: 2025-10-11 03:59:43.981963912 +0000 UTC m=+0.023017731 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 10 23:59:44 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:59:44 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/344c9d4e496e165a49e75245dfeb9e0240db54c7b9950169735ac8a8deaf40a1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 10 23:59:44 np0005480824 podman[294574]: 2025-10-11 03:59:44.147021329 +0000 UTC m=+0.188075118 container init c5d22a16e97784e2e9126b49ef61b89dd818f55a9ede383bfb904f350896e027 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b07c8c86-7240-4ba7-b1d8-b3c98c1e89bc, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 10 23:59:44 np0005480824 podman[294574]: 2025-10-11 03:59:44.15967123 +0000 UTC m=+0.200724989 container start c5d22a16e97784e2e9126b49ef61b89dd818f55a9ede383bfb904f350896e027 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b07c8c86-7240-4ba7-b1d8-b3c98c1e89bc, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:59:44 np0005480824 neutron-haproxy-ovnmeta-b07c8c86-7240-4ba7-b1d8-b3c98c1e89bc[294589]: [NOTICE]   (294593) : New worker (294595) forked
Oct 10 23:59:44 np0005480824 neutron-haproxy-ovnmeta-b07c8c86-7240-4ba7-b1d8-b3c98c1e89bc[294589]: [NOTICE]   (294593) : Loading success.
Oct 10 23:59:44 np0005480824 nova_compute[260089]: 2025-10-11 03:59:44.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:59:44 np0005480824 nova_compute[260089]: 2025-10-11 03:59:44.296 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:59:44 np0005480824 nova_compute[260089]: 2025-10-11 03:59:44.323 2 DEBUG nova.compute.manager [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 10 23:59:44 np0005480824 nova_compute[260089]: 2025-10-11 03:59:44.326 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760155184.3240862, d5aa10c6-5a8f-419f-8f0d-89bc251d13b1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:59:44 np0005480824 nova_compute[260089]: 2025-10-11 03:59:44.327 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] VM Started (Lifecycle Event)#033[00m
Oct 10 23:59:44 np0005480824 nova_compute[260089]: 2025-10-11 03:59:44.333 2 DEBUG nova.virt.libvirt.driver [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 10 23:59:44 np0005480824 nova_compute[260089]: 2025-10-11 03:59:44.340 2 INFO nova.virt.libvirt.driver [-] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Instance spawned successfully.#033[00m
Oct 10 23:59:44 np0005480824 nova_compute[260089]: 2025-10-11 03:59:44.340 2 DEBUG nova.virt.libvirt.driver [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 10 23:59:44 np0005480824 nova_compute[260089]: 2025-10-11 03:59:44.364 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:59:44 np0005480824 nova_compute[260089]: 2025-10-11 03:59:44.373 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 10 23:59:44 np0005480824 nova_compute[260089]: 2025-10-11 03:59:44.380 2 DEBUG nova.virt.libvirt.driver [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:59:44 np0005480824 nova_compute[260089]: 2025-10-11 03:59:44.380 2 DEBUG nova.virt.libvirt.driver [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:59:44 np0005480824 nova_compute[260089]: 2025-10-11 03:59:44.381 2 DEBUG nova.virt.libvirt.driver [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:59:44 np0005480824 nova_compute[260089]: 2025-10-11 03:59:44.382 2 DEBUG nova.virt.libvirt.driver [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:59:44 np0005480824 nova_compute[260089]: 2025-10-11 03:59:44.383 2 DEBUG nova.virt.libvirt.driver [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:59:44 np0005480824 nova_compute[260089]: 2025-10-11 03:59:44.384 2 DEBUG nova.virt.libvirt.driver [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:59:44 np0005480824 nova_compute[260089]: 2025-10-11 03:59:44.395 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 10 23:59:44 np0005480824 nova_compute[260089]: 2025-10-11 03:59:44.396 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760155184.3242643, d5aa10c6-5a8f-419f-8f0d-89bc251d13b1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:59:44 np0005480824 nova_compute[260089]: 2025-10-11 03:59:44.396 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] VM Paused (Lifecycle Event)#033[00m
Oct 10 23:59:44 np0005480824 nova_compute[260089]: 2025-10-11 03:59:44.432 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:59:44 np0005480824 nova_compute[260089]: 2025-10-11 03:59:44.438 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760155184.3297095, d5aa10c6-5a8f-419f-8f0d-89bc251d13b1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:59:44 np0005480824 nova_compute[260089]: 2025-10-11 03:59:44.438 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] VM Resumed (Lifecycle Event)#033[00m
Oct 10 23:59:44 np0005480824 nova_compute[260089]: 2025-10-11 03:59:44.458 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:59:44 np0005480824 nova_compute[260089]: 2025-10-11 03:59:44.461 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 10 23:59:44 np0005480824 nova_compute[260089]: 2025-10-11 03:59:44.466 2 INFO nova.compute.manager [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Took 6.84 seconds to spawn the instance on the hypervisor.#033[00m
Oct 10 23:59:44 np0005480824 nova_compute[260089]: 2025-10-11 03:59:44.466 2 DEBUG nova.compute.manager [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:59:44 np0005480824 nova_compute[260089]: 2025-10-11 03:59:44.480 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 10 23:59:44 np0005480824 nova_compute[260089]: 2025-10-11 03:59:44.527 2 INFO nova.compute.manager [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Took 7.78 seconds to build instance.#033[00m
Oct 10 23:59:44 np0005480824 nova_compute[260089]: 2025-10-11 03:59:44.545 2 DEBUG oslo_concurrency.lockutils [None req-60f259ae-579a-447e-90dd-32edd01f7062 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Lock "d5aa10c6-5a8f-419f-8f0d-89bc251d13b1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.889s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:59:44 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1615: 321 pgs: 321 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 4.3 MiB/s rd, 6.6 MiB/s wr, 127 op/s
Oct 10 23:59:45 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:59:45 np0005480824 nova_compute[260089]: 2025-10-11 03:59:45.600 2 DEBUG oslo_concurrency.lockutils [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] Acquiring lock "9d89b9fc-eda1-4801-8670-e3e48a9e04ae" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:59:45 np0005480824 nova_compute[260089]: 2025-10-11 03:59:45.601 2 DEBUG oslo_concurrency.lockutils [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] Lock "9d89b9fc-eda1-4801-8670-e3e48a9e04ae" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:59:45 np0005480824 nova_compute[260089]: 2025-10-11 03:59:45.624 2 DEBUG nova.compute.manager [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 10 23:59:45 np0005480824 nova_compute[260089]: 2025-10-11 03:59:45.868 2 DEBUG oslo_concurrency.lockutils [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:59:45 np0005480824 nova_compute[260089]: 2025-10-11 03:59:45.871 2 DEBUG oslo_concurrency.lockutils [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:59:45 np0005480824 nova_compute[260089]: 2025-10-11 03:59:45.880 2 DEBUG nova.compute.manager [req-8a1da93f-9fb5-47a8-90e1-ca60e0ac42dd req-040813e9-2da6-4f64-b3e6-dac65830a0e1 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Received event network-vif-plugged-bfcdfd4b-fcfe-45df-af5d-b65bf0a23633 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:59:45 np0005480824 nova_compute[260089]: 2025-10-11 03:59:45.881 2 DEBUG oslo_concurrency.lockutils [req-8a1da93f-9fb5-47a8-90e1-ca60e0ac42dd req-040813e9-2da6-4f64-b3e6-dac65830a0e1 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "d5aa10c6-5a8f-419f-8f0d-89bc251d13b1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:59:45 np0005480824 nova_compute[260089]: 2025-10-11 03:59:45.882 2 DEBUG oslo_concurrency.lockutils [req-8a1da93f-9fb5-47a8-90e1-ca60e0ac42dd req-040813e9-2da6-4f64-b3e6-dac65830a0e1 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "d5aa10c6-5a8f-419f-8f0d-89bc251d13b1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:59:45 np0005480824 nova_compute[260089]: 2025-10-11 03:59:45.883 2 DEBUG oslo_concurrency.lockutils [req-8a1da93f-9fb5-47a8-90e1-ca60e0ac42dd req-040813e9-2da6-4f64-b3e6-dac65830a0e1 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "d5aa10c6-5a8f-419f-8f0d-89bc251d13b1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:59:45 np0005480824 nova_compute[260089]: 2025-10-11 03:59:45.883 2 DEBUG nova.compute.manager [req-8a1da93f-9fb5-47a8-90e1-ca60e0ac42dd req-040813e9-2da6-4f64-b3e6-dac65830a0e1 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] No waiting events found dispatching network-vif-plugged-bfcdfd4b-fcfe-45df-af5d-b65bf0a23633 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 10 23:59:45 np0005480824 nova_compute[260089]: 2025-10-11 03:59:45.884 2 WARNING nova.compute.manager [req-8a1da93f-9fb5-47a8-90e1-ca60e0ac42dd req-040813e9-2da6-4f64-b3e6-dac65830a0e1 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Received unexpected event network-vif-plugged-bfcdfd4b-fcfe-45df-af5d-b65bf0a23633 for instance with vm_state active and task_state None.#033[00m
Oct 10 23:59:45 np0005480824 nova_compute[260089]: 2025-10-11 03:59:45.899 2 DEBUG nova.virt.hardware [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 10 23:59:45 np0005480824 nova_compute[260089]: 2025-10-11 03:59:45.900 2 INFO nova.compute.claims [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 10 23:59:45 np0005480824 nova_compute[260089]: 2025-10-11 03:59:45.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:59:46 np0005480824 nova_compute[260089]: 2025-10-11 03:59:46.122 2 DEBUG oslo_concurrency.processutils [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:59:46 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 10 23:59:46 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/931274833' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 23:59:46 np0005480824 nova_compute[260089]: 2025-10-11 03:59:46.550 2 DEBUG oslo_concurrency.processutils [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:59:46 np0005480824 nova_compute[260089]: 2025-10-11 03:59:46.559 2 DEBUG nova.compute.provider_tree [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 23:59:46 np0005480824 nova_compute[260089]: 2025-10-11 03:59:46.577 2 DEBUG nova.scheduler.client.report [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 23:59:46 np0005480824 nova_compute[260089]: 2025-10-11 03:59:46.599 2 DEBUG oslo_concurrency.lockutils [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.728s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:59:46 np0005480824 nova_compute[260089]: 2025-10-11 03:59:46.600 2 DEBUG nova.compute.manager [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 10 23:59:46 np0005480824 nova_compute[260089]: 2025-10-11 03:59:46.651 2 DEBUG nova.compute.manager [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 10 23:59:46 np0005480824 nova_compute[260089]: 2025-10-11 03:59:46.656 2 DEBUG nova.network.neutron [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 10 23:59:46 np0005480824 nova_compute[260089]: 2025-10-11 03:59:46.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:59:46 np0005480824 NetworkManager[44969]: <info>  [1760155186.6615] manager: (patch-provnet-e62e0ad0-b027-41f2-91f0-70373ec97251-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/113)
Oct 10 23:59:46 np0005480824 NetworkManager[44969]: <info>  [1760155186.6631] manager: (patch-br-int-to-provnet-e62e0ad0-b027-41f2-91f0-70373ec97251): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/114)
Oct 10 23:59:46 np0005480824 nova_compute[260089]: 2025-10-11 03:59:46.672 2 INFO nova.virt.libvirt.driver [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 10 23:59:46 np0005480824 nova_compute[260089]: 2025-10-11 03:59:46.707 2 DEBUG nova.compute.manager [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 10 23:59:46 np0005480824 nova_compute[260089]: 2025-10-11 03:59:46.755 2 INFO nova.virt.block_device [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] Booting with volume f6e39357-0e1b-4c7b-9343-f0d5e0741f06 at /dev/vdb#033[00m
Oct 10 23:59:46 np0005480824 nova_compute[260089]: 2025-10-11 03:59:46.878 2 DEBUG os_brick.utils [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct 10 23:59:46 np0005480824 nova_compute[260089]: 2025-10-11 03:59:46.881 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:59:46 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1616: 321 pgs: 321 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 3.5 MiB/s rd, 5.3 MiB/s wr, 109 op/s
Oct 10 23:59:46 np0005480824 nova_compute[260089]: 2025-10-11 03:59:46.897 676 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:59:46 np0005480824 nova_compute[260089]: 2025-10-11 03:59:46.898 676 DEBUG oslo.privsep.daemon [-] privsep: reply[33f5bcea-8312-473b-a357-379eec94d7ef]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:59:46 np0005480824 nova_compute[260089]: 2025-10-11 03:59:46.900 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:59:46 np0005480824 nova_compute[260089]: 2025-10-11 03:59:46.909 2 DEBUG nova.policy [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '04ab08efaee14de7b56b2514c0187402', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7e504d8715354886aaae057de71d2d5e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 10 23:59:46 np0005480824 nova_compute[260089]: 2025-10-11 03:59:46.920 676 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:59:46 np0005480824 nova_compute[260089]: 2025-10-11 03:59:46.920 676 DEBUG oslo.privsep.daemon [-] privsep: reply[22e96505-84d1-4d29-8a35-318f66ad56a8]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d5d671ddab5a', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:59:46 np0005480824 nova_compute[260089]: 2025-10-11 03:59:46.922 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:59:46 np0005480824 nova_compute[260089]: 2025-10-11 03:59:46.932 676 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:59:46 np0005480824 nova_compute[260089]: 2025-10-11 03:59:46.933 676 DEBUG oslo.privsep.daemon [-] privsep: reply[e0296fae-7a15-4a0a-b438-285a3c50b950]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:59:46 np0005480824 nova_compute[260089]: 2025-10-11 03:59:46.935 676 DEBUG oslo.privsep.daemon [-] privsep: reply[415754a1-42dd-4884-b9b2-ae27d45205b0]: (4, 'fb3a2fb1-9efa-43f0-a057-bf422ac6b8d7') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:59:46 np0005480824 nova_compute[260089]: 2025-10-11 03:59:46.936 2 DEBUG oslo_concurrency.processutils [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:59:46 np0005480824 nova_compute[260089]: 2025-10-11 03:59:46.964 2 DEBUG oslo_concurrency.processutils [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] CMD "nvme version" returned: 0 in 0.028s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:59:46 np0005480824 nova_compute[260089]: 2025-10-11 03:59:46.966 2 DEBUG os_brick.initiator.connectors.lightos [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct 10 23:59:46 np0005480824 nova_compute[260089]: 2025-10-11 03:59:46.967 2 DEBUG os_brick.initiator.connectors.lightos [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct 10 23:59:46 np0005480824 nova_compute[260089]: 2025-10-11 03:59:46.967 2 DEBUG os_brick.initiator.connectors.lightos [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct 10 23:59:46 np0005480824 nova_compute[260089]: 2025-10-11 03:59:46.967 2 DEBUG os_brick.utils [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] <== get_connector_properties: return (88ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d5d671ddab5a', 'do_local_attach': False, 'nvme_hostid': '83042a20-0f72-4c47-8453-e72ead378624', 'system uuid': 'fb3a2fb1-9efa-43f0-a057-bf422ac6b8d7', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct 10 23:59:46 np0005480824 nova_compute[260089]: 2025-10-11 03:59:46.968 2 DEBUG nova.virt.block_device [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] Updating existing volume attachment record: 21de4f61-2f54-4ca4-922a-dd239bfe1096 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct 10 23:59:46 np0005480824 nova_compute[260089]: 2025-10-11 03:59:46.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:59:46 np0005480824 ovn_controller[152667]: 2025-10-11T03:59:46Z|00207|binding|INFO|Releasing lport adfe042f-67d6-4412-96b7-ec783ea52bcb from this chassis (sb_readonly=0)
Oct 10 23:59:46 np0005480824 nova_compute[260089]: 2025-10-11 03:59:46.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:59:47 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:59:47 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2117003142' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:59:47 np0005480824 nova_compute[260089]: 2025-10-11 03:59:47.741 2 DEBUG nova.network.neutron [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] Successfully created port: 6b86c387-3e59-4e3b-a7e3-e1ddfc541c50 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 10 23:59:47 np0005480824 nova_compute[260089]: 2025-10-11 03:59:47.950 2 DEBUG nova.compute.manager [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 10 23:59:47 np0005480824 nova_compute[260089]: 2025-10-11 03:59:47.953 2 DEBUG nova.virt.libvirt.driver [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 10 23:59:47 np0005480824 nova_compute[260089]: 2025-10-11 03:59:47.954 2 INFO nova.virt.libvirt.driver [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] Creating image(s)#033[00m
Oct 10 23:59:47 np0005480824 nova_compute[260089]: 2025-10-11 03:59:47.986 2 DEBUG nova.storage.rbd_utils [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] rbd image 9d89b9fc-eda1-4801-8670-e3e48a9e04ae_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:59:48 np0005480824 nova_compute[260089]: 2025-10-11 03:59:48.019 2 DEBUG nova.storage.rbd_utils [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] rbd image 9d89b9fc-eda1-4801-8670-e3e48a9e04ae_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:59:48 np0005480824 nova_compute[260089]: 2025-10-11 03:59:48.054 2 DEBUG nova.storage.rbd_utils [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] rbd image 9d89b9fc-eda1-4801-8670-e3e48a9e04ae_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:59:48 np0005480824 nova_compute[260089]: 2025-10-11 03:59:48.060 2 DEBUG oslo_concurrency.processutils [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cfffd1283a157d100c77a9cb8e3d536b83503a4e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:59:48 np0005480824 nova_compute[260089]: 2025-10-11 03:59:48.104 2 DEBUG nova.compute.manager [req-912f5fa0-3eae-4ae6-8409-f9125cff6387 req-bc9a3f78-285d-46b0-bf47-32891254b160 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Received event network-changed-bfcdfd4b-fcfe-45df-af5d-b65bf0a23633 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:59:48 np0005480824 nova_compute[260089]: 2025-10-11 03:59:48.105 2 DEBUG nova.compute.manager [req-912f5fa0-3eae-4ae6-8409-f9125cff6387 req-bc9a3f78-285d-46b0-bf47-32891254b160 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Refreshing instance network info cache due to event network-changed-bfcdfd4b-fcfe-45df-af5d-b65bf0a23633. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 10 23:59:48 np0005480824 nova_compute[260089]: 2025-10-11 03:59:48.106 2 DEBUG oslo_concurrency.lockutils [req-912f5fa0-3eae-4ae6-8409-f9125cff6387 req-bc9a3f78-285d-46b0-bf47-32891254b160 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "refresh_cache-d5aa10c6-5a8f-419f-8f0d-89bc251d13b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:59:48 np0005480824 nova_compute[260089]: 2025-10-11 03:59:48.106 2 DEBUG oslo_concurrency.lockutils [req-912f5fa0-3eae-4ae6-8409-f9125cff6387 req-bc9a3f78-285d-46b0-bf47-32891254b160 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquired lock "refresh_cache-d5aa10c6-5a8f-419f-8f0d-89bc251d13b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:59:48 np0005480824 nova_compute[260089]: 2025-10-11 03:59:48.107 2 DEBUG nova.network.neutron [req-912f5fa0-3eae-4ae6-8409-f9125cff6387 req-bc9a3f78-285d-46b0-bf47-32891254b160 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Refreshing network info cache for port bfcdfd4b-fcfe-45df-af5d-b65bf0a23633 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 10 23:59:48 np0005480824 nova_compute[260089]: 2025-10-11 03:59:48.158 2 DEBUG oslo_concurrency.processutils [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cfffd1283a157d100c77a9cb8e3d536b83503a4e --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:59:48 np0005480824 nova_compute[260089]: 2025-10-11 03:59:48.160 2 DEBUG oslo_concurrency.lockutils [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] Acquiring lock "cfffd1283a157d100c77a9cb8e3d536b83503a4e" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:59:48 np0005480824 nova_compute[260089]: 2025-10-11 03:59:48.161 2 DEBUG oslo_concurrency.lockutils [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] Lock "cfffd1283a157d100c77a9cb8e3d536b83503a4e" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:59:48 np0005480824 nova_compute[260089]: 2025-10-11 03:59:48.162 2 DEBUG oslo_concurrency.lockutils [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] Lock "cfffd1283a157d100c77a9cb8e3d536b83503a4e" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:59:48 np0005480824 nova_compute[260089]: 2025-10-11 03:59:48.196 2 DEBUG nova.storage.rbd_utils [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] rbd image 9d89b9fc-eda1-4801-8670-e3e48a9e04ae_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:59:48 np0005480824 nova_compute[260089]: 2025-10-11 03:59:48.202 2 DEBUG oslo_concurrency.processutils [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/cfffd1283a157d100c77a9cb8e3d536b83503a4e 9d89b9fc-eda1-4801-8670-e3e48a9e04ae_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:59:48 np0005480824 nova_compute[260089]: 2025-10-11 03:59:48.544 2 DEBUG oslo_concurrency.processutils [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/cfffd1283a157d100c77a9cb8e3d536b83503a4e 9d89b9fc-eda1-4801-8670-e3e48a9e04ae_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.341s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:59:48 np0005480824 nova_compute[260089]: 2025-10-11 03:59:48.611 2 DEBUG nova.storage.rbd_utils [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] resizing rbd image 9d89b9fc-eda1-4801-8670-e3e48a9e04ae_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 10 23:59:48 np0005480824 nova_compute[260089]: 2025-10-11 03:59:48.722 2 DEBUG nova.network.neutron [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] Successfully updated port: 6b86c387-3e59-4e3b-a7e3-e1ddfc541c50 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 10 23:59:48 np0005480824 nova_compute[260089]: 2025-10-11 03:59:48.732 2 DEBUG nova.objects.instance [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] Lazy-loading 'migration_context' on Instance uuid 9d89b9fc-eda1-4801-8670-e3e48a9e04ae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:59:48 np0005480824 nova_compute[260089]: 2025-10-11 03:59:48.737 2 DEBUG oslo_concurrency.lockutils [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] Acquiring lock "refresh_cache-9d89b9fc-eda1-4801-8670-e3e48a9e04ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:59:48 np0005480824 nova_compute[260089]: 2025-10-11 03:59:48.737 2 DEBUG oslo_concurrency.lockutils [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] Acquired lock "refresh_cache-9d89b9fc-eda1-4801-8670-e3e48a9e04ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:59:48 np0005480824 nova_compute[260089]: 2025-10-11 03:59:48.738 2 DEBUG nova.network.neutron [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 10 23:59:48 np0005480824 nova_compute[260089]: 2025-10-11 03:59:48.745 2 DEBUG nova.virt.libvirt.driver [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 10 23:59:48 np0005480824 nova_compute[260089]: 2025-10-11 03:59:48.746 2 DEBUG nova.virt.libvirt.driver [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] Ensure instance console log exists: /var/lib/nova/instances/9d89b9fc-eda1-4801-8670-e3e48a9e04ae/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 10 23:59:48 np0005480824 nova_compute[260089]: 2025-10-11 03:59:48.746 2 DEBUG oslo_concurrency.lockutils [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:59:48 np0005480824 nova_compute[260089]: 2025-10-11 03:59:48.747 2 DEBUG oslo_concurrency.lockutils [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:59:48 np0005480824 nova_compute[260089]: 2025-10-11 03:59:48.747 2 DEBUG oslo_concurrency.lockutils [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:59:48 np0005480824 nova_compute[260089]: 2025-10-11 03:59:48.782 2 DEBUG nova.compute.manager [req-acdb6cdc-95bd-4ee8-b2c6-3b48623eb5e2 req-801b5174-aec2-441c-8183-8c560639c9bd 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] Received event network-changed-6b86c387-3e59-4e3b-a7e3-e1ddfc541c50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:59:48 np0005480824 nova_compute[260089]: 2025-10-11 03:59:48.783 2 DEBUG nova.compute.manager [req-acdb6cdc-95bd-4ee8-b2c6-3b48623eb5e2 req-801b5174-aec2-441c-8183-8c560639c9bd 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] Refreshing instance network info cache due to event network-changed-6b86c387-3e59-4e3b-a7e3-e1ddfc541c50. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 10 23:59:48 np0005480824 nova_compute[260089]: 2025-10-11 03:59:48.783 2 DEBUG oslo_concurrency.lockutils [req-acdb6cdc-95bd-4ee8-b2c6-3b48623eb5e2 req-801b5174-aec2-441c-8183-8c560639c9bd 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "refresh_cache-9d89b9fc-eda1-4801-8670-e3e48a9e04ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:59:48 np0005480824 nova_compute[260089]: 2025-10-11 03:59:48.866 2 DEBUG nova.network.neutron [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 10 23:59:48 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1617: 321 pgs: 321 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 3.2 MiB/s rd, 2.9 MiB/s wr, 123 op/s
Oct 10 23:59:49 np0005480824 nova_compute[260089]: 2025-10-11 03:59:49.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:59:49 np0005480824 nova_compute[260089]: 2025-10-11 03:59:49.659 2 DEBUG nova.network.neutron [req-912f5fa0-3eae-4ae6-8409-f9125cff6387 req-bc9a3f78-285d-46b0-bf47-32891254b160 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Updated VIF entry in instance network info cache for port bfcdfd4b-fcfe-45df-af5d-b65bf0a23633. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 10 23:59:49 np0005480824 nova_compute[260089]: 2025-10-11 03:59:49.660 2 DEBUG nova.network.neutron [req-912f5fa0-3eae-4ae6-8409-f9125cff6387 req-bc9a3f78-285d-46b0-bf47-32891254b160 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Updating instance_info_cache with network_info: [{"id": "bfcdfd4b-fcfe-45df-af5d-b65bf0a23633", "address": "fa:16:3e:91:5e:e0", "network": {"id": "b07c8c86-7240-4ba7-b1d8-b3c98c1e89bc", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-707028039-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4dd4975fff494ac1b725d3dfb95c6006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfcdfd4b-fc", "ovs_interfaceid": "bfcdfd4b-fcfe-45df-af5d-b65bf0a23633", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:59:49 np0005480824 nova_compute[260089]: 2025-10-11 03:59:49.680 2 DEBUG oslo_concurrency.lockutils [req-912f5fa0-3eae-4ae6-8409-f9125cff6387 req-bc9a3f78-285d-46b0-bf47-32891254b160 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Releasing lock "refresh_cache-d5aa10c6-5a8f-419f-8f0d-89bc251d13b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:59:50 np0005480824 nova_compute[260089]: 2025-10-11 03:59:50.018 2 DEBUG nova.network.neutron [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] Updating instance_info_cache with network_info: [{"id": "6b86c387-3e59-4e3b-a7e3-e1ddfc541c50", "address": "fa:16:3e:aa:03:d2", "network": {"id": "dfca432f-447a-432a-acc4-3a23e93eb8d6", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1081477901-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7e504d8715354886aaae057de71d2d5e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b86c387-3e", "ovs_interfaceid": "6b86c387-3e59-4e3b-a7e3-e1ddfc541c50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:59:50 np0005480824 nova_compute[260089]: 2025-10-11 03:59:50.046 2 DEBUG oslo_concurrency.lockutils [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] Releasing lock "refresh_cache-9d89b9fc-eda1-4801-8670-e3e48a9e04ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:59:50 np0005480824 nova_compute[260089]: 2025-10-11 03:59:50.047 2 DEBUG nova.compute.manager [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] Instance network_info: |[{"id": "6b86c387-3e59-4e3b-a7e3-e1ddfc541c50", "address": "fa:16:3e:aa:03:d2", "network": {"id": "dfca432f-447a-432a-acc4-3a23e93eb8d6", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1081477901-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7e504d8715354886aaae057de71d2d5e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b86c387-3e", "ovs_interfaceid": "6b86c387-3e59-4e3b-a7e3-e1ddfc541c50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 10 23:59:50 np0005480824 podman[294800]: 2025-10-11 03:59:50.050408821 +0000 UTC m=+0.096702235 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:59:50 np0005480824 nova_compute[260089]: 2025-10-11 03:59:50.050 2 DEBUG oslo_concurrency.lockutils [req-acdb6cdc-95bd-4ee8-b2c6-3b48623eb5e2 req-801b5174-aec2-441c-8183-8c560639c9bd 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquired lock "refresh_cache-9d89b9fc-eda1-4801-8670-e3e48a9e04ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:59:50 np0005480824 nova_compute[260089]: 2025-10-11 03:59:50.054 2 DEBUG nova.network.neutron [req-acdb6cdc-95bd-4ee8-b2c6-3b48623eb5e2 req-801b5174-aec2-441c-8183-8c560639c9bd 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] Refreshing network info cache for port 6b86c387-3e59-4e3b-a7e3-e1ddfc541c50 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 10 23:59:50 np0005480824 nova_compute[260089]: 2025-10-11 03:59:50.060 2 DEBUG nova.virt.libvirt.driver [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] Start _get_guest_xml network_info=[{"id": "6b86c387-3e59-4e3b-a7e3-e1ddfc541c50", "address": "fa:16:3e:aa:03:d2", "network": {"id": "dfca432f-447a-432a-acc4-3a23e93eb8d6", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1081477901-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7e504d8715354886aaae057de71d2d5e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b86c387-3e", "ovs_interfaceid": "6b86c387-3e59-4e3b-a7e3-e1ddfc541c50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T03:44:59Z,direct_url=<?>,disk_format='qcow2',id=7caca022-7dcc-40a9-8bd8-eb7d91b29390,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a9b71164a3274fcfb966194e51cb4849',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T03:45:02Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'device_type': 'disk', 'image_id': '7caca022-7dcc-40a9-8bd8-eb7d91b29390'}], 'ephemerals': [], 'block_device_mapping': [{'attachment_id': '21de4f61-2f54-4ca4-922a-dd239bfe1096', 'mount_device': '/dev/vdb', 'delete_on_termination': False, 'guest_format': None, 'boot_index': -1, 'disk_bus': 'virtio', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-f6e39357-0e1b-4c7b-9343-f0d5e0741f06', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'f6e39357-0e1b-4c7b-9343-f0d5e0741f06', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '9d89b9fc-eda1-4801-8670-e3e48a9e04ae', 'attached_at': '', 'detached_at': '', 'volume_id': 'f6e39357-0e1b-4c7b-9343-f0d5e0741f06', 'serial': 'f6e39357-0e1b-4c7b-9343-f0d5e0741f06'}, 'device_type': 'disk', 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 10 23:59:50 np0005480824 nova_compute[260089]: 2025-10-11 03:59:50.070 2 WARNING nova.virt.libvirt.driver [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 10 23:59:50 np0005480824 nova_compute[260089]: 2025-10-11 03:59:50.081 2 DEBUG nova.virt.libvirt.host [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 10 23:59:50 np0005480824 nova_compute[260089]: 2025-10-11 03:59:50.082 2 DEBUG nova.virt.libvirt.host [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 10 23:59:50 np0005480824 nova_compute[260089]: 2025-10-11 03:59:50.085 2 DEBUG nova.virt.libvirt.host [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 10 23:59:50 np0005480824 nova_compute[260089]: 2025-10-11 03:59:50.086 2 DEBUG nova.virt.libvirt.host [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 10 23:59:50 np0005480824 nova_compute[260089]: 2025-10-11 03:59:50.087 2 DEBUG nova.virt.libvirt.driver [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 10 23:59:50 np0005480824 nova_compute[260089]: 2025-10-11 03:59:50.088 2 DEBUG nova.virt.hardware [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T03:44:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6707ecae-2ae2-4c2d-86dc-409bac38f6a5',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T03:44:59Z,direct_url=<?>,disk_format='qcow2',id=7caca022-7dcc-40a9-8bd8-eb7d91b29390,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a9b71164a3274fcfb966194e51cb4849',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T03:45:02Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 10 23:59:50 np0005480824 nova_compute[260089]: 2025-10-11 03:59:50.089 2 DEBUG nova.virt.hardware [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 10 23:59:50 np0005480824 nova_compute[260089]: 2025-10-11 03:59:50.089 2 DEBUG nova.virt.hardware [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 10 23:59:50 np0005480824 nova_compute[260089]: 2025-10-11 03:59:50.090 2 DEBUG nova.virt.hardware [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 10 23:59:50 np0005480824 nova_compute[260089]: 2025-10-11 03:59:50.091 2 DEBUG nova.virt.hardware [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 10 23:59:50 np0005480824 nova_compute[260089]: 2025-10-11 03:59:50.091 2 DEBUG nova.virt.hardware [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 10 23:59:50 np0005480824 nova_compute[260089]: 2025-10-11 03:59:50.092 2 DEBUG nova.virt.hardware [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 10 23:59:50 np0005480824 nova_compute[260089]: 2025-10-11 03:59:50.092 2 DEBUG nova.virt.hardware [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 10 23:59:50 np0005480824 nova_compute[260089]: 2025-10-11 03:59:50.093 2 DEBUG nova.virt.hardware [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 10 23:59:50 np0005480824 nova_compute[260089]: 2025-10-11 03:59:50.093 2 DEBUG nova.virt.hardware [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 10 23:59:50 np0005480824 nova_compute[260089]: 2025-10-11 03:59:50.094 2 DEBUG nova.virt.hardware [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 10 23:59:50 np0005480824 nova_compute[260089]: 2025-10-11 03:59:50.099 2 DEBUG oslo_concurrency.processutils [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:59:50 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:59:50 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1206836683' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:59:50 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:59:50 np0005480824 nova_compute[260089]: 2025-10-11 03:59:50.593 2 DEBUG oslo_concurrency.processutils [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:59:50 np0005480824 nova_compute[260089]: 2025-10-11 03:59:50.633 2 DEBUG nova.storage.rbd_utils [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] rbd image 9d89b9fc-eda1-4801-8670-e3e48a9e04ae_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:59:50 np0005480824 nova_compute[260089]: 2025-10-11 03:59:50.641 2 DEBUG oslo_concurrency.processutils [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:59:50 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1618: 321 pgs: 321 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.6 MiB/s wr, 112 op/s
Oct 10 23:59:50 np0005480824 nova_compute[260089]: 2025-10-11 03:59:50.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:59:51 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 10 23:59:51 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/527847360' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 23:59:51 np0005480824 nova_compute[260089]: 2025-10-11 03:59:51.070 2 DEBUG oslo_concurrency.processutils [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:59:51 np0005480824 nova_compute[260089]: 2025-10-11 03:59:51.133 2 DEBUG nova.virt.libvirt.vif [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T03:59:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-instance-1498768136',display_name='tempest-instance-1498768136',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instance-1498768136',id=23,image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA6NEZ2tGoJNT+vvNnpP4L6gc6uAsBt40LTA8EQpPfSsAFWsYjpXOMQiWw7U5ChT+0BqjZWxp2ku4qdtk+iV8mf7DOgmUJEHoiCZuHPxkdkmWxuyoiARuOt4ilG0l2yHrA==',key_name='tempest-keypair-1444996657',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7e504d8715354886aaae057de71d2d5e',ramdisk_id='',reservation_id='r-jwaks7td',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-781394803',owner_user_name='tempest-VolumesBackupsTest-781394803-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T03:59:46Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='04ab08efaee14de7b56b2514c0187402',uuid=9d89b9fc-eda1-4801-8670-e3e48a9e04ae,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6b86c387-3e59-4e3b-a7e3-e1ddfc541c50", "address": "fa:16:3e:aa:03:d2", "network": {"id": "dfca432f-447a-432a-acc4-3a23e93eb8d6", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1081477901-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7e504d8715354886aaae057de71d2d5e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b86c387-3e", "ovs_interfaceid": "6b86c387-3e59-4e3b-a7e3-e1ddfc541c50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 10 23:59:51 np0005480824 nova_compute[260089]: 2025-10-11 03:59:51.134 2 DEBUG nova.network.os_vif_util [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] Converting VIF {"id": "6b86c387-3e59-4e3b-a7e3-e1ddfc541c50", "address": "fa:16:3e:aa:03:d2", "network": {"id": "dfca432f-447a-432a-acc4-3a23e93eb8d6", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1081477901-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7e504d8715354886aaae057de71d2d5e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b86c387-3e", "ovs_interfaceid": "6b86c387-3e59-4e3b-a7e3-e1ddfc541c50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 23:59:51 np0005480824 nova_compute[260089]: 2025-10-11 03:59:51.135 2 DEBUG nova.network.os_vif_util [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:aa:03:d2,bridge_name='br-int',has_traffic_filtering=True,id=6b86c387-3e59-4e3b-a7e3-e1ddfc541c50,network=Network(dfca432f-447a-432a-acc4-3a23e93eb8d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b86c387-3e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 23:59:51 np0005480824 nova_compute[260089]: 2025-10-11 03:59:51.137 2 DEBUG nova.objects.instance [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] Lazy-loading 'pci_devices' on Instance uuid 9d89b9fc-eda1-4801-8670-e3e48a9e04ae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 23:59:51 np0005480824 nova_compute[260089]: 2025-10-11 03:59:51.151 2 DEBUG nova.virt.libvirt.driver [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] End _get_guest_xml xml=<domain type="kvm">
Oct 10 23:59:51 np0005480824 nova_compute[260089]:  <uuid>9d89b9fc-eda1-4801-8670-e3e48a9e04ae</uuid>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:  <name>instance-00000017</name>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:  <memory>131072</memory>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:  <vcpu>1</vcpu>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:  <metadata>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 10 23:59:51 np0005480824 nova_compute[260089]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:      <nova:name>tempest-instance-1498768136</nova:name>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:      <nova:creationTime>2025-10-11 03:59:50</nova:creationTime>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:      <nova:flavor name="m1.nano">
Oct 10 23:59:51 np0005480824 nova_compute[260089]:        <nova:memory>128</nova:memory>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:        <nova:disk>1</nova:disk>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:        <nova:swap>0</nova:swap>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:        <nova:ephemeral>0</nova:ephemeral>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:        <nova:vcpus>1</nova:vcpus>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:      </nova:flavor>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:      <nova:owner>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:        <nova:user uuid="04ab08efaee14de7b56b2514c0187402">tempest-VolumesBackupsTest-781394803-project-member</nova:user>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:        <nova:project uuid="7e504d8715354886aaae057de71d2d5e">tempest-VolumesBackupsTest-781394803</nova:project>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:      </nova:owner>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:      <nova:root type="image" uuid="7caca022-7dcc-40a9-8bd8-eb7d91b29390"/>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:      <nova:ports>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:        <nova:port uuid="6b86c387-3e59-4e3b-a7e3-e1ddfc541c50">
Oct 10 23:59:51 np0005480824 nova_compute[260089]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:        </nova:port>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:      </nova:ports>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:    </nova:instance>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:  </metadata>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:  <sysinfo type="smbios">
Oct 10 23:59:51 np0005480824 nova_compute[260089]:    <system>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:      <entry name="manufacturer">RDO</entry>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:      <entry name="product">OpenStack Compute</entry>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:      <entry name="serial">9d89b9fc-eda1-4801-8670-e3e48a9e04ae</entry>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:      <entry name="uuid">9d89b9fc-eda1-4801-8670-e3e48a9e04ae</entry>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:      <entry name="family">Virtual Machine</entry>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:    </system>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:  </sysinfo>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:  <os>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:    <boot dev="hd"/>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:    <smbios mode="sysinfo"/>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:  </os>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:  <features>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:    <acpi/>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:    <apic/>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:    <vmcoreinfo/>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:  </features>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:  <clock offset="utc">
Oct 10 23:59:51 np0005480824 nova_compute[260089]:    <timer name="pit" tickpolicy="delay"/>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:    <timer name="hpet" present="no"/>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:  </clock>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:  <cpu mode="host-model" match="exact">
Oct 10 23:59:51 np0005480824 nova_compute[260089]:    <topology sockets="1" cores="1" threads="1"/>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:  </cpu>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:  <devices>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:    <disk type="network" device="disk">
Oct 10 23:59:51 np0005480824 nova_compute[260089]:      <driver type="raw" cache="none"/>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:      <source protocol="rbd" name="vms/9d89b9fc-eda1-4801-8670-e3e48a9e04ae_disk">
Oct 10 23:59:51 np0005480824 nova_compute[260089]:        <host name="192.168.122.100" port="6789"/>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:      </source>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:      <auth username="openstack">
Oct 10 23:59:51 np0005480824 nova_compute[260089]:        <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:      </auth>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:      <target dev="vda" bus="virtio"/>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:    </disk>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:    <disk type="network" device="cdrom">
Oct 10 23:59:51 np0005480824 nova_compute[260089]:      <driver type="raw" cache="none"/>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:      <source protocol="rbd" name="vms/9d89b9fc-eda1-4801-8670-e3e48a9e04ae_disk.config">
Oct 10 23:59:51 np0005480824 nova_compute[260089]:        <host name="192.168.122.100" port="6789"/>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:      </source>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:      <auth username="openstack">
Oct 10 23:59:51 np0005480824 nova_compute[260089]:        <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:      </auth>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:      <target dev="sda" bus="sata"/>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:    </disk>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:    <disk type="network" device="disk">
Oct 10 23:59:51 np0005480824 nova_compute[260089]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:      <source protocol="rbd" name="volumes/volume-f6e39357-0e1b-4c7b-9343-f0d5e0741f06">
Oct 10 23:59:51 np0005480824 nova_compute[260089]:        <host name="192.168.122.100" port="6789"/>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:      </source>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:      <auth username="openstack">
Oct 10 23:59:51 np0005480824 nova_compute[260089]:        <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:      </auth>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:      <target dev="vdb" bus="virtio"/>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:      <serial>f6e39357-0e1b-4c7b-9343-f0d5e0741f06</serial>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:    </disk>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:    <interface type="ethernet">
Oct 10 23:59:51 np0005480824 nova_compute[260089]:      <mac address="fa:16:3e:aa:03:d2"/>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:      <model type="virtio"/>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:      <driver name="vhost" rx_queue_size="512"/>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:      <mtu size="1442"/>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:      <target dev="tap6b86c387-3e"/>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:    </interface>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:    <serial type="pty">
Oct 10 23:59:51 np0005480824 nova_compute[260089]:      <log file="/var/lib/nova/instances/9d89b9fc-eda1-4801-8670-e3e48a9e04ae/console.log" append="off"/>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:    </serial>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:    <video>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:      <model type="virtio"/>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:    </video>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:    <input type="tablet" bus="usb"/>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:    <rng model="virtio">
Oct 10 23:59:51 np0005480824 nova_compute[260089]:      <backend model="random">/dev/urandom</backend>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:    </rng>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root"/>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:    <controller type="usb" index="0"/>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:    <memballoon model="virtio">
Oct 10 23:59:51 np0005480824 nova_compute[260089]:      <stats period="10"/>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:    </memballoon>
Oct 10 23:59:51 np0005480824 nova_compute[260089]:  </devices>
Oct 10 23:59:51 np0005480824 nova_compute[260089]: </domain>
Oct 10 23:59:51 np0005480824 nova_compute[260089]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 10 23:59:51 np0005480824 nova_compute[260089]: 2025-10-11 03:59:51.164 2 DEBUG nova.compute.manager [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] Preparing to wait for external event network-vif-plugged-6b86c387-3e59-4e3b-a7e3-e1ddfc541c50 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 10 23:59:51 np0005480824 nova_compute[260089]: 2025-10-11 03:59:51.164 2 DEBUG oslo_concurrency.lockutils [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] Acquiring lock "9d89b9fc-eda1-4801-8670-e3e48a9e04ae-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:59:51 np0005480824 nova_compute[260089]: 2025-10-11 03:59:51.165 2 DEBUG oslo_concurrency.lockutils [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] Lock "9d89b9fc-eda1-4801-8670-e3e48a9e04ae-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:59:51 np0005480824 nova_compute[260089]: 2025-10-11 03:59:51.165 2 DEBUG oslo_concurrency.lockutils [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] Lock "9d89b9fc-eda1-4801-8670-e3e48a9e04ae-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:59:51 np0005480824 nova_compute[260089]: 2025-10-11 03:59:51.166 2 DEBUG nova.virt.libvirt.vif [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T03:59:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-instance-1498768136',display_name='tempest-instance-1498768136',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instance-1498768136',id=23,image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA6NEZ2tGoJNT+vvNnpP4L6gc6uAsBt40LTA8EQpPfSsAFWsYjpXOMQiWw7U5ChT+0BqjZWxp2ku4qdtk+iV8mf7DOgmUJEHoiCZuHPxkdkmWxuyoiARuOt4ilG0l2yHrA==',key_name='tempest-keypair-1444996657',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7e504d8715354886aaae057de71d2d5e',ramdisk_id='',reservation_id='r-jwaks7td',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-781394803',owner_user_name='tempest-VolumesBackupsTest-781394803-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T03:59:46Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='04ab08efaee14de7b56b2514c0187402',uuid=9d89b9fc-eda1-4801-8670-e3e48a9e04ae,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6b86c387-3e59-4e3b-a7e3-e1ddfc541c50", "address": "fa:16:3e:aa:03:d2", "network": {"id": "dfca432f-447a-432a-acc4-3a23e93eb8d6", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1081477901-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7e504d8715354886aaae057de71d2d5e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b86c387-3e", "ovs_interfaceid": "6b86c387-3e59-4e3b-a7e3-e1ddfc541c50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 10 23:59:51 np0005480824 nova_compute[260089]: 2025-10-11 03:59:51.167 2 DEBUG nova.network.os_vif_util [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] Converting VIF {"id": "6b86c387-3e59-4e3b-a7e3-e1ddfc541c50", "address": "fa:16:3e:aa:03:d2", "network": {"id": "dfca432f-447a-432a-acc4-3a23e93eb8d6", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1081477901-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7e504d8715354886aaae057de71d2d5e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b86c387-3e", "ovs_interfaceid": "6b86c387-3e59-4e3b-a7e3-e1ddfc541c50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 23:59:51 np0005480824 nova_compute[260089]: 2025-10-11 03:59:51.167 2 DEBUG nova.network.os_vif_util [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:aa:03:d2,bridge_name='br-int',has_traffic_filtering=True,id=6b86c387-3e59-4e3b-a7e3-e1ddfc541c50,network=Network(dfca432f-447a-432a-acc4-3a23e93eb8d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b86c387-3e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 23:59:51 np0005480824 nova_compute[260089]: 2025-10-11 03:59:51.168 2 DEBUG os_vif [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:aa:03:d2,bridge_name='br-int',has_traffic_filtering=True,id=6b86c387-3e59-4e3b-a7e3-e1ddfc541c50,network=Network(dfca432f-447a-432a-acc4-3a23e93eb8d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b86c387-3e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 10 23:59:51 np0005480824 nova_compute[260089]: 2025-10-11 03:59:51.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:59:51 np0005480824 nova_compute[260089]: 2025-10-11 03:59:51.170 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:59:51 np0005480824 nova_compute[260089]: 2025-10-11 03:59:51.171 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 10 23:59:51 np0005480824 nova_compute[260089]: 2025-10-11 03:59:51.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:59:51 np0005480824 nova_compute[260089]: 2025-10-11 03:59:51.177 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6b86c387-3e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:59:51 np0005480824 nova_compute[260089]: 2025-10-11 03:59:51.178 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6b86c387-3e, col_values=(('external_ids', {'iface-id': '6b86c387-3e59-4e3b-a7e3-e1ddfc541c50', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:aa:03:d2', 'vm-uuid': '9d89b9fc-eda1-4801-8670-e3e48a9e04ae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:59:51 np0005480824 nova_compute[260089]: 2025-10-11 03:59:51.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:59:51 np0005480824 NetworkManager[44969]: <info>  [1760155191.1823] manager: (tap6b86c387-3e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/115)
Oct 10 23:59:51 np0005480824 nova_compute[260089]: 2025-10-11 03:59:51.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 10 23:59:51 np0005480824 nova_compute[260089]: 2025-10-11 03:59:51.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:59:51 np0005480824 nova_compute[260089]: 2025-10-11 03:59:51.190 2 INFO os_vif [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:aa:03:d2,bridge_name='br-int',has_traffic_filtering=True,id=6b86c387-3e59-4e3b-a7e3-e1ddfc541c50,network=Network(dfca432f-447a-432a-acc4-3a23e93eb8d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b86c387-3e')#033[00m
Oct 10 23:59:51 np0005480824 nova_compute[260089]: 2025-10-11 03:59:51.271 2 DEBUG nova.virt.libvirt.driver [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 23:59:51 np0005480824 nova_compute[260089]: 2025-10-11 03:59:51.272 2 DEBUG nova.virt.libvirt.driver [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 23:59:51 np0005480824 nova_compute[260089]: 2025-10-11 03:59:51.273 2 DEBUG nova.virt.libvirt.driver [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 23:59:51 np0005480824 nova_compute[260089]: 2025-10-11 03:59:51.273 2 DEBUG nova.virt.libvirt.driver [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] No VIF found with MAC fa:16:3e:aa:03:d2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 10 23:59:51 np0005480824 nova_compute[260089]: 2025-10-11 03:59:51.275 2 INFO nova.virt.libvirt.driver [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] Using config drive#033[00m
Oct 10 23:59:51 np0005480824 nova_compute[260089]: 2025-10-11 03:59:51.308 2 DEBUG nova.storage.rbd_utils [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] rbd image 9d89b9fc-eda1-4801-8670-e3e48a9e04ae_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:59:51 np0005480824 nova_compute[260089]: 2025-10-11 03:59:51.895 2 INFO nova.virt.libvirt.driver [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] Creating config drive at /var/lib/nova/instances/9d89b9fc-eda1-4801-8670-e3e48a9e04ae/disk.config#033[00m
Oct 10 23:59:51 np0005480824 nova_compute[260089]: 2025-10-11 03:59:51.910 2 DEBUG oslo_concurrency.processutils [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9d89b9fc-eda1-4801-8670-e3e48a9e04ae/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwadzz5wn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:59:52 np0005480824 nova_compute[260089]: 2025-10-11 03:59:52.007 2 DEBUG nova.network.neutron [req-acdb6cdc-95bd-4ee8-b2c6-3b48623eb5e2 req-801b5174-aec2-441c-8183-8c560639c9bd 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] Updated VIF entry in instance network info cache for port 6b86c387-3e59-4e3b-a7e3-e1ddfc541c50. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 10 23:59:52 np0005480824 nova_compute[260089]: 2025-10-11 03:59:52.008 2 DEBUG nova.network.neutron [req-acdb6cdc-95bd-4ee8-b2c6-3b48623eb5e2 req-801b5174-aec2-441c-8183-8c560639c9bd 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] Updating instance_info_cache with network_info: [{"id": "6b86c387-3e59-4e3b-a7e3-e1ddfc541c50", "address": "fa:16:3e:aa:03:d2", "network": {"id": "dfca432f-447a-432a-acc4-3a23e93eb8d6", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1081477901-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7e504d8715354886aaae057de71d2d5e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b86c387-3e", "ovs_interfaceid": "6b86c387-3e59-4e3b-a7e3-e1ddfc541c50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:59:52 np0005480824 nova_compute[260089]: 2025-10-11 03:59:52.025 2 DEBUG oslo_concurrency.lockutils [req-acdb6cdc-95bd-4ee8-b2c6-3b48623eb5e2 req-801b5174-aec2-441c-8183-8c560639c9bd 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Releasing lock "refresh_cache-9d89b9fc-eda1-4801-8670-e3e48a9e04ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:59:52 np0005480824 nova_compute[260089]: 2025-10-11 03:59:52.068 2 DEBUG oslo_concurrency.processutils [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9d89b9fc-eda1-4801-8670-e3e48a9e04ae/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwadzz5wn" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:59:52 np0005480824 nova_compute[260089]: 2025-10-11 03:59:52.113 2 DEBUG nova.storage.rbd_utils [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] rbd image 9d89b9fc-eda1-4801-8670-e3e48a9e04ae_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 23:59:52 np0005480824 nova_compute[260089]: 2025-10-11 03:59:52.122 2 DEBUG oslo_concurrency.processutils [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9d89b9fc-eda1-4801-8670-e3e48a9e04ae/disk.config 9d89b9fc-eda1-4801-8670-e3e48a9e04ae_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 23:59:52 np0005480824 nova_compute[260089]: 2025-10-11 03:59:52.296 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:59:52 np0005480824 nova_compute[260089]: 2025-10-11 03:59:52.297 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct 10 23:59:52 np0005480824 nova_compute[260089]: 2025-10-11 03:59:52.312 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct 10 23:59:52 np0005480824 nova_compute[260089]: 2025-10-11 03:59:52.312 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 23:59:52 np0005480824 nova_compute[260089]: 2025-10-11 03:59:52.313 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct 10 23:59:52 np0005480824 nova_compute[260089]: 2025-10-11 03:59:52.317 2 DEBUG oslo_concurrency.processutils [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9d89b9fc-eda1-4801-8670-e3e48a9e04ae/disk.config 9d89b9fc-eda1-4801-8670-e3e48a9e04ae_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.195s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 23:59:52 np0005480824 nova_compute[260089]: 2025-10-11 03:59:52.317 2 INFO nova.virt.libvirt.driver [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] Deleting local config drive /var/lib/nova/instances/9d89b9fc-eda1-4801-8670-e3e48a9e04ae/disk.config because it was imported into RBD.#033[00m
Oct 10 23:59:52 np0005480824 kernel: tap6b86c387-3e: entered promiscuous mode
Oct 10 23:59:52 np0005480824 NetworkManager[44969]: <info>  [1760155192.3875] manager: (tap6b86c387-3e): new Tun device (/org/freedesktop/NetworkManager/Devices/116)
Oct 10 23:59:52 np0005480824 ovn_controller[152667]: 2025-10-11T03:59:52Z|00208|binding|INFO|Claiming lport 6b86c387-3e59-4e3b-a7e3-e1ddfc541c50 for this chassis.
Oct 10 23:59:52 np0005480824 ovn_controller[152667]: 2025-10-11T03:59:52Z|00209|binding|INFO|6b86c387-3e59-4e3b-a7e3-e1ddfc541c50: Claiming fa:16:3e:aa:03:d2 10.100.0.12
Oct 10 23:59:52 np0005480824 nova_compute[260089]: 2025-10-11 03:59:52.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:59:52 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:59:52.399 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:aa:03:d2 10.100.0.12'], port_security=['fa:16:3e:aa:03:d2 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '9d89b9fc-eda1-4801-8670-e3e48a9e04ae', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dfca432f-447a-432a-acc4-3a23e93eb8d6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7e504d8715354886aaae057de71d2d5e', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b1d6427b-83e7-4165-87fb-9e4a4a454ad5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bf22a879-98d1-4d61-afc5-85ac70ccc880, chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], logical_port=6b86c387-3e59-4e3b-a7e3-e1ddfc541c50) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 23:59:52 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:59:52.401 162245 INFO neutron.agent.ovn.metadata.agent [-] Port 6b86c387-3e59-4e3b-a7e3-e1ddfc541c50 in datapath dfca432f-447a-432a-acc4-3a23e93eb8d6 bound to our chassis#033[00m
Oct 10 23:59:52 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:59:52.404 162245 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network dfca432f-447a-432a-acc4-3a23e93eb8d6#033[00m
Oct 10 23:59:52 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:59:52.424 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[38768854-042d-4bcd-be57-1f0449065ee4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:59:52 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:59:52.425 162245 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapdfca432f-41 in ovnmeta-dfca432f-447a-432a-acc4-3a23e93eb8d6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 10 23:59:52 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:59:52.429 267859 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapdfca432f-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 10 23:59:52 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:59:52.429 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[8142c087-00e4-419e-a57a-189fb7720ea3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:59:52 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:59:52.431 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[19d44e62-cede-459f-a3c9-4dd825a6393b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:59:52 np0005480824 systemd-machined[215071]: New machine qemu-23-instance-00000017.
Oct 10 23:59:52 np0005480824 ovn_controller[152667]: 2025-10-11T03:59:52Z|00210|binding|INFO|Setting lport 6b86c387-3e59-4e3b-a7e3-e1ddfc541c50 ovn-installed in OVS
Oct 10 23:59:52 np0005480824 ovn_controller[152667]: 2025-10-11T03:59:52Z|00211|binding|INFO|Setting lport 6b86c387-3e59-4e3b-a7e3-e1ddfc541c50 up in Southbound
Oct 10 23:59:52 np0005480824 nova_compute[260089]: 2025-10-11 03:59:52.443 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:59:52 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:59:52.446 162666 DEBUG oslo.privsep.daemon [-] privsep: reply[e5f52a95-564b-4885-a7e6-50896a1f8399]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:59:52 np0005480824 systemd[1]: Started Virtual Machine qemu-23-instance-00000017.
Oct 10 23:59:52 np0005480824 nova_compute[260089]: 2025-10-11 03:59:52.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:59:52 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:59:52.476 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[4d1f3338-ed9f-4347-b4f1-9e18794f8d2f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:59:52 np0005480824 systemd-udevd[295059]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 23:59:52 np0005480824 NetworkManager[44969]: <info>  [1760155192.5114] device (tap6b86c387-3e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 10 23:59:52 np0005480824 NetworkManager[44969]: <info>  [1760155192.5128] device (tap6b86c387-3e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 10 23:59:52 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:59:52.530 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[10e68b34-5984-40d9-9bde-7630ef6c0cba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:59:52 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:59:52.539 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[2525628e-96d4-49e8-ab5a-fe9b8bea7560]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:59:52 np0005480824 NetworkManager[44969]: <info>  [1760155192.5455] manager: (tapdfca432f-40): new Veth device (/org/freedesktop/NetworkManager/Devices/117)
Oct 10 23:59:52 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:59:52.602 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[2e3f2c70-e956-4a2f-82c0-1955e46e9f4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:59:52 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:59:52.606 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[0a49670e-3d06-4c4c-be23-0d50a301c357]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:59:52 np0005480824 NetworkManager[44969]: <info>  [1760155192.6370] device (tapdfca432f-40): carrier: link connected
Oct 10 23:59:52 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:59:52.646 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[1531a8a8-2b99-48d8-aed2-2e5cff629e4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:59:52 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:59:52.668 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[be22a11e-99e0-4d52-a7d8-853ccc13ef02]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdfca432f-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a9:16:e3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 74], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 466029, 'reachable_time': 37685, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295096, 'error': None, 'target': 'ovnmeta-dfca432f-447a-432a-acc4-3a23e93eb8d6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:59:52 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:59:52.702 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[dc127f21-6ebf-4880-81e1-34b0ce9b46a9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea9:16e3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 466029, 'tstamp': 466029}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295101, 'error': None, 'target': 'ovnmeta-dfca432f-447a-432a-acc4-3a23e93eb8d6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:59:52 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:59:52.728 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[78461b19-10c7-4d11-88c4-29e2e3adb382]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdfca432f-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a9:16:e3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 74], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 466029, 'reachable_time': 37685, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 295104, 'error': None, 'target': 'ovnmeta-dfca432f-447a-432a-acc4-3a23e93eb8d6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:59:52 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:59:52.781 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[354f23d8-cb70-4421-9006-24729e654d94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:59:52 np0005480824 nova_compute[260089]: 2025-10-11 03:59:52.799 2 DEBUG nova.compute.manager [req-35549ad8-2198-4f63-858a-32cf14cdbe80 req-91bf5d0d-01e6-4644-b5ee-d39bb851ce69 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] Received event network-vif-plugged-6b86c387-3e59-4e3b-a7e3-e1ddfc541c50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:59:52 np0005480824 nova_compute[260089]: 2025-10-11 03:59:52.800 2 DEBUG oslo_concurrency.lockutils [req-35549ad8-2198-4f63-858a-32cf14cdbe80 req-91bf5d0d-01e6-4644-b5ee-d39bb851ce69 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "9d89b9fc-eda1-4801-8670-e3e48a9e04ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:59:52 np0005480824 nova_compute[260089]: 2025-10-11 03:59:52.801 2 DEBUG oslo_concurrency.lockutils [req-35549ad8-2198-4f63-858a-32cf14cdbe80 req-91bf5d0d-01e6-4644-b5ee-d39bb851ce69 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "9d89b9fc-eda1-4801-8670-e3e48a9e04ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:59:52 np0005480824 nova_compute[260089]: 2025-10-11 03:59:52.801 2 DEBUG oslo_concurrency.lockutils [req-35549ad8-2198-4f63-858a-32cf14cdbe80 req-91bf5d0d-01e6-4644-b5ee-d39bb851ce69 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "9d89b9fc-eda1-4801-8670-e3e48a9e04ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:59:52 np0005480824 nova_compute[260089]: 2025-10-11 03:59:52.802 2 DEBUG nova.compute.manager [req-35549ad8-2198-4f63-858a-32cf14cdbe80 req-91bf5d0d-01e6-4644-b5ee-d39bb851ce69 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] Processing event network-vif-plugged-6b86c387-3e59-4e3b-a7e3-e1ddfc541c50 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 10 23:59:52 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:59:52.883 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[62e755a2-ddf4-42e8-a066-e0ef66a7ed88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:59:52 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:59:52.885 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdfca432f-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:59:52 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:59:52.886 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 10 23:59:52 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:59:52.886 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdfca432f-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:59:52 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1619: 321 pgs: 321 active+clean; 2.3 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 2.9 MiB/s rd, 4.4 MiB/s wr, 143 op/s
Oct 10 23:59:52 np0005480824 NetworkManager[44969]: <info>  [1760155192.9265] manager: (tapdfca432f-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/118)
Oct 10 23:59:52 np0005480824 kernel: tapdfca432f-40: entered promiscuous mode
Oct 10 23:59:52 np0005480824 nova_compute[260089]: 2025-10-11 03:59:52.930 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:59:52 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:59:52.930 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdfca432f-40, col_values=(('external_ids', {'iface-id': '9d9f0fc8-effb-48c1-a575-69d14d6b75f4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 23:59:52 np0005480824 ovn_controller[152667]: 2025-10-11T03:59:52Z|00212|binding|INFO|Releasing lport 9d9f0fc8-effb-48c1-a575-69d14d6b75f4 from this chassis (sb_readonly=0)
Oct 10 23:59:52 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:59:52.934 162245 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/dfca432f-447a-432a-acc4-3a23e93eb8d6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/dfca432f-447a-432a-acc4-3a23e93eb8d6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 10 23:59:52 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:59:52.936 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[f0050841-cd97-448a-97b9-79599a3ea569]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 23:59:52 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:59:52.938 162245 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 10 23:59:52 np0005480824 ovn_metadata_agent[162240]: global
Oct 10 23:59:52 np0005480824 ovn_metadata_agent[162240]:    log         /dev/log local0 debug
Oct 10 23:59:52 np0005480824 ovn_metadata_agent[162240]:    log-tag     haproxy-metadata-proxy-dfca432f-447a-432a-acc4-3a23e93eb8d6
Oct 10 23:59:52 np0005480824 ovn_metadata_agent[162240]:    user        root
Oct 10 23:59:52 np0005480824 ovn_metadata_agent[162240]:    group       root
Oct 10 23:59:52 np0005480824 ovn_metadata_agent[162240]:    maxconn     1024
Oct 10 23:59:52 np0005480824 ovn_metadata_agent[162240]:    pidfile     /var/lib/neutron/external/pids/dfca432f-447a-432a-acc4-3a23e93eb8d6.pid.haproxy
Oct 10 23:59:52 np0005480824 ovn_metadata_agent[162240]:    daemon
Oct 10 23:59:52 np0005480824 ovn_metadata_agent[162240]: 
Oct 10 23:59:52 np0005480824 ovn_metadata_agent[162240]: defaults
Oct 10 23:59:52 np0005480824 ovn_metadata_agent[162240]:    log global
Oct 10 23:59:52 np0005480824 ovn_metadata_agent[162240]:    mode http
Oct 10 23:59:52 np0005480824 ovn_metadata_agent[162240]:    option httplog
Oct 10 23:59:52 np0005480824 ovn_metadata_agent[162240]:    option dontlognull
Oct 10 23:59:52 np0005480824 ovn_metadata_agent[162240]:    option http-server-close
Oct 10 23:59:52 np0005480824 ovn_metadata_agent[162240]:    option forwardfor
Oct 10 23:59:52 np0005480824 ovn_metadata_agent[162240]:    retries                 3
Oct 10 23:59:52 np0005480824 ovn_metadata_agent[162240]:    timeout http-request    30s
Oct 10 23:59:52 np0005480824 ovn_metadata_agent[162240]:    timeout connect         30s
Oct 10 23:59:52 np0005480824 ovn_metadata_agent[162240]:    timeout client          32s
Oct 10 23:59:52 np0005480824 ovn_metadata_agent[162240]:    timeout server          32s
Oct 10 23:59:52 np0005480824 ovn_metadata_agent[162240]:    timeout http-keep-alive 30s
Oct 10 23:59:52 np0005480824 ovn_metadata_agent[162240]: 
Oct 10 23:59:52 np0005480824 ovn_metadata_agent[162240]: 
Oct 10 23:59:52 np0005480824 ovn_metadata_agent[162240]: listen listener
Oct 10 23:59:52 np0005480824 ovn_metadata_agent[162240]:    bind 169.254.169.254:80
Oct 10 23:59:52 np0005480824 ovn_metadata_agent[162240]:    server metadata /var/lib/neutron/metadata_proxy
Oct 10 23:59:52 np0005480824 ovn_metadata_agent[162240]:    http-request add-header X-OVN-Network-ID dfca432f-447a-432a-acc4-3a23e93eb8d6
Oct 10 23:59:52 np0005480824 ovn_metadata_agent[162240]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 10 23:59:52 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:59:52.939 162245 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-dfca432f-447a-432a-acc4-3a23e93eb8d6', 'env', 'PROCESS_TAG=haproxy-dfca432f-447a-432a-acc4-3a23e93eb8d6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/dfca432f-447a-432a-acc4-3a23e93eb8d6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 10 23:59:52 np0005480824 nova_compute[260089]: 2025-10-11 03:59:52.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:59:53 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:59:53 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:59:53 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 10 23:59:53 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:59:53 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 10 23:59:53 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:59:53 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev fa90b8fd-7d45-4fec-92d3-9ea7aafe78da does not exist
Oct 10 23:59:53 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 5a0fdd74-d52b-4e25-91ca-99c44ddce189 does not exist
Oct 10 23:59:53 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev a3718894-5adc-4743-b09b-8204368f8d3b does not exist
Oct 10 23:59:53 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 10 23:59:53 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 23:59:53 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 10 23:59:53 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:59:53 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 10 23:59:53 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 23:59:53 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 23:59:53 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 10 23:59:53 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 23:59:53 np0005480824 podman[295239]: 2025-10-11 03:59:53.389903389 +0000 UTC m=+0.072134930 container create c29dcc7326c94912488612d05e8f64d125525905659ef4a73ce4518d9c310550 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dfca432f-447a-432a-acc4-3a23e93eb8d6, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2)
Oct 10 23:59:53 np0005480824 systemd[1]: Started libpod-conmon-c29dcc7326c94912488612d05e8f64d125525905659ef4a73ce4518d9c310550.scope.
Oct 10 23:59:53 np0005480824 podman[295239]: 2025-10-11 03:59:53.358331223 +0000 UTC m=+0.040562784 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 10 23:59:53 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:59:53 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f6b7bab3771db020ac393d0f6bdc1d1e7d098ee22406c55794cc2cff7dcfc19/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 10 23:59:53 np0005480824 podman[295239]: 2025-10-11 03:59:53.501443375 +0000 UTC m=+0.183674946 container init c29dcc7326c94912488612d05e8f64d125525905659ef4a73ce4518d9c310550 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dfca432f-447a-432a-acc4-3a23e93eb8d6, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009)
Oct 10 23:59:53 np0005480824 podman[295239]: 2025-10-11 03:59:53.510964984 +0000 UTC m=+0.193196525 container start c29dcc7326c94912488612d05e8f64d125525905659ef4a73ce4518d9c310550 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dfca432f-447a-432a-acc4-3a23e93eb8d6, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 10 23:59:53 np0005480824 neutron-haproxy-ovnmeta-dfca432f-447a-432a-acc4-3a23e93eb8d6[295326]: [NOTICE]   (295331) : New worker (295333) forked
Oct 10 23:59:53 np0005480824 neutron-haproxy-ovnmeta-dfca432f-447a-432a-acc4-3a23e93eb8d6[295326]: [NOTICE]   (295331) : Loading success.
Oct 10 23:59:53 np0005480824 podman[295380]: 2025-10-11 03:59:53.831800855 +0000 UTC m=+0.056598523 container create 25f69347d7642a270dcb1ac1918d036a96f71d178d411318b1a3f6395f3170d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_hermann, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 23:59:53 np0005480824 systemd[1]: Started libpod-conmon-25f69347d7642a270dcb1ac1918d036a96f71d178d411318b1a3f6395f3170d9.scope.
Oct 10 23:59:53 np0005480824 podman[295380]: 2025-10-11 03:59:53.813430583 +0000 UTC m=+0.038228271 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:59:53 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:59:53 np0005480824 podman[295380]: 2025-10-11 03:59:53.931750294 +0000 UTC m=+0.156547972 container init 25f69347d7642a270dcb1ac1918d036a96f71d178d411318b1a3f6395f3170d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_hermann, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 10 23:59:53 np0005480824 podman[295380]: 2025-10-11 03:59:53.939502393 +0000 UTC m=+0.164300061 container start 25f69347d7642a270dcb1ac1918d036a96f71d178d411318b1a3f6395f3170d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_hermann, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:59:53 np0005480824 podman[295380]: 2025-10-11 03:59:53.942561613 +0000 UTC m=+0.167359281 container attach 25f69347d7642a270dcb1ac1918d036a96f71d178d411318b1a3f6395f3170d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_hermann, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:59:53 np0005480824 silly_hermann[295397]: 167 167
Oct 10 23:59:53 np0005480824 systemd[1]: libpod-25f69347d7642a270dcb1ac1918d036a96f71d178d411318b1a3f6395f3170d9.scope: Deactivated successfully.
Oct 10 23:59:53 np0005480824 conmon[295397]: conmon 25f69347d7642a270dcb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-25f69347d7642a270dcb1ac1918d036a96f71d178d411318b1a3f6395f3170d9.scope/container/memory.events
Oct 10 23:59:53 np0005480824 podman[295380]: 2025-10-11 03:59:53.952455951 +0000 UTC m=+0.177253679 container died 25f69347d7642a270dcb1ac1918d036a96f71d178d411318b1a3f6395f3170d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 10 23:59:53 np0005480824 nova_compute[260089]: 2025-10-11 03:59:53.977 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760155193.97637, 9d89b9fc-eda1-4801-8670-e3e48a9e04ae => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:59:53 np0005480824 nova_compute[260089]: 2025-10-11 03:59:53.977 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] VM Started (Lifecycle Event)#033[00m
Oct 10 23:59:53 np0005480824 systemd[1]: var-lib-containers-storage-overlay-90103f02beb14e3c04a1ceb840694c56f6965ed289593c8b0dae5a658683d4b9-merged.mount: Deactivated successfully.
Oct 10 23:59:53 np0005480824 nova_compute[260089]: 2025-10-11 03:59:53.980 2 DEBUG nova.compute.manager [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 10 23:59:53 np0005480824 nova_compute[260089]: 2025-10-11 03:59:53.986 2 DEBUG nova.virt.libvirt.driver [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 10 23:59:53 np0005480824 nova_compute[260089]: 2025-10-11 03:59:53.993 2 INFO nova.virt.libvirt.driver [-] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] Instance spawned successfully.#033[00m
Oct 10 23:59:53 np0005480824 nova_compute[260089]: 2025-10-11 03:59:53.995 2 DEBUG nova.virt.libvirt.driver [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 10 23:59:53 np0005480824 podman[295380]: 2025-10-11 03:59:53.99674451 +0000 UTC m=+0.221542178 container remove 25f69347d7642a270dcb1ac1918d036a96f71d178d411318b1a3f6395f3170d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_hermann, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:59:53 np0005480824 nova_compute[260089]: 2025-10-11 03:59:53.997 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:59:54 np0005480824 nova_compute[260089]: 2025-10-11 03:59:54.003 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 10 23:59:54 np0005480824 systemd[1]: libpod-conmon-25f69347d7642a270dcb1ac1918d036a96f71d178d411318b1a3f6395f3170d9.scope: Deactivated successfully.
Oct 10 23:59:54 np0005480824 nova_compute[260089]: 2025-10-11 03:59:54.021 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 10 23:59:54 np0005480824 nova_compute[260089]: 2025-10-11 03:59:54.023 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760155193.9808955, 9d89b9fc-eda1-4801-8670-e3e48a9e04ae => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:59:54 np0005480824 nova_compute[260089]: 2025-10-11 03:59:54.023 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] VM Paused (Lifecycle Event)#033[00m
Oct 10 23:59:54 np0005480824 nova_compute[260089]: 2025-10-11 03:59:54.048 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:59:54 np0005480824 nova_compute[260089]: 2025-10-11 03:59:54.059 2 DEBUG nova.virt.libvirt.driver [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:59:54 np0005480824 nova_compute[260089]: 2025-10-11 03:59:54.060 2 DEBUG nova.virt.libvirt.driver [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:59:54 np0005480824 nova_compute[260089]: 2025-10-11 03:59:54.060 2 DEBUG nova.virt.libvirt.driver [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:59:54 np0005480824 nova_compute[260089]: 2025-10-11 03:59:54.061 2 DEBUG nova.virt.libvirt.driver [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:59:54 np0005480824 nova_compute[260089]: 2025-10-11 03:59:54.062 2 DEBUG nova.virt.libvirt.driver [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:59:54 np0005480824 nova_compute[260089]: 2025-10-11 03:59:54.063 2 DEBUG nova.virt.libvirt.driver [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 23:59:54 np0005480824 nova_compute[260089]: 2025-10-11 03:59:54.070 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760155193.9856517, 9d89b9fc-eda1-4801-8670-e3e48a9e04ae => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 23:59:54 np0005480824 nova_compute[260089]: 2025-10-11 03:59:54.070 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] VM Resumed (Lifecycle Event)#033[00m
Oct 10 23:59:54 np0005480824 nova_compute[260089]: 2025-10-11 03:59:54.098 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:59:54 np0005480824 nova_compute[260089]: 2025-10-11 03:59:54.115 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 10 23:59:54 np0005480824 nova_compute[260089]: 2025-10-11 03:59:54.125 2 INFO nova.compute.manager [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] Took 6.17 seconds to spawn the instance on the hypervisor.#033[00m
Oct 10 23:59:54 np0005480824 nova_compute[260089]: 2025-10-11 03:59:54.125 2 DEBUG nova.compute.manager [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 23:59:54 np0005480824 nova_compute[260089]: 2025-10-11 03:59:54.143 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 10 23:59:54 np0005480824 nova_compute[260089]: 2025-10-11 03:59:54.196 2 INFO nova.compute.manager [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] Took 8.52 seconds to build instance.#033[00m
Oct 10 23:59:54 np0005480824 podman[295421]: 2025-10-11 03:59:54.200700342 +0000 UTC m=+0.053406790 container create d4345ff4317f0592e34ce063e13e47b0ac62037937c4ace835bdb4feaffcd64e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_shirley, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 10 23:59:54 np0005480824 nova_compute[260089]: 2025-10-11 03:59:54.212 2 DEBUG oslo_concurrency.lockutils [None req-61b4cbff-144a-42b3-81c1-a117482b0808 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] Lock "9d89b9fc-eda1-4801-8670-e3e48a9e04ae" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:59:54 np0005480824 systemd[1]: Started libpod-conmon-d4345ff4317f0592e34ce063e13e47b0ac62037937c4ace835bdb4feaffcd64e.scope.
Oct 10 23:59:54 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:59:54 np0005480824 podman[295421]: 2025-10-11 03:59:54.176181598 +0000 UTC m=+0.028888076 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:59:54 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6a4f86470795bcb905e88ead30462a57257d36593aa8567bc7eac05a26bff01/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:59:54 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6a4f86470795bcb905e88ead30462a57257d36593aa8567bc7eac05a26bff01/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:59:54 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6a4f86470795bcb905e88ead30462a57257d36593aa8567bc7eac05a26bff01/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:59:54 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6a4f86470795bcb905e88ead30462a57257d36593aa8567bc7eac05a26bff01/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:59:54 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6a4f86470795bcb905e88ead30462a57257d36593aa8567bc7eac05a26bff01/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 23:59:54 np0005480824 podman[295421]: 2025-10-11 03:59:54.326538597 +0000 UTC m=+0.179245065 container init d4345ff4317f0592e34ce063e13e47b0ac62037937c4ace835bdb4feaffcd64e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_shirley, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:59:54 np0005480824 podman[295421]: 2025-10-11 03:59:54.34143919 +0000 UTC m=+0.194145638 container start d4345ff4317f0592e34ce063e13e47b0ac62037937c4ace835bdb4feaffcd64e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_shirley, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 10 23:59:54 np0005480824 podman[295421]: 2025-10-11 03:59:54.345057883 +0000 UTC m=+0.197764351 container attach d4345ff4317f0592e34ce063e13e47b0ac62037937c4ace835bdb4feaffcd64e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_shirley, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:59:54 np0005480824 nova_compute[260089]: 2025-10-11 03:59:54.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:59:54 np0005480824 nova_compute[260089]: 2025-10-11 03:59:54.899 2 DEBUG nova.compute.manager [req-c23e03b7-43cb-494e-aaf9-2b0aa676057c req-3f5090fb-58a5-4a26-9747-7a6a28677a73 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] Received event network-vif-plugged-6b86c387-3e59-4e3b-a7e3-e1ddfc541c50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:59:54 np0005480824 nova_compute[260089]: 2025-10-11 03:59:54.899 2 DEBUG oslo_concurrency.lockutils [req-c23e03b7-43cb-494e-aaf9-2b0aa676057c req-3f5090fb-58a5-4a26-9747-7a6a28677a73 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "9d89b9fc-eda1-4801-8670-e3e48a9e04ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 23:59:54 np0005480824 nova_compute[260089]: 2025-10-11 03:59:54.899 2 DEBUG oslo_concurrency.lockutils [req-c23e03b7-43cb-494e-aaf9-2b0aa676057c req-3f5090fb-58a5-4a26-9747-7a6a28677a73 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "9d89b9fc-eda1-4801-8670-e3e48a9e04ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 23:59:54 np0005480824 nova_compute[260089]: 2025-10-11 03:59:54.899 2 DEBUG oslo_concurrency.lockutils [req-c23e03b7-43cb-494e-aaf9-2b0aa676057c req-3f5090fb-58a5-4a26-9747-7a6a28677a73 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "9d89b9fc-eda1-4801-8670-e3e48a9e04ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 23:59:54 np0005480824 nova_compute[260089]: 2025-10-11 03:59:54.899 2 DEBUG nova.compute.manager [req-c23e03b7-43cb-494e-aaf9-2b0aa676057c req-3f5090fb-58a5-4a26-9747-7a6a28677a73 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] No waiting events found dispatching network-vif-plugged-6b86c387-3e59-4e3b-a7e3-e1ddfc541c50 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 10 23:59:54 np0005480824 nova_compute[260089]: 2025-10-11 03:59:54.900 2 WARNING nova.compute.manager [req-c23e03b7-43cb-494e-aaf9-2b0aa676057c req-3f5090fb-58a5-4a26-9747-7a6a28677a73 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] Received unexpected event network-vif-plugged-6b86c387-3e59-4e3b-a7e3-e1ddfc541c50 for instance with vm_state active and task_state None.#033[00m
Oct 10 23:59:54 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1620: 321 pgs: 321 active+clean; 2.3 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 110 op/s
Oct 10 23:59:55 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 23:59:55 np0005480824 youthful_shirley[295438]: --> passed data devices: 0 physical, 3 LVM
Oct 10 23:59:55 np0005480824 youthful_shirley[295438]: --> relative data size: 1.0
Oct 10 23:59:55 np0005480824 youthful_shirley[295438]: --> All data devices are unavailable
Oct 10 23:59:55 np0005480824 systemd[1]: libpod-d4345ff4317f0592e34ce063e13e47b0ac62037937c4ace835bdb4feaffcd64e.scope: Deactivated successfully.
Oct 10 23:59:55 np0005480824 systemd[1]: libpod-d4345ff4317f0592e34ce063e13e47b0ac62037937c4ace835bdb4feaffcd64e.scope: Consumed 1.221s CPU time.
Oct 10 23:59:55 np0005480824 conmon[295438]: conmon d4345ff4317f0592e34c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d4345ff4317f0592e34ce063e13e47b0ac62037937c4ace835bdb4feaffcd64e.scope/container/memory.events
Oct 10 23:59:55 np0005480824 podman[295421]: 2025-10-11 03:59:55.684094379 +0000 UTC m=+1.536800827 container died d4345ff4317f0592e34ce063e13e47b0ac62037937c4ace835bdb4feaffcd64e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_shirley, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:59:55 np0005480824 ceph-osd[90443]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Oct 10 23:59:55 np0005480824 systemd[1]: var-lib-containers-storage-overlay-d6a4f86470795bcb905e88ead30462a57257d36593aa8567bc7eac05a26bff01-merged.mount: Deactivated successfully.
Oct 10 23:59:55 np0005480824 podman[295421]: 2025-10-11 03:59:55.809920484 +0000 UTC m=+1.662626932 container remove d4345ff4317f0592e34ce063e13e47b0ac62037937c4ace835bdb4feaffcd64e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:59:55 np0005480824 systemd[1]: libpod-conmon-d4345ff4317f0592e34ce063e13e47b0ac62037937c4ace835bdb4feaffcd64e.scope: Deactivated successfully.
Oct 10 23:59:55 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:59:55.961 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:30:f4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'fe:89:7c:57:3f:71'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 23:59:55 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 03:59:55.962 162245 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 10 23:59:55 np0005480824 nova_compute[260089]: 2025-10-11 03:59:55.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:59:56 np0005480824 nova_compute[260089]: 2025-10-11 03:59:56.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:59:56 np0005480824 podman[295619]: 2025-10-11 03:59:56.657866631 +0000 UTC m=+0.044032344 container create caa25a60af473faee64769aee1c2f0779dcbd6ef424b98357df704df04fac281 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_moore, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 10 23:59:56 np0005480824 systemd[1]: Started libpod-conmon-caa25a60af473faee64769aee1c2f0779dcbd6ef424b98357df704df04fac281.scope.
Oct 10 23:59:56 np0005480824 podman[295619]: 2025-10-11 03:59:56.641393942 +0000 UTC m=+0.027559675 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:59:56 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:59:56 np0005480824 podman[295619]: 2025-10-11 03:59:56.785914227 +0000 UTC m=+0.172079970 container init caa25a60af473faee64769aee1c2f0779dcbd6ef424b98357df704df04fac281 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_moore, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:59:56 np0005480824 podman[295619]: 2025-10-11 03:59:56.795558388 +0000 UTC m=+0.181724101 container start caa25a60af473faee64769aee1c2f0779dcbd6ef424b98357df704df04fac281 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_moore, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:59:56 np0005480824 podman[295619]: 2025-10-11 03:59:56.798874245 +0000 UTC m=+0.185039958 container attach caa25a60af473faee64769aee1c2f0779dcbd6ef424b98357df704df04fac281 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:59:56 np0005480824 recursing_moore[295634]: 167 167
Oct 10 23:59:56 np0005480824 systemd[1]: libpod-caa25a60af473faee64769aee1c2f0779dcbd6ef424b98357df704df04fac281.scope: Deactivated successfully.
Oct 10 23:59:56 np0005480824 podman[295619]: 2025-10-11 03:59:56.804885773 +0000 UTC m=+0.191051496 container died caa25a60af473faee64769aee1c2f0779dcbd6ef424b98357df704df04fac281 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:59:56 np0005480824 systemd[1]: var-lib-containers-storage-overlay-64a32a94a78d78eae155e9e63fc1e9c8329f3e301d40c8ac4d29f792a1061adb-merged.mount: Deactivated successfully.
Oct 10 23:59:56 np0005480824 podman[295619]: 2025-10-11 03:59:56.856851428 +0000 UTC m=+0.243017141 container remove caa25a60af473faee64769aee1c2f0779dcbd6ef424b98357df704df04fac281 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:59:56 np0005480824 systemd[1]: libpod-conmon-caa25a60af473faee64769aee1c2f0779dcbd6ef424b98357df704df04fac281.scope: Deactivated successfully.
Oct 10 23:59:56 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1621: 321 pgs: 321 active+clean; 2.3 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 2.7 MiB/s rd, 4.6 MiB/s wr, 160 op/s
Oct 10 23:59:56 np0005480824 nova_compute[260089]: 2025-10-11 03:59:56.985 2 DEBUG nova.compute.manager [req-9499a3b0-e73a-4d84-90a0-0c23513bc93b req-d42e7d58-4cde-49b6-b814-3c378b14dd45 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] Received event network-changed-6b86c387-3e59-4e3b-a7e3-e1ddfc541c50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 23:59:56 np0005480824 nova_compute[260089]: 2025-10-11 03:59:56.987 2 DEBUG nova.compute.manager [req-9499a3b0-e73a-4d84-90a0-0c23513bc93b req-d42e7d58-4cde-49b6-b814-3c378b14dd45 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] Refreshing instance network info cache due to event network-changed-6b86c387-3e59-4e3b-a7e3-e1ddfc541c50. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 10 23:59:56 np0005480824 nova_compute[260089]: 2025-10-11 03:59:56.987 2 DEBUG oslo_concurrency.lockutils [req-9499a3b0-e73a-4d84-90a0-0c23513bc93b req-d42e7d58-4cde-49b6-b814-3c378b14dd45 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "refresh_cache-9d89b9fc-eda1-4801-8670-e3e48a9e04ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 23:59:56 np0005480824 nova_compute[260089]: 2025-10-11 03:59:56.987 2 DEBUG oslo_concurrency.lockutils [req-9499a3b0-e73a-4d84-90a0-0c23513bc93b req-d42e7d58-4cde-49b6-b814-3c378b14dd45 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquired lock "refresh_cache-9d89b9fc-eda1-4801-8670-e3e48a9e04ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 23:59:56 np0005480824 nova_compute[260089]: 2025-10-11 03:59:56.987 2 DEBUG nova.network.neutron [req-9499a3b0-e73a-4d84-90a0-0c23513bc93b req-d42e7d58-4cde-49b6-b814-3c378b14dd45 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] Refreshing network info cache for port 6b86c387-3e59-4e3b-a7e3-e1ddfc541c50 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 10 23:59:57 np0005480824 podman[295656]: 2025-10-11 03:59:57.085440468 +0000 UTC m=+0.055502778 container create 5e6c50c245a562c8241f6241d56fe671f115430941d33689b94a8f5bad1a150d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 10 23:59:57 np0005480824 systemd[1]: Started libpod-conmon-5e6c50c245a562c8241f6241d56fe671f115430941d33689b94a8f5bad1a150d.scope.
Oct 10 23:59:57 np0005480824 podman[295656]: 2025-10-11 03:59:57.057454544 +0000 UTC m=+0.027516934 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:59:57 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:59:57 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e85e5ff4776784a564f5f2abcd9be14d309382c0d7030e9a394d82c545197c12/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:59:57 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e85e5ff4776784a564f5f2abcd9be14d309382c0d7030e9a394d82c545197c12/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:59:57 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e85e5ff4776784a564f5f2abcd9be14d309382c0d7030e9a394d82c545197c12/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:59:57 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e85e5ff4776784a564f5f2abcd9be14d309382c0d7030e9a394d82c545197c12/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:59:57 np0005480824 podman[295656]: 2025-10-11 03:59:57.192324487 +0000 UTC m=+0.162386807 container init 5e6c50c245a562c8241f6241d56fe671f115430941d33689b94a8f5bad1a150d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_haslett, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 23:59:57 np0005480824 podman[295656]: 2025-10-11 03:59:57.201309373 +0000 UTC m=+0.171371693 container start 5e6c50c245a562c8241f6241d56fe671f115430941d33689b94a8f5bad1a150d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:59:57 np0005480824 podman[295656]: 2025-10-11 03:59:57.211637711 +0000 UTC m=+0.181700041 container attach 5e6c50c245a562c8241f6241d56fe671f115430941d33689b94a8f5bad1a150d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_haslett, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 10 23:59:57 np0005480824 ovn_controller[152667]: 2025-10-11T03:59:57Z|00050|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:91:5e:e0 10.100.0.3
Oct 10 23:59:57 np0005480824 ovn_controller[152667]: 2025-10-11T03:59:57Z|00051|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:91:5e:e0 10.100.0.3
Oct 10 23:59:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:59:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:59:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:59:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:59:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 10 23:59:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 10 23:59:58 np0005480824 competent_haslett[295673]: {
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:    "0": [
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:        {
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:            "devices": [
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:                "/dev/loop3"
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:            ],
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:            "lv_name": "ceph_lv0",
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:            "lv_size": "21470642176",
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0d82ce-20ea-470d-959e-f67202028a60,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:            "lv_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:            "name": "ceph_lv0",
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:            "tags": {
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:                "ceph.block_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:                "ceph.cluster_name": "ceph",
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:                "ceph.crush_device_class": "",
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:                "ceph.encrypted": "0",
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:                "ceph.osd_fsid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:                "ceph.osd_id": "0",
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:                "ceph.type": "block",
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:                "ceph.vdo": "0"
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:            },
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:            "type": "block",
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:            "vg_name": "ceph_vg0"
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:        }
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:    ],
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:    "1": [
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:        {
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:            "devices": [
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:                "/dev/loop4"
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:            ],
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:            "lv_name": "ceph_lv1",
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:            "lv_size": "21470642176",
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6875119e-c210-4ad1-aca9-6a8084a5ecc8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:            "lv_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:            "name": "ceph_lv1",
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:            "tags": {
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:                "ceph.block_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:                "ceph.cluster_name": "ceph",
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:                "ceph.crush_device_class": "",
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:                "ceph.encrypted": "0",
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:                "ceph.osd_fsid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:                "ceph.osd_id": "1",
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:                "ceph.type": "block",
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:                "ceph.vdo": "0"
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:            },
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:            "type": "block",
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:            "vg_name": "ceph_vg1"
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:        }
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:    ],
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:    "2": [
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:        {
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:            "devices": [
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:                "/dev/loop5"
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:            ],
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:            "lv_name": "ceph_lv2",
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:            "lv_size": "21470642176",
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e86945e8-6909-4584-9098-cee0dfe9add4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:            "lv_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:            "name": "ceph_lv2",
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:            "tags": {
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:                "ceph.block_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:                "ceph.cephx_lockbox_secret": "",
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:                "ceph.cluster_name": "ceph",
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:                "ceph.crush_device_class": "",
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:                "ceph.encrypted": "0",
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:                "ceph.osd_fsid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:                "ceph.osd_id": "2",
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:                "ceph.type": "block",
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:                "ceph.vdo": "0"
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:            },
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:            "type": "block",
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:            "vg_name": "ceph_vg2"
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:        }
Oct 10 23:59:58 np0005480824 competent_haslett[295673]:    ]
Oct 10 23:59:58 np0005480824 competent_haslett[295673]: }
Oct 10 23:59:58 np0005480824 systemd[1]: libpod-5e6c50c245a562c8241f6241d56fe671f115430941d33689b94a8f5bad1a150d.scope: Deactivated successfully.
Oct 10 23:59:58 np0005480824 podman[295682]: 2025-10-11 03:59:58.143446178 +0000 UTC m=+0.036333367 container died 5e6c50c245a562c8241f6241d56fe671f115430941d33689b94a8f5bad1a150d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:59:58 np0005480824 systemd[1]: var-lib-containers-storage-overlay-e85e5ff4776784a564f5f2abcd9be14d309382c0d7030e9a394d82c545197c12-merged.mount: Deactivated successfully.
Oct 10 23:59:58 np0005480824 podman[295682]: 2025-10-11 03:59:58.221175306 +0000 UTC m=+0.114062475 container remove 5e6c50c245a562c8241f6241d56fe671f115430941d33689b94a8f5bad1a150d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 10 23:59:58 np0005480824 systemd[1]: libpod-conmon-5e6c50c245a562c8241f6241d56fe671f115430941d33689b94a8f5bad1a150d.scope: Deactivated successfully.
Oct 10 23:59:58 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1622: 321 pgs: 321 active+clean; 2.4 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 4.2 MiB/s rd, 13 MiB/s wr, 280 op/s
Oct 10 23:59:59 np0005480824 nova_compute[260089]: 2025-10-11 03:59:59.009 2 DEBUG nova.network.neutron [req-9499a3b0-e73a-4d84-90a0-0c23513bc93b req-d42e7d58-4cde-49b6-b814-3c378b14dd45 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] Updated VIF entry in instance network info cache for port 6b86c387-3e59-4e3b-a7e3-e1ddfc541c50. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 10 23:59:59 np0005480824 nova_compute[260089]: 2025-10-11 03:59:59.010 2 DEBUG nova.network.neutron [req-9499a3b0-e73a-4d84-90a0-0c23513bc93b req-d42e7d58-4cde-49b6-b814-3c378b14dd45 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] Updating instance_info_cache with network_info: [{"id": "6b86c387-3e59-4e3b-a7e3-e1ddfc541c50", "address": "fa:16:3e:aa:03:d2", "network": {"id": "dfca432f-447a-432a-acc4-3a23e93eb8d6", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1081477901-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7e504d8715354886aaae057de71d2d5e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b86c387-3e", "ovs_interfaceid": "6b86c387-3e59-4e3b-a7e3-e1ddfc541c50", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 23:59:59 np0005480824 nova_compute[260089]: 2025-10-11 03:59:59.046 2 DEBUG oslo_concurrency.lockutils [req-9499a3b0-e73a-4d84-90a0-0c23513bc93b req-d42e7d58-4cde-49b6-b814-3c378b14dd45 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Releasing lock "refresh_cache-9d89b9fc-eda1-4801-8670-e3e48a9e04ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 23:59:59 np0005480824 podman[295839]: 2025-10-11 03:59:59.111583201 +0000 UTC m=+0.079428809 container create fb90363e2934454613a57d957d728080a660916f6025e74b5561ef769f3a8b6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 23:59:59 np0005480824 systemd[1]: Started libpod-conmon-fb90363e2934454613a57d957d728080a660916f6025e74b5561ef769f3a8b6a.scope.
Oct 10 23:59:59 np0005480824 podman[295839]: 2025-10-11 03:59:59.085432059 +0000 UTC m=+0.053277707 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:59:59 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:59:59 np0005480824 podman[295839]: 2025-10-11 03:59:59.233855494 +0000 UTC m=+0.201701112 container init fb90363e2934454613a57d957d728080a660916f6025e74b5561ef769f3a8b6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_herschel, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 10 23:59:59 np0005480824 podman[295839]: 2025-10-11 03:59:59.244310864 +0000 UTC m=+0.212156472 container start fb90363e2934454613a57d957d728080a660916f6025e74b5561ef769f3a8b6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_herschel, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 23:59:59 np0005480824 interesting_herschel[295855]: 167 167
Oct 10 23:59:59 np0005480824 systemd[1]: libpod-fb90363e2934454613a57d957d728080a660916f6025e74b5561ef769f3a8b6a.scope: Deactivated successfully.
Oct 10 23:59:59 np0005480824 podman[295839]: 2025-10-11 03:59:59.261465729 +0000 UTC m=+0.229311337 container attach fb90363e2934454613a57d957d728080a660916f6025e74b5561ef769f3a8b6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:59:59 np0005480824 podman[295839]: 2025-10-11 03:59:59.262119784 +0000 UTC m=+0.229965392 container died fb90363e2934454613a57d957d728080a660916f6025e74b5561ef769f3a8b6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:59:59 np0005480824 systemd[1]: var-lib-containers-storage-overlay-ae43bfde555fe47cc0bdbcd98b8d3cac0ca241279c9e66207ed8d534313f7e54-merged.mount: Deactivated successfully.
Oct 10 23:59:59 np0005480824 nova_compute[260089]: 2025-10-11 03:59:59.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 23:59:59 np0005480824 podman[295839]: 2025-10-11 03:59:59.325567453 +0000 UTC m=+0.293413101 container remove fb90363e2934454613a57d957d728080a660916f6025e74b5561ef769f3a8b6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_herschel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 23:59:59 np0005480824 systemd[1]: libpod-conmon-fb90363e2934454613a57d957d728080a660916f6025e74b5561ef769f3a8b6a.scope: Deactivated successfully.
Oct 10 23:59:59 np0005480824 podman[295879]: 2025-10-11 03:59:59.614312637 +0000 UTC m=+0.064391843 container create 9cdb6f6f6cde2e01cab4b7758f2f3ecb041140f0a2bad642d0a035dcb3cd6aba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 10 23:59:59 np0005480824 systemd[1]: Started libpod-conmon-9cdb6f6f6cde2e01cab4b7758f2f3ecb041140f0a2bad642d0a035dcb3cd6aba.scope.
Oct 10 23:59:59 np0005480824 podman[295879]: 2025-10-11 03:59:59.588127814 +0000 UTC m=+0.038207070 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 10 23:59:59 np0005480824 systemd[1]: Started libcrun container.
Oct 10 23:59:59 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c9dd526be086dd58300033f069bcbc9a8f674d7535b33511770d4fdef35442f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 23:59:59 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c9dd526be086dd58300033f069bcbc9a8f674d7535b33511770d4fdef35442f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 23:59:59 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c9dd526be086dd58300033f069bcbc9a8f674d7535b33511770d4fdef35442f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 23:59:59 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c9dd526be086dd58300033f069bcbc9a8f674d7535b33511770d4fdef35442f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 23:59:59 np0005480824 podman[295879]: 2025-10-11 03:59:59.736310093 +0000 UTC m=+0.186389319 container init 9cdb6f6f6cde2e01cab4b7758f2f3ecb041140f0a2bad642d0a035dcb3cd6aba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_bassi, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 23:59:59 np0005480824 podman[295879]: 2025-10-11 03:59:59.748039453 +0000 UTC m=+0.198118689 container start 9cdb6f6f6cde2e01cab4b7758f2f3ecb041140f0a2bad642d0a035dcb3cd6aba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_bassi, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 10 23:59:59 np0005480824 podman[295879]: 2025-10-11 03:59:59.752473924 +0000 UTC m=+0.202553150 container attach 9cdb6f6f6cde2e01cab4b7758f2f3ecb041140f0a2bad642d0a035dcb3cd6aba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_bassi, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 11 00:00:00 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:00:00 np0005480824 pedantic_bassi[295896]: {
Oct 11 00:00:00 np0005480824 pedantic_bassi[295896]:    "1d0d82ce-20ea-470d-959e-f67202028a60": {
Oct 11 00:00:00 np0005480824 pedantic_bassi[295896]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 11 00:00:00 np0005480824 pedantic_bassi[295896]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 00:00:00 np0005480824 pedantic_bassi[295896]:        "osd_id": 0,
Oct 11 00:00:00 np0005480824 pedantic_bassi[295896]:        "osd_uuid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 11 00:00:00 np0005480824 pedantic_bassi[295896]:        "type": "bluestore"
Oct 11 00:00:00 np0005480824 pedantic_bassi[295896]:    },
Oct 11 00:00:00 np0005480824 pedantic_bassi[295896]:    "6875119e-c210-4ad1-aca9-6a8084a5ecc8": {
Oct 11 00:00:00 np0005480824 pedantic_bassi[295896]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 11 00:00:00 np0005480824 pedantic_bassi[295896]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 00:00:00 np0005480824 pedantic_bassi[295896]:        "osd_id": 1,
Oct 11 00:00:00 np0005480824 pedantic_bassi[295896]:        "osd_uuid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 11 00:00:00 np0005480824 pedantic_bassi[295896]:        "type": "bluestore"
Oct 11 00:00:00 np0005480824 pedantic_bassi[295896]:    },
Oct 11 00:00:00 np0005480824 pedantic_bassi[295896]:    "e86945e8-6909-4584-9098-cee0dfe9add4": {
Oct 11 00:00:00 np0005480824 pedantic_bassi[295896]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 11 00:00:00 np0005480824 pedantic_bassi[295896]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 00:00:00 np0005480824 pedantic_bassi[295896]:        "osd_id": 2,
Oct 11 00:00:00 np0005480824 pedantic_bassi[295896]:        "osd_uuid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 11 00:00:00 np0005480824 pedantic_bassi[295896]:        "type": "bluestore"
Oct 11 00:00:00 np0005480824 pedantic_bassi[295896]:    }
Oct 11 00:00:00 np0005480824 pedantic_bassi[295896]: }
Oct 11 00:00:00 np0005480824 systemd[1]: libpod-9cdb6f6f6cde2e01cab4b7758f2f3ecb041140f0a2bad642d0a035dcb3cd6aba.scope: Deactivated successfully.
Oct 11 00:00:00 np0005480824 podman[295879]: 2025-10-11 04:00:00.847356183 +0000 UTC m=+1.297435399 container died 9cdb6f6f6cde2e01cab4b7758f2f3ecb041140f0a2bad642d0a035dcb3cd6aba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_bassi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 00:00:00 np0005480824 systemd[1]: libpod-9cdb6f6f6cde2e01cab4b7758f2f3ecb041140f0a2bad642d0a035dcb3cd6aba.scope: Consumed 1.101s CPU time.
Oct 11 00:00:00 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1623: 321 pgs: 321 active+clean; 2.4 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 2.3 MiB/s rd, 13 MiB/s wr, 206 op/s
Oct 11 00:00:00 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:00.968 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=14b06507-d00b-4e27-a47d-46a5c2644635, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 00:00:00 np0005480824 nova_compute[260089]: 2025-10-11 04:00:00.983 2 DEBUG oslo_concurrency.lockutils [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Acquiring lock "a010ce52-5e6a-44bd-8bc6-4151b2e1f528" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 00:00:00 np0005480824 nova_compute[260089]: 2025-10-11 04:00:00.984 2 DEBUG oslo_concurrency.lockutils [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Lock "a010ce52-5e6a-44bd-8bc6-4151b2e1f528" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 00:00:00 np0005480824 nova_compute[260089]: 2025-10-11 04:00:00.998 2 DEBUG nova.compute.manager [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 00:00:01 np0005480824 systemd[1]: var-lib-containers-storage-overlay-8c9dd526be086dd58300033f069bcbc9a8f674d7535b33511770d4fdef35442f-merged.mount: Deactivated successfully.
Oct 11 00:00:01 np0005480824 nova_compute[260089]: 2025-10-11 04:00:01.074 2 DEBUG oslo_concurrency.lockutils [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 00:00:01 np0005480824 nova_compute[260089]: 2025-10-11 04:00:01.075 2 DEBUG oslo_concurrency.lockutils [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 00:00:01 np0005480824 nova_compute[260089]: 2025-10-11 04:00:01.085 2 DEBUG nova.virt.hardware [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 00:00:01 np0005480824 nova_compute[260089]: 2025-10-11 04:00:01.086 2 INFO nova.compute.claims [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] Claim successful on node compute-0.ctlplane.example.com
Oct 11 00:00:01 np0005480824 podman[295879]: 2025-10-11 04:00:01.094884648 +0000 UTC m=+1.544963854 container remove 9cdb6f6f6cde2e01cab4b7758f2f3ecb041140f0a2bad642d0a035dcb3cd6aba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_bassi, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 11 00:00:01 np0005480824 systemd[1]: libpod-conmon-9cdb6f6f6cde2e01cab4b7758f2f3ecb041140f0a2bad642d0a035dcb3cd6aba.scope: Deactivated successfully.
Oct 11 00:00:01 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 00:00:01 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 11 00:00:01 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 00:00:01 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 11 00:00:01 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 5a5e509f-7ed0-46d7-b7ee-b73ae1b73db0 does not exist
Oct 11 00:00:01 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 5e8d892c-af70-4f1a-9130-2e51081cdc20 does not exist
Oct 11 00:00:01 np0005480824 nova_compute[260089]: 2025-10-11 04:00:01.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 00:00:01 np0005480824 nova_compute[260089]: 2025-10-11 04:00:01.247 2 DEBUG oslo_concurrency.processutils [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 00:00:01 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 00:00:01 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3047465950' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 00:00:01 np0005480824 nova_compute[260089]: 2025-10-11 04:00:01.717 2 DEBUG oslo_concurrency.processutils [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 00:00:01 np0005480824 nova_compute[260089]: 2025-10-11 04:00:01.726 2 DEBUG nova.compute.provider_tree [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 00:00:01 np0005480824 nova_compute[260089]: 2025-10-11 04:00:01.740 2 DEBUG nova.scheduler.client.report [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 00:00:01 np0005480824 nova_compute[260089]: 2025-10-11 04:00:01.783 2 DEBUG oslo_concurrency.lockutils [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.708s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 00:00:01 np0005480824 nova_compute[260089]: 2025-10-11 04:00:01.783 2 DEBUG nova.compute.manager [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 00:00:01 np0005480824 nova_compute[260089]: 2025-10-11 04:00:01.845 2 DEBUG nova.compute.manager [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 00:00:01 np0005480824 nova_compute[260089]: 2025-10-11 04:00:01.846 2 DEBUG nova.network.neutron [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 00:00:01 np0005480824 nova_compute[260089]: 2025-10-11 04:00:01.867 2 INFO nova.virt.libvirt.driver [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 00:00:01 np0005480824 nova_compute[260089]: 2025-10-11 04:00:01.912 2 DEBUG nova.compute.manager [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 00:00:01 np0005480824 nova_compute[260089]: 2025-10-11 04:00:01.997 2 INFO nova.virt.block_device [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] Booting with volume 184f1559-4821-437a-8e6b-6e10ab7ba1e9 at /dev/vda
Oct 11 00:00:02 np0005480824 nova_compute[260089]: 2025-10-11 04:00:02.144 2 DEBUG os_brick.utils [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 11 00:00:02 np0005480824 nova_compute[260089]: 2025-10-11 04:00:02.145 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 00:00:02 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 11 00:00:02 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 11 00:00:02 np0005480824 nova_compute[260089]: 2025-10-11 04:00:02.162 676 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 00:00:02 np0005480824 nova_compute[260089]: 2025-10-11 04:00:02.162 676 DEBUG oslo.privsep.daemon [-] privsep: reply[8ebb2599-29d2-46a7-bddc-d8073366c980]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 00:00:02 np0005480824 nova_compute[260089]: 2025-10-11 04:00:02.163 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 00:00:02 np0005480824 nova_compute[260089]: 2025-10-11 04:00:02.174 676 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 00:00:02 np0005480824 nova_compute[260089]: 2025-10-11 04:00:02.175 676 DEBUG oslo.privsep.daemon [-] privsep: reply[0e709f56-0bca-4bba-839d-24cf80bda5d6]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d5d671ddab5a', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 00:00:02 np0005480824 nova_compute[260089]: 2025-10-11 04:00:02.177 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 00:00:02 np0005480824 nova_compute[260089]: 2025-10-11 04:00:02.187 676 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 00:00:02 np0005480824 nova_compute[260089]: 2025-10-11 04:00:02.188 676 DEBUG oslo.privsep.daemon [-] privsep: reply[f64a2233-5f73-4e5c-a469-89f8be92239d]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 00:00:02 np0005480824 nova_compute[260089]: 2025-10-11 04:00:02.189 676 DEBUG oslo.privsep.daemon [-] privsep: reply[c7a6f993-0cc6-48ca-8c6b-7732d53d8d48]: (4, 'fb3a2fb1-9efa-43f0-a057-bf422ac6b8d7') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 00:00:02 np0005480824 nova_compute[260089]: 2025-10-11 04:00:02.190 2 DEBUG oslo_concurrency.processutils [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 00:00:02 np0005480824 nova_compute[260089]: 2025-10-11 04:00:02.218 2 DEBUG oslo_concurrency.processutils [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] CMD "nvme version" returned: 0 in 0.028s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 00:00:02 np0005480824 nova_compute[260089]: 2025-10-11 04:00:02.220 2 DEBUG os_brick.initiator.connectors.lightos [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 11 00:00:02 np0005480824 nova_compute[260089]: 2025-10-11 04:00:02.220 2 DEBUG os_brick.initiator.connectors.lightos [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 11 00:00:02 np0005480824 nova_compute[260089]: 2025-10-11 04:00:02.221 2 DEBUG os_brick.initiator.connectors.lightos [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 11 00:00:02 np0005480824 nova_compute[260089]: 2025-10-11 04:00:02.221 2 DEBUG os_brick.utils [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] <== get_connector_properties: return (75ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d5d671ddab5a', 'do_local_attach': False, 'nvme_hostid': '83042a20-0f72-4c47-8453-e72ead378624', 'system uuid': 'fb3a2fb1-9efa-43f0-a057-bf422ac6b8d7', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 11 00:00:02 np0005480824 nova_compute[260089]: 2025-10-11 04:00:02.221 2 DEBUG nova.virt.block_device [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] Updating existing volume attachment record: a924e004-a550-48ff-b816-672af91b6dc9 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 11 00:00:02 np0005480824 nova_compute[260089]: 2025-10-11 04:00:02.676 2 DEBUG nova.policy [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'eccc3f574d354840901d28dad2488bf4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0e73ded2f2ee46b4a7485c01ef1b73e9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 00:00:02 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 00:00:02 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2803209174' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 00:00:02 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1624: 321 pgs: 321 active+clean; 2.4 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 2.3 MiB/s rd, 13 MiB/s wr, 213 op/s
Oct 11 00:00:03 np0005480824 nova_compute[260089]: 2025-10-11 04:00:03.156 2 DEBUG nova.compute.manager [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 00:00:03 np0005480824 nova_compute[260089]: 2025-10-11 04:00:03.159 2 DEBUG nova.virt.libvirt.driver [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 00:00:03 np0005480824 nova_compute[260089]: 2025-10-11 04:00:03.160 2 INFO nova.virt.libvirt.driver [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] Creating image(s)
Oct 11 00:00:03 np0005480824 nova_compute[260089]: 2025-10-11 04:00:03.161 2 DEBUG nova.virt.libvirt.driver [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Oct 11 00:00:03 np0005480824 nova_compute[260089]: 2025-10-11 04:00:03.161 2 DEBUG nova.virt.libvirt.driver [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] Ensure instance console log exists: /var/lib/nova/instances/a010ce52-5e6a-44bd-8bc6-4151b2e1f528/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 00:00:03 np0005480824 nova_compute[260089]: 2025-10-11 04:00:03.162 2 DEBUG oslo_concurrency.lockutils [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 00:00:03 np0005480824 nova_compute[260089]: 2025-10-11 04:00:03.163 2 DEBUG oslo_concurrency.lockutils [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 00:00:03 np0005480824 nova_compute[260089]: 2025-10-11 04:00:03.163 2 DEBUG oslo_concurrency.lockutils [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 00:00:03 np0005480824 nova_compute[260089]: 2025-10-11 04:00:03.391 2 DEBUG nova.network.neutron [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] Successfully created port: b09e90b7-14bf-425e-bbd7-78f4c2dea771 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 00:00:04 np0005480824 nova_compute[260089]: 2025-10-11 04:00:04.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 00:00:04 np0005480824 nova_compute[260089]: 2025-10-11 04:00:04.708 2 DEBUG nova.network.neutron [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] Successfully updated port: b09e90b7-14bf-425e-bbd7-78f4c2dea771 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 00:00:04 np0005480824 nova_compute[260089]: 2025-10-11 04:00:04.727 2 DEBUG oslo_concurrency.lockutils [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Acquiring lock "refresh_cache-a010ce52-5e6a-44bd-8bc6-4151b2e1f528" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 00:00:04 np0005480824 nova_compute[260089]: 2025-10-11 04:00:04.728 2 DEBUG oslo_concurrency.lockutils [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Acquired lock "refresh_cache-a010ce52-5e6a-44bd-8bc6-4151b2e1f528" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 00:00:04 np0005480824 nova_compute[260089]: 2025-10-11 04:00:04.728 2 DEBUG nova.network.neutron [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 00:00:04 np0005480824 nova_compute[260089]: 2025-10-11 04:00:04.817 2 DEBUG nova.compute.manager [req-4881a83d-6445-4359-b864-72b2bb73492c req-4199cf34-1ca7-407f-8481-e419657e26a5 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] Received event network-changed-b09e90b7-14bf-425e-bbd7-78f4c2dea771 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 00:00:04 np0005480824 nova_compute[260089]: 2025-10-11 04:00:04.818 2 DEBUG nova.compute.manager [req-4881a83d-6445-4359-b864-72b2bb73492c req-4199cf34-1ca7-407f-8481-e419657e26a5 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] Refreshing instance network info cache due to event network-changed-b09e90b7-14bf-425e-bbd7-78f4c2dea771. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 00:00:04 np0005480824 nova_compute[260089]: 2025-10-11 04:00:04.818 2 DEBUG oslo_concurrency.lockutils [req-4881a83d-6445-4359-b864-72b2bb73492c req-4199cf34-1ca7-407f-8481-e419657e26a5 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "refresh_cache-a010ce52-5e6a-44bd-8bc6-4151b2e1f528" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 00:00:04 np0005480824 nova_compute[260089]: 2025-10-11 04:00:04.891 2 DEBUG nova.network.neutron [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 00:00:04 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1625: 321 pgs: 321 active+clean; 2.4 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 2.3 MiB/s rd, 11 MiB/s wr, 181 op/s
Oct 11 00:00:05 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:00:06 np0005480824 nova_compute[260089]: 2025-10-11 04:00:06.072 2 DEBUG nova.network.neutron [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] Updating instance_info_cache with network_info: [{"id": "b09e90b7-14bf-425e-bbd7-78f4c2dea771", "address": "fa:16:3e:02:b8:a9", "network": {"id": "15a62ee0-8e34-4e49-990e-246b4ef9e0c6", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1498494916-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0e73ded2f2ee46b4a7485c01ef1b73e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb09e90b7-14", "ovs_interfaceid": "b09e90b7-14bf-425e-bbd7-78f4c2dea771", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 00:00:06 np0005480824 nova_compute[260089]: 2025-10-11 04:00:06.092 2 DEBUG oslo_concurrency.lockutils [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Releasing lock "refresh_cache-a010ce52-5e6a-44bd-8bc6-4151b2e1f528" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 00:00:06 np0005480824 nova_compute[260089]: 2025-10-11 04:00:06.093 2 DEBUG nova.compute.manager [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] Instance network_info: |[{"id": "b09e90b7-14bf-425e-bbd7-78f4c2dea771", "address": "fa:16:3e:02:b8:a9", "network": {"id": "15a62ee0-8e34-4e49-990e-246b4ef9e0c6", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1498494916-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0e73ded2f2ee46b4a7485c01ef1b73e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb09e90b7-14", "ovs_interfaceid": "b09e90b7-14bf-425e-bbd7-78f4c2dea771", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 00:00:06 np0005480824 nova_compute[260089]: 2025-10-11 04:00:06.093 2 DEBUG oslo_concurrency.lockutils [req-4881a83d-6445-4359-b864-72b2bb73492c req-4199cf34-1ca7-407f-8481-e419657e26a5 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquired lock "refresh_cache-a010ce52-5e6a-44bd-8bc6-4151b2e1f528" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 00:00:06 np0005480824 nova_compute[260089]: 2025-10-11 04:00:06.094 2 DEBUG nova.network.neutron [req-4881a83d-6445-4359-b864-72b2bb73492c req-4199cf34-1ca7-407f-8481-e419657e26a5 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] Refreshing network info cache for port b09e90b7-14bf-425e-bbd7-78f4c2dea771 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 00:00:06 np0005480824 nova_compute[260089]: 2025-10-11 04:00:06.098 2 DEBUG nova.virt.libvirt.driver [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] Start _get_guest_xml network_info=[{"id": "b09e90b7-14bf-425e-bbd7-78f4c2dea771", "address": "fa:16:3e:02:b8:a9", "network": {"id": "15a62ee0-8e34-4e49-990e-246b4ef9e0c6", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1498494916-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0e73ded2f2ee46b4a7485c01ef1b73e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb09e90b7-14", "ovs_interfaceid": "b09e90b7-14bf-425e-bbd7-78f4c2dea771", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'attachment_id': 'a924e004-a550-48ff-b816-672af91b6dc9', 'mount_device': '/dev/vda', 'delete_on_termination': False, 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-184f1559-4821-437a-8e6b-6e10ab7ba1e9', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '184f1559-4821-437a-8e6b-6e10ab7ba1e9', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'a010ce52-5e6a-44bd-8bc6-4151b2e1f528', 'attached_at': '', 'detached_at': '', 'volume_id': '184f1559-4821-437a-8e6b-6e10ab7ba1e9', 'serial': '184f1559-4821-437a-8e6b-6e10ab7ba1e9'}, 'device_type': 'disk', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 00:00:06 np0005480824 nova_compute[260089]: 2025-10-11 04:00:06.104 2 WARNING nova.virt.libvirt.driver [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 00:00:06 np0005480824 nova_compute[260089]: 2025-10-11 04:00:06.114 2 DEBUG nova.virt.libvirt.host [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 00:00:06 np0005480824 nova_compute[260089]: 2025-10-11 04:00:06.115 2 DEBUG nova.virt.libvirt.host [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 00:00:06 np0005480824 nova_compute[260089]: 2025-10-11 04:00:06.119 2 DEBUG nova.virt.libvirt.host [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 00:00:06 np0005480824 nova_compute[260089]: 2025-10-11 04:00:06.120 2 DEBUG nova.virt.libvirt.host [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 00:00:06 np0005480824 nova_compute[260089]: 2025-10-11 04:00:06.120 2 DEBUG nova.virt.libvirt.driver [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 00:00:06 np0005480824 nova_compute[260089]: 2025-10-11 04:00:06.120 2 DEBUG nova.virt.hardware [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T03:44:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6707ecae-2ae2-4c2d-86dc-409bac38f6a5',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 00:00:06 np0005480824 nova_compute[260089]: 2025-10-11 04:00:06.121 2 DEBUG nova.virt.hardware [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 00:00:06 np0005480824 nova_compute[260089]: 2025-10-11 04:00:06.121 2 DEBUG nova.virt.hardware [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 00:00:06 np0005480824 nova_compute[260089]: 2025-10-11 04:00:06.122 2 DEBUG nova.virt.hardware [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 00:00:06 np0005480824 nova_compute[260089]: 2025-10-11 04:00:06.122 2 DEBUG nova.virt.hardware [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 00:00:06 np0005480824 nova_compute[260089]: 2025-10-11 04:00:06.122 2 DEBUG nova.virt.hardware [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 00:00:06 np0005480824 nova_compute[260089]: 2025-10-11 04:00:06.122 2 DEBUG nova.virt.hardware [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 00:00:06 np0005480824 nova_compute[260089]: 2025-10-11 04:00:06.123 2 DEBUG nova.virt.hardware [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 00:00:06 np0005480824 nova_compute[260089]: 2025-10-11 04:00:06.123 2 DEBUG nova.virt.hardware [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 00:00:06 np0005480824 nova_compute[260089]: 2025-10-11 04:00:06.123 2 DEBUG nova.virt.hardware [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 00:00:06 np0005480824 nova_compute[260089]: 2025-10-11 04:00:06.124 2 DEBUG nova.virt.hardware [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 00:00:06 np0005480824 nova_compute[260089]: 2025-10-11 04:00:06.161 2 DEBUG nova.storage.rbd_utils [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] rbd image a010ce52-5e6a-44bd-8bc6-4151b2e1f528_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 00:00:06 np0005480824 nova_compute[260089]: 2025-10-11 04:00:06.166 2 DEBUG oslo_concurrency.processutils [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:00:06 np0005480824 nova_compute[260089]: 2025-10-11 04:00:06.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:00:06 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 00:00:06 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2077704388' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 00:00:06 np0005480824 nova_compute[260089]: 2025-10-11 04:00:06.669 2 DEBUG oslo_concurrency.processutils [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:00:06 np0005480824 nova_compute[260089]: 2025-10-11 04:00:06.862 2 DEBUG os_brick.encryptors [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Using volume encryption metadata '{'encryption_key_id': 'aca4c86e-34b9-4e81-afc3-e9d2343986b8', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-184f1559-4821-437a-8e6b-6e10ab7ba1e9', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '184f1559-4821-437a-8e6b-6e10ab7ba1e9', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'a010ce52-5e6a-44bd-8bc6-4151b2e1f528', 'attached_at': '', 'detached_at': '', 'volume_id': '184f1559-4821-437a-8e6b-6e10ab7ba1e9', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Oct 11 00:00:06 np0005480824 nova_compute[260089]: 2025-10-11 04:00:06.865 2 DEBUG barbicanclient.client [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163#033[00m
Oct 11 00:00:06 np0005480824 ovn_controller[152667]: 2025-10-11T04:00:06Z|00052|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:aa:03:d2 10.100.0.12
Oct 11 00:00:06 np0005480824 ovn_controller[152667]: 2025-10-11T04:00:06Z|00053|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:aa:03:d2 10.100.0.12
Oct 11 00:00:06 np0005480824 nova_compute[260089]: 2025-10-11 04:00:06.883 2 DEBUG barbicanclient.v1.secrets [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/aca4c86e-34b9-4e81-afc3-e9d2343986b8 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514#033[00m
Oct 11 00:00:06 np0005480824 nova_compute[260089]: 2025-10-11 04:00:06.884 2 INFO barbicanclient.base [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Calculated Secrets uuid ref: secrets/aca4c86e-34b9-4e81-afc3-e9d2343986b8#033[00m
Oct 11 00:00:06 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1626: 321 pgs: 321 active+clean; 2.4 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 2.3 MiB/s rd, 12 MiB/s wr, 197 op/s
Oct 11 00:00:06 np0005480824 nova_compute[260089]: 2025-10-11 04:00:06.917 2 DEBUG barbicanclient.client [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:00:06 np0005480824 nova_compute[260089]: 2025-10-11 04:00:06.918 2 INFO barbicanclient.base [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Calculated Secrets uuid ref: secrets/aca4c86e-34b9-4e81-afc3-e9d2343986b8#033[00m
Oct 11 00:00:06 np0005480824 nova_compute[260089]: 2025-10-11 04:00:06.945 2 DEBUG barbicanclient.client [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:00:06 np0005480824 nova_compute[260089]: 2025-10-11 04:00:06.946 2 INFO barbicanclient.base [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Calculated Secrets uuid ref: secrets/aca4c86e-34b9-4e81-afc3-e9d2343986b8#033[00m
Oct 11 00:00:06 np0005480824 nova_compute[260089]: 2025-10-11 04:00:06.982 2 DEBUG barbicanclient.client [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:00:06 np0005480824 nova_compute[260089]: 2025-10-11 04:00:06.983 2 INFO barbicanclient.base [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Calculated Secrets uuid ref: secrets/aca4c86e-34b9-4e81-afc3-e9d2343986b8#033[00m
Oct 11 00:00:07 np0005480824 nova_compute[260089]: 2025-10-11 04:00:07.007 2 DEBUG barbicanclient.client [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:00:07 np0005480824 nova_compute[260089]: 2025-10-11 04:00:07.007 2 INFO barbicanclient.base [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Calculated Secrets uuid ref: secrets/aca4c86e-34b9-4e81-afc3-e9d2343986b8#033[00m
Oct 11 00:00:07 np0005480824 nova_compute[260089]: 2025-10-11 04:00:07.031 2 DEBUG barbicanclient.client [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:00:07 np0005480824 nova_compute[260089]: 2025-10-11 04:00:07.031 2 INFO barbicanclient.base [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Calculated Secrets uuid ref: secrets/aca4c86e-34b9-4e81-afc3-e9d2343986b8#033[00m
Oct 11 00:00:07 np0005480824 nova_compute[260089]: 2025-10-11 04:00:07.057 2 DEBUG barbicanclient.client [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:00:07 np0005480824 nova_compute[260089]: 2025-10-11 04:00:07.058 2 INFO barbicanclient.base [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Calculated Secrets uuid ref: secrets/aca4c86e-34b9-4e81-afc3-e9d2343986b8#033[00m
Oct 11 00:00:07 np0005480824 nova_compute[260089]: 2025-10-11 04:00:07.087 2 DEBUG barbicanclient.client [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:00:07 np0005480824 nova_compute[260089]: 2025-10-11 04:00:07.088 2 INFO barbicanclient.base [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Calculated Secrets uuid ref: secrets/aca4c86e-34b9-4e81-afc3-e9d2343986b8#033[00m
Oct 11 00:00:07 np0005480824 nova_compute[260089]: 2025-10-11 04:00:07.108 2 DEBUG barbicanclient.client [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:00:07 np0005480824 nova_compute[260089]: 2025-10-11 04:00:07.108 2 INFO barbicanclient.base [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Calculated Secrets uuid ref: secrets/aca4c86e-34b9-4e81-afc3-e9d2343986b8#033[00m
Oct 11 00:00:07 np0005480824 nova_compute[260089]: 2025-10-11 04:00:07.134 2 DEBUG barbicanclient.client [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:00:07 np0005480824 nova_compute[260089]: 2025-10-11 04:00:07.134 2 INFO barbicanclient.base [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Calculated Secrets uuid ref: secrets/aca4c86e-34b9-4e81-afc3-e9d2343986b8#033[00m
Oct 11 00:00:07 np0005480824 nova_compute[260089]: 2025-10-11 04:00:07.172 2 DEBUG barbicanclient.client [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:00:07 np0005480824 nova_compute[260089]: 2025-10-11 04:00:07.173 2 INFO barbicanclient.base [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Calculated Secrets uuid ref: secrets/aca4c86e-34b9-4e81-afc3-e9d2343986b8#033[00m
Oct 11 00:00:07 np0005480824 nova_compute[260089]: 2025-10-11 04:00:07.198 2 DEBUG barbicanclient.client [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:00:07 np0005480824 nova_compute[260089]: 2025-10-11 04:00:07.198 2 INFO barbicanclient.base [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Calculated Secrets uuid ref: secrets/aca4c86e-34b9-4e81-afc3-e9d2343986b8#033[00m
Oct 11 00:00:07 np0005480824 nova_compute[260089]: 2025-10-11 04:00:07.221 2 DEBUG barbicanclient.client [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:00:07 np0005480824 nova_compute[260089]: 2025-10-11 04:00:07.222 2 INFO barbicanclient.base [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Calculated Secrets uuid ref: secrets/aca4c86e-34b9-4e81-afc3-e9d2343986b8#033[00m
Oct 11 00:00:07 np0005480824 nova_compute[260089]: 2025-10-11 04:00:07.257 2 DEBUG barbicanclient.client [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:00:07 np0005480824 nova_compute[260089]: 2025-10-11 04:00:07.258 2 INFO barbicanclient.base [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Calculated Secrets uuid ref: secrets/aca4c86e-34b9-4e81-afc3-e9d2343986b8#033[00m
Oct 11 00:00:07 np0005480824 nova_compute[260089]: 2025-10-11 04:00:07.276 2 DEBUG barbicanclient.client [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:00:07 np0005480824 nova_compute[260089]: 2025-10-11 04:00:07.277 2 INFO barbicanclient.base [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Calculated Secrets uuid ref: secrets/aca4c86e-34b9-4e81-afc3-e9d2343986b8#033[00m
Oct 11 00:00:07 np0005480824 nova_compute[260089]: 2025-10-11 04:00:07.296 2 DEBUG barbicanclient.client [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:00:07 np0005480824 nova_compute[260089]: 2025-10-11 04:00:07.296 2 DEBUG nova.virt.libvirt.host [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Secret XML: <secret ephemeral="no" private="no">
Oct 11 00:00:07 np0005480824 nova_compute[260089]:  <usage type="volume">
Oct 11 00:00:07 np0005480824 nova_compute[260089]:    <volume>184f1559-4821-437a-8e6b-6e10ab7ba1e9</volume>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:  </usage>
Oct 11 00:00:07 np0005480824 nova_compute[260089]: </secret>
Oct 11 00:00:07 np0005480824 nova_compute[260089]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131#033[00m
Oct 11 00:00:07 np0005480824 nova_compute[260089]: 2025-10-11 04:00:07.323 2 DEBUG nova.virt.libvirt.vif [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T03:59:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-2104860261',display_name='tempest-TransferEncryptedVolumeTest-server-2104860261',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-2104860261',id=24,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDSd/imVrPUoZZ0pPNaeX2vqRyFwUZkYkGtRIGLvkZ+JmbpGCVAFlpb2xMevRN2guCRk7QItwPxlNbBPPCGkv6m7D9V9P6ik2vYr9GNZ8E+yfq+aSt3aD3tvswV1nTE1Iw==',key_name='tempest-TransferEncryptedVolumeTest-1221687079',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0e73ded2f2ee46b4a7485c01ef1b73e9',ramdisk_id='',reservation_id='r-yylaj4ah',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1815435088',owner_user_name='tempest-TransferEncryptedVolumeTest-1815435088-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T04:00:01Z,user_data=None,user_id='eccc3f574d354840901d28dad2488bf4',uuid=a010ce52-5e6a-44bd-8bc6-4151b2e1f528,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b09e90b7-14bf-425e-bbd7-78f4c2dea771", "address": "fa:16:3e:02:b8:a9", "network": {"id": "15a62ee0-8e34-4e49-990e-246b4ef9e0c6", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1498494916-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "0e73ded2f2ee46b4a7485c01ef1b73e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb09e90b7-14", "ovs_interfaceid": "b09e90b7-14bf-425e-bbd7-78f4c2dea771", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 00:00:07 np0005480824 nova_compute[260089]: 2025-10-11 04:00:07.323 2 DEBUG nova.network.os_vif_util [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Converting VIF {"id": "b09e90b7-14bf-425e-bbd7-78f4c2dea771", "address": "fa:16:3e:02:b8:a9", "network": {"id": "15a62ee0-8e34-4e49-990e-246b4ef9e0c6", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1498494916-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0e73ded2f2ee46b4a7485c01ef1b73e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb09e90b7-14", "ovs_interfaceid": "b09e90b7-14bf-425e-bbd7-78f4c2dea771", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 00:00:07 np0005480824 nova_compute[260089]: 2025-10-11 04:00:07.324 2 DEBUG nova.network.os_vif_util [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:02:b8:a9,bridge_name='br-int',has_traffic_filtering=True,id=b09e90b7-14bf-425e-bbd7-78f4c2dea771,network=Network(15a62ee0-8e34-4e49-990e-246b4ef9e0c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb09e90b7-14') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 00:00:07 np0005480824 nova_compute[260089]: 2025-10-11 04:00:07.325 2 DEBUG nova.objects.instance [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Lazy-loading 'pci_devices' on Instance uuid a010ce52-5e6a-44bd-8bc6-4151b2e1f528 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 00:00:07 np0005480824 nova_compute[260089]: 2025-10-11 04:00:07.340 2 DEBUG nova.virt.libvirt.driver [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] End _get_guest_xml xml=<domain type="kvm">
Oct 11 00:00:07 np0005480824 nova_compute[260089]:  <uuid>a010ce52-5e6a-44bd-8bc6-4151b2e1f528</uuid>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:  <name>instance-00000018</name>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:  <memory>131072</memory>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:  <vcpu>1</vcpu>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:  <metadata>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 00:00:07 np0005480824 nova_compute[260089]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:      <nova:name>tempest-TransferEncryptedVolumeTest-server-2104860261</nova:name>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:      <nova:creationTime>2025-10-11 04:00:06</nova:creationTime>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:      <nova:flavor name="m1.nano">
Oct 11 00:00:07 np0005480824 nova_compute[260089]:        <nova:memory>128</nova:memory>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:        <nova:disk>1</nova:disk>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:        <nova:swap>0</nova:swap>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:        <nova:vcpus>1</nova:vcpus>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:      </nova:flavor>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:      <nova:owner>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:        <nova:user uuid="eccc3f574d354840901d28dad2488bf4">tempest-TransferEncryptedVolumeTest-1815435088-project-member</nova:user>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:        <nova:project uuid="0e73ded2f2ee46b4a7485c01ef1b73e9">tempest-TransferEncryptedVolumeTest-1815435088</nova:project>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:      </nova:owner>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:      <nova:ports>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:        <nova:port uuid="b09e90b7-14bf-425e-bbd7-78f4c2dea771">
Oct 11 00:00:07 np0005480824 nova_compute[260089]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:        </nova:port>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:      </nova:ports>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:    </nova:instance>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:  </metadata>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:  <sysinfo type="smbios">
Oct 11 00:00:07 np0005480824 nova_compute[260089]:    <system>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:      <entry name="manufacturer">RDO</entry>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:      <entry name="product">OpenStack Compute</entry>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:      <entry name="serial">a010ce52-5e6a-44bd-8bc6-4151b2e1f528</entry>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:      <entry name="uuid">a010ce52-5e6a-44bd-8bc6-4151b2e1f528</entry>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:      <entry name="family">Virtual Machine</entry>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:    </system>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:  </sysinfo>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:  <os>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:    <boot dev="hd"/>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:    <smbios mode="sysinfo"/>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:  </os>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:  <features>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:    <acpi/>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:    <apic/>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:    <vmcoreinfo/>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:  </features>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:  <clock offset="utc">
Oct 11 00:00:07 np0005480824 nova_compute[260089]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:    <timer name="hpet" present="no"/>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:  </clock>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:  <cpu mode="host-model" match="exact">
Oct 11 00:00:07 np0005480824 nova_compute[260089]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:  </cpu>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:  <devices>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:    <disk type="network" device="cdrom">
Oct 11 00:00:07 np0005480824 nova_compute[260089]:      <driver type="raw" cache="none"/>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:      <source protocol="rbd" name="vms/a010ce52-5e6a-44bd-8bc6-4151b2e1f528_disk.config">
Oct 11 00:00:07 np0005480824 nova_compute[260089]:        <host name="192.168.122.100" port="6789"/>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:      </source>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:      <auth username="openstack">
Oct 11 00:00:07 np0005480824 nova_compute[260089]:        <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:      </auth>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:      <target dev="sda" bus="sata"/>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:    </disk>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:    <disk type="network" device="disk">
Oct 11 00:00:07 np0005480824 nova_compute[260089]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:      <source protocol="rbd" name="volumes/volume-184f1559-4821-437a-8e6b-6e10ab7ba1e9">
Oct 11 00:00:07 np0005480824 nova_compute[260089]:        <host name="192.168.122.100" port="6789"/>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:      </source>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:      <auth username="openstack">
Oct 11 00:00:07 np0005480824 nova_compute[260089]:        <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:      </auth>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:      <target dev="vda" bus="virtio"/>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:      <serial>184f1559-4821-437a-8e6b-6e10ab7ba1e9</serial>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:      <encryption format="luks">
Oct 11 00:00:07 np0005480824 nova_compute[260089]:        <secret type="passphrase" uuid="75d975ec-d67f-4d78-bb66-3207578a663a"/>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:      </encryption>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:    </disk>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:    <interface type="ethernet">
Oct 11 00:00:07 np0005480824 nova_compute[260089]:      <mac address="fa:16:3e:02:b8:a9"/>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:      <model type="virtio"/>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:      <mtu size="1442"/>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:      <target dev="tapb09e90b7-14"/>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:    </interface>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:    <serial type="pty">
Oct 11 00:00:07 np0005480824 nova_compute[260089]:      <log file="/var/lib/nova/instances/a010ce52-5e6a-44bd-8bc6-4151b2e1f528/console.log" append="off"/>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:    </serial>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:    <video>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:      <model type="virtio"/>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:    </video>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:    <input type="tablet" bus="usb"/>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:    <rng model="virtio">
Oct 11 00:00:07 np0005480824 nova_compute[260089]:      <backend model="random">/dev/urandom</backend>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:    </rng>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root"/>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:    <controller type="usb" index="0"/>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:    <memballoon model="virtio">
Oct 11 00:00:07 np0005480824 nova_compute[260089]:      <stats period="10"/>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:    </memballoon>
Oct 11 00:00:07 np0005480824 nova_compute[260089]:  </devices>
Oct 11 00:00:07 np0005480824 nova_compute[260089]: </domain>
Oct 11 00:00:07 np0005480824 nova_compute[260089]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 00:00:07 np0005480824 nova_compute[260089]: 2025-10-11 04:00:07.342 2 DEBUG nova.compute.manager [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] Preparing to wait for external event network-vif-plugged-b09e90b7-14bf-425e-bbd7-78f4c2dea771 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 00:00:07 np0005480824 nova_compute[260089]: 2025-10-11 04:00:07.342 2 DEBUG oslo_concurrency.lockutils [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Acquiring lock "a010ce52-5e6a-44bd-8bc6-4151b2e1f528-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:00:07 np0005480824 nova_compute[260089]: 2025-10-11 04:00:07.343 2 DEBUG oslo_concurrency.lockutils [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Lock "a010ce52-5e6a-44bd-8bc6-4151b2e1f528-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:00:07 np0005480824 nova_compute[260089]: 2025-10-11 04:00:07.343 2 DEBUG oslo_concurrency.lockutils [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Lock "a010ce52-5e6a-44bd-8bc6-4151b2e1f528-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:00:07 np0005480824 nova_compute[260089]: 2025-10-11 04:00:07.344 2 DEBUG nova.virt.libvirt.vif [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T03:59:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-2104860261',display_name='tempest-TransferEncryptedVolumeTest-server-2104860261',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-2104860261',id=24,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDSd/imVrPUoZZ0pPNaeX2vqRyFwUZkYkGtRIGLvkZ+JmbpGCVAFlpb2xMevRN2guCRk7QItwPxlNbBPPCGkv6m7D9V9P6ik2vYr9GNZ8E+yfq+aSt3aD3tvswV1nTE1Iw==',key_name='tempest-TransferEncryptedVolumeTest-1221687079',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0e73ded2f2ee46b4a7485c01ef1b73e9',ramdisk_id='',reservation_id='r-yylaj4ah',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1815435088',owner_user_name='tempest-TransferEncryptedVolumeTest-1815435088-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T04:00:01Z,user_data=None,user_id='eccc3f574d354840901d28dad2488bf4',uuid=a010ce52-5e6a-44bd-8bc6-4151b2e1f528,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b09e90b7-14bf-425e-bbd7-78f4c2dea771", "address": "fa:16:3e:02:b8:a9", "network": {"id": "15a62ee0-8e34-4e49-990e-246b4ef9e0c6", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1498494916-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "0e73ded2f2ee46b4a7485c01ef1b73e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb09e90b7-14", "ovs_interfaceid": "b09e90b7-14bf-425e-bbd7-78f4c2dea771", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 00:00:07 np0005480824 nova_compute[260089]: 2025-10-11 04:00:07.345 2 DEBUG nova.network.os_vif_util [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Converting VIF {"id": "b09e90b7-14bf-425e-bbd7-78f4c2dea771", "address": "fa:16:3e:02:b8:a9", "network": {"id": "15a62ee0-8e34-4e49-990e-246b4ef9e0c6", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1498494916-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0e73ded2f2ee46b4a7485c01ef1b73e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb09e90b7-14", "ovs_interfaceid": "b09e90b7-14bf-425e-bbd7-78f4c2dea771", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 00:00:07 np0005480824 nova_compute[260089]: 2025-10-11 04:00:07.346 2 DEBUG nova.network.os_vif_util [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:02:b8:a9,bridge_name='br-int',has_traffic_filtering=True,id=b09e90b7-14bf-425e-bbd7-78f4c2dea771,network=Network(15a62ee0-8e34-4e49-990e-246b4ef9e0c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb09e90b7-14') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 00:00:07 np0005480824 nova_compute[260089]: 2025-10-11 04:00:07.347 2 DEBUG os_vif [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:b8:a9,bridge_name='br-int',has_traffic_filtering=True,id=b09e90b7-14bf-425e-bbd7-78f4c2dea771,network=Network(15a62ee0-8e34-4e49-990e-246b4ef9e0c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb09e90b7-14') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 00:00:07 np0005480824 nova_compute[260089]: 2025-10-11 04:00:07.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:00:07 np0005480824 nova_compute[260089]: 2025-10-11 04:00:07.348 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 00:00:07 np0005480824 nova_compute[260089]: 2025-10-11 04:00:07.349 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 00:00:07 np0005480824 nova_compute[260089]: 2025-10-11 04:00:07.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:00:07 np0005480824 nova_compute[260089]: 2025-10-11 04:00:07.354 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb09e90b7-14, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 00:00:07 np0005480824 nova_compute[260089]: 2025-10-11 04:00:07.355 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb09e90b7-14, col_values=(('external_ids', {'iface-id': 'b09e90b7-14bf-425e-bbd7-78f4c2dea771', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:02:b8:a9', 'vm-uuid': 'a010ce52-5e6a-44bd-8bc6-4151b2e1f528'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 00:00:07 np0005480824 nova_compute[260089]: 2025-10-11 04:00:07.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:00:07 np0005480824 NetworkManager[44969]: <info>  [1760155207.3578] manager: (tapb09e90b7-14): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/119)
Oct 11 00:00:07 np0005480824 nova_compute[260089]: 2025-10-11 04:00:07.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 00:00:07 np0005480824 nova_compute[260089]: 2025-10-11 04:00:07.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:00:07 np0005480824 nova_compute[260089]: 2025-10-11 04:00:07.368 2 INFO os_vif [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:b8:a9,bridge_name='br-int',has_traffic_filtering=True,id=b09e90b7-14bf-425e-bbd7-78f4c2dea771,network=Network(15a62ee0-8e34-4e49-990e-246b4ef9e0c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb09e90b7-14')#033[00m
Oct 11 00:00:07 np0005480824 nova_compute[260089]: 2025-10-11 04:00:07.429 2 DEBUG nova.virt.libvirt.driver [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 00:00:07 np0005480824 nova_compute[260089]: 2025-10-11 04:00:07.430 2 DEBUG nova.virt.libvirt.driver [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 00:00:07 np0005480824 nova_compute[260089]: 2025-10-11 04:00:07.430 2 DEBUG nova.virt.libvirt.driver [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] No VIF found with MAC fa:16:3e:02:b8:a9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 00:00:07 np0005480824 nova_compute[260089]: 2025-10-11 04:00:07.432 2 INFO nova.virt.libvirt.driver [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] Using config drive#033[00m
Oct 11 00:00:07 np0005480824 nova_compute[260089]: 2025-10-11 04:00:07.472 2 DEBUG nova.storage.rbd_utils [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] rbd image a010ce52-5e6a-44bd-8bc6-4151b2e1f528_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 00:00:07 np0005480824 podman[296065]: 2025-10-11 04:00:07.50382356 +0000 UTC m=+0.090838460 container health_status 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 11 00:00:07 np0005480824 podman[296066]: 2025-10-11 04:00:07.517441834 +0000 UTC m=+0.105654182 container health_status f2b19cad22d0fbdd185b264c3c8b6443d09a02319dc9fb0585dc69a0f24a758d (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 00:00:07 np0005480824 nova_compute[260089]: 2025-10-11 04:00:07.779 2 INFO nova.virt.libvirt.driver [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] Creating config drive at /var/lib/nova/instances/a010ce52-5e6a-44bd-8bc6-4151b2e1f528/disk.config#033[00m
Oct 11 00:00:07 np0005480824 nova_compute[260089]: 2025-10-11 04:00:07.788 2 DEBUG oslo_concurrency.processutils [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a010ce52-5e6a-44bd-8bc6-4151b2e1f528/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp622wmzdh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:00:07 np0005480824 nova_compute[260089]: 2025-10-11 04:00:07.936 2 DEBUG oslo_concurrency.processutils [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a010ce52-5e6a-44bd-8bc6-4151b2e1f528/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp622wmzdh" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:00:07 np0005480824 nova_compute[260089]: 2025-10-11 04:00:07.977 2 DEBUG nova.storage.rbd_utils [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] rbd image a010ce52-5e6a-44bd-8bc6-4151b2e1f528_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 00:00:08 np0005480824 nova_compute[260089]: 2025-10-11 04:00:08.019 2 DEBUG oslo_concurrency.processutils [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a010ce52-5e6a-44bd-8bc6-4151b2e1f528/disk.config a010ce52-5e6a-44bd-8bc6-4151b2e1f528_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:00:08 np0005480824 nova_compute[260089]: 2025-10-11 04:00:08.044 2 DEBUG nova.network.neutron [req-4881a83d-6445-4359-b864-72b2bb73492c req-4199cf34-1ca7-407f-8481-e419657e26a5 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] Updated VIF entry in instance network info cache for port b09e90b7-14bf-425e-bbd7-78f4c2dea771. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 00:00:08 np0005480824 nova_compute[260089]: 2025-10-11 04:00:08.045 2 DEBUG nova.network.neutron [req-4881a83d-6445-4359-b864-72b2bb73492c req-4199cf34-1ca7-407f-8481-e419657e26a5 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] Updating instance_info_cache with network_info: [{"id": "b09e90b7-14bf-425e-bbd7-78f4c2dea771", "address": "fa:16:3e:02:b8:a9", "network": {"id": "15a62ee0-8e34-4e49-990e-246b4ef9e0c6", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1498494916-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0e73ded2f2ee46b4a7485c01ef1b73e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb09e90b7-14", "ovs_interfaceid": "b09e90b7-14bf-425e-bbd7-78f4c2dea771", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 00:00:08 np0005480824 nova_compute[260089]: 2025-10-11 04:00:08.065 2 DEBUG oslo_concurrency.lockutils [req-4881a83d-6445-4359-b864-72b2bb73492c req-4199cf34-1ca7-407f-8481-e419657e26a5 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Releasing lock "refresh_cache-a010ce52-5e6a-44bd-8bc6-4151b2e1f528" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 00:00:08 np0005480824 nova_compute[260089]: 2025-10-11 04:00:08.162 2 DEBUG oslo_concurrency.processutils [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a010ce52-5e6a-44bd-8bc6-4151b2e1f528/disk.config a010ce52-5e6a-44bd-8bc6-4151b2e1f528_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:00:08 np0005480824 nova_compute[260089]: 2025-10-11 04:00:08.162 2 INFO nova.virt.libvirt.driver [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] Deleting local config drive /var/lib/nova/instances/a010ce52-5e6a-44bd-8bc6-4151b2e1f528/disk.config because it was imported into RBD.#033[00m
Oct 11 00:00:08 np0005480824 kernel: tapb09e90b7-14: entered promiscuous mode
Oct 11 00:00:08 np0005480824 NetworkManager[44969]: <info>  [1760155208.2239] manager: (tapb09e90b7-14): new Tun device (/org/freedesktop/NetworkManager/Devices/120)
Oct 11 00:00:08 np0005480824 ovn_controller[152667]: 2025-10-11T04:00:08Z|00213|binding|INFO|Claiming lport b09e90b7-14bf-425e-bbd7-78f4c2dea771 for this chassis.
Oct 11 00:00:08 np0005480824 nova_compute[260089]: 2025-10-11 04:00:08.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:00:08 np0005480824 ovn_controller[152667]: 2025-10-11T04:00:08Z|00214|binding|INFO|b09e90b7-14bf-425e-bbd7-78f4c2dea771: Claiming fa:16:3e:02:b8:a9 10.100.0.13
Oct 11 00:00:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:08.236 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:02:b8:a9 10.100.0.13'], port_security=['fa:16:3e:02:b8:a9 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'a010ce52-5e6a-44bd-8bc6-4151b2e1f528', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-15a62ee0-8e34-4e49-990e-246b4ef9e0c6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0e73ded2f2ee46b4a7485c01ef1b73e9', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b966caac-3def-4c2a-badc-a92b0de92fd7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3f608fb9-f693-4a11-9617-6172f3d025df, chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], logical_port=b09e90b7-14bf-425e-bbd7-78f4c2dea771) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 00:00:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:08.237 162245 INFO neutron.agent.ovn.metadata.agent [-] Port b09e90b7-14bf-425e-bbd7-78f4c2dea771 in datapath 15a62ee0-8e34-4e49-990e-246b4ef9e0c6 bound to our chassis#033[00m
Oct 11 00:00:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:08.243 162245 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 15a62ee0-8e34-4e49-990e-246b4ef9e0c6#033[00m
Oct 11 00:00:08 np0005480824 ovn_controller[152667]: 2025-10-11T04:00:08Z|00215|binding|INFO|Setting lport b09e90b7-14bf-425e-bbd7-78f4c2dea771 ovn-installed in OVS
Oct 11 00:00:08 np0005480824 ovn_controller[152667]: 2025-10-11T04:00:08Z|00216|binding|INFO|Setting lport b09e90b7-14bf-425e-bbd7-78f4c2dea771 up in Southbound
Oct 11 00:00:08 np0005480824 nova_compute[260089]: 2025-10-11 04:00:08.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:00:08 np0005480824 nova_compute[260089]: 2025-10-11 04:00:08.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:00:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:08.263 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[255ac7a7-ed63-4ed0-aa1a-f66cce3ce81a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:00:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:08.264 162245 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap15a62ee0-81 in ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 00:00:08 np0005480824 systemd-udevd[296172]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 00:00:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:08.267 267859 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap15a62ee0-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 00:00:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:08.267 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[dd21edae-df42-4839-84ae-a20ee688bb3b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:00:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:08.268 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[ce8ef03f-2b4b-4152-be5d-776e6709eb95]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:00:08 np0005480824 systemd-machined[215071]: New machine qemu-24-instance-00000018.
Oct 11 00:00:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:08.281 162666 DEBUG oslo.privsep.daemon [-] privsep: reply[9cd87563-2f88-4b7c-81e5-0822878ad9ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:00:08 np0005480824 NetworkManager[44969]: <info>  [1760155208.2884] device (tapb09e90b7-14): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 00:00:08 np0005480824 NetworkManager[44969]: <info>  [1760155208.2900] device (tapb09e90b7-14): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 00:00:08 np0005480824 systemd[1]: Started Virtual Machine qemu-24-instance-00000018.
Oct 11 00:00:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:08.309 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[cc0c490e-6fa6-4fcd-9d82-d68a0e80f048]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:00:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:08.342 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[7d25a83b-e36a-4109-b569-7ee84edf970d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:00:08 np0005480824 systemd-udevd[296177]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 00:00:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:08.350 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[3191f82f-1b36-4753-b31b-90ecfa6f397d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:00:08 np0005480824 NetworkManager[44969]: <info>  [1760155208.3529] manager: (tap15a62ee0-80): new Veth device (/org/freedesktop/NetworkManager/Devices/121)
Oct 11 00:00:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:08.384 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[ca235a95-06c3-4117-bfbb-67b2718930a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:00:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:08.387 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[12f7c08e-fae6-4f00-96fc-19d7862c7ebc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:00:08 np0005480824 NetworkManager[44969]: <info>  [1760155208.4093] device (tap15a62ee0-80): carrier: link connected
Oct 11 00:00:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:08.414 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[bbc62453-a99e-4f00-9c7b-f64ed155fee7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:00:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:08.430 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[79b6d281-6335-4e3a-8cef-1d6c6589af20]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap15a62ee0-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a6:91:d9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 76], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 467607, 'reachable_time': 23307, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296205, 'error': None, 'target': 'ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:00:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:08.452 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[266a8616-d57c-462f-bd28-dafcca95bb50]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea6:91d9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 467607, 'tstamp': 467607}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296206, 'error': None, 'target': 'ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:00:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:08.477 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[15216f82-6edc-40c9-b0f6-12487c7a2e4d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap15a62ee0-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a6:91:d9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 76], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 467607, 'reachable_time': 23307, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 296207, 'error': None, 'target': 'ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:00:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:08.506 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[7a9c8c5f-618b-47c0-aaba-45c02dafa3b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:00:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:08.565 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[4f78cb0c-0f75-4c47-9595-915e6aaa0c85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:00:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:08.568 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap15a62ee0-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 00:00:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:08.568 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 00:00:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:08.568 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap15a62ee0-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 00:00:08 np0005480824 kernel: tap15a62ee0-80: entered promiscuous mode
Oct 11 00:00:08 np0005480824 nova_compute[260089]: 2025-10-11 04:00:08.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:00:08 np0005480824 NetworkManager[44969]: <info>  [1760155208.5716] manager: (tap15a62ee0-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/122)
Oct 11 00:00:08 np0005480824 nova_compute[260089]: 2025-10-11 04:00:08.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:00:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:08.579 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap15a62ee0-80, col_values=(('external_ids', {'iface-id': '182275c4-a015-4f7a-8877-9961b2382f67'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 00:00:08 np0005480824 nova_compute[260089]: 2025-10-11 04:00:08.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:00:08 np0005480824 ovn_controller[152667]: 2025-10-11T04:00:08Z|00217|binding|INFO|Releasing lport 182275c4-a015-4f7a-8877-9961b2382f67 from this chassis (sb_readonly=0)
Oct 11 00:00:08 np0005480824 nova_compute[260089]: 2025-10-11 04:00:08.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:00:08 np0005480824 nova_compute[260089]: 2025-10-11 04:00:08.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:00:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:08.597 162245 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/15a62ee0-8e34-4e49-990e-246b4ef9e0c6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/15a62ee0-8e34-4e49-990e-246b4ef9e0c6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 00:00:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:08.597 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[f8b9af4f-16d7-4450-b49d-8614cbfcaa9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:00:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:08.598 162245 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 00:00:08 np0005480824 ovn_metadata_agent[162240]: global
Oct 11 00:00:08 np0005480824 ovn_metadata_agent[162240]:    log         /dev/log local0 debug
Oct 11 00:00:08 np0005480824 ovn_metadata_agent[162240]:    log-tag     haproxy-metadata-proxy-15a62ee0-8e34-4e49-990e-246b4ef9e0c6
Oct 11 00:00:08 np0005480824 ovn_metadata_agent[162240]:    user        root
Oct 11 00:00:08 np0005480824 ovn_metadata_agent[162240]:    group       root
Oct 11 00:00:08 np0005480824 ovn_metadata_agent[162240]:    maxconn     1024
Oct 11 00:00:08 np0005480824 ovn_metadata_agent[162240]:    pidfile     /var/lib/neutron/external/pids/15a62ee0-8e34-4e49-990e-246b4ef9e0c6.pid.haproxy
Oct 11 00:00:08 np0005480824 ovn_metadata_agent[162240]:    daemon
Oct 11 00:00:08 np0005480824 ovn_metadata_agent[162240]: 
Oct 11 00:00:08 np0005480824 ovn_metadata_agent[162240]: defaults
Oct 11 00:00:08 np0005480824 ovn_metadata_agent[162240]:    log global
Oct 11 00:00:08 np0005480824 ovn_metadata_agent[162240]:    mode http
Oct 11 00:00:08 np0005480824 ovn_metadata_agent[162240]:    option httplog
Oct 11 00:00:08 np0005480824 ovn_metadata_agent[162240]:    option dontlognull
Oct 11 00:00:08 np0005480824 ovn_metadata_agent[162240]:    option http-server-close
Oct 11 00:00:08 np0005480824 ovn_metadata_agent[162240]:    option forwardfor
Oct 11 00:00:08 np0005480824 ovn_metadata_agent[162240]:    retries                 3
Oct 11 00:00:08 np0005480824 ovn_metadata_agent[162240]:    timeout http-request    30s
Oct 11 00:00:08 np0005480824 ovn_metadata_agent[162240]:    timeout connect         30s
Oct 11 00:00:08 np0005480824 ovn_metadata_agent[162240]:    timeout client          32s
Oct 11 00:00:08 np0005480824 ovn_metadata_agent[162240]:    timeout server          32s
Oct 11 00:00:08 np0005480824 ovn_metadata_agent[162240]:    timeout http-keep-alive 30s
Oct 11 00:00:08 np0005480824 ovn_metadata_agent[162240]: 
Oct 11 00:00:08 np0005480824 ovn_metadata_agent[162240]: 
Oct 11 00:00:08 np0005480824 ovn_metadata_agent[162240]: listen listener
Oct 11 00:00:08 np0005480824 ovn_metadata_agent[162240]:    bind 169.254.169.254:80
Oct 11 00:00:08 np0005480824 ovn_metadata_agent[162240]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 00:00:08 np0005480824 ovn_metadata_agent[162240]:    http-request add-header X-OVN-Network-ID 15a62ee0-8e34-4e49-990e-246b4ef9e0c6
Oct 11 00:00:08 np0005480824 ovn_metadata_agent[162240]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 00:00:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:08.599 162245 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6', 'env', 'PROCESS_TAG=haproxy-15a62ee0-8e34-4e49-990e-246b4ef9e0c6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/15a62ee0-8e34-4e49-990e-246b4ef9e0c6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 00:00:08 np0005480824 nova_compute[260089]: 2025-10-11 04:00:08.667 2 DEBUG nova.compute.manager [req-74db4fa9-39c9-40c1-8a3d-e06dbe6dd802 req-34e700de-f765-4f51-873a-c18061d39b0b 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] Received event network-vif-plugged-b09e90b7-14bf-425e-bbd7-78f4c2dea771 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 00:00:08 np0005480824 nova_compute[260089]: 2025-10-11 04:00:08.668 2 DEBUG oslo_concurrency.lockutils [req-74db4fa9-39c9-40c1-8a3d-e06dbe6dd802 req-34e700de-f765-4f51-873a-c18061d39b0b 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "a010ce52-5e6a-44bd-8bc6-4151b2e1f528-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:00:08 np0005480824 nova_compute[260089]: 2025-10-11 04:00:08.668 2 DEBUG oslo_concurrency.lockutils [req-74db4fa9-39c9-40c1-8a3d-e06dbe6dd802 req-34e700de-f765-4f51-873a-c18061d39b0b 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "a010ce52-5e6a-44bd-8bc6-4151b2e1f528-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:00:08 np0005480824 nova_compute[260089]: 2025-10-11 04:00:08.668 2 DEBUG oslo_concurrency.lockutils [req-74db4fa9-39c9-40c1-8a3d-e06dbe6dd802 req-34e700de-f765-4f51-873a-c18061d39b0b 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "a010ce52-5e6a-44bd-8bc6-4151b2e1f528-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:00:08 np0005480824 nova_compute[260089]: 2025-10-11 04:00:08.669 2 DEBUG nova.compute.manager [req-74db4fa9-39c9-40c1-8a3d-e06dbe6dd802 req-34e700de-f765-4f51-873a-c18061d39b0b 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] Processing event network-vif-plugged-b09e90b7-14bf-425e-bbd7-78f4c2dea771 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 00:00:08 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1627: 321 pgs: 321 active+clean; 2.4 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 1.8 MiB/s rd, 11 MiB/s wr, 204 op/s
Oct 11 00:00:08 np0005480824 podman[296273]: 2025-10-11 04:00:08.952677882 +0000 UTC m=+0.058970907 container create 687b8dd7f44eb77dd4d450a9764ad12d05de0dd72150ccf3a8bd1fe494cfc387 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 00:00:08 np0005480824 systemd[1]: Started libpod-conmon-687b8dd7f44eb77dd4d450a9764ad12d05de0dd72150ccf3a8bd1fe494cfc387.scope.
Oct 11 00:00:09 np0005480824 systemd[1]: Started libcrun container.
Oct 11 00:00:09 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9673b32ee80085034a10a30c3e27bd8fa029316af542a221f73f67cb6dc11ee4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 00:00:09 np0005480824 podman[296273]: 2025-10-11 04:00:08.928484066 +0000 UTC m=+0.034777111 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 11 00:00:09 np0005480824 podman[296273]: 2025-10-11 04:00:09.031863995 +0000 UTC m=+0.138157030 container init 687b8dd7f44eb77dd4d450a9764ad12d05de0dd72150ccf3a8bd1fe494cfc387 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3)
Oct 11 00:00:09 np0005480824 podman[296273]: 2025-10-11 04:00:09.037266449 +0000 UTC m=+0.143559474 container start 687b8dd7f44eb77dd4d450a9764ad12d05de0dd72150ccf3a8bd1fe494cfc387 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009)
Oct 11 00:00:09 np0005480824 neutron-haproxy-ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6[296290]: [NOTICE]   (296294) : New worker (296296) forked
Oct 11 00:00:09 np0005480824 neutron-haproxy-ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6[296290]: [NOTICE]   (296294) : Loading success.
Oct 11 00:00:09 np0005480824 nova_compute[260089]: 2025-10-11 04:00:09.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:00:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:10.505 162245 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:00:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:10.506 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:00:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:10.507 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:00:10 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:00:10 np0005480824 nova_compute[260089]: 2025-10-11 04:00:10.745 2 DEBUG nova.compute.manager [req-e01928fe-bf5b-4026-81c8-1d185c92d52b req-761ca610-2010-46e9-b403-0c7ec8e09e30 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] Received event network-vif-plugged-b09e90b7-14bf-425e-bbd7-78f4c2dea771 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 00:00:10 np0005480824 nova_compute[260089]: 2025-10-11 04:00:10.746 2 DEBUG oslo_concurrency.lockutils [req-e01928fe-bf5b-4026-81c8-1d185c92d52b req-761ca610-2010-46e9-b403-0c7ec8e09e30 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "a010ce52-5e6a-44bd-8bc6-4151b2e1f528-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:00:10 np0005480824 nova_compute[260089]: 2025-10-11 04:00:10.746 2 DEBUG oslo_concurrency.lockutils [req-e01928fe-bf5b-4026-81c8-1d185c92d52b req-761ca610-2010-46e9-b403-0c7ec8e09e30 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "a010ce52-5e6a-44bd-8bc6-4151b2e1f528-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:00:10 np0005480824 nova_compute[260089]: 2025-10-11 04:00:10.746 2 DEBUG oslo_concurrency.lockutils [req-e01928fe-bf5b-4026-81c8-1d185c92d52b req-761ca610-2010-46e9-b403-0c7ec8e09e30 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "a010ce52-5e6a-44bd-8bc6-4151b2e1f528-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:00:10 np0005480824 nova_compute[260089]: 2025-10-11 04:00:10.746 2 DEBUG nova.compute.manager [req-e01928fe-bf5b-4026-81c8-1d185c92d52b req-761ca610-2010-46e9-b403-0c7ec8e09e30 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] No waiting events found dispatching network-vif-plugged-b09e90b7-14bf-425e-bbd7-78f4c2dea771 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 00:00:10 np0005480824 nova_compute[260089]: 2025-10-11 04:00:10.746 2 WARNING nova.compute.manager [req-e01928fe-bf5b-4026-81c8-1d185c92d52b req-761ca610-2010-46e9-b403-0c7ec8e09e30 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] Received unexpected event network-vif-plugged-b09e90b7-14bf-425e-bbd7-78f4c2dea771 for instance with vm_state building and task_state spawning.#033[00m
Oct 11 00:00:10 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1628: 321 pgs: 321 active+clean; 2.4 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 372 KiB/s rd, 2.1 MiB/s wr, 80 op/s
Oct 11 00:00:11 np0005480824 nova_compute[260089]: 2025-10-11 04:00:11.913 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760155211.912986, a010ce52-5e6a-44bd-8bc6-4151b2e1f528 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 00:00:11 np0005480824 nova_compute[260089]: 2025-10-11 04:00:11.914 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] VM Started (Lifecycle Event)#033[00m
Oct 11 00:00:11 np0005480824 nova_compute[260089]: 2025-10-11 04:00:11.915 2 DEBUG nova.compute.manager [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 00:00:11 np0005480824 nova_compute[260089]: 2025-10-11 04:00:11.919 2 DEBUG nova.virt.libvirt.driver [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 00:00:11 np0005480824 nova_compute[260089]: 2025-10-11 04:00:11.923 2 INFO nova.virt.libvirt.driver [-] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] Instance spawned successfully.#033[00m
Oct 11 00:00:11 np0005480824 nova_compute[260089]: 2025-10-11 04:00:11.923 2 DEBUG nova.virt.libvirt.driver [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 00:00:11 np0005480824 nova_compute[260089]: 2025-10-11 04:00:11.950 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 00:00:11 np0005480824 nova_compute[260089]: 2025-10-11 04:00:11.957 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 00:00:11 np0005480824 nova_compute[260089]: 2025-10-11 04:00:11.961 2 DEBUG nova.virt.libvirt.driver [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 00:00:11 np0005480824 nova_compute[260089]: 2025-10-11 04:00:11.961 2 DEBUG nova.virt.libvirt.driver [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 00:00:11 np0005480824 nova_compute[260089]: 2025-10-11 04:00:11.961 2 DEBUG nova.virt.libvirt.driver [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 00:00:11 np0005480824 nova_compute[260089]: 2025-10-11 04:00:11.962 2 DEBUG nova.virt.libvirt.driver [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 00:00:11 np0005480824 nova_compute[260089]: 2025-10-11 04:00:11.962 2 DEBUG nova.virt.libvirt.driver [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 00:00:11 np0005480824 nova_compute[260089]: 2025-10-11 04:00:11.963 2 DEBUG nova.virt.libvirt.driver [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 00:00:12 np0005480824 nova_compute[260089]: 2025-10-11 04:00:12.047 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 00:00:12 np0005480824 nova_compute[260089]: 2025-10-11 04:00:12.048 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760155211.9139059, a010ce52-5e6a-44bd-8bc6-4151b2e1f528 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 00:00:12 np0005480824 nova_compute[260089]: 2025-10-11 04:00:12.048 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] VM Paused (Lifecycle Event)#033[00m
Oct 11 00:00:12 np0005480824 nova_compute[260089]: 2025-10-11 04:00:12.091 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 00:00:12 np0005480824 nova_compute[260089]: 2025-10-11 04:00:12.095 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760155211.9182284, a010ce52-5e6a-44bd-8bc6-4151b2e1f528 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 00:00:12 np0005480824 nova_compute[260089]: 2025-10-11 04:00:12.095 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] VM Resumed (Lifecycle Event)#033[00m
Oct 11 00:00:12 np0005480824 nova_compute[260089]: 2025-10-11 04:00:12.102 2 INFO nova.compute.manager [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] Took 8.95 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 00:00:12 np0005480824 nova_compute[260089]: 2025-10-11 04:00:12.103 2 DEBUG nova.compute.manager [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 00:00:12 np0005480824 nova_compute[260089]: 2025-10-11 04:00:12.113 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 00:00:12 np0005480824 nova_compute[260089]: 2025-10-11 04:00:12.117 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 00:00:12 np0005480824 nova_compute[260089]: 2025-10-11 04:00:12.153 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 00:00:12 np0005480824 nova_compute[260089]: 2025-10-11 04:00:12.166 2 INFO nova.compute.manager [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] Took 11.12 seconds to build instance.#033[00m
Oct 11 00:00:12 np0005480824 nova_compute[260089]: 2025-10-11 04:00:12.183 2 DEBUG oslo_concurrency.lockutils [None req-848effef-e058-4f49-8ef2-e65caba1bc14 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Lock "a010ce52-5e6a-44bd-8bc6-4151b2e1f528" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.199s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:00:12 np0005480824 nova_compute[260089]: 2025-10-11 04:00:12.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:00:12 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1629: 321 pgs: 321 active+clean; 2.5 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 785 KiB/s rd, 2.2 MiB/s wr, 116 op/s
Oct 11 00:00:14 np0005480824 podman[296312]: 2025-10-11 04:00:14.158989438 +0000 UTC m=+0.193563775 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 00:00:14 np0005480824 nova_compute[260089]: 2025-10-11 04:00:14.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:00:14 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1630: 321 pgs: 321 active+clean; 2.5 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 776 KiB/s rd, 2.2 MiB/s wr, 110 op/s
Oct 11 00:00:15 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:00:16 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1631: 321 pgs: 321 active+clean; 2.5 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 886 KiB/s rd, 2.2 MiB/s wr, 113 op/s
Oct 11 00:00:17 np0005480824 nova_compute[260089]: 2025-10-11 04:00:17.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:00:18 np0005480824 nova_compute[260089]: 2025-10-11 04:00:18.258 2 DEBUG nova.compute.manager [req-9356b3a1-3a6c-4bcc-a292-ef7fa4b2a929 req-def2d6d6-e8ff-4f37-b1a3-7f01ea0ac6eb 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] Received event network-changed-b09e90b7-14bf-425e-bbd7-78f4c2dea771 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 00:00:18 np0005480824 nova_compute[260089]: 2025-10-11 04:00:18.259 2 DEBUG nova.compute.manager [req-9356b3a1-3a6c-4bcc-a292-ef7fa4b2a929 req-def2d6d6-e8ff-4f37-b1a3-7f01ea0ac6eb 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] Refreshing instance network info cache due to event network-changed-b09e90b7-14bf-425e-bbd7-78f4c2dea771. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 00:00:18 np0005480824 nova_compute[260089]: 2025-10-11 04:00:18.259 2 DEBUG oslo_concurrency.lockutils [req-9356b3a1-3a6c-4bcc-a292-ef7fa4b2a929 req-def2d6d6-e8ff-4f37-b1a3-7f01ea0ac6eb 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "refresh_cache-a010ce52-5e6a-44bd-8bc6-4151b2e1f528" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 00:00:18 np0005480824 nova_compute[260089]: 2025-10-11 04:00:18.260 2 DEBUG oslo_concurrency.lockutils [req-9356b3a1-3a6c-4bcc-a292-ef7fa4b2a929 req-def2d6d6-e8ff-4f37-b1a3-7f01ea0ac6eb 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquired lock "refresh_cache-a010ce52-5e6a-44bd-8bc6-4151b2e1f528" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 00:00:18 np0005480824 nova_compute[260089]: 2025-10-11 04:00:18.260 2 DEBUG nova.network.neutron [req-9356b3a1-3a6c-4bcc-a292-ef7fa4b2a929 req-def2d6d6-e8ff-4f37-b1a3-7f01ea0ac6eb 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] Refreshing network info cache for port b09e90b7-14bf-425e-bbd7-78f4c2dea771 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 00:00:18 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1632: 321 pgs: 321 active+clean; 2.5 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 147 op/s
Oct 11 00:00:19 np0005480824 nova_compute[260089]: 2025-10-11 04:00:19.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:00:19 np0005480824 nova_compute[260089]: 2025-10-11 04:00:19.479 2 DEBUG nova.network.neutron [req-9356b3a1-3a6c-4bcc-a292-ef7fa4b2a929 req-def2d6d6-e8ff-4f37-b1a3-7f01ea0ac6eb 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] Updated VIF entry in instance network info cache for port b09e90b7-14bf-425e-bbd7-78f4c2dea771. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 00:00:19 np0005480824 nova_compute[260089]: 2025-10-11 04:00:19.480 2 DEBUG nova.network.neutron [req-9356b3a1-3a6c-4bcc-a292-ef7fa4b2a929 req-def2d6d6-e8ff-4f37-b1a3-7f01ea0ac6eb 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] Updating instance_info_cache with network_info: [{"id": "b09e90b7-14bf-425e-bbd7-78f4c2dea771", "address": "fa:16:3e:02:b8:a9", "network": {"id": "15a62ee0-8e34-4e49-990e-246b4ef9e0c6", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1498494916-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0e73ded2f2ee46b4a7485c01ef1b73e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb09e90b7-14", "ovs_interfaceid": "b09e90b7-14bf-425e-bbd7-78f4c2dea771", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 00:00:19 np0005480824 nova_compute[260089]: 2025-10-11 04:00:19.501 2 DEBUG oslo_concurrency.lockutils [req-9356b3a1-3a6c-4bcc-a292-ef7fa4b2a929 req-def2d6d6-e8ff-4f37-b1a3-7f01ea0ac6eb 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Releasing lock "refresh_cache-a010ce52-5e6a-44bd-8bc6-4151b2e1f528" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 00:00:20 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:00:20 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1633: 321 pgs: 321 active+clean; 2.5 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 2.0 MiB/s rd, 44 KiB/s wr, 89 op/s
Oct 11 00:00:21 np0005480824 podman[296339]: 2025-10-11 04:00:21.020940143 +0000 UTC m=+0.073857240 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 00:00:22 np0005480824 nova_compute[260089]: 2025-10-11 04:00:22.165 2 DEBUG oslo_concurrency.lockutils [None req-4d9036bd-57df-4d2e-8bec-9c2fc39e838e 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Acquiring lock "d5aa10c6-5a8f-419f-8f0d-89bc251d13b1" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:00:22 np0005480824 nova_compute[260089]: 2025-10-11 04:00:22.166 2 DEBUG oslo_concurrency.lockutils [None req-4d9036bd-57df-4d2e-8bec-9c2fc39e838e 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Lock "d5aa10c6-5a8f-419f-8f0d-89bc251d13b1" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:00:22 np0005480824 nova_compute[260089]: 2025-10-11 04:00:22.180 2 DEBUG nova.objects.instance [None req-4d9036bd-57df-4d2e-8bec-9c2fc39e838e 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Lazy-loading 'flavor' on Instance uuid d5aa10c6-5a8f-419f-8f0d-89bc251d13b1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 00:00:22 np0005480824 nova_compute[260089]: 2025-10-11 04:00:22.209 2 DEBUG oslo_concurrency.lockutils [None req-4d9036bd-57df-4d2e-8bec-9c2fc39e838e 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Lock "d5aa10c6-5a8f-419f-8f0d-89bc251d13b1" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.043s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:00:22 np0005480824 nova_compute[260089]: 2025-10-11 04:00:22.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:00:22 np0005480824 nova_compute[260089]: 2025-10-11 04:00:22.400 2 DEBUG oslo_concurrency.lockutils [None req-4d9036bd-57df-4d2e-8bec-9c2fc39e838e 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Acquiring lock "d5aa10c6-5a8f-419f-8f0d-89bc251d13b1" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:00:22 np0005480824 nova_compute[260089]: 2025-10-11 04:00:22.400 2 DEBUG oslo_concurrency.lockutils [None req-4d9036bd-57df-4d2e-8bec-9c2fc39e838e 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Lock "d5aa10c6-5a8f-419f-8f0d-89bc251d13b1" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:00:22 np0005480824 nova_compute[260089]: 2025-10-11 04:00:22.401 2 INFO nova.compute.manager [None req-4d9036bd-57df-4d2e-8bec-9c2fc39e838e 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Attaching volume 0852afeb-a7b4-4c98-a5f1-0f78ce361a5d to /dev/vdb#033[00m
Oct 11 00:00:22 np0005480824 nova_compute[260089]: 2025-10-11 04:00:22.527 2 DEBUG os_brick.utils [None req-4d9036bd-57df-4d2e-8bec-9c2fc39e838e 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct 11 00:00:22 np0005480824 nova_compute[260089]: 2025-10-11 04:00:22.528 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:00:22 np0005480824 nova_compute[260089]: 2025-10-11 04:00:22.539 676 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:00:22 np0005480824 nova_compute[260089]: 2025-10-11 04:00:22.540 676 DEBUG oslo.privsep.daemon [-] privsep: reply[9d2aaaa1-76cd-4eed-a220-10f4bff81874]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:00:22 np0005480824 nova_compute[260089]: 2025-10-11 04:00:22.542 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:00:22 np0005480824 nova_compute[260089]: 2025-10-11 04:00:22.551 676 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:00:22 np0005480824 nova_compute[260089]: 2025-10-11 04:00:22.551 676 DEBUG oslo.privsep.daemon [-] privsep: reply[fc877d04-9bff-4b5b-b342-df5d4fb6c8b5]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d5d671ddab5a', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:00:22 np0005480824 nova_compute[260089]: 2025-10-11 04:00:22.554 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:00:22 np0005480824 nova_compute[260089]: 2025-10-11 04:00:22.564 676 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:00:22 np0005480824 nova_compute[260089]: 2025-10-11 04:00:22.564 676 DEBUG oslo.privsep.daemon [-] privsep: reply[76726e9f-04c9-40fd-a466-2a933de42b0a]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:00:22 np0005480824 nova_compute[260089]: 2025-10-11 04:00:22.567 676 DEBUG oslo.privsep.daemon [-] privsep: reply[a2178e8b-3906-4bff-ba90-29fa5d73c14c]: (4, 'fb3a2fb1-9efa-43f0-a057-bf422ac6b8d7') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:00:22 np0005480824 nova_compute[260089]: 2025-10-11 04:00:22.568 2 DEBUG oslo_concurrency.processutils [None req-4d9036bd-57df-4d2e-8bec-9c2fc39e838e 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:00:22 np0005480824 nova_compute[260089]: 2025-10-11 04:00:22.592 2 DEBUG oslo_concurrency.processutils [None req-4d9036bd-57df-4d2e-8bec-9c2fc39e838e 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] CMD "nvme version" returned: 0 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:00:22 np0005480824 nova_compute[260089]: 2025-10-11 04:00:22.598 2 DEBUG os_brick.initiator.connectors.lightos [None req-4d9036bd-57df-4d2e-8bec-9c2fc39e838e 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct 11 00:00:22 np0005480824 nova_compute[260089]: 2025-10-11 04:00:22.598 2 DEBUG os_brick.initiator.connectors.lightos [None req-4d9036bd-57df-4d2e-8bec-9c2fc39e838e 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct 11 00:00:22 np0005480824 nova_compute[260089]: 2025-10-11 04:00:22.599 2 DEBUG os_brick.initiator.connectors.lightos [None req-4d9036bd-57df-4d2e-8bec-9c2fc39e838e 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct 11 00:00:22 np0005480824 nova_compute[260089]: 2025-10-11 04:00:22.600 2 DEBUG os_brick.utils [None req-4d9036bd-57df-4d2e-8bec-9c2fc39e838e 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] <== get_connector_properties: return (71ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d5d671ddab5a', 'do_local_attach': False, 'nvme_hostid': '83042a20-0f72-4c47-8453-e72ead378624', 'system uuid': 'fb3a2fb1-9efa-43f0-a057-bf422ac6b8d7', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct 11 00:00:22 np0005480824 nova_compute[260089]: 2025-10-11 04:00:22.601 2 DEBUG nova.virt.block_device [None req-4d9036bd-57df-4d2e-8bec-9c2fc39e838e 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Updating existing volume attachment record: da160afa-0499-40a4-b94f-aea89eb71040 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct 11 00:00:22 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1634: 321 pgs: 321 active+clean; 2.5 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 2.0 MiB/s rd, 45 KiB/s wr, 90 op/s
Oct 11 00:00:23 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 00:00:23 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3781373702' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 00:00:23 np0005480824 nova_compute[260089]: 2025-10-11 04:00:23.292 2 DEBUG nova.objects.instance [None req-4d9036bd-57df-4d2e-8bec-9c2fc39e838e 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Lazy-loading 'flavor' on Instance uuid d5aa10c6-5a8f-419f-8f0d-89bc251d13b1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 00:00:23 np0005480824 nova_compute[260089]: 2025-10-11 04:00:23.311 2 DEBUG nova.virt.libvirt.driver [None req-4d9036bd-57df-4d2e-8bec-9c2fc39e838e 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Attempting to attach volume 0852afeb-a7b4-4c98-a5f1-0f78ce361a5d with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Oct 11 00:00:23 np0005480824 nova_compute[260089]: 2025-10-11 04:00:23.315 2 DEBUG nova.virt.libvirt.guest [None req-4d9036bd-57df-4d2e-8bec-9c2fc39e838e 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] attach device xml: <disk type="network" device="disk">
Oct 11 00:00:23 np0005480824 nova_compute[260089]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 00:00:23 np0005480824 nova_compute[260089]:  <source protocol="rbd" name="volumes/volume-0852afeb-a7b4-4c98-a5f1-0f78ce361a5d">
Oct 11 00:00:23 np0005480824 nova_compute[260089]:    <host name="192.168.122.100" port="6789"/>
Oct 11 00:00:23 np0005480824 nova_compute[260089]:  </source>
Oct 11 00:00:23 np0005480824 nova_compute[260089]:  <auth username="openstack">
Oct 11 00:00:23 np0005480824 nova_compute[260089]:    <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 11 00:00:23 np0005480824 nova_compute[260089]:  </auth>
Oct 11 00:00:23 np0005480824 nova_compute[260089]:  <target dev="vdb" bus="virtio"/>
Oct 11 00:00:23 np0005480824 nova_compute[260089]:  <serial>0852afeb-a7b4-4c98-a5f1-0f78ce361a5d</serial>
Oct 11 00:00:23 np0005480824 nova_compute[260089]: </disk>
Oct 11 00:00:23 np0005480824 nova_compute[260089]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct 11 00:00:23 np0005480824 nova_compute[260089]: 2025-10-11 04:00:23.421 2 DEBUG nova.virt.libvirt.driver [None req-4d9036bd-57df-4d2e-8bec-9c2fc39e838e 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 00:00:23 np0005480824 nova_compute[260089]: 2025-10-11 04:00:23.422 2 DEBUG nova.virt.libvirt.driver [None req-4d9036bd-57df-4d2e-8bec-9c2fc39e838e 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 00:00:23 np0005480824 nova_compute[260089]: 2025-10-11 04:00:23.422 2 DEBUG nova.virt.libvirt.driver [None req-4d9036bd-57df-4d2e-8bec-9c2fc39e838e 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 00:00:23 np0005480824 nova_compute[260089]: 2025-10-11 04:00:23.423 2 DEBUG nova.virt.libvirt.driver [None req-4d9036bd-57df-4d2e-8bec-9c2fc39e838e 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] No VIF found with MAC fa:16:3e:91:5e:e0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 00:00:23 np0005480824 nova_compute[260089]: 2025-10-11 04:00:23.642 2 DEBUG oslo_concurrency.lockutils [None req-4d9036bd-57df-4d2e-8bec-9c2fc39e838e 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Lock "d5aa10c6-5a8f-419f-8f0d-89bc251d13b1" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.242s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:00:24 np0005480824 nova_compute[260089]: 2025-10-11 04:00:24.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:00:24 np0005480824 nova_compute[260089]: 2025-10-11 04:00:24.492 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:00:24 np0005480824 nova_compute[260089]: 2025-10-11 04:00:24.534 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Triggering sync for uuid d5aa10c6-5a8f-419f-8f0d-89bc251d13b1 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct 11 00:00:24 np0005480824 nova_compute[260089]: 2025-10-11 04:00:24.535 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Triggering sync for uuid 9d89b9fc-eda1-4801-8670-e3e48a9e04ae _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct 11 00:00:24 np0005480824 nova_compute[260089]: 2025-10-11 04:00:24.535 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Triggering sync for uuid a010ce52-5e6a-44bd-8bc6-4151b2e1f528 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct 11 00:00:24 np0005480824 nova_compute[260089]: 2025-10-11 04:00:24.536 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "d5aa10c6-5a8f-419f-8f0d-89bc251d13b1" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:00:24 np0005480824 nova_compute[260089]: 2025-10-11 04:00:24.536 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "d5aa10c6-5a8f-419f-8f0d-89bc251d13b1" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:00:24 np0005480824 nova_compute[260089]: 2025-10-11 04:00:24.536 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "9d89b9fc-eda1-4801-8670-e3e48a9e04ae" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:00:24 np0005480824 nova_compute[260089]: 2025-10-11 04:00:24.537 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "9d89b9fc-eda1-4801-8670-e3e48a9e04ae" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:00:24 np0005480824 nova_compute[260089]: 2025-10-11 04:00:24.537 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "a010ce52-5e6a-44bd-8bc6-4151b2e1f528" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:00:24 np0005480824 nova_compute[260089]: 2025-10-11 04:00:24.538 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "a010ce52-5e6a-44bd-8bc6-4151b2e1f528" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:00:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 00:00:24 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3319572128' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 00:00:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 00:00:24 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3319572128' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 00:00:24 np0005480824 nova_compute[260089]: 2025-10-11 04:00:24.580 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "d5aa10c6-5a8f-419f-8f0d-89bc251d13b1" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.044s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:00:24 np0005480824 nova_compute[260089]: 2025-10-11 04:00:24.580 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "9d89b9fc-eda1-4801-8670-e3e48a9e04ae" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.044s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:00:24 np0005480824 nova_compute[260089]: 2025-10-11 04:00:24.633 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "a010ce52-5e6a-44bd-8bc6-4151b2e1f528" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.095s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:00:24 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1635: 321 pgs: 321 active+clean; 2.5 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 1.6 MiB/s rd, 14 KiB/s wr, 53 op/s
Oct 11 00:00:25 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:00:26 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1636: 321 pgs: 321 active+clean; 2.5 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 1.7 MiB/s rd, 367 KiB/s wr, 61 op/s
Oct 11 00:00:27 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e416 do_prune osdmap full prune enabled
Oct 11 00:00:27 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e417 e417: 3 total, 3 up, 3 in
Oct 11 00:00:27 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e417: 3 total, 3 up, 3 in
Oct 11 00:00:27 np0005480824 nova_compute[260089]: 2025-10-11 04:00:27.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:00:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 00:00:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 00:00:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 00:00:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 00:00:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 00:00:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 00:00:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Optimize plan auto_2025-10-11_04:00:27
Oct 11 00:00:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 00:00:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] do_upmap
Oct 11 00:00:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] pools ['.mgr', 'backups', 'images', 'volumes', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.meta', 'vms']
Oct 11 00:00:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] prepared 0/10 changes
Oct 11 00:00:28 np0005480824 ovn_controller[152667]: 2025-10-11T04:00:28Z|00054|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:02:b8:a9 10.100.0.13
Oct 11 00:00:28 np0005480824 ovn_controller[152667]: 2025-10-11T04:00:28Z|00055|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:02:b8:a9 10.100.0.13
Oct 11 00:00:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 00:00:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 00:00:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 00:00:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 00:00:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 00:00:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 00:00:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 00:00:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 00:00:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 00:00:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 00:00:28 np0005480824 nova_compute[260089]: 2025-10-11 04:00:28.315 2 DEBUG oslo_concurrency.lockutils [None req-34272eeb-f56a-4999-94b7-ffdc52339b29 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] Acquiring lock "9d89b9fc-eda1-4801-8670-e3e48a9e04ae" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:00:28 np0005480824 nova_compute[260089]: 2025-10-11 04:00:28.316 2 DEBUG oslo_concurrency.lockutils [None req-34272eeb-f56a-4999-94b7-ffdc52339b29 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] Lock "9d89b9fc-eda1-4801-8670-e3e48a9e04ae" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:00:28 np0005480824 nova_compute[260089]: 2025-10-11 04:00:28.316 2 DEBUG oslo_concurrency.lockutils [None req-34272eeb-f56a-4999-94b7-ffdc52339b29 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] Acquiring lock "9d89b9fc-eda1-4801-8670-e3e48a9e04ae-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:00:28 np0005480824 nova_compute[260089]: 2025-10-11 04:00:28.317 2 DEBUG oslo_concurrency.lockutils [None req-34272eeb-f56a-4999-94b7-ffdc52339b29 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] Lock "9d89b9fc-eda1-4801-8670-e3e48a9e04ae-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:00:28 np0005480824 nova_compute[260089]: 2025-10-11 04:00:28.318 2 DEBUG oslo_concurrency.lockutils [None req-34272eeb-f56a-4999-94b7-ffdc52339b29 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] Lock "9d89b9fc-eda1-4801-8670-e3e48a9e04ae-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:00:28 np0005480824 nova_compute[260089]: 2025-10-11 04:00:28.319 2 INFO nova.compute.manager [None req-34272eeb-f56a-4999-94b7-ffdc52339b29 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] Terminating instance#033[00m
Oct 11 00:00:28 np0005480824 nova_compute[260089]: 2025-10-11 04:00:28.322 2 DEBUG nova.compute.manager [None req-34272eeb-f56a-4999-94b7-ffdc52339b29 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 00:00:28 np0005480824 kernel: tap6b86c387-3e (unregistering): left promiscuous mode
Oct 11 00:00:28 np0005480824 NetworkManager[44969]: <info>  [1760155228.6473] device (tap6b86c387-3e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 00:00:28 np0005480824 nova_compute[260089]: 2025-10-11 04:00:28.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:00:28 np0005480824 ovn_controller[152667]: 2025-10-11T04:00:28Z|00218|binding|INFO|Releasing lport 6b86c387-3e59-4e3b-a7e3-e1ddfc541c50 from this chassis (sb_readonly=0)
Oct 11 00:00:28 np0005480824 ovn_controller[152667]: 2025-10-11T04:00:28Z|00219|binding|INFO|Setting lport 6b86c387-3e59-4e3b-a7e3-e1ddfc541c50 down in Southbound
Oct 11 00:00:28 np0005480824 ovn_controller[152667]: 2025-10-11T04:00:28Z|00220|binding|INFO|Removing iface tap6b86c387-3e ovn-installed in OVS
Oct 11 00:00:28 np0005480824 nova_compute[260089]: 2025-10-11 04:00:28.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:00:28 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:28.675 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:aa:03:d2 10.100.0.12'], port_security=['fa:16:3e:aa:03:d2 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '9d89b9fc-eda1-4801-8670-e3e48a9e04ae', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dfca432f-447a-432a-acc4-3a23e93eb8d6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7e504d8715354886aaae057de71d2d5e', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b1d6427b-83e7-4165-87fb-9e4a4a454ad5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.199'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bf22a879-98d1-4d61-afc5-85ac70ccc880, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], logical_port=6b86c387-3e59-4e3b-a7e3-e1ddfc541c50) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 00:00:28 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:28.677 162245 INFO neutron.agent.ovn.metadata.agent [-] Port 6b86c387-3e59-4e3b-a7e3-e1ddfc541c50 in datapath dfca432f-447a-432a-acc4-3a23e93eb8d6 unbound from our chassis
Oct 11 00:00:28 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:28.679 162245 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network dfca432f-447a-432a-acc4-3a23e93eb8d6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 00:00:28 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:28.679 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[a529a0d9-97c9-46ff-b68f-9e083e3d9e8a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 00:00:28 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:28.680 162245 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-dfca432f-447a-432a-acc4-3a23e93eb8d6 namespace which is not needed anymore
Oct 11 00:00:28 np0005480824 nova_compute[260089]: 2025-10-11 04:00:28.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 00:00:28 np0005480824 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000017.scope: Deactivated successfully.
Oct 11 00:00:28 np0005480824 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000017.scope: Consumed 15.281s CPU time.
Oct 11 00:00:28 np0005480824 systemd-machined[215071]: Machine qemu-23-instance-00000017 terminated.
Oct 11 00:00:28 np0005480824 nova_compute[260089]: 2025-10-11 04:00:28.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 00:00:28 np0005480824 nova_compute[260089]: 2025-10-11 04:00:28.791 2 INFO nova.virt.libvirt.driver [-] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] Instance destroyed successfully.
Oct 11 00:00:28 np0005480824 nova_compute[260089]: 2025-10-11 04:00:28.792 2 DEBUG nova.objects.instance [None req-34272eeb-f56a-4999-94b7-ffdc52339b29 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] Lazy-loading 'resources' on Instance uuid 9d89b9fc-eda1-4801-8670-e3e48a9e04ae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 00:00:28 np0005480824 nova_compute[260089]: 2025-10-11 04:00:28.834 2 DEBUG nova.virt.libvirt.vif [None req-34272eeb-f56a-4999-94b7-ffdc52339b29 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T03:59:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-instance-1498768136',display_name='tempest-instance-1498768136',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instance-1498768136',id=23,image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA6NEZ2tGoJNT+vvNnpP4L6gc6uAsBt40LTA8EQpPfSsAFWsYjpXOMQiWw7U5ChT+0BqjZWxp2ku4qdtk+iV8mf7DOgmUJEHoiCZuHPxkdkmWxuyoiARuOt4ilG0l2yHrA==',key_name='tempest-keypair-1444996657',keypairs=<?>,launch_index=0,launched_at=2025-10-11T03:59:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7e504d8715354886aaae057de71d2d5e',ramdisk_id='',reservation_id='r-jwaks7td',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus=
'usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesBackupsTest-781394803',owner_user_name='tempest-VolumesBackupsTest-781394803-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T03:59:54Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='04ab08efaee14de7b56b2514c0187402',uuid=9d89b9fc-eda1-4801-8670-e3e48a9e04ae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6b86c387-3e59-4e3b-a7e3-e1ddfc541c50", "address": "fa:16:3e:aa:03:d2", "network": {"id": "dfca432f-447a-432a-acc4-3a23e93eb8d6", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1081477901-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7e504d8715354886aaae057de71d2d5e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b86c387-3e", "ovs_interfaceid": "6b86c387-3e59-4e3b-a7e3-e1ddfc541c50", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 00:00:28 np0005480824 nova_compute[260089]: 2025-10-11 04:00:28.835 2 DEBUG nova.network.os_vif_util [None req-34272eeb-f56a-4999-94b7-ffdc52339b29 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] Converting VIF {"id": "6b86c387-3e59-4e3b-a7e3-e1ddfc541c50", "address": "fa:16:3e:aa:03:d2", "network": {"id": "dfca432f-447a-432a-acc4-3a23e93eb8d6", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1081477901-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7e504d8715354886aaae057de71d2d5e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b86c387-3e", "ovs_interfaceid": "6b86c387-3e59-4e3b-a7e3-e1ddfc541c50", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 00:00:28 np0005480824 nova_compute[260089]: 2025-10-11 04:00:28.837 2 DEBUG nova.network.os_vif_util [None req-34272eeb-f56a-4999-94b7-ffdc52339b29 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:aa:03:d2,bridge_name='br-int',has_traffic_filtering=True,id=6b86c387-3e59-4e3b-a7e3-e1ddfc541c50,network=Network(dfca432f-447a-432a-acc4-3a23e93eb8d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b86c387-3e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 00:00:28 np0005480824 nova_compute[260089]: 2025-10-11 04:00:28.838 2 DEBUG os_vif [None req-34272eeb-f56a-4999-94b7-ffdc52339b29 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:aa:03:d2,bridge_name='br-int',has_traffic_filtering=True,id=6b86c387-3e59-4e3b-a7e3-e1ddfc541c50,network=Network(dfca432f-447a-432a-acc4-3a23e93eb8d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b86c387-3e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 00:00:28 np0005480824 nova_compute[260089]: 2025-10-11 04:00:28.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 00:00:28 np0005480824 nova_compute[260089]: 2025-10-11 04:00:28.841 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6b86c387-3e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 00:00:28 np0005480824 nova_compute[260089]: 2025-10-11 04:00:28.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 00:00:28 np0005480824 nova_compute[260089]: 2025-10-11 04:00:28.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 00:00:28 np0005480824 nova_compute[260089]: 2025-10-11 04:00:28.848 2 INFO os_vif [None req-34272eeb-f56a-4999-94b7-ffdc52339b29 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:aa:03:d2,bridge_name='br-int',has_traffic_filtering=True,id=6b86c387-3e59-4e3b-a7e3-e1ddfc541c50,network=Network(dfca432f-447a-432a-acc4-3a23e93eb8d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b86c387-3e')
Oct 11 00:00:28 np0005480824 neutron-haproxy-ovnmeta-dfca432f-447a-432a-acc4-3a23e93eb8d6[295326]: [NOTICE]   (295331) : haproxy version is 2.8.14-c23fe91
Oct 11 00:00:28 np0005480824 neutron-haproxy-ovnmeta-dfca432f-447a-432a-acc4-3a23e93eb8d6[295326]: [NOTICE]   (295331) : path to executable is /usr/sbin/haproxy
Oct 11 00:00:28 np0005480824 neutron-haproxy-ovnmeta-dfca432f-447a-432a-acc4-3a23e93eb8d6[295326]: [WARNING]  (295331) : Exiting Master process...
Oct 11 00:00:28 np0005480824 neutron-haproxy-ovnmeta-dfca432f-447a-432a-acc4-3a23e93eb8d6[295326]: [ALERT]    (295331) : Current worker (295333) exited with code 143 (Terminated)
Oct 11 00:00:28 np0005480824 neutron-haproxy-ovnmeta-dfca432f-447a-432a-acc4-3a23e93eb8d6[295326]: [WARNING]  (295331) : All workers exited. Exiting... (0)
Oct 11 00:00:28 np0005480824 systemd[1]: libpod-c29dcc7326c94912488612d05e8f64d125525905659ef4a73ce4518d9c310550.scope: Deactivated successfully.
Oct 11 00:00:28 np0005480824 conmon[295326]: conmon c29dcc7326c949124886 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c29dcc7326c94912488612d05e8f64d125525905659ef4a73ce4518d9c310550.scope/container/memory.events
Oct 11 00:00:28 np0005480824 podman[296417]: 2025-10-11 04:00:28.895944463 +0000 UTC m=+0.068814324 container died c29dcc7326c94912488612d05e8f64d125525905659ef4a73ce4518d9c310550 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dfca432f-447a-432a-acc4-3a23e93eb8d6, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 11 00:00:28 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1638: 321 pgs: 321 active+clean; 2.5 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 310 KiB/s rd, 2.5 MiB/s wr, 64 op/s
Oct 11 00:00:28 np0005480824 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c29dcc7326c94912488612d05e8f64d125525905659ef4a73ce4518d9c310550-userdata-shm.mount: Deactivated successfully.
Oct 11 00:00:28 np0005480824 systemd[1]: var-lib-containers-storage-overlay-2f6b7bab3771db020ac393d0f6bdc1d1e7d098ee22406c55794cc2cff7dcfc19-merged.mount: Deactivated successfully.
Oct 11 00:00:28 np0005480824 podman[296417]: 2025-10-11 04:00:28.966996798 +0000 UTC m=+0.139866629 container cleanup c29dcc7326c94912488612d05e8f64d125525905659ef4a73ce4518d9c310550 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dfca432f-447a-432a-acc4-3a23e93eb8d6, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 00:00:28 np0005480824 systemd[1]: libpod-conmon-c29dcc7326c94912488612d05e8f64d125525905659ef4a73ce4518d9c310550.scope: Deactivated successfully.
Oct 11 00:00:29 np0005480824 podman[296464]: 2025-10-11 04:00:29.059914195 +0000 UTC m=+0.053844959 container remove c29dcc7326c94912488612d05e8f64d125525905659ef4a73ce4518d9c310550 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dfca432f-447a-432a-acc4-3a23e93eb8d6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 11 00:00:29 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:29.069 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[c4542a57-b495-4fd9-8bca-695d07004e91]: (4, ('Sat Oct 11 04:00:28 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-dfca432f-447a-432a-acc4-3a23e93eb8d6 (c29dcc7326c94912488612d05e8f64d125525905659ef4a73ce4518d9c310550)\nc29dcc7326c94912488612d05e8f64d125525905659ef4a73ce4518d9c310550\nSat Oct 11 04:00:28 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-dfca432f-447a-432a-acc4-3a23e93eb8d6 (c29dcc7326c94912488612d05e8f64d125525905659ef4a73ce4518d9c310550)\nc29dcc7326c94912488612d05e8f64d125525905659ef4a73ce4518d9c310550\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 00:00:29 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:29.071 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[b4a1364f-7355-4c11-8c55-747e501d8c37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 00:00:29 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:29.072 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdfca432f-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 00:00:29 np0005480824 nova_compute[260089]: 2025-10-11 04:00:29.076 2 DEBUG nova.compute.manager [req-30e25178-011c-4bb8-8746-0bef41e98507 req-653327bd-179e-4661-987e-67301ef7b5b1 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] Received event network-vif-unplugged-6b86c387-3e59-4e3b-a7e3-e1ddfc541c50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 00:00:29 np0005480824 nova_compute[260089]: 2025-10-11 04:00:29.076 2 DEBUG oslo_concurrency.lockutils [req-30e25178-011c-4bb8-8746-0bef41e98507 req-653327bd-179e-4661-987e-67301ef7b5b1 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "9d89b9fc-eda1-4801-8670-e3e48a9e04ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 00:00:29 np0005480824 kernel: tapdfca432f-40: left promiscuous mode
Oct 11 00:00:29 np0005480824 nova_compute[260089]: 2025-10-11 04:00:29.077 2 DEBUG oslo_concurrency.lockutils [req-30e25178-011c-4bb8-8746-0bef41e98507 req-653327bd-179e-4661-987e-67301ef7b5b1 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "9d89b9fc-eda1-4801-8670-e3e48a9e04ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 00:00:29 np0005480824 nova_compute[260089]: 2025-10-11 04:00:29.078 2 DEBUG oslo_concurrency.lockutils [req-30e25178-011c-4bb8-8746-0bef41e98507 req-653327bd-179e-4661-987e-67301ef7b5b1 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "9d89b9fc-eda1-4801-8670-e3e48a9e04ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 00:00:29 np0005480824 nova_compute[260089]: 2025-10-11 04:00:29.078 2 DEBUG nova.compute.manager [req-30e25178-011c-4bb8-8746-0bef41e98507 req-653327bd-179e-4661-987e-67301ef7b5b1 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] No waiting events found dispatching network-vif-unplugged-6b86c387-3e59-4e3b-a7e3-e1ddfc541c50 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 00:00:29 np0005480824 nova_compute[260089]: 2025-10-11 04:00:29.078 2 DEBUG nova.compute.manager [req-30e25178-011c-4bb8-8746-0bef41e98507 req-653327bd-179e-4661-987e-67301ef7b5b1 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] Received event network-vif-unplugged-6b86c387-3e59-4e3b-a7e3-e1ddfc541c50 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 00:00:29 np0005480824 nova_compute[260089]: 2025-10-11 04:00:29.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 00:00:29 np0005480824 nova_compute[260089]: 2025-10-11 04:00:29.096 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 00:00:29 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:29.099 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[3a4d848b-dd7b-4dc9-b200-0ecdeda0f3f4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 00:00:29 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:29.136 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[0a38931a-9a80-4755-b117-d11098ceb061]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 00:00:29 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:29.137 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[2650e66c-41e5-42c1-a878-9db58179e969]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 00:00:29 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:29.153 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[af728666-0cb5-48c9-bb47-d495b2676d98]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 466018, 'reachable_time': 30564, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296480, 'error': None, 'target': 'ovnmeta-dfca432f-447a-432a-acc4-3a23e93eb8d6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 00:00:29 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:29.157 162666 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-dfca432f-447a-432a-acc4-3a23e93eb8d6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 00:00:29 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:29.157 162666 DEBUG oslo.privsep.daemon [-] privsep: reply[25b084d7-5948-4d7b-b586-23f8111e914a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 00:00:29 np0005480824 systemd[1]: run-netns-ovnmeta\x2ddfca432f\x2d447a\x2d432a\x2dacc4\x2d3a23e93eb8d6.mount: Deactivated successfully.
Oct 11 00:00:29 np0005480824 nova_compute[260089]: 2025-10-11 04:00:29.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 00:00:29 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e417 do_prune osdmap full prune enabled
Oct 11 00:00:29 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e418 e418: 3 total, 3 up, 3 in
Oct 11 00:00:29 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e418: 3 total, 3 up, 3 in
Oct 11 00:00:29 np0005480824 nova_compute[260089]: 2025-10-11 04:00:29.464 2 INFO nova.virt.libvirt.driver [None req-34272eeb-f56a-4999-94b7-ffdc52339b29 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] Deleting instance files /var/lib/nova/instances/9d89b9fc-eda1-4801-8670-e3e48a9e04ae_del
Oct 11 00:00:29 np0005480824 nova_compute[260089]: 2025-10-11 04:00:29.465 2 INFO nova.virt.libvirt.driver [None req-34272eeb-f56a-4999-94b7-ffdc52339b29 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] Deletion of /var/lib/nova/instances/9d89b9fc-eda1-4801-8670-e3e48a9e04ae_del complete
Oct 11 00:00:29 np0005480824 nova_compute[260089]: 2025-10-11 04:00:29.534 2 INFO nova.compute.manager [None req-34272eeb-f56a-4999-94b7-ffdc52339b29 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] Took 1.21 seconds to destroy the instance on the hypervisor.
Oct 11 00:00:29 np0005480824 nova_compute[260089]: 2025-10-11 04:00:29.535 2 DEBUG oslo.service.loopingcall [None req-34272eeb-f56a-4999-94b7-ffdc52339b29 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 00:00:29 np0005480824 nova_compute[260089]: 2025-10-11 04:00:29.535 2 DEBUG nova.compute.manager [-] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 00:00:29 np0005480824 nova_compute[260089]: 2025-10-11 04:00:29.536 2 DEBUG nova.network.neutron [-] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 00:00:29 np0005480824 ceph-osd[88325]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 00:00:29 np0005480824 ceph-osd[88325]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 24K writes, 91K keys, 24K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.03 MB/s#012Cumulative WAL: 24K writes, 8824 syncs, 2.82 writes per sync, written: 0.06 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 12K writes, 42K keys, 12K commit groups, 1.0 writes per commit group, ingest: 32.32 MB, 0.05 MB/s#012Interval WAL: 12K writes, 5318 syncs, 2.43 writes per sync, written: 0.03 GB, 0.05 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 00:00:30 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:00:30 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1640: 321 pgs: 321 active+clean; 2.5 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 386 KiB/s rd, 3.2 MiB/s wr, 79 op/s
Oct 11 00:00:31 np0005480824 nova_compute[260089]: 2025-10-11 04:00:31.178 2 DEBUG nova.compute.manager [req-d5b31c1d-f333-4e2a-b7e5-fa72e301c976 req-a7d53a2f-1028-488c-9f79-e2772f5dd046 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] Received event network-vif-plugged-6b86c387-3e59-4e3b-a7e3-e1ddfc541c50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 00:00:31 np0005480824 nova_compute[260089]: 2025-10-11 04:00:31.179 2 DEBUG oslo_concurrency.lockutils [req-d5b31c1d-f333-4e2a-b7e5-fa72e301c976 req-a7d53a2f-1028-488c-9f79-e2772f5dd046 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "9d89b9fc-eda1-4801-8670-e3e48a9e04ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 00:00:31 np0005480824 nova_compute[260089]: 2025-10-11 04:00:31.179 2 DEBUG oslo_concurrency.lockutils [req-d5b31c1d-f333-4e2a-b7e5-fa72e301c976 req-a7d53a2f-1028-488c-9f79-e2772f5dd046 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "9d89b9fc-eda1-4801-8670-e3e48a9e04ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 00:00:31 np0005480824 nova_compute[260089]: 2025-10-11 04:00:31.180 2 DEBUG oslo_concurrency.lockutils [req-d5b31c1d-f333-4e2a-b7e5-fa72e301c976 req-a7d53a2f-1028-488c-9f79-e2772f5dd046 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "9d89b9fc-eda1-4801-8670-e3e48a9e04ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 00:00:31 np0005480824 nova_compute[260089]: 2025-10-11 04:00:31.180 2 DEBUG nova.compute.manager [req-d5b31c1d-f333-4e2a-b7e5-fa72e301c976 req-a7d53a2f-1028-488c-9f79-e2772f5dd046 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] No waiting events found dispatching network-vif-plugged-6b86c387-3e59-4e3b-a7e3-e1ddfc541c50 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 00:00:31 np0005480824 nova_compute[260089]: 2025-10-11 04:00:31.180 2 WARNING nova.compute.manager [req-d5b31c1d-f333-4e2a-b7e5-fa72e301c976 req-a7d53a2f-1028-488c-9f79-e2772f5dd046 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] Received unexpected event network-vif-plugged-6b86c387-3e59-4e3b-a7e3-e1ddfc541c50 for instance with vm_state active and task_state deleting.
Oct 11 00:00:31 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e418 do_prune osdmap full prune enabled
Oct 11 00:00:31 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e419 e419: 3 total, 3 up, 3 in
Oct 11 00:00:31 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e419: 3 total, 3 up, 3 in
Oct 11 00:00:31 np0005480824 nova_compute[260089]: 2025-10-11 04:00:31.748 2 DEBUG nova.network.neutron [-] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 00:00:31 np0005480824 nova_compute[260089]: 2025-10-11 04:00:31.768 2 INFO nova.compute.manager [-] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] Took 2.23 seconds to deallocate network for instance.#033[00m
Oct 11 00:00:31 np0005480824 nova_compute[260089]: 2025-10-11 04:00:31.834 2 DEBUG nova.compute.manager [req-416e639c-8412-4189-ac63-043563d61f44 req-e7dc0750-0370-4123-b20e-3301010f6190 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] Received event network-vif-deleted-6b86c387-3e59-4e3b-a7e3-e1ddfc541c50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 00:00:31 np0005480824 nova_compute[260089]: 2025-10-11 04:00:31.923 2 INFO nova.compute.manager [None req-34272eeb-f56a-4999-94b7-ffdc52339b29 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] Took 0.16 seconds to detach 1 volumes for instance.#033[00m
Oct 11 00:00:31 np0005480824 nova_compute[260089]: 2025-10-11 04:00:31.971 2 DEBUG oslo_concurrency.lockutils [None req-34272eeb-f56a-4999-94b7-ffdc52339b29 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:00:31 np0005480824 nova_compute[260089]: 2025-10-11 04:00:31.972 2 DEBUG oslo_concurrency.lockutils [None req-34272eeb-f56a-4999-94b7-ffdc52339b29 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:00:32 np0005480824 nova_compute[260089]: 2025-10-11 04:00:32.086 2 DEBUG oslo_concurrency.processutils [None req-34272eeb-f56a-4999-94b7-ffdc52339b29 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:00:32 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 00:00:32 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4007866793' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 00:00:32 np0005480824 nova_compute[260089]: 2025-10-11 04:00:32.548 2 DEBUG oslo_concurrency.processutils [None req-34272eeb-f56a-4999-94b7-ffdc52339b29 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:00:32 np0005480824 nova_compute[260089]: 2025-10-11 04:00:32.558 2 DEBUG nova.compute.provider_tree [None req-34272eeb-f56a-4999-94b7-ffdc52339b29 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 00:00:32 np0005480824 nova_compute[260089]: 2025-10-11 04:00:32.580 2 DEBUG nova.scheduler.client.report [None req-34272eeb-f56a-4999-94b7-ffdc52339b29 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 00:00:32 np0005480824 nova_compute[260089]: 2025-10-11 04:00:32.607 2 DEBUG oslo_concurrency.lockutils [None req-34272eeb-f56a-4999-94b7-ffdc52339b29 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.635s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:00:32 np0005480824 nova_compute[260089]: 2025-10-11 04:00:32.637 2 INFO nova.scheduler.client.report [None req-34272eeb-f56a-4999-94b7-ffdc52339b29 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] Deleted allocations for instance 9d89b9fc-eda1-4801-8670-e3e48a9e04ae#033[00m
Oct 11 00:00:32 np0005480824 nova_compute[260089]: 2025-10-11 04:00:32.716 2 DEBUG oslo_concurrency.lockutils [None req-72e007a4-6b64-4af2-95c6-1e42237ea210 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Acquiring lock "d5aa10c6-5a8f-419f-8f0d-89bc251d13b1" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:00:32 np0005480824 nova_compute[260089]: 2025-10-11 04:00:32.717 2 DEBUG oslo_concurrency.lockutils [None req-72e007a4-6b64-4af2-95c6-1e42237ea210 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Lock "d5aa10c6-5a8f-419f-8f0d-89bc251d13b1" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:00:32 np0005480824 nova_compute[260089]: 2025-10-11 04:00:32.735 2 INFO nova.compute.manager [None req-72e007a4-6b64-4af2-95c6-1e42237ea210 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Detaching volume 0852afeb-a7b4-4c98-a5f1-0f78ce361a5d#033[00m
Oct 11 00:00:32 np0005480824 nova_compute[260089]: 2025-10-11 04:00:32.740 2 DEBUG oslo_concurrency.lockutils [None req-34272eeb-f56a-4999-94b7-ffdc52339b29 04ab08efaee14de7b56b2514c0187402 7e504d8715354886aaae057de71d2d5e - - default default] Lock "9d89b9fc-eda1-4801-8670-e3e48a9e04ae" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.425s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:00:32 np0005480824 nova_compute[260089]: 2025-10-11 04:00:32.873 2 INFO nova.virt.block_device [None req-72e007a4-6b64-4af2-95c6-1e42237ea210 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Attempting to driver detach volume 0852afeb-a7b4-4c98-a5f1-0f78ce361a5d from mountpoint /dev/vdb#033[00m
Oct 11 00:00:32 np0005480824 nova_compute[260089]: 2025-10-11 04:00:32.887 2 DEBUG nova.virt.libvirt.driver [None req-72e007a4-6b64-4af2-95c6-1e42237ea210 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Attempting to detach device vdb from instance d5aa10c6-5a8f-419f-8f0d-89bc251d13b1 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct 11 00:00:32 np0005480824 nova_compute[260089]: 2025-10-11 04:00:32.888 2 DEBUG nova.virt.libvirt.guest [None req-72e007a4-6b64-4af2-95c6-1e42237ea210 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] detach device xml: <disk type="network" device="disk">
Oct 11 00:00:32 np0005480824 nova_compute[260089]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 00:00:32 np0005480824 nova_compute[260089]:  <source protocol="rbd" name="volumes/volume-0852afeb-a7b4-4c98-a5f1-0f78ce361a5d">
Oct 11 00:00:32 np0005480824 nova_compute[260089]:    <host name="192.168.122.100" port="6789"/>
Oct 11 00:00:32 np0005480824 nova_compute[260089]:  </source>
Oct 11 00:00:32 np0005480824 nova_compute[260089]:  <target dev="vdb" bus="virtio"/>
Oct 11 00:00:32 np0005480824 nova_compute[260089]:  <serial>0852afeb-a7b4-4c98-a5f1-0f78ce361a5d</serial>
Oct 11 00:00:32 np0005480824 nova_compute[260089]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 11 00:00:32 np0005480824 nova_compute[260089]: </disk>
Oct 11 00:00:32 np0005480824 nova_compute[260089]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct 11 00:00:32 np0005480824 nova_compute[260089]: 2025-10-11 04:00:32.899 2 INFO nova.virt.libvirt.driver [None req-72e007a4-6b64-4af2-95c6-1e42237ea210 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Successfully detached device vdb from instance d5aa10c6-5a8f-419f-8f0d-89bc251d13b1 from the persistent domain config.#033[00m
Oct 11 00:00:32 np0005480824 nova_compute[260089]: 2025-10-11 04:00:32.900 2 DEBUG nova.virt.libvirt.driver [None req-72e007a4-6b64-4af2-95c6-1e42237ea210 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance d5aa10c6-5a8f-419f-8f0d-89bc251d13b1 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Oct 11 00:00:32 np0005480824 nova_compute[260089]: 2025-10-11 04:00:32.901 2 DEBUG nova.virt.libvirt.guest [None req-72e007a4-6b64-4af2-95c6-1e42237ea210 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] detach device xml: <disk type="network" device="disk">
Oct 11 00:00:32 np0005480824 nova_compute[260089]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 00:00:32 np0005480824 nova_compute[260089]:  <source protocol="rbd" name="volumes/volume-0852afeb-a7b4-4c98-a5f1-0f78ce361a5d">
Oct 11 00:00:32 np0005480824 nova_compute[260089]:    <host name="192.168.122.100" port="6789"/>
Oct 11 00:00:32 np0005480824 nova_compute[260089]:  </source>
Oct 11 00:00:32 np0005480824 nova_compute[260089]:  <target dev="vdb" bus="virtio"/>
Oct 11 00:00:32 np0005480824 nova_compute[260089]:  <serial>0852afeb-a7b4-4c98-a5f1-0f78ce361a5d</serial>
Oct 11 00:00:32 np0005480824 nova_compute[260089]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 11 00:00:32 np0005480824 nova_compute[260089]: </disk>
Oct 11 00:00:32 np0005480824 nova_compute[260089]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct 11 00:00:32 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1642: 321 pgs: 321 active+clean; 2.5 GiB data, 2.7 GiB used, 57 GiB / 60 GiB avail; 919 KiB/s rd, 11 MiB/s wr, 281 op/s
Oct 11 00:00:33 np0005480824 nova_compute[260089]: 2025-10-11 04:00:33.032 2 DEBUG nova.virt.libvirt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Received event <DeviceRemovedEvent: 1760155233.0318515, d5aa10c6-5a8f-419f-8f0d-89bc251d13b1 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Oct 11 00:00:33 np0005480824 nova_compute[260089]: 2025-10-11 04:00:33.034 2 DEBUG nova.virt.libvirt.driver [None req-72e007a4-6b64-4af2-95c6-1e42237ea210 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance d5aa10c6-5a8f-419f-8f0d-89bc251d13b1 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Oct 11 00:00:33 np0005480824 nova_compute[260089]: 2025-10-11 04:00:33.036 2 INFO nova.virt.libvirt.driver [None req-72e007a4-6b64-4af2-95c6-1e42237ea210 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Successfully detached device vdb from instance d5aa10c6-5a8f-419f-8f0d-89bc251d13b1 from the live domain config.#033[00m
Oct 11 00:00:33 np0005480824 nova_compute[260089]: 2025-10-11 04:00:33.198 2 DEBUG nova.objects.instance [None req-72e007a4-6b64-4af2-95c6-1e42237ea210 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Lazy-loading 'flavor' on Instance uuid d5aa10c6-5a8f-419f-8f0d-89bc251d13b1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 00:00:33 np0005480824 nova_compute[260089]: 2025-10-11 04:00:33.230 2 DEBUG oslo_concurrency.lockutils [None req-72e007a4-6b64-4af2-95c6-1e42237ea210 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Lock "d5aa10c6-5a8f-419f-8f0d-89bc251d13b1" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.513s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:00:33 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 00:00:33 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1132328234' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 00:00:33 np0005480824 nova_compute[260089]: 2025-10-11 04:00:33.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:00:34 np0005480824 nova_compute[260089]: 2025-10-11 04:00:34.193 2 DEBUG oslo_concurrency.lockutils [None req-1940b7a3-7931-44b3-9e88-b1b837bd1ccc eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Acquiring lock "a010ce52-5e6a-44bd-8bc6-4151b2e1f528" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:00:34 np0005480824 nova_compute[260089]: 2025-10-11 04:00:34.194 2 DEBUG oslo_concurrency.lockutils [None req-1940b7a3-7931-44b3-9e88-b1b837bd1ccc eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Lock "a010ce52-5e6a-44bd-8bc6-4151b2e1f528" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:00:34 np0005480824 nova_compute[260089]: 2025-10-11 04:00:34.194 2 DEBUG oslo_concurrency.lockutils [None req-1940b7a3-7931-44b3-9e88-b1b837bd1ccc eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Acquiring lock "a010ce52-5e6a-44bd-8bc6-4151b2e1f528-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:00:34 np0005480824 nova_compute[260089]: 2025-10-11 04:00:34.195 2 DEBUG oslo_concurrency.lockutils [None req-1940b7a3-7931-44b3-9e88-b1b837bd1ccc eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Lock "a010ce52-5e6a-44bd-8bc6-4151b2e1f528-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:00:34 np0005480824 nova_compute[260089]: 2025-10-11 04:00:34.195 2 DEBUG oslo_concurrency.lockutils [None req-1940b7a3-7931-44b3-9e88-b1b837bd1ccc eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Lock "a010ce52-5e6a-44bd-8bc6-4151b2e1f528-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:00:34 np0005480824 nova_compute[260089]: 2025-10-11 04:00:34.196 2 INFO nova.compute.manager [None req-1940b7a3-7931-44b3-9e88-b1b837bd1ccc eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] Terminating instance#033[00m
Oct 11 00:00:34 np0005480824 nova_compute[260089]: 2025-10-11 04:00:34.197 2 DEBUG nova.compute.manager [None req-1940b7a3-7931-44b3-9e88-b1b837bd1ccc eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 00:00:34 np0005480824 kernel: tapb09e90b7-14 (unregistering): left promiscuous mode
Oct 11 00:00:34 np0005480824 NetworkManager[44969]: <info>  [1760155234.2514] device (tapb09e90b7-14): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 00:00:34 np0005480824 nova_compute[260089]: 2025-10-11 04:00:34.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:00:34 np0005480824 ovn_controller[152667]: 2025-10-11T04:00:34Z|00221|binding|INFO|Releasing lport b09e90b7-14bf-425e-bbd7-78f4c2dea771 from this chassis (sb_readonly=0)
Oct 11 00:00:34 np0005480824 ovn_controller[152667]: 2025-10-11T04:00:34Z|00222|binding|INFO|Setting lport b09e90b7-14bf-425e-bbd7-78f4c2dea771 down in Southbound
Oct 11 00:00:34 np0005480824 ovn_controller[152667]: 2025-10-11T04:00:34Z|00223|binding|INFO|Removing iface tapb09e90b7-14 ovn-installed in OVS
Oct 11 00:00:34 np0005480824 nova_compute[260089]: 2025-10-11 04:00:34.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:00:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:34.270 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:02:b8:a9 10.100.0.13'], port_security=['fa:16:3e:02:b8:a9 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'a010ce52-5e6a-44bd-8bc6-4151b2e1f528', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-15a62ee0-8e34-4e49-990e-246b4ef9e0c6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0e73ded2f2ee46b4a7485c01ef1b73e9', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b966caac-3def-4c2a-badc-a92b0de92fd7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.228'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3f608fb9-f693-4a11-9617-6172f3d025df, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], logical_port=b09e90b7-14bf-425e-bbd7-78f4c2dea771) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 00:00:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:34.273 162245 INFO neutron.agent.ovn.metadata.agent [-] Port b09e90b7-14bf-425e-bbd7-78f4c2dea771 in datapath 15a62ee0-8e34-4e49-990e-246b4ef9e0c6 unbound from our chassis#033[00m
Oct 11 00:00:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:34.276 162245 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 15a62ee0-8e34-4e49-990e-246b4ef9e0c6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 00:00:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:34.278 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[48470d1d-462c-4b69-b239-e68955509f93]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:00:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:34.279 162245 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6 namespace which is not needed anymore#033[00m
Oct 11 00:00:34 np0005480824 nova_compute[260089]: 2025-10-11 04:00:34.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:00:34 np0005480824 nova_compute[260089]: 2025-10-11 04:00:34.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:00:34 np0005480824 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000018.scope: Deactivated successfully.
Oct 11 00:00:34 np0005480824 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000018.scope: Consumed 20.978s CPU time.
Oct 11 00:00:34 np0005480824 systemd-machined[215071]: Machine qemu-24-instance-00000018 terminated.
Oct 11 00:00:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e419 do_prune osdmap full prune enabled
Oct 11 00:00:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e420 e420: 3 total, 3 up, 3 in
Oct 11 00:00:34 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e420: 3 total, 3 up, 3 in
Oct 11 00:00:34 np0005480824 nova_compute[260089]: 2025-10-11 04:00:34.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:00:34 np0005480824 nova_compute[260089]: 2025-10-11 04:00:34.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:00:34 np0005480824 nova_compute[260089]: 2025-10-11 04:00:34.437 2 INFO nova.virt.libvirt.driver [-] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] Instance destroyed successfully.#033[00m
Oct 11 00:00:34 np0005480824 nova_compute[260089]: 2025-10-11 04:00:34.438 2 DEBUG nova.objects.instance [None req-1940b7a3-7931-44b3-9e88-b1b837bd1ccc eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Lazy-loading 'resources' on Instance uuid a010ce52-5e6a-44bd-8bc6-4151b2e1f528 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 00:00:34 np0005480824 nova_compute[260089]: 2025-10-11 04:00:34.451 2 DEBUG nova.virt.libvirt.vif [None req-1940b7a3-7931-44b3-9e88-b1b837bd1ccc eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T03:59:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-2104860261',display_name='tempest-TransferEncryptedVolumeTest-server-2104860261',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-2104860261',id=24,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDSd/imVrPUoZZ0pPNaeX2vqRyFwUZkYkGtRIGLvkZ+JmbpGCVAFlpb2xMevRN2guCRk7QItwPxlNbBPPCGkv6m7D9V9P6ik2vYr9GNZ8E+yfq+aSt3aD3tvswV1nTE1Iw==',key_name='tempest-TransferEncryptedVolumeTest-1221687079',keypairs=<?>,launch_index=0,launched_at=2025-10-11T04:00:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0e73ded2f2ee46b4a7485c01ef1b73e9',ramdisk_id='',reservation_id='r-yylaj4ah',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TransferEncryptedVolumeTest-1815435088',owner_user_name='tempest-TransferEncryptedVolumeTest-1815435088-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T04:00:12Z,user_data=None,user_id='eccc3f574d354840901d28dad2488bf4',uuid=a010ce52-5e6a-44bd-8bc6-4151b2e1f528,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b09e90b7-14bf-425e-bbd7-78f4c2dea771", "address": "fa:16:3e:02:b8:a9", "network": {"id": "15a62ee0-8e34-4e49-990e-246b4ef9e0c6", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1498494916-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0e73ded2f2ee46b4a7485c01ef1b73e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb09e90b7-14", "ovs_interfaceid": "b09e90b7-14bf-425e-bbd7-78f4c2dea771", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 00:00:34 np0005480824 nova_compute[260089]: 2025-10-11 04:00:34.451 2 DEBUG nova.network.os_vif_util [None req-1940b7a3-7931-44b3-9e88-b1b837bd1ccc eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Converting VIF {"id": "b09e90b7-14bf-425e-bbd7-78f4c2dea771", "address": "fa:16:3e:02:b8:a9", "network": {"id": "15a62ee0-8e34-4e49-990e-246b4ef9e0c6", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1498494916-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0e73ded2f2ee46b4a7485c01ef1b73e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb09e90b7-14", "ovs_interfaceid": "b09e90b7-14bf-425e-bbd7-78f4c2dea771", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 00:00:34 np0005480824 nova_compute[260089]: 2025-10-11 04:00:34.452 2 DEBUG nova.network.os_vif_util [None req-1940b7a3-7931-44b3-9e88-b1b837bd1ccc eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:02:b8:a9,bridge_name='br-int',has_traffic_filtering=True,id=b09e90b7-14bf-425e-bbd7-78f4c2dea771,network=Network(15a62ee0-8e34-4e49-990e-246b4ef9e0c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb09e90b7-14') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 00:00:34 np0005480824 nova_compute[260089]: 2025-10-11 04:00:34.452 2 DEBUG os_vif [None req-1940b7a3-7931-44b3-9e88-b1b837bd1ccc eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:02:b8:a9,bridge_name='br-int',has_traffic_filtering=True,id=b09e90b7-14bf-425e-bbd7-78f4c2dea771,network=Network(15a62ee0-8e34-4e49-990e-246b4ef9e0c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb09e90b7-14') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 00:00:34 np0005480824 nova_compute[260089]: 2025-10-11 04:00:34.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:00:34 np0005480824 nova_compute[260089]: 2025-10-11 04:00:34.454 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb09e90b7-14, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 00:00:34 np0005480824 nova_compute[260089]: 2025-10-11 04:00:34.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:00:34 np0005480824 nova_compute[260089]: 2025-10-11 04:00:34.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:00:34 np0005480824 nova_compute[260089]: 2025-10-11 04:00:34.458 2 INFO os_vif [None req-1940b7a3-7931-44b3-9e88-b1b837bd1ccc eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:02:b8:a9,bridge_name='br-int',has_traffic_filtering=True,id=b09e90b7-14bf-425e-bbd7-78f4c2dea771,network=Network(15a62ee0-8e34-4e49-990e-246b4ef9e0c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb09e90b7-14')#033[00m
Oct 11 00:00:34 np0005480824 neutron-haproxy-ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6[296290]: [NOTICE]   (296294) : haproxy version is 2.8.14-c23fe91
Oct 11 00:00:34 np0005480824 neutron-haproxy-ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6[296290]: [NOTICE]   (296294) : path to executable is /usr/sbin/haproxy
Oct 11 00:00:34 np0005480824 neutron-haproxy-ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6[296290]: [ALERT]    (296294) : Current worker (296296) exited with code 143 (Terminated)
Oct 11 00:00:34 np0005480824 neutron-haproxy-ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6[296290]: [WARNING]  (296294) : All workers exited. Exiting... (0)
Oct 11 00:00:34 np0005480824 systemd[1]: libpod-687b8dd7f44eb77dd4d450a9764ad12d05de0dd72150ccf3a8bd1fe494cfc387.scope: Deactivated successfully.
Oct 11 00:00:34 np0005480824 podman[296528]: 2025-10-11 04:00:34.495025184 +0000 UTC m=+0.083304628 container died 687b8dd7f44eb77dd4d450a9764ad12d05de0dd72150ccf3a8bd1fe494cfc387 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3)
Oct 11 00:00:34 np0005480824 nova_compute[260089]: 2025-10-11 04:00:34.505 2 DEBUG nova.compute.manager [req-d1697ce3-e077-4981-871d-8be5cc584f29 req-8028df86-f5cc-48e8-9c58-9dab0eac4778 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] Received event network-vif-unplugged-b09e90b7-14bf-425e-bbd7-78f4c2dea771 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 00:00:34 np0005480824 nova_compute[260089]: 2025-10-11 04:00:34.505 2 DEBUG oslo_concurrency.lockutils [req-d1697ce3-e077-4981-871d-8be5cc584f29 req-8028df86-f5cc-48e8-9c58-9dab0eac4778 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "a010ce52-5e6a-44bd-8bc6-4151b2e1f528-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:00:34 np0005480824 nova_compute[260089]: 2025-10-11 04:00:34.505 2 DEBUG oslo_concurrency.lockutils [req-d1697ce3-e077-4981-871d-8be5cc584f29 req-8028df86-f5cc-48e8-9c58-9dab0eac4778 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "a010ce52-5e6a-44bd-8bc6-4151b2e1f528-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:00:34 np0005480824 nova_compute[260089]: 2025-10-11 04:00:34.506 2 DEBUG oslo_concurrency.lockutils [req-d1697ce3-e077-4981-871d-8be5cc584f29 req-8028df86-f5cc-48e8-9c58-9dab0eac4778 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "a010ce52-5e6a-44bd-8bc6-4151b2e1f528-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:00:34 np0005480824 nova_compute[260089]: 2025-10-11 04:00:34.506 2 DEBUG nova.compute.manager [req-d1697ce3-e077-4981-871d-8be5cc584f29 req-8028df86-f5cc-48e8-9c58-9dab0eac4778 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] No waiting events found dispatching network-vif-unplugged-b09e90b7-14bf-425e-bbd7-78f4c2dea771 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 00:00:34 np0005480824 nova_compute[260089]: 2025-10-11 04:00:34.506 2 DEBUG nova.compute.manager [req-d1697ce3-e077-4981-871d-8be5cc584f29 req-8028df86-f5cc-48e8-9c58-9dab0eac4778 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] Received event network-vif-unplugged-b09e90b7-14bf-425e-bbd7-78f4c2dea771 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 00:00:34 np0005480824 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-687b8dd7f44eb77dd4d450a9764ad12d05de0dd72150ccf3a8bd1fe494cfc387-userdata-shm.mount: Deactivated successfully.
Oct 11 00:00:34 np0005480824 systemd[1]: var-lib-containers-storage-overlay-9673b32ee80085034a10a30c3e27bd8fa029316af542a221f73f67cb6dc11ee4-merged.mount: Deactivated successfully.
Oct 11 00:00:34 np0005480824 podman[296528]: 2025-10-11 04:00:34.529561629 +0000 UTC m=+0.117841033 container cleanup 687b8dd7f44eb77dd4d450a9764ad12d05de0dd72150ccf3a8bd1fe494cfc387 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Oct 11 00:00:34 np0005480824 systemd[1]: libpod-conmon-687b8dd7f44eb77dd4d450a9764ad12d05de0dd72150ccf3a8bd1fe494cfc387.scope: Deactivated successfully.
Oct 11 00:00:34 np0005480824 podman[296587]: 2025-10-11 04:00:34.609076018 +0000 UTC m=+0.055704782 container remove 687b8dd7f44eb77dd4d450a9764ad12d05de0dd72150ccf3a8bd1fe494cfc387 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 11 00:00:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:34.619 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[6432d371-20aa-471a-97e9-30ee78dc9f5a]: (4, ('Sat Oct 11 04:00:34 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6 (687b8dd7f44eb77dd4d450a9764ad12d05de0dd72150ccf3a8bd1fe494cfc387)\n687b8dd7f44eb77dd4d450a9764ad12d05de0dd72150ccf3a8bd1fe494cfc387\nSat Oct 11 04:00:34 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6 (687b8dd7f44eb77dd4d450a9764ad12d05de0dd72150ccf3a8bd1fe494cfc387)\n687b8dd7f44eb77dd4d450a9764ad12d05de0dd72150ccf3a8bd1fe494cfc387\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:00:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:34.621 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[ff51d53e-56ca-4cac-b1b6-0a7ce6d8b84e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:00:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:34.623 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap15a62ee0-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 00:00:34 np0005480824 nova_compute[260089]: 2025-10-11 04:00:34.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:00:34 np0005480824 kernel: tap15a62ee0-80: left promiscuous mode
Oct 11 00:00:34 np0005480824 nova_compute[260089]: 2025-10-11 04:00:34.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:00:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:34.662 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[d80aa660-b913-4810-ad4d-5ac53485980b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:00:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:34.689 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[317c377c-640b-44ff-b483-7f3f2b1c1886]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:00:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:34.690 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[4678cd13-3f76-4cfb-b798-a761b79f6d7f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:00:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:34.707 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[55470dd3-7cff-4aaf-977e-cc8baaa59dfe]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 467599, 'reachable_time': 25775, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296602, 'error': None, 'target': 'ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:00:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:34.710 162666 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 00:00:34 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:34.710 162666 DEBUG oslo.privsep.daemon [-] privsep: reply[d9422269-52fc-43cd-9e04-b72aa4d4fe53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:00:34 np0005480824 systemd[1]: run-netns-ovnmeta\x2d15a62ee0\x2d8e34\x2d4e49\x2d990e\x2d246b4ef9e0c6.mount: Deactivated successfully.
Oct 11 00:00:34 np0005480824 nova_compute[260089]: 2025-10-11 04:00:34.776 2 INFO nova.virt.libvirt.driver [None req-1940b7a3-7931-44b3-9e88-b1b837bd1ccc eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] Deleting instance files /var/lib/nova/instances/a010ce52-5e6a-44bd-8bc6-4151b2e1f528_del#033[00m
Oct 11 00:00:34 np0005480824 nova_compute[260089]: 2025-10-11 04:00:34.778 2 INFO nova.virt.libvirt.driver [None req-1940b7a3-7931-44b3-9e88-b1b837bd1ccc eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] Deletion of /var/lib/nova/instances/a010ce52-5e6a-44bd-8bc6-4151b2e1f528_del complete#033[00m
Oct 11 00:00:34 np0005480824 nova_compute[260089]: 2025-10-11 04:00:34.840 2 INFO nova.compute.manager [None req-1940b7a3-7931-44b3-9e88-b1b837bd1ccc eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] Took 0.64 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 00:00:34 np0005480824 nova_compute[260089]: 2025-10-11 04:00:34.841 2 DEBUG oslo.service.loopingcall [None req-1940b7a3-7931-44b3-9e88-b1b837bd1ccc eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 00:00:34 np0005480824 nova_compute[260089]: 2025-10-11 04:00:34.841 2 DEBUG nova.compute.manager [-] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 00:00:34 np0005480824 nova_compute[260089]: 2025-10-11 04:00:34.841 2 DEBUG nova.network.neutron [-] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 00:00:34 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1644: 321 pgs: 321 active+clean; 2.5 GiB data, 2.7 GiB used, 57 GiB / 60 GiB avail; 563 KiB/s rd, 7.6 MiB/s wr, 191 op/s
Oct 11 00:00:35 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e420 do_prune osdmap full prune enabled
Oct 11 00:00:35 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e421 e421: 3 total, 3 up, 3 in
Oct 11 00:00:35 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e421: 3 total, 3 up, 3 in
Oct 11 00:00:35 np0005480824 ceph-osd[89401]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 00:00:35 np0005480824 ceph-osd[89401]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 25K writes, 94K keys, 25K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.03 MB/s#012Cumulative WAL: 25K writes, 8865 syncs, 2.85 writes per sync, written: 0.06 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 12K writes, 40K keys, 12K commit groups, 1.0 writes per commit group, ingest: 32.76 MB, 0.05 MB/s#012Interval WAL: 12K writes, 5021 syncs, 2.44 writes per sync, written: 0.03 GB, 0.05 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 00:00:35 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:00:35 np0005480824 nova_compute[260089]: 2025-10-11 04:00:35.631 2 DEBUG nova.network.neutron [-] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 00:00:35 np0005480824 nova_compute[260089]: 2025-10-11 04:00:35.657 2 INFO nova.compute.manager [-] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] Took 0.82 seconds to deallocate network for instance.#033[00m
Oct 11 00:00:35 np0005480824 nova_compute[260089]: 2025-10-11 04:00:35.830 2 INFO nova.compute.manager [None req-1940b7a3-7931-44b3-9e88-b1b837bd1ccc eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] Took 0.17 seconds to detach 1 volumes for instance.#033[00m
Oct 11 00:00:35 np0005480824 nova_compute[260089]: 2025-10-11 04:00:35.881 2 DEBUG oslo_concurrency.lockutils [None req-1940b7a3-7931-44b3-9e88-b1b837bd1ccc eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:00:35 np0005480824 nova_compute[260089]: 2025-10-11 04:00:35.881 2 DEBUG oslo_concurrency.lockutils [None req-1940b7a3-7931-44b3-9e88-b1b837bd1ccc eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:00:35 np0005480824 nova_compute[260089]: 2025-10-11 04:00:35.950 2 DEBUG oslo_concurrency.processutils [None req-1940b7a3-7931-44b3-9e88-b1b837bd1ccc eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:00:36 np0005480824 nova_compute[260089]: 2025-10-11 04:00:36.010 2 DEBUG oslo_concurrency.lockutils [None req-13e9102e-2141-4176-96f8-64e8479cc069 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Acquiring lock "d5aa10c6-5a8f-419f-8f0d-89bc251d13b1" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:00:36 np0005480824 nova_compute[260089]: 2025-10-11 04:00:36.011 2 DEBUG oslo_concurrency.lockutils [None req-13e9102e-2141-4176-96f8-64e8479cc069 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Lock "d5aa10c6-5a8f-419f-8f0d-89bc251d13b1" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:00:36 np0005480824 nova_compute[260089]: 2025-10-11 04:00:36.051 2 DEBUG nova.objects.instance [None req-13e9102e-2141-4176-96f8-64e8479cc069 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Lazy-loading 'flavor' on Instance uuid d5aa10c6-5a8f-419f-8f0d-89bc251d13b1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 00:00:36 np0005480824 nova_compute[260089]: 2025-10-11 04:00:36.086 2 DEBUG oslo_concurrency.lockutils [None req-13e9102e-2141-4176-96f8-64e8479cc069 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Lock "d5aa10c6-5a8f-419f-8f0d-89bc251d13b1" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.075s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:00:36 np0005480824 nova_compute[260089]: 2025-10-11 04:00:36.276 2 DEBUG oslo_concurrency.lockutils [None req-13e9102e-2141-4176-96f8-64e8479cc069 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Acquiring lock "d5aa10c6-5a8f-419f-8f0d-89bc251d13b1" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:00:36 np0005480824 nova_compute[260089]: 2025-10-11 04:00:36.276 2 DEBUG oslo_concurrency.lockutils [None req-13e9102e-2141-4176-96f8-64e8479cc069 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Lock "d5aa10c6-5a8f-419f-8f0d-89bc251d13b1" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:00:36 np0005480824 nova_compute[260089]: 2025-10-11 04:00:36.277 2 INFO nova.compute.manager [None req-13e9102e-2141-4176-96f8-64e8479cc069 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Attaching volume 80fb7615-f63c-4619-bea1-618b5c09c394 to /dev/vdb#033[00m
Oct 11 00:00:36 np0005480824 nova_compute[260089]: 2025-10-11 04:00:36.342 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:00:36 np0005480824 nova_compute[260089]: 2025-10-11 04:00:36.393 2 DEBUG os_brick.utils [None req-13e9102e-2141-4176-96f8-64e8479cc069 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct 11 00:00:36 np0005480824 nova_compute[260089]: 2025-10-11 04:00:36.394 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:00:36 np0005480824 nova_compute[260089]: 2025-10-11 04:00:36.414 676 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:00:36 np0005480824 nova_compute[260089]: 2025-10-11 04:00:36.415 676 DEBUG oslo.privsep.daemon [-] privsep: reply[75833dad-f1ad-4a6c-8aae-454925f4eb41]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:00:36 np0005480824 nova_compute[260089]: 2025-10-11 04:00:36.418 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:00:36 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 00:00:36 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/135192344' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 00:00:36 np0005480824 nova_compute[260089]: 2025-10-11 04:00:36.434 676 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:00:36 np0005480824 nova_compute[260089]: 2025-10-11 04:00:36.435 676 DEBUG oslo.privsep.daemon [-] privsep: reply[c669102b-7c0c-47cd-9b62-24e201899114]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d5d671ddab5a', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:00:36 np0005480824 nova_compute[260089]: 2025-10-11 04:00:36.437 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:00:36 np0005480824 nova_compute[260089]: 2025-10-11 04:00:36.440 2 DEBUG oslo_concurrency.processutils [None req-1940b7a3-7931-44b3-9e88-b1b837bd1ccc eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:00:36 np0005480824 nova_compute[260089]: 2025-10-11 04:00:36.447 2 DEBUG nova.compute.provider_tree [None req-1940b7a3-7931-44b3-9e88-b1b837bd1ccc eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 00:00:36 np0005480824 nova_compute[260089]: 2025-10-11 04:00:36.454 676 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:00:36 np0005480824 nova_compute[260089]: 2025-10-11 04:00:36.454 676 DEBUG oslo.privsep.daemon [-] privsep: reply[34cbb1a7-840b-4a23-a1a2-e210719f87f8]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:00:36 np0005480824 nova_compute[260089]: 2025-10-11 04:00:36.457 676 DEBUG oslo.privsep.daemon [-] privsep: reply[c98a7409-a844-451e-95b4-d4b8f75c9536]: (4, 'fb3a2fb1-9efa-43f0-a057-bf422ac6b8d7') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:00:36 np0005480824 nova_compute[260089]: 2025-10-11 04:00:36.458 2 DEBUG oslo_concurrency.processutils [None req-13e9102e-2141-4176-96f8-64e8479cc069 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:00:36 np0005480824 nova_compute[260089]: 2025-10-11 04:00:36.485 2 DEBUG nova.scheduler.client.report [None req-1940b7a3-7931-44b3-9e88-b1b837bd1ccc eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 00:00:36 np0005480824 nova_compute[260089]: 2025-10-11 04:00:36.492 2 DEBUG oslo_concurrency.processutils [None req-13e9102e-2141-4176-96f8-64e8479cc069 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] CMD "nvme version" returned: 0 in 0.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:00:36 np0005480824 nova_compute[260089]: 2025-10-11 04:00:36.495 2 DEBUG os_brick.initiator.connectors.lightos [None req-13e9102e-2141-4176-96f8-64e8479cc069 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct 11 00:00:36 np0005480824 nova_compute[260089]: 2025-10-11 04:00:36.495 2 DEBUG os_brick.initiator.connectors.lightos [None req-13e9102e-2141-4176-96f8-64e8479cc069 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct 11 00:00:36 np0005480824 nova_compute[260089]: 2025-10-11 04:00:36.495 2 DEBUG os_brick.initiator.connectors.lightos [None req-13e9102e-2141-4176-96f8-64e8479cc069 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct 11 00:00:36 np0005480824 nova_compute[260089]: 2025-10-11 04:00:36.496 2 DEBUG os_brick.utils [None req-13e9102e-2141-4176-96f8-64e8479cc069 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] <== get_connector_properties: return (102ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d5d671ddab5a', 'do_local_attach': False, 'nvme_hostid': '83042a20-0f72-4c47-8453-e72ead378624', 'system uuid': 'fb3a2fb1-9efa-43f0-a057-bf422ac6b8d7', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct 11 00:00:36 np0005480824 nova_compute[260089]: 2025-10-11 04:00:36.496 2 DEBUG nova.virt.block_device [None req-13e9102e-2141-4176-96f8-64e8479cc069 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Updating existing volume attachment record: be6c077c-f95f-420a-93e9-207bb41840e7 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct 11 00:00:36 np0005480824 nova_compute[260089]: 2025-10-11 04:00:36.515 2 DEBUG oslo_concurrency.lockutils [None req-1940b7a3-7931-44b3-9e88-b1b837bd1ccc eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.634s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:00:36 np0005480824 nova_compute[260089]: 2025-10-11 04:00:36.540 2 INFO nova.scheduler.client.report [None req-1940b7a3-7931-44b3-9e88-b1b837bd1ccc eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Deleted allocations for instance a010ce52-5e6a-44bd-8bc6-4151b2e1f528#033[00m
Oct 11 00:00:36 np0005480824 nova_compute[260089]: 2025-10-11 04:00:36.614 2 DEBUG nova.compute.manager [req-593e0665-1f94-4633-9ffc-8917203a02a2 req-ba7b30a9-93f7-4d17-8f33-493b46fa53ef 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] Received event network-vif-plugged-b09e90b7-14bf-425e-bbd7-78f4c2dea771 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 00:00:36 np0005480824 nova_compute[260089]: 2025-10-11 04:00:36.614 2 DEBUG oslo_concurrency.lockutils [req-593e0665-1f94-4633-9ffc-8917203a02a2 req-ba7b30a9-93f7-4d17-8f33-493b46fa53ef 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "a010ce52-5e6a-44bd-8bc6-4151b2e1f528-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:00:36 np0005480824 nova_compute[260089]: 2025-10-11 04:00:36.615 2 DEBUG oslo_concurrency.lockutils [req-593e0665-1f94-4633-9ffc-8917203a02a2 req-ba7b30a9-93f7-4d17-8f33-493b46fa53ef 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "a010ce52-5e6a-44bd-8bc6-4151b2e1f528-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:00:36 np0005480824 nova_compute[260089]: 2025-10-11 04:00:36.615 2 DEBUG oslo_concurrency.lockutils [req-593e0665-1f94-4633-9ffc-8917203a02a2 req-ba7b30a9-93f7-4d17-8f33-493b46fa53ef 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "a010ce52-5e6a-44bd-8bc6-4151b2e1f528-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:00:36 np0005480824 nova_compute[260089]: 2025-10-11 04:00:36.615 2 DEBUG nova.compute.manager [req-593e0665-1f94-4633-9ffc-8917203a02a2 req-ba7b30a9-93f7-4d17-8f33-493b46fa53ef 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] No waiting events found dispatching network-vif-plugged-b09e90b7-14bf-425e-bbd7-78f4c2dea771 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 00:00:36 np0005480824 nova_compute[260089]: 2025-10-11 04:00:36.615 2 WARNING nova.compute.manager [req-593e0665-1f94-4633-9ffc-8917203a02a2 req-ba7b30a9-93f7-4d17-8f33-493b46fa53ef 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] Received unexpected event network-vif-plugged-b09e90b7-14bf-425e-bbd7-78f4c2dea771 for instance with vm_state deleted and task_state None.#033[00m
Oct 11 00:00:36 np0005480824 nova_compute[260089]: 2025-10-11 04:00:36.616 2 DEBUG nova.compute.manager [req-593e0665-1f94-4633-9ffc-8917203a02a2 req-ba7b30a9-93f7-4d17-8f33-493b46fa53ef 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] Received event network-vif-deleted-b09e90b7-14bf-425e-bbd7-78f4c2dea771 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 00:00:36 np0005480824 nova_compute[260089]: 2025-10-11 04:00:36.619 2 DEBUG oslo_concurrency.lockutils [None req-1940b7a3-7931-44b3-9e88-b1b837bd1ccc eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Lock "a010ce52-5e6a-44bd-8bc6-4151b2e1f528" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.425s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:00:36 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1646: 321 pgs: 321 active+clean; 2.5 GiB data, 2.7 GiB used, 57 GiB / 60 GiB avail; 738 KiB/s rd, 9.9 MiB/s wr, 220 op/s
Oct 11 00:00:37 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 00:00:37 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3258159695' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 00:00:37 np0005480824 nova_compute[260089]: 2025-10-11 04:00:37.252 2 DEBUG nova.objects.instance [None req-13e9102e-2141-4176-96f8-64e8479cc069 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Lazy-loading 'flavor' on Instance uuid d5aa10c6-5a8f-419f-8f0d-89bc251d13b1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 00:00:37 np0005480824 nova_compute[260089]: 2025-10-11 04:00:37.284 2 DEBUG nova.virt.libvirt.driver [None req-13e9102e-2141-4176-96f8-64e8479cc069 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Attempting to attach volume 80fb7615-f63c-4619-bea1-618b5c09c394 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Oct 11 00:00:37 np0005480824 nova_compute[260089]: 2025-10-11 04:00:37.289 2 DEBUG nova.virt.libvirt.guest [None req-13e9102e-2141-4176-96f8-64e8479cc069 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] attach device xml: <disk type="network" device="disk">
Oct 11 00:00:37 np0005480824 nova_compute[260089]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 00:00:37 np0005480824 nova_compute[260089]:  <source protocol="rbd" name="volumes/volume-80fb7615-f63c-4619-bea1-618b5c09c394">
Oct 11 00:00:37 np0005480824 nova_compute[260089]:    <host name="192.168.122.100" port="6789"/>
Oct 11 00:00:37 np0005480824 nova_compute[260089]:  </source>
Oct 11 00:00:37 np0005480824 nova_compute[260089]:  <auth username="openstack">
Oct 11 00:00:37 np0005480824 nova_compute[260089]:    <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 11 00:00:37 np0005480824 nova_compute[260089]:  </auth>
Oct 11 00:00:37 np0005480824 nova_compute[260089]:  <target dev="vdb" bus="virtio"/>
Oct 11 00:00:37 np0005480824 nova_compute[260089]:  <serial>80fb7615-f63c-4619-bea1-618b5c09c394</serial>
Oct 11 00:00:37 np0005480824 nova_compute[260089]: </disk>
Oct 11 00:00:37 np0005480824 nova_compute[260089]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct 11 00:00:37 np0005480824 nova_compute[260089]: 2025-10-11 04:00:37.291 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:00:37 np0005480824 nova_compute[260089]: 2025-10-11 04:00:37.296 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:00:37 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e421 do_prune osdmap full prune enabled
Oct 11 00:00:37 np0005480824 nova_compute[260089]: 2025-10-11 04:00:37.499 2 DEBUG nova.virt.libvirt.driver [None req-13e9102e-2141-4176-96f8-64e8479cc069 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 00:00:37 np0005480824 nova_compute[260089]: 2025-10-11 04:00:37.500 2 DEBUG nova.virt.libvirt.driver [None req-13e9102e-2141-4176-96f8-64e8479cc069 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 00:00:37 np0005480824 nova_compute[260089]: 2025-10-11 04:00:37.501 2 DEBUG nova.virt.libvirt.driver [None req-13e9102e-2141-4176-96f8-64e8479cc069 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 00:00:37 np0005480824 nova_compute[260089]: 2025-10-11 04:00:37.501 2 DEBUG nova.virt.libvirt.driver [None req-13e9102e-2141-4176-96f8-64e8479cc069 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] No VIF found with MAC fa:16:3e:91:5e:e0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 00:00:37 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e422 e422: 3 total, 3 up, 3 in
Oct 11 00:00:37 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e422: 3 total, 3 up, 3 in
Oct 11 00:00:37 np0005480824 nova_compute[260089]: 2025-10-11 04:00:37.715 2 DEBUG oslo_concurrency.lockutils [None req-13e9102e-2141-4176-96f8-64e8479cc069 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Lock "d5aa10c6-5a8f-419f-8f0d-89bc251d13b1" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.439s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:00:38 np0005480824 podman[296654]: 2025-10-11 04:00:38.055410224 +0000 UTC m=+0.107370541 container health_status 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 11 00:00:38 np0005480824 podman[296655]: 2025-10-11 04:00:38.066669643 +0000 UTC m=+0.104463995 container health_status f2b19cad22d0fbdd185b264c3c8b6443d09a02319dc9fb0585dc69a0f24a758d (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 11 00:00:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 00:00:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:00:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 00:00:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:00:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007615623168587615 of space, bias 1.0, pg target 0.22846869505762843 quantized to 32 (current 32)
Oct 11 00:00:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:00:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.037007184732532394 of space, bias 1.0, pg target 11.102155419759718 quantized to 32 (current 32)
Oct 11 00:00:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:00:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0005974982906817738 of space, bias 1.0, pg target 0.17267700600703262 quantized to 32 (current 32)
Oct 11 00:00:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:00:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1925249377319789 quantized to 32 (current 32)
Oct 11 00:00:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:00:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0005880868659243341 quantized to 16 (current 32)
Oct 11 00:00:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:00:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 00:00:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:00:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.351085824054176e-05 quantized to 32 (current 32)
Oct 11 00:00:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:00:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006248422950446051 quantized to 32 (current 32)
Oct 11 00:00:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:00:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 00:00:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:00:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00014702171648108353 quantized to 32 (current 32)
Oct 11 00:00:38 np0005480824 nova_compute[260089]: 2025-10-11 04:00:38.296 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:00:38 np0005480824 nova_compute[260089]: 2025-10-11 04:00:38.328 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:00:38 np0005480824 nova_compute[260089]: 2025-10-11 04:00:38.328 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:00:38 np0005480824 nova_compute[260089]: 2025-10-11 04:00:38.329 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:00:38 np0005480824 nova_compute[260089]: 2025-10-11 04:00:38.329 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 00:00:38 np0005480824 nova_compute[260089]: 2025-10-11 04:00:38.329 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:00:38 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 00:00:38 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/147668875' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 00:00:38 np0005480824 nova_compute[260089]: 2025-10-11 04:00:38.783 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:00:38 np0005480824 nova_compute[260089]: 2025-10-11 04:00:38.868 2 DEBUG nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 00:00:38 np0005480824 nova_compute[260089]: 2025-10-11 04:00:38.868 2 DEBUG nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 00:00:38 np0005480824 nova_compute[260089]: 2025-10-11 04:00:38.869 2 DEBUG nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 00:00:38 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1648: 321 pgs: 1 active+clean+snaptrim, 320 active+clean; 2.5 GiB data, 2.7 GiB used, 57 GiB / 60 GiB avail; 4.3 MiB/s rd, 4.3 MiB/s wr, 189 op/s
Oct 11 00:00:39 np0005480824 nova_compute[260089]: 2025-10-11 04:00:39.103 2 WARNING nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 00:00:39 np0005480824 nova_compute[260089]: 2025-10-11 04:00:39.104 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4142MB free_disk=59.94247817993164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 00:00:39 np0005480824 nova_compute[260089]: 2025-10-11 04:00:39.104 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:00:39 np0005480824 nova_compute[260089]: 2025-10-11 04:00:39.105 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:00:39 np0005480824 nova_compute[260089]: 2025-10-11 04:00:39.184 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Instance d5aa10c6-5a8f-419f-8f0d-89bc251d13b1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 00:00:39 np0005480824 nova_compute[260089]: 2025-10-11 04:00:39.185 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 00:00:39 np0005480824 nova_compute[260089]: 2025-10-11 04:00:39.186 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 00:00:39 np0005480824 nova_compute[260089]: 2025-10-11 04:00:39.216 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:00:39 np0005480824 nova_compute[260089]: 2025-10-11 04:00:39.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:00:39 np0005480824 nova_compute[260089]: 2025-10-11 04:00:39.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:00:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 00:00:39 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2603047470' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 00:00:39 np0005480824 nova_compute[260089]: 2025-10-11 04:00:39.716 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:00:39 np0005480824 nova_compute[260089]: 2025-10-11 04:00:39.722 2 DEBUG nova.compute.provider_tree [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 00:00:39 np0005480824 nova_compute[260089]: 2025-10-11 04:00:39.744 2 DEBUG nova.scheduler.client.report [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 00:00:39 np0005480824 nova_compute[260089]: 2025-10-11 04:00:39.776 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 00:00:39 np0005480824 nova_compute[260089]: 2025-10-11 04:00:39.777 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.672s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:00:40 np0005480824 nova_compute[260089]: 2025-10-11 04:00:40.249 2 DEBUG oslo_concurrency.lockutils [None req-db713d72-ac52-41b7-8579-b6fedb29d4f0 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Acquiring lock "d5aa10c6-5a8f-419f-8f0d-89bc251d13b1" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:00:40 np0005480824 nova_compute[260089]: 2025-10-11 04:00:40.250 2 DEBUG oslo_concurrency.lockutils [None req-db713d72-ac52-41b7-8579-b6fedb29d4f0 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Lock "d5aa10c6-5a8f-419f-8f0d-89bc251d13b1" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:00:40 np0005480824 nova_compute[260089]: 2025-10-11 04:00:40.292 2 INFO nova.compute.manager [None req-db713d72-ac52-41b7-8579-b6fedb29d4f0 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Detaching volume 80fb7615-f63c-4619-bea1-618b5c09c394#033[00m
Oct 11 00:00:40 np0005480824 nova_compute[260089]: 2025-10-11 04:00:40.408 2 INFO nova.virt.block_device [None req-db713d72-ac52-41b7-8579-b6fedb29d4f0 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Attempting to driver detach volume 80fb7615-f63c-4619-bea1-618b5c09c394 from mountpoint /dev/vdb#033[00m
Oct 11 00:00:40 np0005480824 nova_compute[260089]: 2025-10-11 04:00:40.416 2 DEBUG nova.virt.libvirt.driver [None req-db713d72-ac52-41b7-8579-b6fedb29d4f0 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Attempting to detach device vdb from instance d5aa10c6-5a8f-419f-8f0d-89bc251d13b1 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct 11 00:00:40 np0005480824 nova_compute[260089]: 2025-10-11 04:00:40.417 2 DEBUG nova.virt.libvirt.guest [None req-db713d72-ac52-41b7-8579-b6fedb29d4f0 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] detach device xml: <disk type="network" device="disk">
Oct 11 00:00:40 np0005480824 nova_compute[260089]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 00:00:40 np0005480824 nova_compute[260089]:  <source protocol="rbd" name="volumes/volume-80fb7615-f63c-4619-bea1-618b5c09c394">
Oct 11 00:00:40 np0005480824 nova_compute[260089]:    <host name="192.168.122.100" port="6789"/>
Oct 11 00:00:40 np0005480824 nova_compute[260089]:  </source>
Oct 11 00:00:40 np0005480824 nova_compute[260089]:  <target dev="vdb" bus="virtio"/>
Oct 11 00:00:40 np0005480824 nova_compute[260089]:  <serial>80fb7615-f63c-4619-bea1-618b5c09c394</serial>
Oct 11 00:00:40 np0005480824 nova_compute[260089]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 11 00:00:40 np0005480824 nova_compute[260089]: </disk>
Oct 11 00:00:40 np0005480824 nova_compute[260089]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct 11 00:00:40 np0005480824 nova_compute[260089]: 2025-10-11 04:00:40.424 2 INFO nova.virt.libvirt.driver [None req-db713d72-ac52-41b7-8579-b6fedb29d4f0 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Successfully detached device vdb from instance d5aa10c6-5a8f-419f-8f0d-89bc251d13b1 from the persistent domain config.#033[00m
Oct 11 00:00:40 np0005480824 nova_compute[260089]: 2025-10-11 04:00:40.425 2 DEBUG nova.virt.libvirt.driver [None req-db713d72-ac52-41b7-8579-b6fedb29d4f0 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance d5aa10c6-5a8f-419f-8f0d-89bc251d13b1 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Oct 11 00:00:40 np0005480824 nova_compute[260089]: 2025-10-11 04:00:40.426 2 DEBUG nova.virt.libvirt.guest [None req-db713d72-ac52-41b7-8579-b6fedb29d4f0 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] detach device xml: <disk type="network" device="disk">
Oct 11 00:00:40 np0005480824 nova_compute[260089]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 00:00:40 np0005480824 nova_compute[260089]:  <source protocol="rbd" name="volumes/volume-80fb7615-f63c-4619-bea1-618b5c09c394">
Oct 11 00:00:40 np0005480824 nova_compute[260089]:    <host name="192.168.122.100" port="6789"/>
Oct 11 00:00:40 np0005480824 nova_compute[260089]:  </source>
Oct 11 00:00:40 np0005480824 nova_compute[260089]:  <target dev="vdb" bus="virtio"/>
Oct 11 00:00:40 np0005480824 nova_compute[260089]:  <serial>80fb7615-f63c-4619-bea1-618b5c09c394</serial>
Oct 11 00:00:40 np0005480824 nova_compute[260089]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 11 00:00:40 np0005480824 nova_compute[260089]: </disk>
Oct 11 00:00:40 np0005480824 nova_compute[260089]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct 11 00:00:40 np0005480824 ceph-osd[90443]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 00:00:40 np0005480824 ceph-osd[90443]: rocksdb: [db/db_impl/db_impl.cc:1111]
Oct 11 00:00:40 np0005480824 ceph-osd[90443]: ** DB Stats **
Oct 11 00:00:40 np0005480824 ceph-osd[90443]: Uptime(secs): 2400.1 total, 600.0 interval
Oct 11 00:00:40 np0005480824 ceph-osd[90443]: Cumulative writes: 19K writes, 76K keys, 19K commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s
Oct 11 00:00:40 np0005480824 ceph-osd[90443]: Cumulative WAL: 19K writes, 6710 syncs, 2.94 writes per sync, written: 0.05 GB, 0.02 MB/s
Oct 11 00:00:40 np0005480824 ceph-osd[90443]: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 00:00:40 np0005480824 ceph-osd[90443]: Interval writes: 9085 writes, 31K keys, 9085 commit groups, 1.0 writes per commit group, ingest: 26.52 MB, 0.04 MB/s
Oct 11 00:00:40 np0005480824 ceph-osd[90443]: Interval WAL: 9085 writes, 3704 syncs, 2.45 writes per sync, written: 0.03 GB, 0.04 MB/s
Oct 11 00:00:40 np0005480824 ceph-osd[90443]: Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 00:00:40 np0005480824 nova_compute[260089]: 2025-10-11 04:00:40.536 2 DEBUG nova.virt.libvirt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Received event <DeviceRemovedEvent: 1760155240.5362232, d5aa10c6-5a8f-419f-8f0d-89bc251d13b1 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Oct 11 00:00:40 np0005480824 nova_compute[260089]: 2025-10-11 04:00:40.538 2 DEBUG nova.virt.libvirt.driver [None req-db713d72-ac52-41b7-8579-b6fedb29d4f0 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance d5aa10c6-5a8f-419f-8f0d-89bc251d13b1 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Oct 11 00:00:40 np0005480824 nova_compute[260089]: 2025-10-11 04:00:40.540 2 INFO nova.virt.libvirt.driver [None req-db713d72-ac52-41b7-8579-b6fedb29d4f0 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Successfully detached device vdb from instance d5aa10c6-5a8f-419f-8f0d-89bc251d13b1 from the live domain config.#033[00m
Oct 11 00:00:40 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e422 do_prune osdmap full prune enabled
Oct 11 00:00:40 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e423 e423: 3 total, 3 up, 3 in
Oct 11 00:00:40 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e423: 3 total, 3 up, 3 in
Oct 11 00:00:40 np0005480824 nova_compute[260089]: 2025-10-11 04:00:40.722 2 DEBUG nova.objects.instance [None req-db713d72-ac52-41b7-8579-b6fedb29d4f0 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Lazy-loading 'flavor' on Instance uuid d5aa10c6-5a8f-419f-8f0d-89bc251d13b1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 00:00:40 np0005480824 nova_compute[260089]: 2025-10-11 04:00:40.772 2 DEBUG oslo_concurrency.lockutils [None req-db713d72-ac52-41b7-8579-b6fedb29d4f0 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Lock "d5aa10c6-5a8f-419f-8f0d-89bc251d13b1" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.522s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:00:40 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1650: 321 pgs: 1 active+clean+snaptrim, 320 active+clean; 2.5 GiB data, 2.7 GiB used, 57 GiB / 60 GiB avail; 4.3 MiB/s rd, 4.3 MiB/s wr, 190 op/s
Oct 11 00:00:41 np0005480824 nova_compute[260089]: 2025-10-11 04:00:41.779 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:00:41 np0005480824 nova_compute[260089]: 2025-10-11 04:00:41.780 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 00:00:41 np0005480824 nova_compute[260089]: 2025-10-11 04:00:41.780 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 11 00:00:41 np0005480824 nova_compute[260089]: 2025-10-11 04:00:41.927 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "refresh_cache-d5aa10c6-5a8f-419f-8f0d-89bc251d13b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 00:00:41 np0005480824 nova_compute[260089]: 2025-10-11 04:00:41.927 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquired lock "refresh_cache-d5aa10c6-5a8f-419f-8f0d-89bc251d13b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 00:00:41 np0005480824 nova_compute[260089]: 2025-10-11 04:00:41.927 2 DEBUG nova.network.neutron [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 11 00:00:41 np0005480824 nova_compute[260089]: 2025-10-11 04:00:41.928 2 DEBUG nova.objects.instance [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d5aa10c6-5a8f-419f-8f0d-89bc251d13b1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 00:00:42 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1651: 321 pgs: 321 active+clean; 2.4 GiB data, 2.7 GiB used, 57 GiB / 60 GiB avail; 3.6 MiB/s rd, 3.5 MiB/s wr, 226 op/s
Oct 11 00:00:42 np0005480824 ceph-mgr[74617]: [devicehealth INFO root] Check health
Oct 11 00:00:43 np0005480824 nova_compute[260089]: 2025-10-11 04:00:43.144 2 DEBUG nova.network.neutron [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Updating instance_info_cache with network_info: [{"id": "bfcdfd4b-fcfe-45df-af5d-b65bf0a23633", "address": "fa:16:3e:91:5e:e0", "network": {"id": "b07c8c86-7240-4ba7-b1d8-b3c98c1e89bc", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-707028039-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4dd4975fff494ac1b725d3dfb95c6006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfcdfd4b-fc", "ovs_interfaceid": "bfcdfd4b-fcfe-45df-af5d-b65bf0a23633", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 00:00:43 np0005480824 nova_compute[260089]: 2025-10-11 04:00:43.166 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Releasing lock "refresh_cache-d5aa10c6-5a8f-419f-8f0d-89bc251d13b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 00:00:43 np0005480824 nova_compute[260089]: 2025-10-11 04:00:43.167 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 11 00:00:43 np0005480824 nova_compute[260089]: 2025-10-11 04:00:43.167 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:00:43 np0005480824 nova_compute[260089]: 2025-10-11 04:00:43.167 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:00:43 np0005480824 nova_compute[260089]: 2025-10-11 04:00:43.168 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:00:43 np0005480824 nova_compute[260089]: 2025-10-11 04:00:43.168 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 00:00:43 np0005480824 nova_compute[260089]: 2025-10-11 04:00:43.434 2 DEBUG oslo_concurrency.lockutils [None req-477efea4-501f-43ef-ab34-d38e231e1a22 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Acquiring lock "d5aa10c6-5a8f-419f-8f0d-89bc251d13b1" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:00:43 np0005480824 nova_compute[260089]: 2025-10-11 04:00:43.435 2 DEBUG oslo_concurrency.lockutils [None req-477efea4-501f-43ef-ab34-d38e231e1a22 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Lock "d5aa10c6-5a8f-419f-8f0d-89bc251d13b1" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:00:43 np0005480824 nova_compute[260089]: 2025-10-11 04:00:43.451 2 DEBUG nova.objects.instance [None req-477efea4-501f-43ef-ab34-d38e231e1a22 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Lazy-loading 'flavor' on Instance uuid d5aa10c6-5a8f-419f-8f0d-89bc251d13b1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 00:00:43 np0005480824 nova_compute[260089]: 2025-10-11 04:00:43.492 2 DEBUG oslo_concurrency.lockutils [None req-477efea4-501f-43ef-ab34-d38e231e1a22 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Lock "d5aa10c6-5a8f-419f-8f0d-89bc251d13b1" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.057s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:00:43 np0005480824 nova_compute[260089]: 2025-10-11 04:00:43.706 2 DEBUG oslo_concurrency.lockutils [None req-477efea4-501f-43ef-ab34-d38e231e1a22 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Acquiring lock "d5aa10c6-5a8f-419f-8f0d-89bc251d13b1" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:00:43 np0005480824 nova_compute[260089]: 2025-10-11 04:00:43.707 2 DEBUG oslo_concurrency.lockutils [None req-477efea4-501f-43ef-ab34-d38e231e1a22 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Lock "d5aa10c6-5a8f-419f-8f0d-89bc251d13b1" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:00:43 np0005480824 nova_compute[260089]: 2025-10-11 04:00:43.707 2 INFO nova.compute.manager [None req-477efea4-501f-43ef-ab34-d38e231e1a22 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Attaching volume a3d53ef3-723d-48ea-99be-18168e13b35b to /dev/vdb#033[00m
Oct 11 00:00:43 np0005480824 nova_compute[260089]: 2025-10-11 04:00:43.789 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760155228.7881114, 9d89b9fc-eda1-4801-8670-e3e48a9e04ae => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 00:00:43 np0005480824 nova_compute[260089]: 2025-10-11 04:00:43.789 2 INFO nova.compute.manager [-] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] VM Stopped (Lifecycle Event)#033[00m
Oct 11 00:00:43 np0005480824 nova_compute[260089]: 2025-10-11 04:00:43.813 2 DEBUG nova.compute.manager [None req-c39c0ee7-ab7d-4fef-a3a1-eebdb8a57a78 - - - - - -] [instance: 9d89b9fc-eda1-4801-8670-e3e48a9e04ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 00:00:43 np0005480824 nova_compute[260089]: 2025-10-11 04:00:43.857 2 DEBUG os_brick.utils [None req-477efea4-501f-43ef-ab34-d38e231e1a22 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct 11 00:00:43 np0005480824 nova_compute[260089]: 2025-10-11 04:00:43.863 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:00:43 np0005480824 nova_compute[260089]: 2025-10-11 04:00:43.883 676 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:00:43 np0005480824 nova_compute[260089]: 2025-10-11 04:00:43.883 676 DEBUG oslo.privsep.daemon [-] privsep: reply[94078421-cb2a-4008-92ab-2e638d458419]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:00:43 np0005480824 nova_compute[260089]: 2025-10-11 04:00:43.885 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:00:43 np0005480824 nova_compute[260089]: 2025-10-11 04:00:43.893 676 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:00:43 np0005480824 nova_compute[260089]: 2025-10-11 04:00:43.894 676 DEBUG oslo.privsep.daemon [-] privsep: reply[87c96f1a-0e63-4f15-bed8-3c5f2b39256a]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d5d671ddab5a', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:00:43 np0005480824 nova_compute[260089]: 2025-10-11 04:00:43.896 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:00:43 np0005480824 nova_compute[260089]: 2025-10-11 04:00:43.905 676 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:00:43 np0005480824 nova_compute[260089]: 2025-10-11 04:00:43.905 676 DEBUG oslo.privsep.daemon [-] privsep: reply[1bd11a54-496c-4b13-b61f-928f5a65949e]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:00:43 np0005480824 nova_compute[260089]: 2025-10-11 04:00:43.907 676 DEBUG oslo.privsep.daemon [-] privsep: reply[4938ee63-6b87-4dc7-b606-65cb46fba6ed]: (4, 'fb3a2fb1-9efa-43f0-a057-bf422ac6b8d7') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:00:43 np0005480824 nova_compute[260089]: 2025-10-11 04:00:43.908 2 DEBUG oslo_concurrency.processutils [None req-477efea4-501f-43ef-ab34-d38e231e1a22 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:00:43 np0005480824 nova_compute[260089]: 2025-10-11 04:00:43.934 2 DEBUG oslo_concurrency.processutils [None req-477efea4-501f-43ef-ab34-d38e231e1a22 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] CMD "nvme version" returned: 0 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:00:43 np0005480824 nova_compute[260089]: 2025-10-11 04:00:43.939 2 DEBUG os_brick.initiator.connectors.lightos [None req-477efea4-501f-43ef-ab34-d38e231e1a22 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct 11 00:00:43 np0005480824 nova_compute[260089]: 2025-10-11 04:00:43.939 2 DEBUG os_brick.initiator.connectors.lightos [None req-477efea4-501f-43ef-ab34-d38e231e1a22 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct 11 00:00:43 np0005480824 nova_compute[260089]: 2025-10-11 04:00:43.940 2 DEBUG os_brick.initiator.connectors.lightos [None req-477efea4-501f-43ef-ab34-d38e231e1a22 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct 11 00:00:43 np0005480824 nova_compute[260089]: 2025-10-11 04:00:43.941 2 DEBUG os_brick.utils [None req-477efea4-501f-43ef-ab34-d38e231e1a22 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] <== get_connector_properties: return (82ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d5d671ddab5a', 'do_local_attach': False, 'nvme_hostid': '83042a20-0f72-4c47-8453-e72ead378624', 'system uuid': 'fb3a2fb1-9efa-43f0-a057-bf422ac6b8d7', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct 11 00:00:43 np0005480824 nova_compute[260089]: 2025-10-11 04:00:43.941 2 DEBUG nova.virt.block_device [None req-477efea4-501f-43ef-ab34-d38e231e1a22 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Updating existing volume attachment record: 526a694a-3f9c-422d-a4f7-a6a740473f7f _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct 11 00:00:44 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e423 do_prune osdmap full prune enabled
Oct 11 00:00:44 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e424 e424: 3 total, 3 up, 3 in
Oct 11 00:00:44 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e424: 3 total, 3 up, 3 in
Oct 11 00:00:44 np0005480824 nova_compute[260089]: 2025-10-11 04:00:44.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:00:44 np0005480824 nova_compute[260089]: 2025-10-11 04:00:44.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:00:44 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 00:00:44 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/294620546' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 00:00:44 np0005480824 nova_compute[260089]: 2025-10-11 04:00:44.692 2 DEBUG nova.objects.instance [None req-477efea4-501f-43ef-ab34-d38e231e1a22 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Lazy-loading 'flavor' on Instance uuid d5aa10c6-5a8f-419f-8f0d-89bc251d13b1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 00:00:44 np0005480824 nova_compute[260089]: 2025-10-11 04:00:44.725 2 DEBUG nova.virt.libvirt.driver [None req-477efea4-501f-43ef-ab34-d38e231e1a22 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Attempting to attach volume a3d53ef3-723d-48ea-99be-18168e13b35b with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Oct 11 00:00:44 np0005480824 nova_compute[260089]: 2025-10-11 04:00:44.729 2 DEBUG nova.virt.libvirt.guest [None req-477efea4-501f-43ef-ab34-d38e231e1a22 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] attach device xml: <disk type="network" device="disk">
Oct 11 00:00:44 np0005480824 nova_compute[260089]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 00:00:44 np0005480824 nova_compute[260089]:  <source protocol="rbd" name="volumes/volume-a3d53ef3-723d-48ea-99be-18168e13b35b">
Oct 11 00:00:44 np0005480824 nova_compute[260089]:    <host name="192.168.122.100" port="6789"/>
Oct 11 00:00:44 np0005480824 nova_compute[260089]:  </source>
Oct 11 00:00:44 np0005480824 nova_compute[260089]:  <auth username="openstack">
Oct 11 00:00:44 np0005480824 nova_compute[260089]:    <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 11 00:00:44 np0005480824 nova_compute[260089]:  </auth>
Oct 11 00:00:44 np0005480824 nova_compute[260089]:  <target dev="vdb" bus="virtio"/>
Oct 11 00:00:44 np0005480824 nova_compute[260089]:  <serial>a3d53ef3-723d-48ea-99be-18168e13b35b</serial>
Oct 11 00:00:44 np0005480824 nova_compute[260089]: </disk>
Oct 11 00:00:44 np0005480824 nova_compute[260089]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct 11 00:00:44 np0005480824 nova_compute[260089]: 2025-10-11 04:00:44.870 2 DEBUG nova.virt.libvirt.driver [None req-477efea4-501f-43ef-ab34-d38e231e1a22 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 00:00:44 np0005480824 nova_compute[260089]: 2025-10-11 04:00:44.871 2 DEBUG nova.virt.libvirt.driver [None req-477efea4-501f-43ef-ab34-d38e231e1a22 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 00:00:44 np0005480824 nova_compute[260089]: 2025-10-11 04:00:44.871 2 DEBUG nova.virt.libvirt.driver [None req-477efea4-501f-43ef-ab34-d38e231e1a22 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 00:00:44 np0005480824 nova_compute[260089]: 2025-10-11 04:00:44.872 2 DEBUG nova.virt.libvirt.driver [None req-477efea4-501f-43ef-ab34-d38e231e1a22 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] No VIF found with MAC fa:16:3e:91:5e:e0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 00:00:44 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1653: 321 pgs: 321 active+clean; 2.4 GiB data, 2.7 GiB used, 57 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.5 MiB/s wr, 194 op/s
Oct 11 00:00:45 np0005480824 podman[296767]: 2025-10-11 04:00:45.068823813 +0000 UTC m=+0.114155418 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, managed_by=edpm_ansible)
Oct 11 00:00:45 np0005480824 nova_compute[260089]: 2025-10-11 04:00:45.112 2 DEBUG oslo_concurrency.lockutils [None req-477efea4-501f-43ef-ab34-d38e231e1a22 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Lock "d5aa10c6-5a8f-419f-8f0d-89bc251d13b1" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.406s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:00:45 np0005480824 nova_compute[260089]: 2025-10-11 04:00:45.297 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:00:45 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 00:00:45 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2273585718' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 00:00:45 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:00:45 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e424 do_prune osdmap full prune enabled
Oct 11 00:00:45 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e425 e425: 3 total, 3 up, 3 in
Oct 11 00:00:45 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e425: 3 total, 3 up, 3 in
Oct 11 00:00:45 np0005480824 ceph-osd[88325]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Oct 11 00:00:46 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1655: 321 pgs: 321 active+clean; 2.5 GiB data, 2.7 GiB used, 57 GiB / 60 GiB avail; 209 KiB/s rd, 7.2 MiB/s wr, 153 op/s
Oct 11 00:00:47 np0005480824 nova_compute[260089]: 2025-10-11 04:00:47.943 2 DEBUG oslo_concurrency.lockutils [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Acquiring lock "42651a9c-7b98-4ad0-bf9d-430330b33968" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:00:47 np0005480824 nova_compute[260089]: 2025-10-11 04:00:47.943 2 DEBUG oslo_concurrency.lockutils [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Lock "42651a9c-7b98-4ad0-bf9d-430330b33968" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:00:47 np0005480824 nova_compute[260089]: 2025-10-11 04:00:47.961 2 DEBUG nova.compute.manager [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 00:00:48 np0005480824 nova_compute[260089]: 2025-10-11 04:00:48.040 2 DEBUG oslo_concurrency.lockutils [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:00:48 np0005480824 nova_compute[260089]: 2025-10-11 04:00:48.041 2 DEBUG oslo_concurrency.lockutils [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:00:48 np0005480824 nova_compute[260089]: 2025-10-11 04:00:48.050 2 DEBUG nova.virt.hardware [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 00:00:48 np0005480824 nova_compute[260089]: 2025-10-11 04:00:48.051 2 INFO nova.compute.claims [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 00:00:48 np0005480824 nova_compute[260089]: 2025-10-11 04:00:48.204 2 DEBUG oslo_concurrency.processutils [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:00:48 np0005480824 nova_compute[260089]: 2025-10-11 04:00:48.385 2 DEBUG oslo_concurrency.lockutils [None req-6aa94c14-7016-4124-bf46-737a00bfdad4 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Acquiring lock "d5aa10c6-5a8f-419f-8f0d-89bc251d13b1" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:00:48 np0005480824 nova_compute[260089]: 2025-10-11 04:00:48.386 2 DEBUG oslo_concurrency.lockutils [None req-6aa94c14-7016-4124-bf46-737a00bfdad4 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Lock "d5aa10c6-5a8f-419f-8f0d-89bc251d13b1" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:00:48 np0005480824 nova_compute[260089]: 2025-10-11 04:00:48.414 2 INFO nova.compute.manager [None req-6aa94c14-7016-4124-bf46-737a00bfdad4 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Detaching volume a3d53ef3-723d-48ea-99be-18168e13b35b#033[00m
Oct 11 00:00:48 np0005480824 nova_compute[260089]: 2025-10-11 04:00:48.555 2 INFO nova.virt.block_device [None req-6aa94c14-7016-4124-bf46-737a00bfdad4 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Attempting to driver detach volume a3d53ef3-723d-48ea-99be-18168e13b35b from mountpoint /dev/vdb#033[00m
Oct 11 00:00:48 np0005480824 nova_compute[260089]: 2025-10-11 04:00:48.570 2 DEBUG nova.virt.libvirt.driver [None req-6aa94c14-7016-4124-bf46-737a00bfdad4 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Attempting to detach device vdb from instance d5aa10c6-5a8f-419f-8f0d-89bc251d13b1 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct 11 00:00:48 np0005480824 nova_compute[260089]: 2025-10-11 04:00:48.570 2 DEBUG nova.virt.libvirt.guest [None req-6aa94c14-7016-4124-bf46-737a00bfdad4 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] detach device xml: <disk type="network" device="disk">
Oct 11 00:00:48 np0005480824 nova_compute[260089]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 00:00:48 np0005480824 nova_compute[260089]:  <source protocol="rbd" name="volumes/volume-a3d53ef3-723d-48ea-99be-18168e13b35b">
Oct 11 00:00:48 np0005480824 nova_compute[260089]:    <host name="192.168.122.100" port="6789"/>
Oct 11 00:00:48 np0005480824 nova_compute[260089]:  </source>
Oct 11 00:00:48 np0005480824 nova_compute[260089]:  <target dev="vdb" bus="virtio"/>
Oct 11 00:00:48 np0005480824 nova_compute[260089]:  <serial>a3d53ef3-723d-48ea-99be-18168e13b35b</serial>
Oct 11 00:00:48 np0005480824 nova_compute[260089]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 11 00:00:48 np0005480824 nova_compute[260089]: </disk>
Oct 11 00:00:48 np0005480824 nova_compute[260089]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct 11 00:00:48 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 00:00:48 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1976736176' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 00:00:48 np0005480824 nova_compute[260089]: 2025-10-11 04:00:48.719 2 DEBUG oslo_concurrency.processutils [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:00:48 np0005480824 nova_compute[260089]: 2025-10-11 04:00:48.726 2 DEBUG nova.compute.provider_tree [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 00:00:48 np0005480824 nova_compute[260089]: 2025-10-11 04:00:48.743 2 DEBUG nova.scheduler.client.report [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 00:00:48 np0005480824 nova_compute[260089]: 2025-10-11 04:00:48.773 2 DEBUG oslo_concurrency.lockutils [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.733s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:00:48 np0005480824 nova_compute[260089]: 2025-10-11 04:00:48.774 2 DEBUG nova.compute.manager [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 00:00:48 np0005480824 nova_compute[260089]: 2025-10-11 04:00:48.827 2 DEBUG nova.compute.manager [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 00:00:48 np0005480824 nova_compute[260089]: 2025-10-11 04:00:48.828 2 DEBUG nova.network.neutron [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 00:00:48 np0005480824 nova_compute[260089]: 2025-10-11 04:00:48.891 2 INFO nova.virt.libvirt.driver [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 00:00:48 np0005480824 nova_compute[260089]: 2025-10-11 04:00:48.915 2 DEBUG nova.compute.manager [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 00:00:48 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1656: 321 pgs: 321 active+clean; 2.6 GiB data, 2.8 GiB used, 57 GiB / 60 GiB avail; 293 KiB/s rd, 24 MiB/s wr, 177 op/s
Oct 11 00:00:48 np0005480824 nova_compute[260089]: 2025-10-11 04:00:48.968 2 INFO nova.virt.block_device [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] Booting with volume 184f1559-4821-437a-8e6b-6e10ab7ba1e9 at /dev/vda#033[00m
Oct 11 00:00:49 np0005480824 nova_compute[260089]: 2025-10-11 04:00:49.053 2 INFO nova.virt.libvirt.driver [None req-6aa94c14-7016-4124-bf46-737a00bfdad4 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Successfully detached device vdb from instance d5aa10c6-5a8f-419f-8f0d-89bc251d13b1 from the persistent domain config.#033[00m
Oct 11 00:00:49 np0005480824 nova_compute[260089]: 2025-10-11 04:00:49.054 2 DEBUG nova.virt.libvirt.driver [None req-6aa94c14-7016-4124-bf46-737a00bfdad4 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance d5aa10c6-5a8f-419f-8f0d-89bc251d13b1 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Oct 11 00:00:49 np0005480824 nova_compute[260089]: 2025-10-11 04:00:49.054 2 DEBUG nova.virt.libvirt.guest [None req-6aa94c14-7016-4124-bf46-737a00bfdad4 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] detach device xml: <disk type="network" device="disk">
Oct 11 00:00:49 np0005480824 nova_compute[260089]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 00:00:49 np0005480824 nova_compute[260089]:  <source protocol="rbd" name="volumes/volume-a3d53ef3-723d-48ea-99be-18168e13b35b">
Oct 11 00:00:49 np0005480824 nova_compute[260089]:    <host name="192.168.122.100" port="6789"/>
Oct 11 00:00:49 np0005480824 nova_compute[260089]:  </source>
Oct 11 00:00:49 np0005480824 nova_compute[260089]:  <target dev="vdb" bus="virtio"/>
Oct 11 00:00:49 np0005480824 nova_compute[260089]:  <serial>a3d53ef3-723d-48ea-99be-18168e13b35b</serial>
Oct 11 00:00:49 np0005480824 nova_compute[260089]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 11 00:00:49 np0005480824 nova_compute[260089]: </disk>
Oct 11 00:00:49 np0005480824 nova_compute[260089]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct 11 00:00:49 np0005480824 nova_compute[260089]: 2025-10-11 04:00:49.095 2 DEBUG os_brick.utils [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct 11 00:00:49 np0005480824 nova_compute[260089]: 2025-10-11 04:00:49.096 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:00:49 np0005480824 nova_compute[260089]: 2025-10-11 04:00:49.107 676 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:00:49 np0005480824 nova_compute[260089]: 2025-10-11 04:00:49.108 676 DEBUG oslo.privsep.daemon [-] privsep: reply[50f97ebc-b31b-4101-81df-06f5ceec19ab]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:00:49 np0005480824 nova_compute[260089]: 2025-10-11 04:00:49.109 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:00:49 np0005480824 nova_compute[260089]: 2025-10-11 04:00:49.122 676 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:00:49 np0005480824 nova_compute[260089]: 2025-10-11 04:00:49.122 676 DEBUG oslo.privsep.daemon [-] privsep: reply[99dca60d-14ec-4b04-a1cb-1e75ac973dd0]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d5d671ddab5a', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:00:49 np0005480824 nova_compute[260089]: 2025-10-11 04:00:49.123 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:00:49 np0005480824 nova_compute[260089]: 2025-10-11 04:00:49.136 676 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:00:49 np0005480824 nova_compute[260089]: 2025-10-11 04:00:49.137 676 DEBUG oslo.privsep.daemon [-] privsep: reply[b6148059-5558-4916-b6f1-7b66a5961690]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:00:49 np0005480824 nova_compute[260089]: 2025-10-11 04:00:49.137 676 DEBUG oslo.privsep.daemon [-] privsep: reply[88e37b76-9bc1-4b46-8078-281acc1897e3]: (4, 'fb3a2fb1-9efa-43f0-a057-bf422ac6b8d7') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:00:49 np0005480824 nova_compute[260089]: 2025-10-11 04:00:49.138 2 DEBUG oslo_concurrency.processutils [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:00:49 np0005480824 nova_compute[260089]: 2025-10-11 04:00:49.172 2 DEBUG oslo_concurrency.processutils [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] CMD "nvme version" returned: 0 in 0.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:00:49 np0005480824 nova_compute[260089]: 2025-10-11 04:00:49.174 2 DEBUG os_brick.initiator.connectors.lightos [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct 11 00:00:49 np0005480824 nova_compute[260089]: 2025-10-11 04:00:49.175 2 DEBUG os_brick.initiator.connectors.lightos [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct 11 00:00:49 np0005480824 nova_compute[260089]: 2025-10-11 04:00:49.175 2 DEBUG os_brick.initiator.connectors.lightos [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct 11 00:00:49 np0005480824 nova_compute[260089]: 2025-10-11 04:00:49.175 2 DEBUG os_brick.utils [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] <== get_connector_properties: return (80ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d5d671ddab5a', 'do_local_attach': False, 'nvme_hostid': '83042a20-0f72-4c47-8453-e72ead378624', 'system uuid': 'fb3a2fb1-9efa-43f0-a057-bf422ac6b8d7', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct 11 00:00:49 np0005480824 nova_compute[260089]: 2025-10-11 04:00:49.176 2 DEBUG nova.virt.block_device [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] Updating existing volume attachment record: 171e1407-8579-4d7f-8663-a70d2bf36b87 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct 11 00:00:49 np0005480824 nova_compute[260089]: 2025-10-11 04:00:49.336 2 DEBUG nova.virt.libvirt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Received event <DeviceRemovedEvent: 1760155249.3362484, d5aa10c6-5a8f-419f-8f0d-89bc251d13b1 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Oct 11 00:00:49 np0005480824 nova_compute[260089]: 2025-10-11 04:00:49.338 2 DEBUG nova.virt.libvirt.driver [None req-6aa94c14-7016-4124-bf46-737a00bfdad4 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance d5aa10c6-5a8f-419f-8f0d-89bc251d13b1 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Oct 11 00:00:49 np0005480824 nova_compute[260089]: 2025-10-11 04:00:49.341 2 INFO nova.virt.libvirt.driver [None req-6aa94c14-7016-4124-bf46-737a00bfdad4 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Successfully detached device vdb from instance d5aa10c6-5a8f-419f-8f0d-89bc251d13b1 from the live domain config.#033[00m
Oct 11 00:00:49 np0005480824 nova_compute[260089]: 2025-10-11 04:00:49.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:00:49 np0005480824 nova_compute[260089]: 2025-10-11 04:00:49.434 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760155234.4336524, a010ce52-5e6a-44bd-8bc6-4151b2e1f528 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 00:00:49 np0005480824 nova_compute[260089]: 2025-10-11 04:00:49.435 2 INFO nova.compute.manager [-] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] VM Stopped (Lifecycle Event)#033[00m
Oct 11 00:00:49 np0005480824 nova_compute[260089]: 2025-10-11 04:00:49.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:00:49 np0005480824 nova_compute[260089]: 2025-10-11 04:00:49.473 2 DEBUG nova.compute.manager [None req-4e9980d6-ed2d-4a4f-bbd3-71a58eb479d2 - - - - - -] [instance: a010ce52-5e6a-44bd-8bc6-4151b2e1f528] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 00:00:49 np0005480824 nova_compute[260089]: 2025-10-11 04:00:49.539 2 DEBUG nova.objects.instance [None req-6aa94c14-7016-4124-bf46-737a00bfdad4 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Lazy-loading 'flavor' on Instance uuid d5aa10c6-5a8f-419f-8f0d-89bc251d13b1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 00:00:49 np0005480824 nova_compute[260089]: 2025-10-11 04:00:49.595 2 DEBUG oslo_concurrency.lockutils [None req-6aa94c14-7016-4124-bf46-737a00bfdad4 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Lock "d5aa10c6-5a8f-419f-8f0d-89bc251d13b1" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.209s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:00:49 np0005480824 nova_compute[260089]: 2025-10-11 04:00:49.617 2 DEBUG nova.policy [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'eccc3f574d354840901d28dad2488bf4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0e73ded2f2ee46b4a7485c01ef1b73e9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 00:00:49 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 00:00:49 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/126623307' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 00:00:50 np0005480824 nova_compute[260089]: 2025-10-11 04:00:50.267 2 DEBUG nova.network.neutron [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] Successfully created port: 189ca3df-84bb-4d92-8c0e-5ded5a7cdaa3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 00:00:50 np0005480824 nova_compute[260089]: 2025-10-11 04:00:50.277 2 DEBUG nova.compute.manager [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 00:00:50 np0005480824 nova_compute[260089]: 2025-10-11 04:00:50.278 2 DEBUG nova.virt.libvirt.driver [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 00:00:50 np0005480824 nova_compute[260089]: 2025-10-11 04:00:50.279 2 INFO nova.virt.libvirt.driver [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] Creating image(s)#033[00m
Oct 11 00:00:50 np0005480824 nova_compute[260089]: 2025-10-11 04:00:50.279 2 DEBUG nova.virt.libvirt.driver [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Oct 11 00:00:50 np0005480824 nova_compute[260089]: 2025-10-11 04:00:50.279 2 DEBUG nova.virt.libvirt.driver [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] Ensure instance console log exists: /var/lib/nova/instances/42651a9c-7b98-4ad0-bf9d-430330b33968/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 00:00:50 np0005480824 nova_compute[260089]: 2025-10-11 04:00:50.280 2 DEBUG oslo_concurrency.lockutils [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:00:50 np0005480824 nova_compute[260089]: 2025-10-11 04:00:50.280 2 DEBUG oslo_concurrency.lockutils [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:00:50 np0005480824 nova_compute[260089]: 2025-10-11 04:00:50.280 2 DEBUG oslo_concurrency.lockutils [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:00:50 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:00:50 np0005480824 nova_compute[260089]: 2025-10-11 04:00:50.894 2 DEBUG nova.network.neutron [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] Successfully updated port: 189ca3df-84bb-4d92-8c0e-5ded5a7cdaa3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 00:00:50 np0005480824 nova_compute[260089]: 2025-10-11 04:00:50.920 2 DEBUG oslo_concurrency.lockutils [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Acquiring lock "refresh_cache-42651a9c-7b98-4ad0-bf9d-430330b33968" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 00:00:50 np0005480824 nova_compute[260089]: 2025-10-11 04:00:50.920 2 DEBUG oslo_concurrency.lockutils [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Acquired lock "refresh_cache-42651a9c-7b98-4ad0-bf9d-430330b33968" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 00:00:50 np0005480824 nova_compute[260089]: 2025-10-11 04:00:50.920 2 DEBUG nova.network.neutron [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 00:00:50 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1657: 321 pgs: 321 active+clean; 2.6 GiB data, 2.8 GiB used, 57 GiB / 60 GiB avail; 158 KiB/s rd, 24 MiB/s wr, 106 op/s
Oct 11 00:00:51 np0005480824 nova_compute[260089]: 2025-10-11 04:00:51.027 2 DEBUG nova.compute.manager [req-b880f777-fe73-40ac-aa28-1d8d8c8c01f3 req-bafe60b8-ea0f-4e41-a3a3-8f4c47f697b1 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] Received event network-changed-189ca3df-84bb-4d92-8c0e-5ded5a7cdaa3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 00:00:51 np0005480824 nova_compute[260089]: 2025-10-11 04:00:51.028 2 DEBUG nova.compute.manager [req-b880f777-fe73-40ac-aa28-1d8d8c8c01f3 req-bafe60b8-ea0f-4e41-a3a3-8f4c47f697b1 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] Refreshing instance network info cache due to event network-changed-189ca3df-84bb-4d92-8c0e-5ded5a7cdaa3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 00:00:51 np0005480824 nova_compute[260089]: 2025-10-11 04:00:51.028 2 DEBUG oslo_concurrency.lockutils [req-b880f777-fe73-40ac-aa28-1d8d8c8c01f3 req-bafe60b8-ea0f-4e41-a3a3-8f4c47f697b1 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "refresh_cache-42651a9c-7b98-4ad0-bf9d-430330b33968" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 00:00:51 np0005480824 nova_compute[260089]: 2025-10-11 04:00:51.088 2 DEBUG nova.network.neutron [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 00:00:52 np0005480824 podman[296824]: 2025-10-11 04:00:52.004859382 +0000 UTC m=+0.053720367 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, 
io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 11 00:00:52 np0005480824 nova_compute[260089]: 2025-10-11 04:00:52.685 2 DEBUG nova.network.neutron [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] Updating instance_info_cache with network_info: [{"id": "189ca3df-84bb-4d92-8c0e-5ded5a7cdaa3", "address": "fa:16:3e:4f:8c:b6", "network": {"id": "15a62ee0-8e34-4e49-990e-246b4ef9e0c6", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1498494916-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0e73ded2f2ee46b4a7485c01ef1b73e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap189ca3df-84", "ovs_interfaceid": "189ca3df-84bb-4d92-8c0e-5ded5a7cdaa3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 00:00:52 np0005480824 nova_compute[260089]: 2025-10-11 04:00:52.731 2 DEBUG oslo_concurrency.lockutils [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Releasing lock "refresh_cache-42651a9c-7b98-4ad0-bf9d-430330b33968" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 00:00:52 np0005480824 nova_compute[260089]: 2025-10-11 04:00:52.732 2 DEBUG nova.compute.manager [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] Instance network_info: |[{"id": "189ca3df-84bb-4d92-8c0e-5ded5a7cdaa3", "address": "fa:16:3e:4f:8c:b6", "network": {"id": "15a62ee0-8e34-4e49-990e-246b4ef9e0c6", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1498494916-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0e73ded2f2ee46b4a7485c01ef1b73e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap189ca3df-84", "ovs_interfaceid": "189ca3df-84bb-4d92-8c0e-5ded5a7cdaa3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 00:00:52 np0005480824 nova_compute[260089]: 2025-10-11 04:00:52.732 2 DEBUG oslo_concurrency.lockutils [req-b880f777-fe73-40ac-aa28-1d8d8c8c01f3 req-bafe60b8-ea0f-4e41-a3a3-8f4c47f697b1 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquired lock "refresh_cache-42651a9c-7b98-4ad0-bf9d-430330b33968" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 00:00:52 np0005480824 nova_compute[260089]: 2025-10-11 04:00:52.732 2 DEBUG nova.network.neutron [req-b880f777-fe73-40ac-aa28-1d8d8c8c01f3 req-bafe60b8-ea0f-4e41-a3a3-8f4c47f697b1 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] Refreshing network info cache for port 189ca3df-84bb-4d92-8c0e-5ded5a7cdaa3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 00:00:52 np0005480824 nova_compute[260089]: 2025-10-11 04:00:52.736 2 DEBUG nova.virt.libvirt.driver [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] Start _get_guest_xml network_info=[{"id": "189ca3df-84bb-4d92-8c0e-5ded5a7cdaa3", "address": "fa:16:3e:4f:8c:b6", "network": {"id": "15a62ee0-8e34-4e49-990e-246b4ef9e0c6", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1498494916-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0e73ded2f2ee46b4a7485c01ef1b73e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap189ca3df-84", "ovs_interfaceid": "189ca3df-84bb-4d92-8c0e-5ded5a7cdaa3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'attachment_id': '171e1407-8579-4d7f-8663-a70d2bf36b87', 'mount_device': '/dev/vda', 'delete_on_termination': False, 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-184f1559-4821-437a-8e6b-6e10ab7ba1e9', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '184f1559-4821-437a-8e6b-6e10ab7ba1e9', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '42651a9c-7b98-4ad0-bf9d-430330b33968', 'attached_at': '', 'detached_at': '', 'volume_id': '184f1559-4821-437a-8e6b-6e10ab7ba1e9', 'serial': '184f1559-4821-437a-8e6b-6e10ab7ba1e9'}, 'device_type': 'disk', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 00:00:52 np0005480824 nova_compute[260089]: 2025-10-11 04:00:52.740 2 WARNING nova.virt.libvirt.driver [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 00:00:52 np0005480824 nova_compute[260089]: 2025-10-11 04:00:52.755 2 DEBUG nova.virt.libvirt.host [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 00:00:52 np0005480824 nova_compute[260089]: 2025-10-11 04:00:52.756 2 DEBUG nova.virt.libvirt.host [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 00:00:52 np0005480824 nova_compute[260089]: 2025-10-11 04:00:52.757 2 DEBUG oslo_concurrency.lockutils [None req-d7955095-022d-4544-afb8-e7bff05db1e5 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Acquiring lock "d5aa10c6-5a8f-419f-8f0d-89bc251d13b1" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:00:52 np0005480824 nova_compute[260089]: 2025-10-11 04:00:52.758 2 DEBUG oslo_concurrency.lockutils [None req-d7955095-022d-4544-afb8-e7bff05db1e5 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Lock "d5aa10c6-5a8f-419f-8f0d-89bc251d13b1" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:00:52 np0005480824 nova_compute[260089]: 2025-10-11 04:00:52.767 2 DEBUG nova.virt.libvirt.host [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 00:00:52 np0005480824 nova_compute[260089]: 2025-10-11 04:00:52.768 2 DEBUG nova.virt.libvirt.host [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 00:00:52 np0005480824 nova_compute[260089]: 2025-10-11 04:00:52.768 2 DEBUG nova.virt.libvirt.driver [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 00:00:52 np0005480824 nova_compute[260089]: 2025-10-11 04:00:52.769 2 DEBUG nova.virt.hardware [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T03:44:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6707ecae-2ae2-4c2d-86dc-409bac38f6a5',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 00:00:52 np0005480824 nova_compute[260089]: 2025-10-11 04:00:52.769 2 DEBUG nova.virt.hardware [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 00:00:52 np0005480824 nova_compute[260089]: 2025-10-11 04:00:52.769 2 DEBUG nova.virt.hardware [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 00:00:52 np0005480824 nova_compute[260089]: 2025-10-11 04:00:52.769 2 DEBUG nova.virt.hardware [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 00:00:52 np0005480824 nova_compute[260089]: 2025-10-11 04:00:52.769 2 DEBUG nova.virt.hardware [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 00:00:52 np0005480824 nova_compute[260089]: 2025-10-11 04:00:52.770 2 DEBUG nova.virt.hardware [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 00:00:52 np0005480824 nova_compute[260089]: 2025-10-11 04:00:52.770 2 DEBUG nova.virt.hardware [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 00:00:52 np0005480824 nova_compute[260089]: 2025-10-11 04:00:52.770 2 DEBUG nova.virt.hardware [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 00:00:52 np0005480824 nova_compute[260089]: 2025-10-11 04:00:52.770 2 DEBUG nova.virt.hardware [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 00:00:52 np0005480824 nova_compute[260089]: 2025-10-11 04:00:52.770 2 DEBUG nova.virt.hardware [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 00:00:52 np0005480824 nova_compute[260089]: 2025-10-11 04:00:52.770 2 DEBUG nova.virt.hardware [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 00:00:52 np0005480824 nova_compute[260089]: 2025-10-11 04:00:52.791 2 DEBUG nova.storage.rbd_utils [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] rbd image 42651a9c-7b98-4ad0-bf9d-430330b33968_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 00:00:52 np0005480824 nova_compute[260089]: 2025-10-11 04:00:52.795 2 DEBUG oslo_concurrency.processutils [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:00:52 np0005480824 nova_compute[260089]: 2025-10-11 04:00:52.822 2 DEBUG nova.objects.instance [None req-d7955095-022d-4544-afb8-e7bff05db1e5 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Lazy-loading 'flavor' on Instance uuid d5aa10c6-5a8f-419f-8f0d-89bc251d13b1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 00:00:52 np0005480824 nova_compute[260089]: 2025-10-11 04:00:52.860 2 DEBUG oslo_concurrency.lockutils [None req-d7955095-022d-4544-afb8-e7bff05db1e5 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Lock "d5aa10c6-5a8f-419f-8f0d-89bc251d13b1" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.102s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:00:52 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1658: 321 pgs: 321 active+clean; 3.0 GiB data, 3.2 GiB used, 57 GiB / 60 GiB avail; 221 KiB/s rd, 67 MiB/s wr, 226 op/s
Oct 11 00:00:53 np0005480824 nova_compute[260089]: 2025-10-11 04:00:53.142 2 DEBUG oslo_concurrency.lockutils [None req-d7955095-022d-4544-afb8-e7bff05db1e5 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Acquiring lock "d5aa10c6-5a8f-419f-8f0d-89bc251d13b1" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:00:53 np0005480824 nova_compute[260089]: 2025-10-11 04:00:53.142 2 DEBUG oslo_concurrency.lockutils [None req-d7955095-022d-4544-afb8-e7bff05db1e5 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Lock "d5aa10c6-5a8f-419f-8f0d-89bc251d13b1" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:00:53 np0005480824 nova_compute[260089]: 2025-10-11 04:00:53.143 2 INFO nova.compute.manager [None req-d7955095-022d-4544-afb8-e7bff05db1e5 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Attaching volume b269b0e7-bdf3-4393-8a56-46cacca9b6fd to /dev/vdb#033[00m
Oct 11 00:00:53 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 00:00:53 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/985444302' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 00:00:53 np0005480824 nova_compute[260089]: 2025-10-11 04:00:53.248 2 DEBUG oslo_concurrency.processutils [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:00:53 np0005480824 nova_compute[260089]: 2025-10-11 04:00:53.377 2 DEBUG os_brick.utils [None req-d7955095-022d-4544-afb8-e7bff05db1e5 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct 11 00:00:53 np0005480824 nova_compute[260089]: 2025-10-11 04:00:53.378 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:00:53 np0005480824 nova_compute[260089]: 2025-10-11 04:00:53.389 676 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:00:53 np0005480824 nova_compute[260089]: 2025-10-11 04:00:53.389 676 DEBUG oslo.privsep.daemon [-] privsep: reply[396c157d-c347-447b-b6dc-be5b1282b01f]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:00:53 np0005480824 nova_compute[260089]: 2025-10-11 04:00:53.390 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:00:53 np0005480824 nova_compute[260089]: 2025-10-11 04:00:53.398 676 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:00:53 np0005480824 nova_compute[260089]: 2025-10-11 04:00:53.398 676 DEBUG oslo.privsep.daemon [-] privsep: reply[94e1a9e7-6ff0-4664-8b3c-3f38602dd4d7]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d5d671ddab5a', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:00:53 np0005480824 nova_compute[260089]: 2025-10-11 04:00:53.399 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:00:53 np0005480824 nova_compute[260089]: 2025-10-11 04:00:53.407 676 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:00:53 np0005480824 nova_compute[260089]: 2025-10-11 04:00:53.407 676 DEBUG oslo.privsep.daemon [-] privsep: reply[de2bab1b-2fbf-4978-bdb4-130287ad48f2]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:00:53 np0005480824 nova_compute[260089]: 2025-10-11 04:00:53.408 676 DEBUG oslo.privsep.daemon [-] privsep: reply[5d6ea5ef-248d-4e6c-a32c-fdab704fee28]: (4, 'fb3a2fb1-9efa-43f0-a057-bf422ac6b8d7') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:00:53 np0005480824 nova_compute[260089]: 2025-10-11 04:00:53.408 2 DEBUG oslo_concurrency.processutils [None req-d7955095-022d-4544-afb8-e7bff05db1e5 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:00:53 np0005480824 nova_compute[260089]: 2025-10-11 04:00:53.429 2 DEBUG oslo_concurrency.processutils [None req-d7955095-022d-4544-afb8-e7bff05db1e5 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] CMD "nvme version" returned: 0 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:00:53 np0005480824 nova_compute[260089]: 2025-10-11 04:00:53.431 2 DEBUG os_brick.initiator.connectors.lightos [None req-d7955095-022d-4544-afb8-e7bff05db1e5 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct 11 00:00:53 np0005480824 nova_compute[260089]: 2025-10-11 04:00:53.431 2 DEBUG os_brick.initiator.connectors.lightos [None req-d7955095-022d-4544-afb8-e7bff05db1e5 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct 11 00:00:53 np0005480824 nova_compute[260089]: 2025-10-11 04:00:53.432 2 DEBUG os_brick.initiator.connectors.lightos [None req-d7955095-022d-4544-afb8-e7bff05db1e5 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct 11 00:00:53 np0005480824 nova_compute[260089]: 2025-10-11 04:00:53.432 2 DEBUG os_brick.utils [None req-d7955095-022d-4544-afb8-e7bff05db1e5 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] <== get_connector_properties: return (53ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d5d671ddab5a', 'do_local_attach': False, 'nvme_hostid': '83042a20-0f72-4c47-8453-e72ead378624', 'system uuid': 'fb3a2fb1-9efa-43f0-a057-bf422ac6b8d7', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct 11 00:00:53 np0005480824 nova_compute[260089]: 2025-10-11 04:00:53.433 2 DEBUG nova.virt.block_device [None req-d7955095-022d-4544-afb8-e7bff05db1e5 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Updating existing volume attachment record: 53ff8bf7-c318-408f-bb04-66ebec05e1e7 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct 11 00:00:53 np0005480824 nova_compute[260089]: 2025-10-11 04:00:53.437 2 DEBUG os_brick.encryptors [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Using volume encryption metadata '{'encryption_key_id': '29c54708-a2c0-4244-b283-a3fbe91a10aa', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-184f1559-4821-437a-8e6b-6e10ab7ba1e9', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '184f1559-4821-437a-8e6b-6e10ab7ba1e9', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '42651a9c-7b98-4ad0-bf9d-430330b33968', 'attached_at': '', 'detached_at': '', 'volume_id': '184f1559-4821-437a-8e6b-6e10ab7ba1e9', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Oct 11 00:00:53 np0005480824 nova_compute[260089]: 2025-10-11 04:00:53.439 2 DEBUG barbicanclient.client [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163#033[00m
Oct 11 00:00:53 np0005480824 nova_compute[260089]: 2025-10-11 04:00:53.463 2 DEBUG barbicanclient.v1.secrets [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/29c54708-a2c0-4244-b283-a3fbe91a10aa get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514#033[00m
Oct 11 00:00:53 np0005480824 nova_compute[260089]: 2025-10-11 04:00:53.464 2 INFO barbicanclient.base [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Calculated Secrets uuid ref: secrets/29c54708-a2c0-4244-b283-a3fbe91a10aa#033[00m
Oct 11 00:00:53 np0005480824 nova_compute[260089]: 2025-10-11 04:00:53.487 2 DEBUG barbicanclient.client [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:00:53 np0005480824 nova_compute[260089]: 2025-10-11 04:00:53.487 2 INFO barbicanclient.base [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Calculated Secrets uuid ref: secrets/29c54708-a2c0-4244-b283-a3fbe91a10aa#033[00m
Oct 11 00:00:53 np0005480824 nova_compute[260089]: 2025-10-11 04:00:53.528 2 DEBUG barbicanclient.client [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:00:53 np0005480824 nova_compute[260089]: 2025-10-11 04:00:53.528 2 INFO barbicanclient.base [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Calculated Secrets uuid ref: secrets/29c54708-a2c0-4244-b283-a3fbe91a10aa#033[00m
Oct 11 00:00:53 np0005480824 nova_compute[260089]: 2025-10-11 04:00:53.546 2 DEBUG barbicanclient.client [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:00:53 np0005480824 nova_compute[260089]: 2025-10-11 04:00:53.546 2 INFO barbicanclient.base [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Calculated Secrets uuid ref: secrets/29c54708-a2c0-4244-b283-a3fbe91a10aa#033[00m
Oct 11 00:00:53 np0005480824 nova_compute[260089]: 2025-10-11 04:00:53.573 2 DEBUG barbicanclient.client [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:00:53 np0005480824 nova_compute[260089]: 2025-10-11 04:00:53.574 2 INFO barbicanclient.base [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Calculated Secrets uuid ref: secrets/29c54708-a2c0-4244-b283-a3fbe91a10aa#033[00m
Oct 11 00:00:53 np0005480824 nova_compute[260089]: 2025-10-11 04:00:53.602 2 DEBUG barbicanclient.client [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:00:53 np0005480824 nova_compute[260089]: 2025-10-11 04:00:53.604 2 INFO barbicanclient.base [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Calculated Secrets uuid ref: secrets/29c54708-a2c0-4244-b283-a3fbe91a10aa#033[00m
Oct 11 00:00:53 np0005480824 nova_compute[260089]: 2025-10-11 04:00:53.626 2 DEBUG barbicanclient.client [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:00:53 np0005480824 nova_compute[260089]: 2025-10-11 04:00:53.626 2 INFO barbicanclient.base [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Calculated Secrets uuid ref: secrets/29c54708-a2c0-4244-b283-a3fbe91a10aa#033[00m
Oct 11 00:00:53 np0005480824 nova_compute[260089]: 2025-10-11 04:00:53.649 2 DEBUG barbicanclient.client [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:00:53 np0005480824 nova_compute[260089]: 2025-10-11 04:00:53.650 2 INFO barbicanclient.base [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Calculated Secrets uuid ref: secrets/29c54708-a2c0-4244-b283-a3fbe91a10aa#033[00m
Oct 11 00:00:53 np0005480824 nova_compute[260089]: 2025-10-11 04:00:53.703 2 DEBUG barbicanclient.client [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:00:53 np0005480824 nova_compute[260089]: 2025-10-11 04:00:53.704 2 INFO barbicanclient.base [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Calculated Secrets uuid ref: secrets/29c54708-a2c0-4244-b283-a3fbe91a10aa#033[00m
Oct 11 00:00:53 np0005480824 nova_compute[260089]: 2025-10-11 04:00:53.765 2 DEBUG barbicanclient.client [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:00:53 np0005480824 nova_compute[260089]: 2025-10-11 04:00:53.766 2 INFO barbicanclient.base [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Calculated Secrets uuid ref: secrets/29c54708-a2c0-4244-b283-a3fbe91a10aa#033[00m
Oct 11 00:00:53 np0005480824 nova_compute[260089]: 2025-10-11 04:00:53.823 2 DEBUG barbicanclient.client [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:00:53 np0005480824 nova_compute[260089]: 2025-10-11 04:00:53.824 2 INFO barbicanclient.base [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Calculated Secrets uuid ref: secrets/29c54708-a2c0-4244-b283-a3fbe91a10aa#033[00m
Oct 11 00:00:53 np0005480824 nova_compute[260089]: 2025-10-11 04:00:53.852 2 DEBUG barbicanclient.client [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:00:53 np0005480824 nova_compute[260089]: 2025-10-11 04:00:53.852 2 INFO barbicanclient.base [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Calculated Secrets uuid ref: secrets/29c54708-a2c0-4244-b283-a3fbe91a10aa#033[00m
Oct 11 00:00:53 np0005480824 nova_compute[260089]: 2025-10-11 04:00:53.871 2 DEBUG barbicanclient.client [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:00:53 np0005480824 nova_compute[260089]: 2025-10-11 04:00:53.872 2 INFO barbicanclient.base [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Calculated Secrets uuid ref: secrets/29c54708-a2c0-4244-b283-a3fbe91a10aa#033[00m
Oct 11 00:00:53 np0005480824 nova_compute[260089]: 2025-10-11 04:00:53.904 2 DEBUG nova.network.neutron [req-b880f777-fe73-40ac-aa28-1d8d8c8c01f3 req-bafe60b8-ea0f-4e41-a3a3-8f4c47f697b1 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] Updated VIF entry in instance network info cache for port 189ca3df-84bb-4d92-8c0e-5ded5a7cdaa3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 00:00:53 np0005480824 nova_compute[260089]: 2025-10-11 04:00:53.905 2 DEBUG nova.network.neutron [req-b880f777-fe73-40ac-aa28-1d8d8c8c01f3 req-bafe60b8-ea0f-4e41-a3a3-8f4c47f697b1 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] Updating instance_info_cache with network_info: [{"id": "189ca3df-84bb-4d92-8c0e-5ded5a7cdaa3", "address": "fa:16:3e:4f:8c:b6", "network": {"id": "15a62ee0-8e34-4e49-990e-246b4ef9e0c6", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1498494916-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0e73ded2f2ee46b4a7485c01ef1b73e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap189ca3df-84", "ovs_interfaceid": "189ca3df-84bb-4d92-8c0e-5ded5a7cdaa3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 00:00:53 np0005480824 nova_compute[260089]: 2025-10-11 04:00:53.925 2 DEBUG oslo_concurrency.lockutils [req-b880f777-fe73-40ac-aa28-1d8d8c8c01f3 req-bafe60b8-ea0f-4e41-a3a3-8f4c47f697b1 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Releasing lock "refresh_cache-42651a9c-7b98-4ad0-bf9d-430330b33968" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 00:00:53 np0005480824 nova_compute[260089]: 2025-10-11 04:00:53.962 2 DEBUG barbicanclient.client [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:00:53 np0005480824 nova_compute[260089]: 2025-10-11 04:00:53.963 2 INFO barbicanclient.base [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Calculated Secrets uuid ref: secrets/29c54708-a2c0-4244-b283-a3fbe91a10aa#033[00m
Oct 11 00:00:53 np0005480824 nova_compute[260089]: 2025-10-11 04:00:53.993 2 DEBUG barbicanclient.client [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:00:53 np0005480824 nova_compute[260089]: 2025-10-11 04:00:53.994 2 INFO barbicanclient.base [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Calculated Secrets uuid ref: secrets/29c54708-a2c0-4244-b283-a3fbe91a10aa#033[00m
Oct 11 00:00:54 np0005480824 nova_compute[260089]: 2025-10-11 04:00:54.016 2 DEBUG barbicanclient.client [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:00:54 np0005480824 nova_compute[260089]: 2025-10-11 04:00:54.017 2 DEBUG nova.virt.libvirt.host [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Secret XML: <secret ephemeral="no" private="no">
Oct 11 00:00:54 np0005480824 nova_compute[260089]:  <usage type="volume">
Oct 11 00:00:54 np0005480824 nova_compute[260089]:    <volume>184f1559-4821-437a-8e6b-6e10ab7ba1e9</volume>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:  </usage>
Oct 11 00:00:54 np0005480824 nova_compute[260089]: </secret>
Oct 11 00:00:54 np0005480824 nova_compute[260089]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131#033[00m
Oct 11 00:00:54 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 00:00:54 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2315374175' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 00:00:54 np0005480824 nova_compute[260089]: 2025-10-11 04:00:54.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:00:54 np0005480824 nova_compute[260089]: 2025-10-11 04:00:54.400 2 DEBUG nova.virt.libvirt.vif [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T04:00:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1118806847',display_name='tempest-TransferEncryptedVolumeTest-server-1118806847',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1118806847',id=25,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDSd/imVrPUoZZ0pPNaeX2vqRyFwUZkYkGtRIGLvkZ+JmbpGCVAFlpb2xMevRN2guCRk7QItwPxlNbBPPCGkv6m7D9V9P6ik2vYr9GNZ8E+yfq+aSt3aD3tvswV1nTE1Iw==',key_name='tempest-TransferEncryptedVolumeTest-1221687079',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0e73ded2f2ee46b4a7485c01ef1b73e9',ramdisk_id='',reservation_id='r-0mk5ma9v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1815435088',owner_user_name='tempest-TransferEncryptedVolumeTest-1815435088-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T04:00:48Z,user_data=None,user_id='eccc3f574d354840901d28dad2488bf4',uuid=42651a9c-7b98-4ad0-bf9d-430330b33968,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "189ca3df-84bb-4d92-8c0e-5ded5a7cdaa3", "address": "fa:16:3e:4f:8c:b6", "network": {"id": "15a62ee0-8e34-4e49-990e-246b4ef9e0c6", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1498494916-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "0e73ded2f2ee46b4a7485c01ef1b73e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap189ca3df-84", "ovs_interfaceid": "189ca3df-84bb-4d92-8c0e-5ded5a7cdaa3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 00:00:54 np0005480824 nova_compute[260089]: 2025-10-11 04:00:54.401 2 DEBUG nova.network.os_vif_util [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Converting VIF {"id": "189ca3df-84bb-4d92-8c0e-5ded5a7cdaa3", "address": "fa:16:3e:4f:8c:b6", "network": {"id": "15a62ee0-8e34-4e49-990e-246b4ef9e0c6", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1498494916-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0e73ded2f2ee46b4a7485c01ef1b73e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap189ca3df-84", "ovs_interfaceid": "189ca3df-84bb-4d92-8c0e-5ded5a7cdaa3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 00:00:54 np0005480824 nova_compute[260089]: 2025-10-11 04:00:54.402 2 DEBUG nova.network.os_vif_util [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4f:8c:b6,bridge_name='br-int',has_traffic_filtering=True,id=189ca3df-84bb-4d92-8c0e-5ded5a7cdaa3,network=Network(15a62ee0-8e34-4e49-990e-246b4ef9e0c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap189ca3df-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 00:00:54 np0005480824 nova_compute[260089]: 2025-10-11 04:00:54.403 2 DEBUG nova.objects.instance [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 42651a9c-7b98-4ad0-bf9d-430330b33968 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 00:00:54 np0005480824 nova_compute[260089]: 2025-10-11 04:00:54.430 2 DEBUG nova.virt.libvirt.driver [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] End _get_guest_xml xml=<domain type="kvm">
Oct 11 00:00:54 np0005480824 nova_compute[260089]:  <uuid>42651a9c-7b98-4ad0-bf9d-430330b33968</uuid>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:  <name>instance-00000019</name>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:  <memory>131072</memory>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:  <vcpu>1</vcpu>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:  <metadata>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 00:00:54 np0005480824 nova_compute[260089]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:      <nova:name>tempest-TransferEncryptedVolumeTest-server-1118806847</nova:name>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:      <nova:creationTime>2025-10-11 04:00:52</nova:creationTime>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:      <nova:flavor name="m1.nano">
Oct 11 00:00:54 np0005480824 nova_compute[260089]:        <nova:memory>128</nova:memory>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:        <nova:disk>1</nova:disk>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:        <nova:swap>0</nova:swap>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:        <nova:vcpus>1</nova:vcpus>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:      </nova:flavor>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:      <nova:owner>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:        <nova:user uuid="eccc3f574d354840901d28dad2488bf4">tempest-TransferEncryptedVolumeTest-1815435088-project-member</nova:user>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:        <nova:project uuid="0e73ded2f2ee46b4a7485c01ef1b73e9">tempest-TransferEncryptedVolumeTest-1815435088</nova:project>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:      </nova:owner>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:      <nova:ports>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:        <nova:port uuid="189ca3df-84bb-4d92-8c0e-5ded5a7cdaa3">
Oct 11 00:00:54 np0005480824 nova_compute[260089]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:        </nova:port>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:      </nova:ports>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:    </nova:instance>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:  </metadata>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:  <sysinfo type="smbios">
Oct 11 00:00:54 np0005480824 nova_compute[260089]:    <system>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:      <entry name="manufacturer">RDO</entry>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:      <entry name="product">OpenStack Compute</entry>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:      <entry name="serial">42651a9c-7b98-4ad0-bf9d-430330b33968</entry>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:      <entry name="uuid">42651a9c-7b98-4ad0-bf9d-430330b33968</entry>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:      <entry name="family">Virtual Machine</entry>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:    </system>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:  </sysinfo>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:  <os>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:    <boot dev="hd"/>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:    <smbios mode="sysinfo"/>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:  </os>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:  <features>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:    <acpi/>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:    <apic/>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:    <vmcoreinfo/>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:  </features>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:  <clock offset="utc">
Oct 11 00:00:54 np0005480824 nova_compute[260089]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:    <timer name="hpet" present="no"/>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:  </clock>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:  <cpu mode="host-model" match="exact">
Oct 11 00:00:54 np0005480824 nova_compute[260089]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:  </cpu>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:  <devices>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:    <disk type="network" device="cdrom">
Oct 11 00:00:54 np0005480824 nova_compute[260089]:      <driver type="raw" cache="none"/>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:      <source protocol="rbd" name="vms/42651a9c-7b98-4ad0-bf9d-430330b33968_disk.config">
Oct 11 00:00:54 np0005480824 nova_compute[260089]:        <host name="192.168.122.100" port="6789"/>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:      </source>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:      <auth username="openstack">
Oct 11 00:00:54 np0005480824 nova_compute[260089]:        <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:      </auth>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:      <target dev="sda" bus="sata"/>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:    </disk>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:    <disk type="network" device="disk">
Oct 11 00:00:54 np0005480824 nova_compute[260089]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:      <source protocol="rbd" name="volumes/volume-184f1559-4821-437a-8e6b-6e10ab7ba1e9">
Oct 11 00:00:54 np0005480824 nova_compute[260089]:        <host name="192.168.122.100" port="6789"/>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:      </source>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:      <auth username="openstack">
Oct 11 00:00:54 np0005480824 nova_compute[260089]:        <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:      </auth>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:      <target dev="vda" bus="virtio"/>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:      <serial>184f1559-4821-437a-8e6b-6e10ab7ba1e9</serial>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:      <encryption format="luks">
Oct 11 00:00:54 np0005480824 nova_compute[260089]:        <secret type="passphrase" uuid="7df261e3-d65a-4f91-998f-868d860869db"/>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:      </encryption>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:    </disk>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:    <interface type="ethernet">
Oct 11 00:00:54 np0005480824 nova_compute[260089]:      <mac address="fa:16:3e:4f:8c:b6"/>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:      <model type="virtio"/>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:      <mtu size="1442"/>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:      <target dev="tap189ca3df-84"/>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:    </interface>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:    <serial type="pty">
Oct 11 00:00:54 np0005480824 nova_compute[260089]:      <log file="/var/lib/nova/instances/42651a9c-7b98-4ad0-bf9d-430330b33968/console.log" append="off"/>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:    </serial>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:    <video>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:      <model type="virtio"/>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:    </video>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:    <input type="tablet" bus="usb"/>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:    <rng model="virtio">
Oct 11 00:00:54 np0005480824 nova_compute[260089]:      <backend model="random">/dev/urandom</backend>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:    </rng>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root"/>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:    <controller type="usb" index="0"/>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:    <memballoon model="virtio">
Oct 11 00:00:54 np0005480824 nova_compute[260089]:      <stats period="10"/>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:    </memballoon>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:  </devices>
Oct 11 00:00:54 np0005480824 nova_compute[260089]: </domain>
Oct 11 00:00:54 np0005480824 nova_compute[260089]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 00:00:54 np0005480824 nova_compute[260089]: 2025-10-11 04:00:54.431 2 DEBUG nova.compute.manager [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] Preparing to wait for external event network-vif-plugged-189ca3df-84bb-4d92-8c0e-5ded5a7cdaa3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 00:00:54 np0005480824 nova_compute[260089]: 2025-10-11 04:00:54.431 2 DEBUG oslo_concurrency.lockutils [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Acquiring lock "42651a9c-7b98-4ad0-bf9d-430330b33968-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:00:54 np0005480824 nova_compute[260089]: 2025-10-11 04:00:54.431 2 DEBUG oslo_concurrency.lockutils [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Lock "42651a9c-7b98-4ad0-bf9d-430330b33968-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:00:54 np0005480824 nova_compute[260089]: 2025-10-11 04:00:54.432 2 DEBUG oslo_concurrency.lockutils [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Lock "42651a9c-7b98-4ad0-bf9d-430330b33968-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:00:54 np0005480824 nova_compute[260089]: 2025-10-11 04:00:54.432 2 DEBUG nova.virt.libvirt.vif [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T04:00:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1118806847',display_name='tempest-TransferEncryptedVolumeTest-server-1118806847',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1118806847',id=25,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDSd/imVrPUoZZ0pPNaeX2vqRyFwUZkYkGtRIGLvkZ+JmbpGCVAFlpb2xMevRN2guCRk7QItwPxlNbBPPCGkv6m7D9V9P6ik2vYr9GNZ8E+yfq+aSt3aD3tvswV1nTE1Iw==',key_name='tempest-TransferEncryptedVolumeTest-1221687079',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0e73ded2f2ee46b4a7485c01ef1b73e9',ramdisk_id='',reservation_id='r-0mk5ma9v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1815435088',owner_user_name='tempest-TransferEncryptedVolumeTest-1815435088-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T04:00:48Z,user_data=None,user_id='eccc3f574d354840901d28dad2488bf4',uuid=42651a9c-7b98-4ad0-bf9d-430330b33968,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "189ca3df-84bb-4d92-8c0e-5ded5a7cdaa3", "address": "fa:16:3e:4f:8c:b6", "network": {"id": "15a62ee0-8e34-4e49-990e-246b4ef9e0c6", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1498494916-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "0e73ded2f2ee46b4a7485c01ef1b73e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap189ca3df-84", "ovs_interfaceid": "189ca3df-84bb-4d92-8c0e-5ded5a7cdaa3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 00:00:54 np0005480824 nova_compute[260089]: 2025-10-11 04:00:54.432 2 DEBUG nova.network.os_vif_util [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Converting VIF {"id": "189ca3df-84bb-4d92-8c0e-5ded5a7cdaa3", "address": "fa:16:3e:4f:8c:b6", "network": {"id": "15a62ee0-8e34-4e49-990e-246b4ef9e0c6", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1498494916-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0e73ded2f2ee46b4a7485c01ef1b73e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap189ca3df-84", "ovs_interfaceid": "189ca3df-84bb-4d92-8c0e-5ded5a7cdaa3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 00:00:54 np0005480824 nova_compute[260089]: 2025-10-11 04:00:54.433 2 DEBUG nova.network.os_vif_util [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4f:8c:b6,bridge_name='br-int',has_traffic_filtering=True,id=189ca3df-84bb-4d92-8c0e-5ded5a7cdaa3,network=Network(15a62ee0-8e34-4e49-990e-246b4ef9e0c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap189ca3df-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 00:00:54 np0005480824 nova_compute[260089]: 2025-10-11 04:00:54.433 2 DEBUG os_vif [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4f:8c:b6,bridge_name='br-int',has_traffic_filtering=True,id=189ca3df-84bb-4d92-8c0e-5ded5a7cdaa3,network=Network(15a62ee0-8e34-4e49-990e-246b4ef9e0c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap189ca3df-84') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 00:00:54 np0005480824 nova_compute[260089]: 2025-10-11 04:00:54.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:00:54 np0005480824 nova_compute[260089]: 2025-10-11 04:00:54.434 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 00:00:54 np0005480824 nova_compute[260089]: 2025-10-11 04:00:54.434 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 00:00:54 np0005480824 nova_compute[260089]: 2025-10-11 04:00:54.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:00:54 np0005480824 nova_compute[260089]: 2025-10-11 04:00:54.436 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap189ca3df-84, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 00:00:54 np0005480824 nova_compute[260089]: 2025-10-11 04:00:54.437 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap189ca3df-84, col_values=(('external_ids', {'iface-id': '189ca3df-84bb-4d92-8c0e-5ded5a7cdaa3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4f:8c:b6', 'vm-uuid': '42651a9c-7b98-4ad0-bf9d-430330b33968'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 00:00:54 np0005480824 NetworkManager[44969]: <info>  [1760155254.4393] manager: (tap189ca3df-84): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/123)
Oct 11 00:00:54 np0005480824 nova_compute[260089]: 2025-10-11 04:00:54.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 00:00:54 np0005480824 nova_compute[260089]: 2025-10-11 04:00:54.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:00:54 np0005480824 nova_compute[260089]: 2025-10-11 04:00:54.447 2 INFO os_vif [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4f:8c:b6,bridge_name='br-int',has_traffic_filtering=True,id=189ca3df-84bb-4d92-8c0e-5ded5a7cdaa3,network=Network(15a62ee0-8e34-4e49-990e-246b4ef9e0c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap189ca3df-84')#033[00m
Oct 11 00:00:54 np0005480824 nova_compute[260089]: 2025-10-11 04:00:54.471 2 DEBUG nova.objects.instance [None req-d7955095-022d-4544-afb8-e7bff05db1e5 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Lazy-loading 'flavor' on Instance uuid d5aa10c6-5a8f-419f-8f0d-89bc251d13b1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 00:00:54 np0005480824 nova_compute[260089]: 2025-10-11 04:00:54.493 2 DEBUG nova.virt.libvirt.driver [None req-d7955095-022d-4544-afb8-e7bff05db1e5 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Attempting to attach volume b269b0e7-bdf3-4393-8a56-46cacca9b6fd with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Oct 11 00:00:54 np0005480824 nova_compute[260089]: 2025-10-11 04:00:54.495 2 DEBUG nova.virt.libvirt.guest [None req-d7955095-022d-4544-afb8-e7bff05db1e5 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] attach device xml: <disk type="network" device="disk">
Oct 11 00:00:54 np0005480824 nova_compute[260089]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:  <source protocol="rbd" name="volumes/volume-b269b0e7-bdf3-4393-8a56-46cacca9b6fd">
Oct 11 00:00:54 np0005480824 nova_compute[260089]:    <host name="192.168.122.100" port="6789"/>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:  </source>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:  <auth username="openstack">
Oct 11 00:00:54 np0005480824 nova_compute[260089]:    <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:  </auth>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:  <target dev="vdb" bus="virtio"/>
Oct 11 00:00:54 np0005480824 nova_compute[260089]:  <serial>b269b0e7-bdf3-4393-8a56-46cacca9b6fd</serial>
Oct 11 00:00:54 np0005480824 nova_compute[260089]: </disk>
Oct 11 00:00:54 np0005480824 nova_compute[260089]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct 11 00:00:54 np0005480824 nova_compute[260089]: 2025-10-11 04:00:54.783 2 DEBUG nova.virt.libvirt.driver [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 00:00:54 np0005480824 nova_compute[260089]: 2025-10-11 04:00:54.783 2 DEBUG nova.virt.libvirt.driver [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 00:00:54 np0005480824 nova_compute[260089]: 2025-10-11 04:00:54.783 2 DEBUG nova.virt.libvirt.driver [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] No VIF found with MAC fa:16:3e:4f:8c:b6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 00:00:54 np0005480824 nova_compute[260089]: 2025-10-11 04:00:54.784 2 INFO nova.virt.libvirt.driver [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] Using config drive#033[00m
Oct 11 00:00:54 np0005480824 nova_compute[260089]: 2025-10-11 04:00:54.806 2 DEBUG nova.storage.rbd_utils [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] rbd image 42651a9c-7b98-4ad0-bf9d-430330b33968_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 00:00:54 np0005480824 nova_compute[260089]: 2025-10-11 04:00:54.897 2 DEBUG nova.virt.libvirt.driver [None req-d7955095-022d-4544-afb8-e7bff05db1e5 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 00:00:54 np0005480824 nova_compute[260089]: 2025-10-11 04:00:54.898 2 DEBUG nova.virt.libvirt.driver [None req-d7955095-022d-4544-afb8-e7bff05db1e5 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 00:00:54 np0005480824 nova_compute[260089]: 2025-10-11 04:00:54.898 2 DEBUG nova.virt.libvirt.driver [None req-d7955095-022d-4544-afb8-e7bff05db1e5 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 00:00:54 np0005480824 nova_compute[260089]: 2025-10-11 04:00:54.898 2 DEBUG nova.virt.libvirt.driver [None req-d7955095-022d-4544-afb8-e7bff05db1e5 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] No VIF found with MAC fa:16:3e:91:5e:e0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 00:00:54 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1659: 321 pgs: 321 active+clean; 3.0 GiB data, 3.2 GiB used, 57 GiB / 60 GiB avail; 194 KiB/s rd, 59 MiB/s wr, 199 op/s
Oct 11 00:00:55 np0005480824 nova_compute[260089]: 2025-10-11 04:00:55.207 2 DEBUG oslo_concurrency.lockutils [None req-d7955095-022d-4544-afb8-e7bff05db1e5 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Lock "d5aa10c6-5a8f-419f-8f0d-89bc251d13b1" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.065s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:00:55 np0005480824 nova_compute[260089]: 2025-10-11 04:00:55.214 2 INFO nova.virt.libvirt.driver [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] Creating config drive at /var/lib/nova/instances/42651a9c-7b98-4ad0-bf9d-430330b33968/disk.config#033[00m
Oct 11 00:00:55 np0005480824 nova_compute[260089]: 2025-10-11 04:00:55.222 2 DEBUG oslo_concurrency.processutils [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/42651a9c-7b98-4ad0-bf9d-430330b33968/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0i4v95t3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:00:55 np0005480824 nova_compute[260089]: 2025-10-11 04:00:55.362 2 DEBUG oslo_concurrency.processutils [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/42651a9c-7b98-4ad0-bf9d-430330b33968/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0i4v95t3" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:00:55 np0005480824 nova_compute[260089]: 2025-10-11 04:00:55.440 2 DEBUG nova.storage.rbd_utils [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] rbd image 42651a9c-7b98-4ad0-bf9d-430330b33968_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 00:00:55 np0005480824 nova_compute[260089]: 2025-10-11 04:00:55.444 2 DEBUG oslo_concurrency.processutils [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/42651a9c-7b98-4ad0-bf9d-430330b33968/disk.config 42651a9c-7b98-4ad0-bf9d-430330b33968_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:00:55 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:00:56 np0005480824 nova_compute[260089]: 2025-10-11 04:00:56.066 2 DEBUG oslo_concurrency.processutils [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/42651a9c-7b98-4ad0-bf9d-430330b33968/disk.config 42651a9c-7b98-4ad0-bf9d-430330b33968_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.621s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:00:56 np0005480824 nova_compute[260089]: 2025-10-11 04:00:56.067 2 INFO nova.virt.libvirt.driver [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] Deleting local config drive /var/lib/nova/instances/42651a9c-7b98-4ad0-bf9d-430330b33968/disk.config because it was imported into RBD.#033[00m
Oct 11 00:00:56 np0005480824 kernel: tap189ca3df-84: entered promiscuous mode
Oct 11 00:00:56 np0005480824 NetworkManager[44969]: <info>  [1760155256.1308] manager: (tap189ca3df-84): new Tun device (/org/freedesktop/NetworkManager/Devices/124)
Oct 11 00:00:56 np0005480824 nova_compute[260089]: 2025-10-11 04:00:56.130 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:00:56 np0005480824 ovn_controller[152667]: 2025-10-11T04:00:56Z|00224|binding|INFO|Claiming lport 189ca3df-84bb-4d92-8c0e-5ded5a7cdaa3 for this chassis.
Oct 11 00:00:56 np0005480824 ovn_controller[152667]: 2025-10-11T04:00:56Z|00225|binding|INFO|189ca3df-84bb-4d92-8c0e-5ded5a7cdaa3: Claiming fa:16:3e:4f:8c:b6 10.100.0.3
Oct 11 00:00:56 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:56.144 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4f:8c:b6 10.100.0.3'], port_security=['fa:16:3e:4f:8c:b6 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '42651a9c-7b98-4ad0-bf9d-430330b33968', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-15a62ee0-8e34-4e49-990e-246b4ef9e0c6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0e73ded2f2ee46b4a7485c01ef1b73e9', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b966caac-3def-4c2a-badc-a92b0de92fd7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3f608fb9-f693-4a11-9617-6172f3d025df, chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], logical_port=189ca3df-84bb-4d92-8c0e-5ded5a7cdaa3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 00:00:56 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:56.146 162245 INFO neutron.agent.ovn.metadata.agent [-] Port 189ca3df-84bb-4d92-8c0e-5ded5a7cdaa3 in datapath 15a62ee0-8e34-4e49-990e-246b4ef9e0c6 bound to our chassis#033[00m
Oct 11 00:00:56 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:56.147 162245 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 15a62ee0-8e34-4e49-990e-246b4ef9e0c6#033[00m
Oct 11 00:00:56 np0005480824 ovn_controller[152667]: 2025-10-11T04:00:56Z|00226|binding|INFO|Setting lport 189ca3df-84bb-4d92-8c0e-5ded5a7cdaa3 ovn-installed in OVS
Oct 11 00:00:56 np0005480824 ovn_controller[152667]: 2025-10-11T04:00:56Z|00227|binding|INFO|Setting lport 189ca3df-84bb-4d92-8c0e-5ded5a7cdaa3 up in Southbound
Oct 11 00:00:56 np0005480824 nova_compute[260089]: 2025-10-11 04:00:56.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:00:56 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:56.158 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[574da694-2e0f-4e32-9167-eb074f1d18a3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:00:56 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:56.161 162245 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap15a62ee0-81 in ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 00:00:56 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:56.163 267859 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap15a62ee0-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 00:00:56 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:56.163 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[e371e1f8-96c6-46a3-80e6-419dcb13d74e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:00:56 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:56.164 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[2907ec2d-f900-4b91-918e-39e5aae911ce]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:00:56 np0005480824 systemd-udevd[296984]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 00:00:56 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:56.176 162666 DEBUG oslo.privsep.daemon [-] privsep: reply[0ebc323c-e630-49a9-92a1-3dc89b3c1c7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:00:56 np0005480824 systemd-machined[215071]: New machine qemu-25-instance-00000019.
Oct 11 00:00:56 np0005480824 NetworkManager[44969]: <info>  [1760155256.1855] device (tap189ca3df-84): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 00:00:56 np0005480824 NetworkManager[44969]: <info>  [1760155256.1865] device (tap189ca3df-84): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 00:00:56 np0005480824 systemd[1]: Started Virtual Machine qemu-25-instance-00000019.
Oct 11 00:00:56 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:56.201 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[c5890ecd-8eb4-42bd-a807-0da502e8094b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:00:56 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:56.234 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[a5f05553-fc8d-448f-b358-a7656225de2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:00:56 np0005480824 NetworkManager[44969]: <info>  [1760155256.2410] manager: (tap15a62ee0-80): new Veth device (/org/freedesktop/NetworkManager/Devices/125)
Oct 11 00:00:56 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:56.240 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[028f6eb3-b53f-4059-8e25-cd05c3c8c3a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:00:56 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:56.270 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[7151832c-81c7-49b0-867f-a820d7bbf0ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:00:56 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:56.273 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[a1db2710-4904-43a4-92b6-8a09a3845f3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:00:56 np0005480824 NetworkManager[44969]: <info>  [1760155256.2913] device (tap15a62ee0-80): carrier: link connected
Oct 11 00:00:56 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:56.294 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[32a194b6-6b06-4a23-b5ae-8044790f399d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:00:56 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:56.322 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[5c0702a1-1b53-49be-b03c-6fc90909671b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap15a62ee0-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a6:91:d9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 80], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 472395, 'reachable_time': 23180, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297017, 'error': None, 'target': 'ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:00:56 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:56.339 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[14f67504-4ed9-497b-a1e2-ef3e58eda859]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea6:91d9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 472395, 'tstamp': 472395}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297018, 'error': None, 'target': 'ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:00:56 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:56.355 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[a57b7ce9-2a6d-4139-a04e-3f98ae429dfd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap15a62ee0-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a6:91:d9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 80], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 472395, 'reachable_time': 23180, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 297019, 'error': None, 'target': 'ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:00:56 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:56.382 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[4a461aad-e7d1-4143-83a9-1d1da2ca21d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:00:56 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:56.440 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[58252f60-1e51-4c23-b711-5b7d52b90640]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:00:56 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:56.442 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap15a62ee0-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 00:00:56 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:56.442 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 00:00:56 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:56.442 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap15a62ee0-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 00:00:56 np0005480824 NetworkManager[44969]: <info>  [1760155256.4454] manager: (tap15a62ee0-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/126)
Oct 11 00:00:56 np0005480824 nova_compute[260089]: 2025-10-11 04:00:56.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:00:56 np0005480824 kernel: tap15a62ee0-80: entered promiscuous mode
Oct 11 00:00:56 np0005480824 nova_compute[260089]: 2025-10-11 04:00:56.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:00:56 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:56.450 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap15a62ee0-80, col_values=(('external_ids', {'iface-id': '182275c4-a015-4f7a-8877-9961b2382f67'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 00:00:56 np0005480824 ovn_controller[152667]: 2025-10-11T04:00:56Z|00228|binding|INFO|Releasing lport 182275c4-a015-4f7a-8877-9961b2382f67 from this chassis (sb_readonly=0)
Oct 11 00:00:56 np0005480824 nova_compute[260089]: 2025-10-11 04:00:56.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:00:56 np0005480824 nova_compute[260089]: 2025-10-11 04:00:56.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:00:56 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:56.470 162245 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/15a62ee0-8e34-4e49-990e-246b4ef9e0c6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/15a62ee0-8e34-4e49-990e-246b4ef9e0c6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 00:00:56 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:56.471 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[d02bfe06-7938-4e89-b9cd-3287140dc0e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:00:56 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:56.472 162245 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 00:00:56 np0005480824 ovn_metadata_agent[162240]: global
Oct 11 00:00:56 np0005480824 ovn_metadata_agent[162240]:    log         /dev/log local0 debug
Oct 11 00:00:56 np0005480824 ovn_metadata_agent[162240]:    log-tag     haproxy-metadata-proxy-15a62ee0-8e34-4e49-990e-246b4ef9e0c6
Oct 11 00:00:56 np0005480824 ovn_metadata_agent[162240]:    user        root
Oct 11 00:00:56 np0005480824 ovn_metadata_agent[162240]:    group       root
Oct 11 00:00:56 np0005480824 ovn_metadata_agent[162240]:    maxconn     1024
Oct 11 00:00:56 np0005480824 ovn_metadata_agent[162240]:    pidfile     /var/lib/neutron/external/pids/15a62ee0-8e34-4e49-990e-246b4ef9e0c6.pid.haproxy
Oct 11 00:00:56 np0005480824 ovn_metadata_agent[162240]:    daemon
Oct 11 00:00:56 np0005480824 ovn_metadata_agent[162240]: 
Oct 11 00:00:56 np0005480824 ovn_metadata_agent[162240]: defaults
Oct 11 00:00:56 np0005480824 ovn_metadata_agent[162240]:    log global
Oct 11 00:00:56 np0005480824 ovn_metadata_agent[162240]:    mode http
Oct 11 00:00:56 np0005480824 ovn_metadata_agent[162240]:    option httplog
Oct 11 00:00:56 np0005480824 ovn_metadata_agent[162240]:    option dontlognull
Oct 11 00:00:56 np0005480824 ovn_metadata_agent[162240]:    option http-server-close
Oct 11 00:00:56 np0005480824 ovn_metadata_agent[162240]:    option forwardfor
Oct 11 00:00:56 np0005480824 ovn_metadata_agent[162240]:    retries                 3
Oct 11 00:00:56 np0005480824 ovn_metadata_agent[162240]:    timeout http-request    30s
Oct 11 00:00:56 np0005480824 ovn_metadata_agent[162240]:    timeout connect         30s
Oct 11 00:00:56 np0005480824 ovn_metadata_agent[162240]:    timeout client          32s
Oct 11 00:00:56 np0005480824 ovn_metadata_agent[162240]:    timeout server          32s
Oct 11 00:00:56 np0005480824 ovn_metadata_agent[162240]:    timeout http-keep-alive 30s
Oct 11 00:00:56 np0005480824 ovn_metadata_agent[162240]: 
Oct 11 00:00:56 np0005480824 ovn_metadata_agent[162240]: 
Oct 11 00:00:56 np0005480824 ovn_metadata_agent[162240]: listen listener
Oct 11 00:00:56 np0005480824 ovn_metadata_agent[162240]:    bind 169.254.169.254:80
Oct 11 00:00:56 np0005480824 ovn_metadata_agent[162240]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 00:00:56 np0005480824 ovn_metadata_agent[162240]:    http-request add-header X-OVN-Network-ID 15a62ee0-8e34-4e49-990e-246b4ef9e0c6
Oct 11 00:00:56 np0005480824 ovn_metadata_agent[162240]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 00:00:56 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:00:56.473 162245 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6', 'env', 'PROCESS_TAG=haproxy-15a62ee0-8e34-4e49-990e-246b4ef9e0c6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/15a62ee0-8e34-4e49-990e-246b4ef9e0c6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 00:00:56 np0005480824 nova_compute[260089]: 2025-10-11 04:00:56.803 2 DEBUG nova.compute.manager [req-465c53a2-807a-4c82-9fa8-6dcce31d1e70 req-e6df226d-1cad-4355-b421-f8519e86ecd1 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] Received event network-vif-plugged-189ca3df-84bb-4d92-8c0e-5ded5a7cdaa3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 00:00:56 np0005480824 nova_compute[260089]: 2025-10-11 04:00:56.803 2 DEBUG oslo_concurrency.lockutils [req-465c53a2-807a-4c82-9fa8-6dcce31d1e70 req-e6df226d-1cad-4355-b421-f8519e86ecd1 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "42651a9c-7b98-4ad0-bf9d-430330b33968-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:00:56 np0005480824 nova_compute[260089]: 2025-10-11 04:00:56.803 2 DEBUG oslo_concurrency.lockutils [req-465c53a2-807a-4c82-9fa8-6dcce31d1e70 req-e6df226d-1cad-4355-b421-f8519e86ecd1 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "42651a9c-7b98-4ad0-bf9d-430330b33968-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:00:56 np0005480824 nova_compute[260089]: 2025-10-11 04:00:56.803 2 DEBUG oslo_concurrency.lockutils [req-465c53a2-807a-4c82-9fa8-6dcce31d1e70 req-e6df226d-1cad-4355-b421-f8519e86ecd1 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "42651a9c-7b98-4ad0-bf9d-430330b33968-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:00:56 np0005480824 nova_compute[260089]: 2025-10-11 04:00:56.804 2 DEBUG nova.compute.manager [req-465c53a2-807a-4c82-9fa8-6dcce31d1e70 req-e6df226d-1cad-4355-b421-f8519e86ecd1 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] Processing event network-vif-plugged-189ca3df-84bb-4d92-8c0e-5ded5a7cdaa3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 00:00:56 np0005480824 podman[297085]: 2025-10-11 04:00:56.916311514 +0000 UTC m=+0.054389812 container create 28eb802eeccb0fccea7114db175620b1ddf41f76c8d64fae585edd913a4ff6a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 11 00:00:56 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1660: 321 pgs: 321 active+clean; 3.1 GiB data, 3.3 GiB used, 57 GiB / 60 GiB avail; 223 KiB/s rd, 61 MiB/s wr, 246 op/s
Oct 11 00:00:56 np0005480824 podman[297085]: 2025-10-11 04:00:56.889848035 +0000 UTC m=+0.027926353 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 11 00:00:57 np0005480824 systemd[1]: Started libpod-conmon-28eb802eeccb0fccea7114db175620b1ddf41f76c8d64fae585edd913a4ff6a6.scope.
Oct 11 00:00:57 np0005480824 systemd[1]: Started libcrun container.
Oct 11 00:00:57 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/878c19085e3530befade517f046119ac39a8b06e9f1bc3a674acdfdc6d24de9e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 00:00:57 np0005480824 podman[297085]: 2025-10-11 04:00:57.128799912 +0000 UTC m=+0.266878240 container init 28eb802eeccb0fccea7114db175620b1ddf41f76c8d64fae585edd913a4ff6a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.build-date=20251009, io.buildah.version=1.41.3)
Oct 11 00:00:57 np0005480824 podman[297085]: 2025-10-11 04:00:57.137461312 +0000 UTC m=+0.275539610 container start 28eb802eeccb0fccea7114db175620b1ddf41f76c8d64fae585edd913a4ff6a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 11 00:00:57 np0005480824 neutron-haproxy-ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6[297100]: [NOTICE]   (297104) : New worker (297106) forked
Oct 11 00:00:57 np0005480824 neutron-haproxy-ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6[297100]: [NOTICE]   (297104) : Loading success.
Oct 11 00:00:57 np0005480824 nova_compute[260089]: 2025-10-11 04:00:57.806 2 DEBUG oslo_concurrency.lockutils [None req-70059496-075d-4b25-b7d4-30cc1c5f0554 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Acquiring lock "d5aa10c6-5a8f-419f-8f0d-89bc251d13b1" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:00:57 np0005480824 nova_compute[260089]: 2025-10-11 04:00:57.807 2 DEBUG oslo_concurrency.lockutils [None req-70059496-075d-4b25-b7d4-30cc1c5f0554 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Lock "d5aa10c6-5a8f-419f-8f0d-89bc251d13b1" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:00:57 np0005480824 nova_compute[260089]: 2025-10-11 04:00:57.828 2 INFO nova.compute.manager [None req-70059496-075d-4b25-b7d4-30cc1c5f0554 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Detaching volume b269b0e7-bdf3-4393-8a56-46cacca9b6fd#033[00m
Oct 11 00:00:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 00:00:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 00:00:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 00:00:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 00:00:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 00:00:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 00:00:57 np0005480824 nova_compute[260089]: 2025-10-11 04:00:57.964 2 INFO nova.virt.block_device [None req-70059496-075d-4b25-b7d4-30cc1c5f0554 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Attempting to driver detach volume b269b0e7-bdf3-4393-8a56-46cacca9b6fd from mountpoint /dev/vdb#033[00m
Oct 11 00:00:57 np0005480824 nova_compute[260089]: 2025-10-11 04:00:57.977 2 DEBUG nova.virt.libvirt.driver [None req-70059496-075d-4b25-b7d4-30cc1c5f0554 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Attempting to detach device vdb from instance d5aa10c6-5a8f-419f-8f0d-89bc251d13b1 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct 11 00:00:57 np0005480824 nova_compute[260089]: 2025-10-11 04:00:57.978 2 DEBUG nova.virt.libvirt.guest [None req-70059496-075d-4b25-b7d4-30cc1c5f0554 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] detach device xml: <disk type="network" device="disk">
Oct 11 00:00:57 np0005480824 nova_compute[260089]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 00:00:57 np0005480824 nova_compute[260089]:  <source protocol="rbd" name="volumes/volume-b269b0e7-bdf3-4393-8a56-46cacca9b6fd">
Oct 11 00:00:57 np0005480824 nova_compute[260089]:    <host name="192.168.122.100" port="6789"/>
Oct 11 00:00:57 np0005480824 nova_compute[260089]:  </source>
Oct 11 00:00:57 np0005480824 nova_compute[260089]:  <target dev="vdb" bus="virtio"/>
Oct 11 00:00:57 np0005480824 nova_compute[260089]:  <serial>b269b0e7-bdf3-4393-8a56-46cacca9b6fd</serial>
Oct 11 00:00:57 np0005480824 nova_compute[260089]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 11 00:00:57 np0005480824 nova_compute[260089]: </disk>
Oct 11 00:00:57 np0005480824 nova_compute[260089]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct 11 00:00:57 np0005480824 nova_compute[260089]: 2025-10-11 04:00:57.984 2 INFO nova.virt.libvirt.driver [None req-70059496-075d-4b25-b7d4-30cc1c5f0554 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Successfully detached device vdb from instance d5aa10c6-5a8f-419f-8f0d-89bc251d13b1 from the persistent domain config.#033[00m
Oct 11 00:00:57 np0005480824 nova_compute[260089]: 2025-10-11 04:00:57.985 2 DEBUG nova.virt.libvirt.driver [None req-70059496-075d-4b25-b7d4-30cc1c5f0554 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance d5aa10c6-5a8f-419f-8f0d-89bc251d13b1 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Oct 11 00:00:57 np0005480824 nova_compute[260089]: 2025-10-11 04:00:57.985 2 DEBUG nova.virt.libvirt.guest [None req-70059496-075d-4b25-b7d4-30cc1c5f0554 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] detach device xml: <disk type="network" device="disk">
Oct 11 00:00:57 np0005480824 nova_compute[260089]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 00:00:57 np0005480824 nova_compute[260089]:  <source protocol="rbd" name="volumes/volume-b269b0e7-bdf3-4393-8a56-46cacca9b6fd">
Oct 11 00:00:57 np0005480824 nova_compute[260089]:    <host name="192.168.122.100" port="6789"/>
Oct 11 00:00:57 np0005480824 nova_compute[260089]:  </source>
Oct 11 00:00:57 np0005480824 nova_compute[260089]:  <target dev="vdb" bus="virtio"/>
Oct 11 00:00:57 np0005480824 nova_compute[260089]:  <serial>b269b0e7-bdf3-4393-8a56-46cacca9b6fd</serial>
Oct 11 00:00:57 np0005480824 nova_compute[260089]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 11 00:00:57 np0005480824 nova_compute[260089]: </disk>
Oct 11 00:00:57 np0005480824 nova_compute[260089]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct 11 00:00:58 np0005480824 nova_compute[260089]: 2025-10-11 04:00:58.098 2 DEBUG nova.virt.libvirt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Received event <DeviceRemovedEvent: 1760155258.0978968, d5aa10c6-5a8f-419f-8f0d-89bc251d13b1 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Oct 11 00:00:58 np0005480824 nova_compute[260089]: 2025-10-11 04:00:58.099 2 DEBUG nova.virt.libvirt.driver [None req-70059496-075d-4b25-b7d4-30cc1c5f0554 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance d5aa10c6-5a8f-419f-8f0d-89bc251d13b1 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Oct 11 00:00:58 np0005480824 nova_compute[260089]: 2025-10-11 04:00:58.101 2 INFO nova.virt.libvirt.driver [None req-70059496-075d-4b25-b7d4-30cc1c5f0554 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Successfully detached device vdb from instance d5aa10c6-5a8f-419f-8f0d-89bc251d13b1 from the live domain config.#033[00m
Oct 11 00:00:58 np0005480824 nova_compute[260089]: 2025-10-11 04:00:58.268 2 DEBUG nova.objects.instance [None req-70059496-075d-4b25-b7d4-30cc1c5f0554 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Lazy-loading 'flavor' on Instance uuid d5aa10c6-5a8f-419f-8f0d-89bc251d13b1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 00:00:58 np0005480824 nova_compute[260089]: 2025-10-11 04:00:58.304 2 DEBUG oslo_concurrency.lockutils [None req-70059496-075d-4b25-b7d4-30cc1c5f0554 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Lock "d5aa10c6-5a8f-419f-8f0d-89bc251d13b1" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.497s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:00:58 np0005480824 nova_compute[260089]: 2025-10-11 04:00:58.917 2 DEBUG nova.compute.manager [req-961c7a7b-8211-44cb-9847-f37153ef609b req-0c9d9ffa-5b61-4754-a7bc-c3c068b3202b 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] Received event network-vif-plugged-189ca3df-84bb-4d92-8c0e-5ded5a7cdaa3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 00:00:58 np0005480824 nova_compute[260089]: 2025-10-11 04:00:58.917 2 DEBUG oslo_concurrency.lockutils [req-961c7a7b-8211-44cb-9847-f37153ef609b req-0c9d9ffa-5b61-4754-a7bc-c3c068b3202b 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "42651a9c-7b98-4ad0-bf9d-430330b33968-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:00:58 np0005480824 nova_compute[260089]: 2025-10-11 04:00:58.917 2 DEBUG oslo_concurrency.lockutils [req-961c7a7b-8211-44cb-9847-f37153ef609b req-0c9d9ffa-5b61-4754-a7bc-c3c068b3202b 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "42651a9c-7b98-4ad0-bf9d-430330b33968-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:00:58 np0005480824 nova_compute[260089]: 2025-10-11 04:00:58.918 2 DEBUG oslo_concurrency.lockutils [req-961c7a7b-8211-44cb-9847-f37153ef609b req-0c9d9ffa-5b61-4754-a7bc-c3c068b3202b 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "42651a9c-7b98-4ad0-bf9d-430330b33968-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:00:58 np0005480824 nova_compute[260089]: 2025-10-11 04:00:58.918 2 DEBUG nova.compute.manager [req-961c7a7b-8211-44cb-9847-f37153ef609b req-0c9d9ffa-5b61-4754-a7bc-c3c068b3202b 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] No waiting events found dispatching network-vif-plugged-189ca3df-84bb-4d92-8c0e-5ded5a7cdaa3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 00:00:58 np0005480824 nova_compute[260089]: 2025-10-11 04:00:58.918 2 WARNING nova.compute.manager [req-961c7a7b-8211-44cb-9847-f37153ef609b req-0c9d9ffa-5b61-4754-a7bc-c3c068b3202b 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] Received unexpected event network-vif-plugged-189ca3df-84bb-4d92-8c0e-5ded5a7cdaa3 for instance with vm_state building and task_state spawning.#033[00m
Oct 11 00:00:58 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1661: 321 pgs: 321 active+clean; 3.4 GiB data, 3.6 GiB used, 56 GiB / 60 GiB avail; 255 KiB/s rd, 77 MiB/s wr, 223 op/s
Oct 11 00:00:59 np0005480824 nova_compute[260089]: 2025-10-11 04:00:59.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:00:59 np0005480824 nova_compute[260089]: 2025-10-11 04:00:59.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:00:59 np0005480824 nova_compute[260089]: 2025-10-11 04:00:59.531 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760155259.5308232, 42651a9c-7b98-4ad0-bf9d-430330b33968 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 00:00:59 np0005480824 nova_compute[260089]: 2025-10-11 04:00:59.531 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] VM Started (Lifecycle Event)#033[00m
Oct 11 00:00:59 np0005480824 nova_compute[260089]: 2025-10-11 04:00:59.534 2 DEBUG nova.compute.manager [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 00:00:59 np0005480824 nova_compute[260089]: 2025-10-11 04:00:59.539 2 DEBUG nova.virt.libvirt.driver [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 00:00:59 np0005480824 nova_compute[260089]: 2025-10-11 04:00:59.543 2 INFO nova.virt.libvirt.driver [-] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] Instance spawned successfully.#033[00m
Oct 11 00:00:59 np0005480824 nova_compute[260089]: 2025-10-11 04:00:59.543 2 DEBUG nova.virt.libvirt.driver [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 00:00:59 np0005480824 nova_compute[260089]: 2025-10-11 04:00:59.564 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 00:00:59 np0005480824 nova_compute[260089]: 2025-10-11 04:00:59.568 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 00:00:59 np0005480824 nova_compute[260089]: 2025-10-11 04:00:59.575 2 DEBUG nova.virt.libvirt.driver [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 00:00:59 np0005480824 nova_compute[260089]: 2025-10-11 04:00:59.575 2 DEBUG nova.virt.libvirt.driver [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 00:00:59 np0005480824 nova_compute[260089]: 2025-10-11 04:00:59.576 2 DEBUG nova.virt.libvirt.driver [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 00:00:59 np0005480824 nova_compute[260089]: 2025-10-11 04:00:59.576 2 DEBUG nova.virt.libvirt.driver [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 00:00:59 np0005480824 nova_compute[260089]: 2025-10-11 04:00:59.576 2 DEBUG nova.virt.libvirt.driver [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 00:00:59 np0005480824 nova_compute[260089]: 2025-10-11 04:00:59.577 2 DEBUG nova.virt.libvirt.driver [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 00:00:59 np0005480824 nova_compute[260089]: 2025-10-11 04:00:59.588 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 00:00:59 np0005480824 nova_compute[260089]: 2025-10-11 04:00:59.588 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760155259.533904, 42651a9c-7b98-4ad0-bf9d-430330b33968 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 00:00:59 np0005480824 nova_compute[260089]: 2025-10-11 04:00:59.589 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] VM Paused (Lifecycle Event)#033[00m
Oct 11 00:00:59 np0005480824 nova_compute[260089]: 2025-10-11 04:00:59.617 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 00:00:59 np0005480824 nova_compute[260089]: 2025-10-11 04:00:59.621 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760155259.5380867, 42651a9c-7b98-4ad0-bf9d-430330b33968 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 00:00:59 np0005480824 nova_compute[260089]: 2025-10-11 04:00:59.622 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] VM Resumed (Lifecycle Event)#033[00m
Oct 11 00:00:59 np0005480824 nova_compute[260089]: 2025-10-11 04:00:59.644 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 00:00:59 np0005480824 nova_compute[260089]: 2025-10-11 04:00:59.650 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 00:00:59 np0005480824 nova_compute[260089]: 2025-10-11 04:00:59.661 2 INFO nova.compute.manager [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] Took 9.38 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 00:00:59 np0005480824 nova_compute[260089]: 2025-10-11 04:00:59.661 2 DEBUG nova.compute.manager [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 00:00:59 np0005480824 nova_compute[260089]: 2025-10-11 04:00:59.676 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 00:00:59 np0005480824 nova_compute[260089]: 2025-10-11 04:00:59.756 2 INFO nova.compute.manager [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] Took 11.75 seconds to build instance.#033[00m
Oct 11 00:00:59 np0005480824 nova_compute[260089]: 2025-10-11 04:00:59.772 2 DEBUG oslo_concurrency.lockutils [None req-d0095622-37c8-4413-8289-c86755cf6ef6 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Lock "42651a9c-7b98-4ad0-bf9d-430330b33968" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.829s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:01:00 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 00:01:00 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3562017121' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 00:01:00 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 00:01:00 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3562017121' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 00:01:00 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:01:00 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1662: 321 pgs: 321 active+clean; 3.4 GiB data, 3.6 GiB used, 56 GiB / 60 GiB avail; 170 KiB/s rd, 65 MiB/s wr, 186 op/s
Oct 11 00:01:01 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e425 do_prune osdmap full prune enabled
Oct 11 00:01:01 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e426 e426: 3 total, 3 up, 3 in
Oct 11 00:01:01 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e426: 3 total, 3 up, 3 in
Oct 11 00:01:01 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 00:01:01 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4294761764' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 00:01:01 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 00:01:01 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4294761764' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 00:01:02 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 00:01:02 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 11 00:01:02 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 00:01:02 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 11 00:01:02 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 11 00:01:02 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 11 00:01:02 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 00:01:02 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2982403043' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 00:01:02 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 00:01:02 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2982403043' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 00:01:02 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 00:01:02 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 00:01:02 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 00:01:02 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 00:01:02 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 00:01:02 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 11 00:01:02 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev c08159de-40a8-4366-b150-4965c67a8a27 does not exist
Oct 11 00:01:02 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 51ea233b-b28c-47db-aef1-1bfc7554cb2b does not exist
Oct 11 00:01:02 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev cd1f76ee-6548-445a-a09b-2cee5d8dc6b9 does not exist
Oct 11 00:01:02 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 00:01:02 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 00:01:02 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 00:01:02 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 00:01:02 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 00:01:02 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 00:01:02 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1664: 321 pgs: 321 active+clean; 2.4 GiB data, 2.7 GiB used, 57 GiB / 60 GiB avail; 2.6 MiB/s rd, 44 MiB/s wr, 349 op/s
Oct 11 00:01:03 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 00:01:03 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/70834809' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 00:01:03 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 00:01:03 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/70834809' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 00:01:03 np0005480824 nova_compute[260089]: 2025-10-11 04:01:03.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:01:03 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:01:03.107 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:30:f4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'fe:89:7c:57:3f:71'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 00:01:03 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:01:03.109 162245 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 11 00:01:03 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 00:01:03 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 11 00:01:03 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 00:01:03 np0005480824 nova_compute[260089]: 2025-10-11 04:01:03.124 2 DEBUG nova.compute.manager [req-3e98252b-7ccb-429b-ae82-c7b27737772c req-cded2190-4ef4-44d0-846d-ca9bc75c930d 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] Received event network-changed-189ca3df-84bb-4d92-8c0e-5ded5a7cdaa3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 00:01:03 np0005480824 nova_compute[260089]: 2025-10-11 04:01:03.124 2 DEBUG nova.compute.manager [req-3e98252b-7ccb-429b-ae82-c7b27737772c req-cded2190-4ef4-44d0-846d-ca9bc75c930d 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] Refreshing instance network info cache due to event network-changed-189ca3df-84bb-4d92-8c0e-5ded5a7cdaa3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 00:01:03 np0005480824 nova_compute[260089]: 2025-10-11 04:01:03.124 2 DEBUG oslo_concurrency.lockutils [req-3e98252b-7ccb-429b-ae82-c7b27737772c req-cded2190-4ef4-44d0-846d-ca9bc75c930d 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "refresh_cache-42651a9c-7b98-4ad0-bf9d-430330b33968" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 00:01:03 np0005480824 nova_compute[260089]: 2025-10-11 04:01:03.124 2 DEBUG oslo_concurrency.lockutils [req-3e98252b-7ccb-429b-ae82-c7b27737772c req-cded2190-4ef4-44d0-846d-ca9bc75c930d 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquired lock "refresh_cache-42651a9c-7b98-4ad0-bf9d-430330b33968" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 00:01:03 np0005480824 nova_compute[260089]: 2025-10-11 04:01:03.125 2 DEBUG nova.network.neutron [req-3e98252b-7ccb-429b-ae82-c7b27737772c req-cded2190-4ef4-44d0-846d-ca9bc75c930d 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] Refreshing network info cache for port 189ca3df-84bb-4d92-8c0e-5ded5a7cdaa3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 00:01:03 np0005480824 podman[297526]: 2025-10-11 04:01:03.637726695 +0000 UTC m=+0.021976747 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 00:01:03 np0005480824 podman[297526]: 2025-10-11 04:01:03.764661516 +0000 UTC m=+0.148911548 container create 2cb1da23556fae6cce56323b716eb413d73d09acf92cad7fcd2021bbc1a9a3d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_yalow, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 00:01:03 np0005480824 systemd[1]: Started libpod-conmon-2cb1da23556fae6cce56323b716eb413d73d09acf92cad7fcd2021bbc1a9a3d6.scope.
Oct 11 00:01:03 np0005480824 systemd[1]: Started libcrun container.
Oct 11 00:01:03 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e426 do_prune osdmap full prune enabled
Oct 11 00:01:03 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e427 e427: 3 total, 3 up, 3 in
Oct 11 00:01:03 np0005480824 podman[297526]: 2025-10-11 04:01:03.893832377 +0000 UTC m=+0.278082499 container init 2cb1da23556fae6cce56323b716eb413d73d09acf92cad7fcd2021bbc1a9a3d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 00:01:03 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e427: 3 total, 3 up, 3 in
Oct 11 00:01:03 np0005480824 podman[297526]: 2025-10-11 04:01:03.912583278 +0000 UTC m=+0.296833310 container start 2cb1da23556fae6cce56323b716eb413d73d09acf92cad7fcd2021bbc1a9a3d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 11 00:01:03 np0005480824 podman[297526]: 2025-10-11 04:01:03.916171981 +0000 UTC m=+0.300422103 container attach 2cb1da23556fae6cce56323b716eb413d73d09acf92cad7fcd2021bbc1a9a3d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_yalow, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 00:01:03 np0005480824 elastic_yalow[297542]: 167 167
Oct 11 00:01:03 np0005480824 systemd[1]: libpod-2cb1da23556fae6cce56323b716eb413d73d09acf92cad7fcd2021bbc1a9a3d6.scope: Deactivated successfully.
Oct 11 00:01:03 np0005480824 podman[297526]: 2025-10-11 04:01:03.925094706 +0000 UTC m=+0.309344748 container died 2cb1da23556fae6cce56323b716eb413d73d09acf92cad7fcd2021bbc1a9a3d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_yalow, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 11 00:01:03 np0005480824 systemd[1]: var-lib-containers-storage-overlay-d4ff526dcd3a5acb4a7f53545c6d4518071287cba785781a7664f19a2f2d584a-merged.mount: Deactivated successfully.
Oct 11 00:01:03 np0005480824 podman[297526]: 2025-10-11 04:01:03.973159722 +0000 UTC m=+0.357409774 container remove 2cb1da23556fae6cce56323b716eb413d73d09acf92cad7fcd2021bbc1a9a3d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_yalow, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 00:01:04 np0005480824 systemd[1]: libpod-conmon-2cb1da23556fae6cce56323b716eb413d73d09acf92cad7fcd2021bbc1a9a3d6.scope: Deactivated successfully.
Oct 11 00:01:04 np0005480824 podman[297567]: 2025-10-11 04:01:04.16302436 +0000 UTC m=+0.049950440 container create 409b7fab04eeaf1c0e82e5426b6b6faeec9f0154e5a8d7035ac026e04b58d5a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_heyrovsky, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 00:01:04 np0005480824 systemd[1]: Started libpod-conmon-409b7fab04eeaf1c0e82e5426b6b6faeec9f0154e5a8d7035ac026e04b58d5a0.scope.
Oct 11 00:01:04 np0005480824 podman[297567]: 2025-10-11 04:01:04.14477811 +0000 UTC m=+0.031704230 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 00:01:04 np0005480824 systemd[1]: Started libcrun container.
Oct 11 00:01:04 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/162c72567e09befcc84ef7afc978baccc2113126505bb10091fd627fe07c41ec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 00:01:04 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/162c72567e09befcc84ef7afc978baccc2113126505bb10091fd627fe07c41ec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 00:01:04 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/162c72567e09befcc84ef7afc978baccc2113126505bb10091fd627fe07c41ec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 00:01:04 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/162c72567e09befcc84ef7afc978baccc2113126505bb10091fd627fe07c41ec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 00:01:04 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/162c72567e09befcc84ef7afc978baccc2113126505bb10091fd627fe07c41ec/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 00:01:04 np0005480824 podman[297567]: 2025-10-11 04:01:04.28081573 +0000 UTC m=+0.167741910 container init 409b7fab04eeaf1c0e82e5426b6b6faeec9f0154e5a8d7035ac026e04b58d5a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_heyrovsky, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 00:01:04 np0005480824 podman[297567]: 2025-10-11 04:01:04.295822635 +0000 UTC m=+0.182748735 container start 409b7fab04eeaf1c0e82e5426b6b6faeec9f0154e5a8d7035ac026e04b58d5a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 11 00:01:04 np0005480824 podman[297567]: 2025-10-11 04:01:04.300850931 +0000 UTC m=+0.187777111 container attach 409b7fab04eeaf1c0e82e5426b6b6faeec9f0154e5a8d7035ac026e04b58d5a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 00:01:04 np0005480824 nova_compute[260089]: 2025-10-11 04:01:04.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:01:04 np0005480824 nova_compute[260089]: 2025-10-11 04:01:04.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:01:04 np0005480824 nova_compute[260089]: 2025-10-11 04:01:04.460 2 DEBUG nova.network.neutron [req-3e98252b-7ccb-429b-ae82-c7b27737772c req-cded2190-4ef4-44d0-846d-ca9bc75c930d 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] Updated VIF entry in instance network info cache for port 189ca3df-84bb-4d92-8c0e-5ded5a7cdaa3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 00:01:04 np0005480824 nova_compute[260089]: 2025-10-11 04:01:04.461 2 DEBUG nova.network.neutron [req-3e98252b-7ccb-429b-ae82-c7b27737772c req-cded2190-4ef4-44d0-846d-ca9bc75c930d 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] Updating instance_info_cache with network_info: [{"id": "189ca3df-84bb-4d92-8c0e-5ded5a7cdaa3", "address": "fa:16:3e:4f:8c:b6", "network": {"id": "15a62ee0-8e34-4e49-990e-246b4ef9e0c6", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1498494916-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0e73ded2f2ee46b4a7485c01ef1b73e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap189ca3df-84", "ovs_interfaceid": "189ca3df-84bb-4d92-8c0e-5ded5a7cdaa3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 00:01:04 np0005480824 nova_compute[260089]: 2025-10-11 04:01:04.504 2 DEBUG oslo_concurrency.lockutils [req-3e98252b-7ccb-429b-ae82-c7b27737772c req-cded2190-4ef4-44d0-846d-ca9bc75c930d 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Releasing lock "refresh_cache-42651a9c-7b98-4ad0-bf9d-430330b33968" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 00:01:04 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e427 do_prune osdmap full prune enabled
Oct 11 00:01:04 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e428 e428: 3 total, 3 up, 3 in
Oct 11 00:01:04 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e428: 3 total, 3 up, 3 in
Oct 11 00:01:04 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1667: 321 pgs: 321 active+clean; 2.4 GiB data, 2.7 GiB used, 57 GiB / 60 GiB avail; 4.0 MiB/s rd, 9.3 MiB/s wr, 398 op/s
Oct 11 00:01:05 np0005480824 serene_heyrovsky[297584]: --> passed data devices: 0 physical, 3 LVM
Oct 11 00:01:05 np0005480824 serene_heyrovsky[297584]: --> relative data size: 1.0
Oct 11 00:01:05 np0005480824 serene_heyrovsky[297584]: --> All data devices are unavailable
Oct 11 00:01:05 np0005480824 systemd[1]: libpod-409b7fab04eeaf1c0e82e5426b6b6faeec9f0154e5a8d7035ac026e04b58d5a0.scope: Deactivated successfully.
Oct 11 00:01:05 np0005480824 systemd[1]: libpod-409b7fab04eeaf1c0e82e5426b6b6faeec9f0154e5a8d7035ac026e04b58d5a0.scope: Consumed 1.065s CPU time.
Oct 11 00:01:05 np0005480824 podman[297613]: 2025-10-11 04:01:05.487267795 +0000 UTC m=+0.030330289 container died 409b7fab04eeaf1c0e82e5426b6b6faeec9f0154e5a8d7035ac026e04b58d5a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 11 00:01:05 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:01:05 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e428 do_prune osdmap full prune enabled
Oct 11 00:01:05 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e429 e429: 3 total, 3 up, 3 in
Oct 11 00:01:05 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e429: 3 total, 3 up, 3 in
Oct 11 00:01:05 np0005480824 systemd[1]: var-lib-containers-storage-overlay-162c72567e09befcc84ef7afc978baccc2113126505bb10091fd627fe07c41ec-merged.mount: Deactivated successfully.
Oct 11 00:01:05 np0005480824 podman[297613]: 2025-10-11 04:01:05.720847339 +0000 UTC m=+0.263909813 container remove 409b7fab04eeaf1c0e82e5426b6b6faeec9f0154e5a8d7035ac026e04b58d5a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_heyrovsky, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 00:01:05 np0005480824 systemd[1]: libpod-conmon-409b7fab04eeaf1c0e82e5426b6b6faeec9f0154e5a8d7035ac026e04b58d5a0.scope: Deactivated successfully.
Oct 11 00:01:05 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 00:01:05 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1512177918' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 00:01:05 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 00:01:05 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1512177918' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 00:01:06 np0005480824 podman[297768]: 2025-10-11 04:01:06.425271815 +0000 UTC m=+0.040820100 container create 9906174d0527c9faa870314ce8501d6cbaf84264171d3d9abb5a75b60d8d3937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 11 00:01:06 np0005480824 systemd[1]: Started libpod-conmon-9906174d0527c9faa870314ce8501d6cbaf84264171d3d9abb5a75b60d8d3937.scope.
Oct 11 00:01:06 np0005480824 systemd[1]: Started libcrun container.
Oct 11 00:01:06 np0005480824 podman[297768]: 2025-10-11 04:01:06.40985062 +0000 UTC m=+0.025398915 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 00:01:06 np0005480824 podman[297768]: 2025-10-11 04:01:06.510470575 +0000 UTC m=+0.126018880 container init 9906174d0527c9faa870314ce8501d6cbaf84264171d3d9abb5a75b60d8d3937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 11 00:01:06 np0005480824 podman[297768]: 2025-10-11 04:01:06.51636173 +0000 UTC m=+0.131910025 container start 9906174d0527c9faa870314ce8501d6cbaf84264171d3d9abb5a75b60d8d3937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_lederberg, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 00:01:06 np0005480824 beautiful_lederberg[297785]: 167 167
Oct 11 00:01:06 np0005480824 systemd[1]: libpod-9906174d0527c9faa870314ce8501d6cbaf84264171d3d9abb5a75b60d8d3937.scope: Deactivated successfully.
Oct 11 00:01:06 np0005480824 podman[297768]: 2025-10-11 04:01:06.523579116 +0000 UTC m=+0.139127421 container attach 9906174d0527c9faa870314ce8501d6cbaf84264171d3d9abb5a75b60d8d3937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_lederberg, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 11 00:01:06 np0005480824 podman[297768]: 2025-10-11 04:01:06.524372735 +0000 UTC m=+0.139921030 container died 9906174d0527c9faa870314ce8501d6cbaf84264171d3d9abb5a75b60d8d3937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_lederberg, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 11 00:01:06 np0005480824 systemd[1]: var-lib-containers-storage-overlay-d0a03ff1add5937803f43485e65fa49019a9720605a6645126d184a16aa8f201-merged.mount: Deactivated successfully.
Oct 11 00:01:06 np0005480824 podman[297768]: 2025-10-11 04:01:06.599128654 +0000 UTC m=+0.214676979 container remove 9906174d0527c9faa870314ce8501d6cbaf84264171d3d9abb5a75b60d8d3937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 00:01:06 np0005480824 systemd[1]: libpod-conmon-9906174d0527c9faa870314ce8501d6cbaf84264171d3d9abb5a75b60d8d3937.scope: Deactivated successfully.
Oct 11 00:01:06 np0005480824 podman[297808]: 2025-10-11 04:01:06.817436827 +0000 UTC m=+0.055334064 container create 5c77e8e460d6b5bf440ab4ebfd2d5ba9fd6cb914831156e7e324093e7d1cd7c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 00:01:06 np0005480824 systemd[1]: Started libpod-conmon-5c77e8e460d6b5bf440ab4ebfd2d5ba9fd6cb914831156e7e324093e7d1cd7c4.scope.
Oct 11 00:01:06 np0005480824 podman[297808]: 2025-10-11 04:01:06.798053641 +0000 UTC m=+0.035950878 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 00:01:06 np0005480824 systemd[1]: Started libcrun container.
Oct 11 00:01:06 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57a30d60b75a83dc4e31bd5fc2869c54836a13a0e61d0ea5de2d51ee5805b9be/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 00:01:06 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57a30d60b75a83dc4e31bd5fc2869c54836a13a0e61d0ea5de2d51ee5805b9be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 00:01:06 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57a30d60b75a83dc4e31bd5fc2869c54836a13a0e61d0ea5de2d51ee5805b9be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 00:01:06 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57a30d60b75a83dc4e31bd5fc2869c54836a13a0e61d0ea5de2d51ee5805b9be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 00:01:06 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e429 do_prune osdmap full prune enabled
Oct 11 00:01:06 np0005480824 podman[297808]: 2025-10-11 04:01:06.920443277 +0000 UTC m=+0.158340604 container init 5c77e8e460d6b5bf440ab4ebfd2d5ba9fd6cb914831156e7e324093e7d1cd7c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 11 00:01:06 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e430 e430: 3 total, 3 up, 3 in
Oct 11 00:01:06 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e430: 3 total, 3 up, 3 in
Oct 11 00:01:06 np0005480824 podman[297808]: 2025-10-11 04:01:06.929899284 +0000 UTC m=+0.167796491 container start 5c77e8e460d6b5bf440ab4ebfd2d5ba9fd6cb914831156e7e324093e7d1cd7c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 00:01:06 np0005480824 podman[297808]: 2025-10-11 04:01:06.93449446 +0000 UTC m=+0.172391727 container attach 5c77e8e460d6b5bf440ab4ebfd2d5ba9fd6cb914831156e7e324093e7d1cd7c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 11 00:01:06 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1670: 321 pgs: 2 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 309 active+clean; 2.4 GiB data, 2.7 GiB used, 57 GiB / 60 GiB avail; 39 KiB/s rd, 3.2 KiB/s wr, 60 op/s
Oct 11 00:01:07 np0005480824 festive_rubin[297825]: {
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:    "0": [
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:        {
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:            "devices": [
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:                "/dev/loop3"
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:            ],
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:            "lv_name": "ceph_lv0",
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:            "lv_size": "21470642176",
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0d82ce-20ea-470d-959e-f67202028a60,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:            "lv_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:            "name": "ceph_lv0",
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:            "tags": {
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:                "ceph.block_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:                "ceph.cephx_lockbox_secret": "",
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:                "ceph.cluster_name": "ceph",
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:                "ceph.crush_device_class": "",
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:                "ceph.encrypted": "0",
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:                "ceph.osd_fsid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:                "ceph.osd_id": "0",
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:                "ceph.type": "block",
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:                "ceph.vdo": "0"
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:            },
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:            "type": "block",
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:            "vg_name": "ceph_vg0"
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:        }
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:    ],
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:    "1": [
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:        {
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:            "devices": [
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:                "/dev/loop4"
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:            ],
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:            "lv_name": "ceph_lv1",
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:            "lv_size": "21470642176",
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6875119e-c210-4ad1-aca9-6a8084a5ecc8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:            "lv_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:            "name": "ceph_lv1",
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:            "tags": {
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:                "ceph.block_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:                "ceph.cephx_lockbox_secret": "",
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:                "ceph.cluster_name": "ceph",
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:                "ceph.crush_device_class": "",
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:                "ceph.encrypted": "0",
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:                "ceph.osd_fsid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:                "ceph.osd_id": "1",
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:                "ceph.type": "block",
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:                "ceph.vdo": "0"
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:            },
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:            "type": "block",
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:            "vg_name": "ceph_vg1"
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:        }
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:    ],
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:    "2": [
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:        {
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:            "devices": [
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:                "/dev/loop5"
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:            ],
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:            "lv_name": "ceph_lv2",
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:            "lv_size": "21470642176",
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e86945e8-6909-4584-9098-cee0dfe9add4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:            "lv_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:            "name": "ceph_lv2",
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:            "tags": {
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:                "ceph.block_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:                "ceph.cephx_lockbox_secret": "",
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:                "ceph.cluster_name": "ceph",
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:                "ceph.crush_device_class": "",
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:                "ceph.encrypted": "0",
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:                "ceph.osd_fsid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:                "ceph.osd_id": "2",
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:                "ceph.type": "block",
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:                "ceph.vdo": "0"
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:            },
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:            "type": "block",
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:            "vg_name": "ceph_vg2"
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:        }
Oct 11 00:01:07 np0005480824 festive_rubin[297825]:    ]
Oct 11 00:01:07 np0005480824 festive_rubin[297825]: }
Oct 11 00:01:07 np0005480824 systemd[1]: libpod-5c77e8e460d6b5bf440ab4ebfd2d5ba9fd6cb914831156e7e324093e7d1cd7c4.scope: Deactivated successfully.
Oct 11 00:01:07 np0005480824 podman[297808]: 2025-10-11 04:01:07.729806947 +0000 UTC m=+0.967704154 container died 5c77e8e460d6b5bf440ab4ebfd2d5ba9fd6cb914831156e7e324093e7d1cd7c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_rubin, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 00:01:07 np0005480824 systemd[1]: var-lib-containers-storage-overlay-57a30d60b75a83dc4e31bd5fc2869c54836a13a0e61d0ea5de2d51ee5805b9be-merged.mount: Deactivated successfully.
Oct 11 00:01:07 np0005480824 podman[297808]: 2025-10-11 04:01:07.814378032 +0000 UTC m=+1.052275289 container remove 5c77e8e460d6b5bf440ab4ebfd2d5ba9fd6cb914831156e7e324093e7d1cd7c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_rubin, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Oct 11 00:01:07 np0005480824 systemd[1]: libpod-conmon-5c77e8e460d6b5bf440ab4ebfd2d5ba9fd6cb914831156e7e324093e7d1cd7c4.scope: Deactivated successfully.
Oct 11 00:01:07 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e430 do_prune osdmap full prune enabled
Oct 11 00:01:07 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e431 e431: 3 total, 3 up, 3 in
Oct 11 00:01:07 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e431: 3 total, 3 up, 3 in
Oct 11 00:01:08 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:01:08.110 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=14b06507-d00b-4e27-a47d-46a5c2644635, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 00:01:08 np0005480824 podman[297919]: 2025-10-11 04:01:08.243334911 +0000 UTC m=+0.093277517 container health_status f2b19cad22d0fbdd185b264c3c8b6443d09a02319dc9fb0585dc69a0f24a758d (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 00:01:08 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 00:01:08 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3298445240' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 00:01:08 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 00:01:08 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3298445240' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 00:01:08 np0005480824 podman[297918]: 2025-10-11 04:01:08.264881727 +0000 UTC m=+0.107899724 container health_status 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 11 00:01:08 np0005480824 podman[298019]: 2025-10-11 04:01:08.626131958 +0000 UTC m=+0.046352458 container create d03646c8b6f9dce2bfa97efb5cbf5795c11ec4f8b4d951f2a3fb2ad1709aa0da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_mahavira, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 00:01:08 np0005480824 systemd[1]: Started libpod-conmon-d03646c8b6f9dce2bfa97efb5cbf5795c11ec4f8b4d951f2a3fb2ad1709aa0da.scope.
Oct 11 00:01:08 np0005480824 podman[298019]: 2025-10-11 04:01:08.604902759 +0000 UTC m=+0.025123269 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 00:01:08 np0005480824 systemd[1]: Started libcrun container.
Oct 11 00:01:08 np0005480824 podman[298019]: 2025-10-11 04:01:08.753338354 +0000 UTC m=+0.173558894 container init d03646c8b6f9dce2bfa97efb5cbf5795c11ec4f8b4d951f2a3fb2ad1709aa0da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 00:01:08 np0005480824 podman[298019]: 2025-10-11 04:01:08.760778365 +0000 UTC m=+0.180998855 container start d03646c8b6f9dce2bfa97efb5cbf5795c11ec4f8b4d951f2a3fb2ad1709aa0da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_mahavira, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 11 00:01:08 np0005480824 podman[298019]: 2025-10-11 04:01:08.764160663 +0000 UTC m=+0.184381223 container attach d03646c8b6f9dce2bfa97efb5cbf5795c11ec4f8b4d951f2a3fb2ad1709aa0da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 00:01:08 np0005480824 silly_mahavira[298035]: 167 167
Oct 11 00:01:08 np0005480824 systemd[1]: libpod-d03646c8b6f9dce2bfa97efb5cbf5795c11ec4f8b4d951f2a3fb2ad1709aa0da.scope: Deactivated successfully.
Oct 11 00:01:08 np0005480824 podman[298019]: 2025-10-11 04:01:08.768913862 +0000 UTC m=+0.189134352 container died d03646c8b6f9dce2bfa97efb5cbf5795c11ec4f8b4d951f2a3fb2ad1709aa0da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_mahavira, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 11 00:01:08 np0005480824 systemd[1]: var-lib-containers-storage-overlay-9d0895bad71e6d519be440205f3f476594599fb954ac3e40992baadaa442f5a4-merged.mount: Deactivated successfully.
Oct 11 00:01:08 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 00:01:08 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2202815457' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 00:01:08 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 00:01:08 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2202815457' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 00:01:08 np0005480824 podman[298019]: 2025-10-11 04:01:08.810582461 +0000 UTC m=+0.230802941 container remove d03646c8b6f9dce2bfa97efb5cbf5795c11ec4f8b4d951f2a3fb2ad1709aa0da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 00:01:08 np0005480824 systemd[1]: libpod-conmon-d03646c8b6f9dce2bfa97efb5cbf5795c11ec4f8b4d951f2a3fb2ad1709aa0da.scope: Deactivated successfully.
Oct 11 00:01:08 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1672: 321 pgs: 6 active+clean+snaptrim, 20 active+clean+snaptrim_wait, 295 active+clean; 2.3 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 175 KiB/s rd, 17 KiB/s wr, 249 op/s
Oct 11 00:01:09 np0005480824 podman[298058]: 2025-10-11 04:01:09.040963781 +0000 UTC m=+0.057841332 container create b825a786bea39c4606edf67bc653b2f002949df2281dc4f4e2bdc2a4f1537443 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_clarke, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 11 00:01:09 np0005480824 systemd[1]: Started libpod-conmon-b825a786bea39c4606edf67bc653b2f002949df2281dc4f4e2bdc2a4f1537443.scope.
Oct 11 00:01:09 np0005480824 podman[298058]: 2025-10-11 04:01:09.020408159 +0000 UTC m=+0.037285730 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 00:01:09 np0005480824 nova_compute[260089]: 2025-10-11 04:01:09.129 2 DEBUG oslo_concurrency.lockutils [None req-083cd2c7-d1e8-493f-acd6-ecad39185c2d 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Acquiring lock "d5aa10c6-5a8f-419f-8f0d-89bc251d13b1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:01:09 np0005480824 nova_compute[260089]: 2025-10-11 04:01:09.131 2 DEBUG oslo_concurrency.lockutils [None req-083cd2c7-d1e8-493f-acd6-ecad39185c2d 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Lock "d5aa10c6-5a8f-419f-8f0d-89bc251d13b1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:01:09 np0005480824 nova_compute[260089]: 2025-10-11 04:01:09.131 2 DEBUG oslo_concurrency.lockutils [None req-083cd2c7-d1e8-493f-acd6-ecad39185c2d 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Acquiring lock "d5aa10c6-5a8f-419f-8f0d-89bc251d13b1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:01:09 np0005480824 nova_compute[260089]: 2025-10-11 04:01:09.131 2 DEBUG oslo_concurrency.lockutils [None req-083cd2c7-d1e8-493f-acd6-ecad39185c2d 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Lock "d5aa10c6-5a8f-419f-8f0d-89bc251d13b1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:01:09 np0005480824 nova_compute[260089]: 2025-10-11 04:01:09.132 2 DEBUG oslo_concurrency.lockutils [None req-083cd2c7-d1e8-493f-acd6-ecad39185c2d 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Lock "d5aa10c6-5a8f-419f-8f0d-89bc251d13b1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:01:09 np0005480824 systemd[1]: Started libcrun container.
Oct 11 00:01:09 np0005480824 nova_compute[260089]: 2025-10-11 04:01:09.135 2 INFO nova.compute.manager [None req-083cd2c7-d1e8-493f-acd6-ecad39185c2d 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Terminating instance#033[00m
Oct 11 00:01:09 np0005480824 nova_compute[260089]: 2025-10-11 04:01:09.137 2 DEBUG nova.compute.manager [None req-083cd2c7-d1e8-493f-acd6-ecad39185c2d 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 00:01:09 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ae1cecce5c39d04a899da86cc1bd1ce5df876c504ad9cfa188a65dfca656484/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 00:01:09 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ae1cecce5c39d04a899da86cc1bd1ce5df876c504ad9cfa188a65dfca656484/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 00:01:09 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ae1cecce5c39d04a899da86cc1bd1ce5df876c504ad9cfa188a65dfca656484/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 00:01:09 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ae1cecce5c39d04a899da86cc1bd1ce5df876c504ad9cfa188a65dfca656484/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 00:01:09 np0005480824 podman[298058]: 2025-10-11 04:01:09.158829262 +0000 UTC m=+0.175706863 container init b825a786bea39c4606edf67bc653b2f002949df2281dc4f4e2bdc2a4f1537443 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_clarke, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 00:01:09 np0005480824 podman[298058]: 2025-10-11 04:01:09.176891668 +0000 UTC m=+0.193769219 container start b825a786bea39c4606edf67bc653b2f002949df2281dc4f4e2bdc2a4f1537443 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_clarke, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 11 00:01:09 np0005480824 podman[298058]: 2025-10-11 04:01:09.181081954 +0000 UTC m=+0.197959555 container attach b825a786bea39c4606edf67bc653b2f002949df2281dc4f4e2bdc2a4f1537443 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_clarke, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 11 00:01:09 np0005480824 kernel: tapbfcdfd4b-fc (unregistering): left promiscuous mode
Oct 11 00:01:09 np0005480824 NetworkManager[44969]: <info>  [1760155269.2387] device (tapbfcdfd4b-fc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 00:01:09 np0005480824 nova_compute[260089]: 2025-10-11 04:01:09.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:01:09 np0005480824 ovn_controller[152667]: 2025-10-11T04:01:09Z|00229|binding|INFO|Releasing lport bfcdfd4b-fcfe-45df-af5d-b65bf0a23633 from this chassis (sb_readonly=0)
Oct 11 00:01:09 np0005480824 ovn_controller[152667]: 2025-10-11T04:01:09Z|00230|binding|INFO|Setting lport bfcdfd4b-fcfe-45df-af5d-b65bf0a23633 down in Southbound
Oct 11 00:01:09 np0005480824 ovn_controller[152667]: 2025-10-11T04:01:09Z|00231|binding|INFO|Removing iface tapbfcdfd4b-fc ovn-installed in OVS
Oct 11 00:01:09 np0005480824 nova_compute[260089]: 2025-10-11 04:01:09.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:01:09 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:01:09.263 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:91:5e:e0 10.100.0.3'], port_security=['fa:16:3e:91:5e:e0 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'd5aa10c6-5a8f-419f-8f0d-89bc251d13b1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b07c8c86-7240-4ba7-b1d8-b3c98c1e89bc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4dd4975fff494ac1b725d3dfb95c6006', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e4ef8f7d-3ac8-4d30-8829-c4ed9b98b54a e9a34696-927d-4453-87ad-83f2f968d44b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.184'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6ab2ca03-a847-453e-af7d-73f5101b8a17, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], logical_port=bfcdfd4b-fcfe-45df-af5d-b65bf0a23633) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 00:01:09 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:01:09.265 162245 INFO neutron.agent.ovn.metadata.agent [-] Port bfcdfd4b-fcfe-45df-af5d-b65bf0a23633 in datapath b07c8c86-7240-4ba7-b1d8-b3c98c1e89bc unbound from our chassis#033[00m
Oct 11 00:01:09 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:01:09.266 162245 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b07c8c86-7240-4ba7-b1d8-b3c98c1e89bc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 00:01:09 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:01:09.268 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[97a94dbb-7ee2-40db-8c9d-306b3fcb0288]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:01:09 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:01:09.268 162245 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b07c8c86-7240-4ba7-b1d8-b3c98c1e89bc namespace which is not needed anymore#033[00m
Oct 11 00:01:09 np0005480824 nova_compute[260089]: 2025-10-11 04:01:09.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:01:09 np0005480824 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000016.scope: Deactivated successfully.
Oct 11 00:01:09 np0005480824 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000016.scope: Consumed 18.645s CPU time.
Oct 11 00:01:09 np0005480824 systemd-machined[215071]: Machine qemu-22-instance-00000016 terminated.
Oct 11 00:01:09 np0005480824 NetworkManager[44969]: <info>  [1760155269.3764] manager: (tapbfcdfd4b-fc): new Tun device (/org/freedesktop/NetworkManager/Devices/127)
Oct 11 00:01:09 np0005480824 nova_compute[260089]: 2025-10-11 04:01:09.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:01:09 np0005480824 nova_compute[260089]: 2025-10-11 04:01:09.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:01:09 np0005480824 nova_compute[260089]: 2025-10-11 04:01:09.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:01:09 np0005480824 nova_compute[260089]: 2025-10-11 04:01:09.421 2 INFO nova.virt.libvirt.driver [-] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Instance destroyed successfully.#033[00m
Oct 11 00:01:09 np0005480824 nova_compute[260089]: 2025-10-11 04:01:09.421 2 DEBUG nova.objects.instance [None req-083cd2c7-d1e8-493f-acd6-ecad39185c2d 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Lazy-loading 'resources' on Instance uuid d5aa10c6-5a8f-419f-8f0d-89bc251d13b1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 00:01:09 np0005480824 nova_compute[260089]: 2025-10-11 04:01:09.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:01:09 np0005480824 nova_compute[260089]: 2025-10-11 04:01:09.444 2 DEBUG nova.virt.libvirt.vif [None req-083cd2c7-d1e8-493f-acd6-ecad39185c2d 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T03:59:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SnapshotDataIntegrityTests-server-904123251',display_name='tempest-SnapshotDataIntegrityTests-server-904123251',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-snapshotdataintegritytests-server-904123251',id=22,image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKuna6dqBW7XaVzn9KR64NaVEmsQ5ulNl9/aDNcPGKoJrbjwghQAc5yJxj76ka5H3pzcoTC+gcMjG5T/OgM2nFxnE1kE2FMmYCpZF82zIpeYZgF/1YNvbKCgNcN4k8m/JQ==',key_name='tempest-keypair-1580539450',keypairs=<?>,launch_index=0,launched_at=2025-10-11T03:59:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4dd4975fff494ac1b725d3dfb95c6006',ramdisk_id='',reservation_id='r-lvzdhhy7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SnapshotDataIntegrityTests-1651128782',owner_user_name='tempest-SnapshotDataIntegrityTests-1651128782-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T03:59:44Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5d742fae0903462eaf9109fdb5176357',uuid=d5aa10c6-5a8f-419f-8f0d-89bc251d13b1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bfcdfd4b-fcfe-45df-af5d-b65bf0a23633", "address": "fa:16:3e:91:5e:e0", "network": {"id": "b07c8c86-7240-4ba7-b1d8-b3c98c1e89bc", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-707028039-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4dd4975fff494ac1b725d3dfb95c6006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfcdfd4b-fc", "ovs_interfaceid": "bfcdfd4b-fcfe-45df-af5d-b65bf0a23633", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 00:01:09 np0005480824 nova_compute[260089]: 2025-10-11 04:01:09.444 2 DEBUG nova.network.os_vif_util [None req-083cd2c7-d1e8-493f-acd6-ecad39185c2d 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Converting VIF {"id": "bfcdfd4b-fcfe-45df-af5d-b65bf0a23633", "address": "fa:16:3e:91:5e:e0", "network": {"id": "b07c8c86-7240-4ba7-b1d8-b3c98c1e89bc", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-707028039-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4dd4975fff494ac1b725d3dfb95c6006", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfcdfd4b-fc", "ovs_interfaceid": "bfcdfd4b-fcfe-45df-af5d-b65bf0a23633", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 00:01:09 np0005480824 nova_compute[260089]: 2025-10-11 04:01:09.445 2 DEBUG nova.network.os_vif_util [None req-083cd2c7-d1e8-493f-acd6-ecad39185c2d 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:91:5e:e0,bridge_name='br-int',has_traffic_filtering=True,id=bfcdfd4b-fcfe-45df-af5d-b65bf0a23633,network=Network(b07c8c86-7240-4ba7-b1d8-b3c98c1e89bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbfcdfd4b-fc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 00:01:09 np0005480824 nova_compute[260089]: 2025-10-11 04:01:09.446 2 DEBUG os_vif [None req-083cd2c7-d1e8-493f-acd6-ecad39185c2d 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:91:5e:e0,bridge_name='br-int',has_traffic_filtering=True,id=bfcdfd4b-fcfe-45df-af5d-b65bf0a23633,network=Network(b07c8c86-7240-4ba7-b1d8-b3c98c1e89bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbfcdfd4b-fc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 00:01:09 np0005480824 nova_compute[260089]: 2025-10-11 04:01:09.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:01:09 np0005480824 nova_compute[260089]: 2025-10-11 04:01:09.449 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbfcdfd4b-fc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 00:01:09 np0005480824 nova_compute[260089]: 2025-10-11 04:01:09.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:01:09 np0005480824 nova_compute[260089]: 2025-10-11 04:01:09.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 00:01:09 np0005480824 nova_compute[260089]: 2025-10-11 04:01:09.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:01:09 np0005480824 nova_compute[260089]: 2025-10-11 04:01:09.454 2 INFO os_vif [None req-083cd2c7-d1e8-493f-acd6-ecad39185c2d 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:91:5e:e0,bridge_name='br-int',has_traffic_filtering=True,id=bfcdfd4b-fcfe-45df-af5d-b65bf0a23633,network=Network(b07c8c86-7240-4ba7-b1d8-b3c98c1e89bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbfcdfd4b-fc')#033[00m
Oct 11 00:01:09 np0005480824 neutron-haproxy-ovnmeta-b07c8c86-7240-4ba7-b1d8-b3c98c1e89bc[294589]: [NOTICE]   (294593) : haproxy version is 2.8.14-c23fe91
Oct 11 00:01:09 np0005480824 neutron-haproxy-ovnmeta-b07c8c86-7240-4ba7-b1d8-b3c98c1e89bc[294589]: [NOTICE]   (294593) : path to executable is /usr/sbin/haproxy
Oct 11 00:01:09 np0005480824 neutron-haproxy-ovnmeta-b07c8c86-7240-4ba7-b1d8-b3c98c1e89bc[294589]: [WARNING]  (294593) : Exiting Master process...
Oct 11 00:01:09 np0005480824 neutron-haproxy-ovnmeta-b07c8c86-7240-4ba7-b1d8-b3c98c1e89bc[294589]: [WARNING]  (294593) : Exiting Master process...
Oct 11 00:01:09 np0005480824 neutron-haproxy-ovnmeta-b07c8c86-7240-4ba7-b1d8-b3c98c1e89bc[294589]: [ALERT]    (294593) : Current worker (294595) exited with code 143 (Terminated)
Oct 11 00:01:09 np0005480824 neutron-haproxy-ovnmeta-b07c8c86-7240-4ba7-b1d8-b3c98c1e89bc[294589]: [WARNING]  (294593) : All workers exited. Exiting... (0)
Oct 11 00:01:09 np0005480824 systemd[1]: libpod-c5d22a16e97784e2e9126b49ef61b89dd818f55a9ede383bfb904f350896e027.scope: Deactivated successfully.
Oct 11 00:01:09 np0005480824 podman[298105]: 2025-10-11 04:01:09.4690774 +0000 UTC m=+0.078803323 container died c5d22a16e97784e2e9126b49ef61b89dd818f55a9ede383bfb904f350896e027 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b07c8c86-7240-4ba7-b1d8-b3c98c1e89bc, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 11 00:01:09 np0005480824 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c5d22a16e97784e2e9126b49ef61b89dd818f55a9ede383bfb904f350896e027-userdata-shm.mount: Deactivated successfully.
Oct 11 00:01:09 np0005480824 systemd[1]: var-lib-containers-storage-overlay-344c9d4e496e165a49e75245dfeb9e0240db54c7b9950169735ac8a8deaf40a1-merged.mount: Deactivated successfully.
Oct 11 00:01:09 np0005480824 podman[298105]: 2025-10-11 04:01:09.570656358 +0000 UTC m=+0.180382281 container cleanup c5d22a16e97784e2e9126b49ef61b89dd818f55a9ede383bfb904f350896e027 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b07c8c86-7240-4ba7-b1d8-b3c98c1e89bc, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 11 00:01:09 np0005480824 systemd[1]: libpod-conmon-c5d22a16e97784e2e9126b49ef61b89dd818f55a9ede383bfb904f350896e027.scope: Deactivated successfully.
Oct 11 00:01:09 np0005480824 podman[298156]: 2025-10-11 04:01:09.677140067 +0000 UTC m=+0.073993113 container remove c5d22a16e97784e2e9126b49ef61b89dd818f55a9ede383bfb904f350896e027 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b07c8c86-7240-4ba7-b1d8-b3c98c1e89bc, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 11 00:01:09 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:01:09.691 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[4c527edf-b3cd-4af2-b746-04e3b7d09f7b]: (4, ('Sat Oct 11 04:01:09 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b07c8c86-7240-4ba7-b1d8-b3c98c1e89bc (c5d22a16e97784e2e9126b49ef61b89dd818f55a9ede383bfb904f350896e027)\nc5d22a16e97784e2e9126b49ef61b89dd818f55a9ede383bfb904f350896e027\nSat Oct 11 04:01:09 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b07c8c86-7240-4ba7-b1d8-b3c98c1e89bc (c5d22a16e97784e2e9126b49ef61b89dd818f55a9ede383bfb904f350896e027)\nc5d22a16e97784e2e9126b49ef61b89dd818f55a9ede383bfb904f350896e027\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:01:09 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:01:09.693 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[b310bb54-f908-4c3a-9c8e-7002c4b3464f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:01:09 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:01:09.694 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb07c8c86-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 00:01:09 np0005480824 nova_compute[260089]: 2025-10-11 04:01:09.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:01:09 np0005480824 kernel: tapb07c8c86-70: left promiscuous mode
Oct 11 00:01:09 np0005480824 nova_compute[260089]: 2025-10-11 04:01:09.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:01:09 np0005480824 nova_compute[260089]: 2025-10-11 04:01:09.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:01:09 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:01:09.728 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[9a5bbf63-ee91-4249-bc26-5e007df49141]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:01:09 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:01:09.758 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[0dfaf0f0-bb96-42dc-befe-e9e2fb5bb0e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:01:09 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:01:09.760 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[f304ab04-a876-41b7-bd2b-a8bba9ed39f2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:01:09 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:01:09.777 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[269905c9-2c51-4f21-be37-e6db2f698754]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 465098, 'reachable_time': 41343, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298169, 'error': None, 'target': 'ovnmeta-b07c8c86-7240-4ba7-b1d8-b3c98c1e89bc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:01:09 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:01:09.780 162666 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b07c8c86-7240-4ba7-b1d8-b3c98c1e89bc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 00:01:09 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:01:09.780 162666 DEBUG oslo.privsep.daemon [-] privsep: reply[a001b3f2-32ea-49aa-8864-40e405eea9c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:01:09 np0005480824 systemd[1]: run-netns-ovnmeta\x2db07c8c86\x2d7240\x2d4ba7\x2db1d8\x2db3c98c1e89bc.mount: Deactivated successfully.
Oct 11 00:01:09 np0005480824 nova_compute[260089]: 2025-10-11 04:01:09.910 2 INFO nova.virt.libvirt.driver [None req-083cd2c7-d1e8-493f-acd6-ecad39185c2d 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Deleting instance files /var/lib/nova/instances/d5aa10c6-5a8f-419f-8f0d-89bc251d13b1_del#033[00m
Oct 11 00:01:09 np0005480824 nova_compute[260089]: 2025-10-11 04:01:09.911 2 INFO nova.virt.libvirt.driver [None req-083cd2c7-d1e8-493f-acd6-ecad39185c2d 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Deletion of /var/lib/nova/instances/d5aa10c6-5a8f-419f-8f0d-89bc251d13b1_del complete#033[00m
Oct 11 00:01:09 np0005480824 nova_compute[260089]: 2025-10-11 04:01:09.967 2 DEBUG nova.compute.manager [req-3ffd631e-748d-4cdc-9f66-8ae3e49be2e9 req-7f3b1642-0703-4edc-abc3-0467a6976d68 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Received event network-vif-unplugged-bfcdfd4b-fcfe-45df-af5d-b65bf0a23633 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 00:01:09 np0005480824 nova_compute[260089]: 2025-10-11 04:01:09.967 2 DEBUG oslo_concurrency.lockutils [req-3ffd631e-748d-4cdc-9f66-8ae3e49be2e9 req-7f3b1642-0703-4edc-abc3-0467a6976d68 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "d5aa10c6-5a8f-419f-8f0d-89bc251d13b1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:01:09 np0005480824 nova_compute[260089]: 2025-10-11 04:01:09.967 2 DEBUG oslo_concurrency.lockutils [req-3ffd631e-748d-4cdc-9f66-8ae3e49be2e9 req-7f3b1642-0703-4edc-abc3-0467a6976d68 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "d5aa10c6-5a8f-419f-8f0d-89bc251d13b1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:01:09 np0005480824 nova_compute[260089]: 2025-10-11 04:01:09.967 2 DEBUG oslo_concurrency.lockutils [req-3ffd631e-748d-4cdc-9f66-8ae3e49be2e9 req-7f3b1642-0703-4edc-abc3-0467a6976d68 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "d5aa10c6-5a8f-419f-8f0d-89bc251d13b1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:01:09 np0005480824 nova_compute[260089]: 2025-10-11 04:01:09.968 2 DEBUG nova.compute.manager [req-3ffd631e-748d-4cdc-9f66-8ae3e49be2e9 req-7f3b1642-0703-4edc-abc3-0467a6976d68 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] No waiting events found dispatching network-vif-unplugged-bfcdfd4b-fcfe-45df-af5d-b65bf0a23633 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 00:01:09 np0005480824 nova_compute[260089]: 2025-10-11 04:01:09.968 2 DEBUG nova.compute.manager [req-3ffd631e-748d-4cdc-9f66-8ae3e49be2e9 req-7f3b1642-0703-4edc-abc3-0467a6976d68 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Received event network-vif-unplugged-bfcdfd4b-fcfe-45df-af5d-b65bf0a23633 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 00:01:09 np0005480824 nova_compute[260089]: 2025-10-11 04:01:09.990 2 INFO nova.compute.manager [None req-083cd2c7-d1e8-493f-acd6-ecad39185c2d 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Took 0.85 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 00:01:09 np0005480824 nova_compute[260089]: 2025-10-11 04:01:09.992 2 DEBUG oslo.service.loopingcall [None req-083cd2c7-d1e8-493f-acd6-ecad39185c2d 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 00:01:09 np0005480824 nova_compute[260089]: 2025-10-11 04:01:09.993 2 DEBUG nova.compute.manager [-] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 00:01:09 np0005480824 nova_compute[260089]: 2025-10-11 04:01:09.993 2 DEBUG nova.network.neutron [-] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 00:01:10 np0005480824 competent_clarke[298075]: {
Oct 11 00:01:10 np0005480824 competent_clarke[298075]:    "1d0d82ce-20ea-470d-959e-f67202028a60": {
Oct 11 00:01:10 np0005480824 competent_clarke[298075]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 11 00:01:10 np0005480824 competent_clarke[298075]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 00:01:10 np0005480824 competent_clarke[298075]:        "osd_id": 0,
Oct 11 00:01:10 np0005480824 competent_clarke[298075]:        "osd_uuid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 11 00:01:10 np0005480824 competent_clarke[298075]:        "type": "bluestore"
Oct 11 00:01:10 np0005480824 competent_clarke[298075]:    },
Oct 11 00:01:10 np0005480824 competent_clarke[298075]:    "6875119e-c210-4ad1-aca9-6a8084a5ecc8": {
Oct 11 00:01:10 np0005480824 competent_clarke[298075]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 11 00:01:10 np0005480824 competent_clarke[298075]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 00:01:10 np0005480824 competent_clarke[298075]:        "osd_id": 1,
Oct 11 00:01:10 np0005480824 competent_clarke[298075]:        "osd_uuid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 11 00:01:10 np0005480824 competent_clarke[298075]:        "type": "bluestore"
Oct 11 00:01:10 np0005480824 competent_clarke[298075]:    },
Oct 11 00:01:10 np0005480824 competent_clarke[298075]:    "e86945e8-6909-4584-9098-cee0dfe9add4": {
Oct 11 00:01:10 np0005480824 competent_clarke[298075]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 11 00:01:10 np0005480824 competent_clarke[298075]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 00:01:10 np0005480824 competent_clarke[298075]:        "osd_id": 2,
Oct 11 00:01:10 np0005480824 competent_clarke[298075]:        "osd_uuid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 11 00:01:10 np0005480824 competent_clarke[298075]:        "type": "bluestore"
Oct 11 00:01:10 np0005480824 competent_clarke[298075]:    }
Oct 11 00:01:10 np0005480824 competent_clarke[298075]: }
Oct 11 00:01:10 np0005480824 systemd[1]: libpod-b825a786bea39c4606edf67bc653b2f002949df2281dc4f4e2bdc2a4f1537443.scope: Deactivated successfully.
Oct 11 00:01:10 np0005480824 systemd[1]: libpod-b825a786bea39c4606edf67bc653b2f002949df2281dc4f4e2bdc2a4f1537443.scope: Consumed 1.079s CPU time.
Oct 11 00:01:10 np0005480824 conmon[298075]: conmon b825a786bea39c4606ed <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b825a786bea39c4606edf67bc653b2f002949df2281dc4f4e2bdc2a4f1537443.scope/container/memory.events
Oct 11 00:01:10 np0005480824 podman[298058]: 2025-10-11 04:01:10.290133179 +0000 UTC m=+1.307010730 container died b825a786bea39c4606edf67bc653b2f002949df2281dc4f4e2bdc2a4f1537443 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_clarke, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 00:01:10 np0005480824 systemd[1]: var-lib-containers-storage-overlay-3ae1cecce5c39d04a899da86cc1bd1ce5df876c504ad9cfa188a65dfca656484-merged.mount: Deactivated successfully.
Oct 11 00:01:10 np0005480824 podman[298058]: 2025-10-11 04:01:10.351764667 +0000 UTC m=+1.368642218 container remove b825a786bea39c4606edf67bc653b2f002949df2281dc4f4e2bdc2a4f1537443 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_clarke, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 00:01:10 np0005480824 systemd[1]: libpod-conmon-b825a786bea39c4606edf67bc653b2f002949df2281dc4f4e2bdc2a4f1537443.scope: Deactivated successfully.
Oct 11 00:01:10 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 00:01:10 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 11 00:01:10 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 00:01:10 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 11 00:01:10 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 9ad050a2-5024-40e7-b939-2708c1e59448 does not exist
Oct 11 00:01:10 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev fbcc2078-d9c3-47eb-a1c6-6bd5c5a5afa3 does not exist
Oct 11 00:01:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:01:10.507 162245 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:01:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:01:10.509 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:01:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:01:10.510 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:01:10 np0005480824 nova_compute[260089]: 2025-10-11 04:01:10.598 2 DEBUG nova.network.neutron [-] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 00:01:10 np0005480824 nova_compute[260089]: 2025-10-11 04:01:10.617 2 INFO nova.compute.manager [-] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Took 0.62 seconds to deallocate network for instance.#033[00m
Oct 11 00:01:10 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e431 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:01:10 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e431 do_prune osdmap full prune enabled
Oct 11 00:01:10 np0005480824 nova_compute[260089]: 2025-10-11 04:01:10.659 2 DEBUG oslo_concurrency.lockutils [None req-083cd2c7-d1e8-493f-acd6-ecad39185c2d 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:01:10 np0005480824 nova_compute[260089]: 2025-10-11 04:01:10.660 2 DEBUG oslo_concurrency.lockutils [None req-083cd2c7-d1e8-493f-acd6-ecad39185c2d 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:01:10 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e432 e432: 3 total, 3 up, 3 in
Oct 11 00:01:10 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e432: 3 total, 3 up, 3 in
Oct 11 00:01:10 np0005480824 nova_compute[260089]: 2025-10-11 04:01:10.742 2 DEBUG oslo_concurrency.processutils [None req-083cd2c7-d1e8-493f-acd6-ecad39185c2d 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:01:10 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1674: 321 pgs: 6 active+clean+snaptrim, 20 active+clean+snaptrim_wait, 295 active+clean; 2.3 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 134 KiB/s rd, 13 KiB/s wr, 190 op/s
Oct 11 00:01:11 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 00:01:11 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1675166031' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 00:01:11 np0005480824 nova_compute[260089]: 2025-10-11 04:01:11.224 2 DEBUG oslo_concurrency.processutils [None req-083cd2c7-d1e8-493f-acd6-ecad39185c2d 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:01:11 np0005480824 nova_compute[260089]: 2025-10-11 04:01:11.232 2 DEBUG nova.compute.provider_tree [None req-083cd2c7-d1e8-493f-acd6-ecad39185c2d 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 00:01:11 np0005480824 nova_compute[260089]: 2025-10-11 04:01:11.251 2 DEBUG nova.scheduler.client.report [None req-083cd2c7-d1e8-493f-acd6-ecad39185c2d 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 00:01:11 np0005480824 nova_compute[260089]: 2025-10-11 04:01:11.318 2 DEBUG oslo_concurrency.lockutils [None req-083cd2c7-d1e8-493f-acd6-ecad39185c2d 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.658s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:01:11 np0005480824 nova_compute[260089]: 2025-10-11 04:01:11.427 2 INFO nova.scheduler.client.report [None req-083cd2c7-d1e8-493f-acd6-ecad39185c2d 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Deleted allocations for instance d5aa10c6-5a8f-419f-8f0d-89bc251d13b1#033[00m
Oct 11 00:01:11 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 11 00:01:11 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 11 00:01:11 np0005480824 nova_compute[260089]: 2025-10-11 04:01:11.498 2 DEBUG oslo_concurrency.lockutils [None req-083cd2c7-d1e8-493f-acd6-ecad39185c2d 5d742fae0903462eaf9109fdb5176357 4dd4975fff494ac1b725d3dfb95c6006 - - default default] Lock "d5aa10c6-5a8f-419f-8f0d-89bc251d13b1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.368s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:01:11 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 00:01:11 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/910842045' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 00:01:11 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 00:01:11 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/910842045' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 00:01:12 np0005480824 nova_compute[260089]: 2025-10-11 04:01:12.038 2 DEBUG nova.compute.manager [req-d347f04c-c85b-4655-9b82-157a7d083ae0 req-a5af99b2-0318-4e93-8c64-9d0a23d461c5 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Received event network-vif-plugged-bfcdfd4b-fcfe-45df-af5d-b65bf0a23633 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 00:01:12 np0005480824 nova_compute[260089]: 2025-10-11 04:01:12.038 2 DEBUG oslo_concurrency.lockutils [req-d347f04c-c85b-4655-9b82-157a7d083ae0 req-a5af99b2-0318-4e93-8c64-9d0a23d461c5 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "d5aa10c6-5a8f-419f-8f0d-89bc251d13b1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:01:12 np0005480824 nova_compute[260089]: 2025-10-11 04:01:12.038 2 DEBUG oslo_concurrency.lockutils [req-d347f04c-c85b-4655-9b82-157a7d083ae0 req-a5af99b2-0318-4e93-8c64-9d0a23d461c5 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "d5aa10c6-5a8f-419f-8f0d-89bc251d13b1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:01:12 np0005480824 nova_compute[260089]: 2025-10-11 04:01:12.039 2 DEBUG oslo_concurrency.lockutils [req-d347f04c-c85b-4655-9b82-157a7d083ae0 req-a5af99b2-0318-4e93-8c64-9d0a23d461c5 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "d5aa10c6-5a8f-419f-8f0d-89bc251d13b1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:01:12 np0005480824 nova_compute[260089]: 2025-10-11 04:01:12.039 2 DEBUG nova.compute.manager [req-d347f04c-c85b-4655-9b82-157a7d083ae0 req-a5af99b2-0318-4e93-8c64-9d0a23d461c5 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] No waiting events found dispatching network-vif-plugged-bfcdfd4b-fcfe-45df-af5d-b65bf0a23633 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 00:01:12 np0005480824 nova_compute[260089]: 2025-10-11 04:01:12.039 2 WARNING nova.compute.manager [req-d347f04c-c85b-4655-9b82-157a7d083ae0 req-a5af99b2-0318-4e93-8c64-9d0a23d461c5 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Received unexpected event network-vif-plugged-bfcdfd4b-fcfe-45df-af5d-b65bf0a23633 for instance with vm_state deleted and task_state None.#033[00m
Oct 11 00:01:12 np0005480824 nova_compute[260089]: 2025-10-11 04:01:12.039 2 DEBUG nova.compute.manager [req-d347f04c-c85b-4655-9b82-157a7d083ae0 req-a5af99b2-0318-4e93-8c64-9d0a23d461c5 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Received event network-vif-deleted-bfcdfd4b-fcfe-45df-af5d-b65bf0a23633 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 00:01:12 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1675: 321 pgs: 321 active+clean; 270 MiB data, 690 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 31 KiB/s wr, 450 op/s
Oct 11 00:01:13 np0005480824 ovn_controller[152667]: 2025-10-11T04:01:13Z|00056|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.13 does not match offer 10.100.0.3
Oct 11 00:01:13 np0005480824 ovn_controller[152667]: 2025-10-11T04:01:13Z|00057|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:4f:8c:b6 10.100.0.3
Oct 11 00:01:14 np0005480824 nova_compute[260089]: 2025-10-11 04:01:14.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:01:14 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1676: 321 pgs: 321 active+clean; 270 MiB data, 690 MiB used, 59 GiB / 60 GiB avail; 937 KiB/s rd, 23 KiB/s wr, 338 op/s
Oct 11 00:01:15 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:01:15 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e432 do_prune osdmap full prune enabled
Oct 11 00:01:15 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e433 e433: 3 total, 3 up, 3 in
Oct 11 00:01:15 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e433: 3 total, 3 up, 3 in
Oct 11 00:01:15 np0005480824 ceph-mon[74326]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Oct 11 00:01:15 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:01:15.668776) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 00:01:15 np0005480824 ceph-mon[74326]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Oct 11 00:01:15 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155275668809, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2358, "num_deletes": 272, "total_data_size": 3437428, "memory_usage": 3492640, "flush_reason": "Manual Compaction"}
Oct 11 00:01:15 np0005480824 ceph-mon[74326]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Oct 11 00:01:15 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155275687025, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 3360622, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31696, "largest_seqno": 34053, "table_properties": {"data_size": 3349736, "index_size": 7060, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2757, "raw_key_size": 23555, "raw_average_key_size": 21, "raw_value_size": 3327631, "raw_average_value_size": 3030, "num_data_blocks": 307, "num_entries": 1098, "num_filter_entries": 1098, "num_deletions": 272, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760155102, "oldest_key_time": 1760155102, "file_creation_time": 1760155275, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc2c00b6-74ab-4bd1-957a-6c6a75fb61ca", "db_session_id": "RJ9TM4FJNNQ2AWDFT4YB", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Oct 11 00:01:15 np0005480824 ceph-mon[74326]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 18288 microseconds, and 6785 cpu microseconds.
Oct 11 00:01:15 np0005480824 ceph-mon[74326]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 00:01:15 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:01:15.687064) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 3360622 bytes OK
Oct 11 00:01:15 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:01:15.687083) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Oct 11 00:01:15 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:01:15.689113) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Oct 11 00:01:15 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:01:15.689126) EVENT_LOG_v1 {"time_micros": 1760155275689122, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 00:01:15 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:01:15.689141) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 00:01:15 np0005480824 ceph-mon[74326]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 3427190, prev total WAL file size 3427190, number of live WAL files 2.
Oct 11 00:01:15 np0005480824 ceph-mon[74326]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 00:01:15 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:01:15.690002) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Oct 11 00:01:15 np0005480824 ceph-mon[74326]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 00:01:15 np0005480824 ceph-mon[74326]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(3281KB)], [65(10219KB)]
Oct 11 00:01:15 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155275690043, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 13825384, "oldest_snapshot_seqno": -1}
Oct 11 00:01:15 np0005480824 ceph-mon[74326]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 6602 keys, 12123757 bytes, temperature: kUnknown
Oct 11 00:01:15 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155275787180, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 12123757, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12071225, "index_size": 34948, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16517, "raw_key_size": 165425, "raw_average_key_size": 25, "raw_value_size": 11944282, "raw_average_value_size": 1809, "num_data_blocks": 1409, "num_entries": 6602, "num_filter_entries": 6602, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760152715, "oldest_key_time": 0, "file_creation_time": 1760155275, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc2c00b6-74ab-4bd1-957a-6c6a75fb61ca", "db_session_id": "RJ9TM4FJNNQ2AWDFT4YB", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Oct 11 00:01:15 np0005480824 ceph-mon[74326]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 00:01:15 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:01:15.787489) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 12123757 bytes
Oct 11 00:01:15 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:01:15.789162) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 142.2 rd, 124.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 10.0 +0.0 blob) out(11.6 +0.0 blob), read-write-amplify(7.7) write-amplify(3.6) OK, records in: 7143, records dropped: 541 output_compression: NoCompression
Oct 11 00:01:15 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:01:15.789183) EVENT_LOG_v1 {"time_micros": 1760155275789173, "job": 36, "event": "compaction_finished", "compaction_time_micros": 97224, "compaction_time_cpu_micros": 48330, "output_level": 6, "num_output_files": 1, "total_output_size": 12123757, "num_input_records": 7143, "num_output_records": 6602, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 00:01:15 np0005480824 ceph-mon[74326]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 00:01:15 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155275790094, "job": 36, "event": "table_file_deletion", "file_number": 67}
Oct 11 00:01:15 np0005480824 ceph-mon[74326]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 00:01:15 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155275792584, "job": 36, "event": "table_file_deletion", "file_number": 65}
Oct 11 00:01:15 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:01:15.689896) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 00:01:15 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:01:15.792659) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 00:01:15 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:01:15.792664) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 00:01:15 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:01:15.792667) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 00:01:15 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:01:15.792669) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 00:01:15 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:01:15.792672) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 00:01:16 np0005480824 podman[298282]: 2025-10-11 04:01:16.084043682 +0000 UTC m=+0.137280159 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 11 00:01:16 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1678: 321 pgs: 321 active+clean; 270 MiB data, 690 MiB used, 59 GiB / 60 GiB avail; 986 KiB/s rd, 16 KiB/s wr, 248 op/s
Oct 11 00:01:16 np0005480824 ovn_controller[152667]: 2025-10-11T04:01:16Z|00058|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.13 does not match offer 10.100.0.3
Oct 11 00:01:16 np0005480824 ovn_controller[152667]: 2025-10-11T04:01:16Z|00059|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:4f:8c:b6 10.100.0.3
Oct 11 00:01:17 np0005480824 ovn_controller[152667]: 2025-10-11T04:01:17Z|00232|binding|INFO|Releasing lport 182275c4-a015-4f7a-8877-9961b2382f67 from this chassis (sb_readonly=0)
Oct 11 00:01:17 np0005480824 nova_compute[260089]: 2025-10-11 04:01:17.127 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:01:18 np0005480824 ovn_controller[152667]: 2025-10-11T04:01:18Z|00233|binding|INFO|Releasing lport 182275c4-a015-4f7a-8877-9961b2382f67 from this chassis (sb_readonly=0)
Oct 11 00:01:18 np0005480824 nova_compute[260089]: 2025-10-11 04:01:18.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:01:18 np0005480824 ovn_controller[152667]: 2025-10-11T04:01:18Z|00060|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4f:8c:b6 10.100.0.3
Oct 11 00:01:18 np0005480824 ovn_controller[152667]: 2025-10-11T04:01:18Z|00061|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4f:8c:b6 10.100.0.3
Oct 11 00:01:18 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1679: 321 pgs: 321 active+clean; 270 MiB data, 690 MiB used, 59 GiB / 60 GiB avail; 956 KiB/s rd, 35 KiB/s wr, 242 op/s
Oct 11 00:01:19 np0005480824 nova_compute[260089]: 2025-10-11 04:01:19.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:01:20 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:01:20 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1680: 321 pgs: 321 active+clean; 270 MiB data, 690 MiB used, 59 GiB / 60 GiB avail; 791 KiB/s rd, 29 KiB/s wr, 200 op/s
Oct 11 00:01:21 np0005480824 ovn_controller[152667]: 2025-10-11T04:01:21Z|00234|binding|INFO|Releasing lport 182275c4-a015-4f7a-8877-9961b2382f67 from this chassis (sb_readonly=0)
Oct 11 00:01:21 np0005480824 nova_compute[260089]: 2025-10-11 04:01:21.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:01:22 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1681: 321 pgs: 321 active+clean; 270 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 97 KiB/s rd, 17 KiB/s wr, 7 op/s
Oct 11 00:01:23 np0005480824 podman[298308]: 2025-10-11 04:01:23.018627538 +0000 UTC m=+0.073108123 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 11 00:01:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e433 do_prune osdmap full prune enabled
Oct 11 00:01:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e434 e434: 3 total, 3 up, 3 in
Oct 11 00:01:24 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e434: 3 total, 3 up, 3 in
Oct 11 00:01:24 np0005480824 nova_compute[260089]: 2025-10-11 04:01:24.418 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760155269.4168508, d5aa10c6-5a8f-419f-8f0d-89bc251d13b1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 00:01:24 np0005480824 nova_compute[260089]: 2025-10-11 04:01:24.418 2 INFO nova.compute.manager [-] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] VM Stopped (Lifecycle Event)#033[00m
Oct 11 00:01:24 np0005480824 nova_compute[260089]: 2025-10-11 04:01:24.440 2 DEBUG nova.compute.manager [None req-2c7035cb-bdac-4515-8f01-dcdcbdc06133 - - - - - -] [instance: d5aa10c6-5a8f-419f-8f0d-89bc251d13b1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 00:01:24 np0005480824 nova_compute[260089]: 2025-10-11 04:01:24.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:01:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 00:01:24 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3090358656' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 00:01:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 00:01:24 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3090358656' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 00:01:24 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1683: 321 pgs: 321 active+clean; 270 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 105 KiB/s rd, 18 KiB/s wr, 8 op/s
Oct 11 00:01:25 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:01:26 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e434 do_prune osdmap full prune enabled
Oct 11 00:01:26 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e435 e435: 3 total, 3 up, 3 in
Oct 11 00:01:26 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e435: 3 total, 3 up, 3 in
Oct 11 00:01:26 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1685: 321 pgs: 321 active+clean; 270 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 4.9 KiB/s rd, 8.4 KiB/s wr, 11 op/s
Oct 11 00:01:26 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 00:01:26 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3306444631' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 00:01:26 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 00:01:26 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3306444631' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 00:01:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 00:01:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 00:01:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 00:01:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 00:01:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 00:01:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 00:01:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Optimize plan auto_2025-10-11_04:01:27
Oct 11 00:01:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 00:01:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] do_upmap
Oct 11 00:01:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] pools ['volumes', '.mgr', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'vms', 'images', 'default.rgw.meta', '.rgw.root', 'backups']
Oct 11 00:01:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] prepared 0/10 changes
Oct 11 00:01:28 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 00:01:28 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3270016888' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 00:01:28 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 00:01:28 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3270016888' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 00:01:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 00:01:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 00:01:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 00:01:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 00:01:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 00:01:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 00:01:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 00:01:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 00:01:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 00:01:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 00:01:28 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1686: 321 pgs: 321 active+clean; 270 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 19 KiB/s wr, 48 op/s
Oct 11 00:01:29 np0005480824 nova_compute[260089]: 2025-10-11 04:01:29.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:01:30 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e435 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:01:30 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e435 do_prune osdmap full prune enabled
Oct 11 00:01:30 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e436 e436: 3 total, 3 up, 3 in
Oct 11 00:01:30 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e436: 3 total, 3 up, 3 in
Oct 11 00:01:30 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1688: 321 pgs: 321 active+clean; 270 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 21 KiB/s wr, 53 op/s
Oct 11 00:01:32 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1689: 321 pgs: 321 active+clean; 270 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 23 KiB/s wr, 66 op/s
Oct 11 00:01:34 np0005480824 nova_compute[260089]: 2025-10-11 04:01:34.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:01:34 np0005480824 nova_compute[260089]: 2025-10-11 04:01:34.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:01:34 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1690: 321 pgs: 321 active+clean; 270 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 21 KiB/s wr, 60 op/s
Oct 11 00:01:35 np0005480824 nova_compute[260089]: 2025-10-11 04:01:35.490 2 DEBUG oslo_concurrency.lockutils [None req-14e622dd-9d08-4615-810d-e18754df31f3 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Acquiring lock "42651a9c-7b98-4ad0-bf9d-430330b33968" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:01:35 np0005480824 nova_compute[260089]: 2025-10-11 04:01:35.491 2 DEBUG oslo_concurrency.lockutils [None req-14e622dd-9d08-4615-810d-e18754df31f3 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Lock "42651a9c-7b98-4ad0-bf9d-430330b33968" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:01:35 np0005480824 nova_compute[260089]: 2025-10-11 04:01:35.491 2 DEBUG oslo_concurrency.lockutils [None req-14e622dd-9d08-4615-810d-e18754df31f3 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Acquiring lock "42651a9c-7b98-4ad0-bf9d-430330b33968-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:01:35 np0005480824 nova_compute[260089]: 2025-10-11 04:01:35.492 2 DEBUG oslo_concurrency.lockutils [None req-14e622dd-9d08-4615-810d-e18754df31f3 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Lock "42651a9c-7b98-4ad0-bf9d-430330b33968-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:01:35 np0005480824 nova_compute[260089]: 2025-10-11 04:01:35.492 2 DEBUG oslo_concurrency.lockutils [None req-14e622dd-9d08-4615-810d-e18754df31f3 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Lock "42651a9c-7b98-4ad0-bf9d-430330b33968-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:01:35 np0005480824 nova_compute[260089]: 2025-10-11 04:01:35.493 2 INFO nova.compute.manager [None req-14e622dd-9d08-4615-810d-e18754df31f3 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] Terminating instance#033[00m
Oct 11 00:01:35 np0005480824 nova_compute[260089]: 2025-10-11 04:01:35.494 2 DEBUG nova.compute.manager [None req-14e622dd-9d08-4615-810d-e18754df31f3 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 00:01:35 np0005480824 kernel: tap189ca3df-84 (unregistering): left promiscuous mode
Oct 11 00:01:35 np0005480824 NetworkManager[44969]: <info>  [1760155295.5527] device (tap189ca3df-84): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 00:01:35 np0005480824 ovn_controller[152667]: 2025-10-11T04:01:35Z|00235|binding|INFO|Releasing lport 189ca3df-84bb-4d92-8c0e-5ded5a7cdaa3 from this chassis (sb_readonly=0)
Oct 11 00:01:35 np0005480824 ovn_controller[152667]: 2025-10-11T04:01:35Z|00236|binding|INFO|Setting lport 189ca3df-84bb-4d92-8c0e-5ded5a7cdaa3 down in Southbound
Oct 11 00:01:35 np0005480824 ovn_controller[152667]: 2025-10-11T04:01:35Z|00237|binding|INFO|Removing iface tap189ca3df-84 ovn-installed in OVS
Oct 11 00:01:35 np0005480824 nova_compute[260089]: 2025-10-11 04:01:35.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:01:35 np0005480824 nova_compute[260089]: 2025-10-11 04:01:35.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:01:35 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:01:35.570 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4f:8c:b6 10.100.0.3'], port_security=['fa:16:3e:4f:8c:b6 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '42651a9c-7b98-4ad0-bf9d-430330b33968', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-15a62ee0-8e34-4e49-990e-246b4ef9e0c6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0e73ded2f2ee46b4a7485c01ef1b73e9', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b966caac-3def-4c2a-badc-a92b0de92fd7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.201'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3f608fb9-f693-4a11-9617-6172f3d025df, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], logical_port=189ca3df-84bb-4d92-8c0e-5ded5a7cdaa3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 00:01:35 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:01:35.571 162245 INFO neutron.agent.ovn.metadata.agent [-] Port 189ca3df-84bb-4d92-8c0e-5ded5a7cdaa3 in datapath 15a62ee0-8e34-4e49-990e-246b4ef9e0c6 unbound from our chassis#033[00m
Oct 11 00:01:35 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:01:35.572 162245 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 15a62ee0-8e34-4e49-990e-246b4ef9e0c6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 00:01:35 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:01:35.573 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[a67af9fb-9835-4098-95e8-efaf4ed1ae05]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:01:35 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:01:35.574 162245 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6 namespace which is not needed anymore#033[00m
Oct 11 00:01:35 np0005480824 nova_compute[260089]: 2025-10-11 04:01:35.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:01:35 np0005480824 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000019.scope: Deactivated successfully.
Oct 11 00:01:35 np0005480824 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000019.scope: Consumed 16.391s CPU time.
Oct 11 00:01:35 np0005480824 systemd-machined[215071]: Machine qemu-25-instance-00000019 terminated.
Oct 11 00:01:35 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:01:35 np0005480824 neutron-haproxy-ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6[297100]: [NOTICE]   (297104) : haproxy version is 2.8.14-c23fe91
Oct 11 00:01:35 np0005480824 neutron-haproxy-ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6[297100]: [NOTICE]   (297104) : path to executable is /usr/sbin/haproxy
Oct 11 00:01:35 np0005480824 neutron-haproxy-ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6[297100]: [WARNING]  (297104) : Exiting Master process...
Oct 11 00:01:35 np0005480824 neutron-haproxy-ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6[297100]: [WARNING]  (297104) : Exiting Master process...
Oct 11 00:01:35 np0005480824 neutron-haproxy-ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6[297100]: [ALERT]    (297104) : Current worker (297106) exited with code 143 (Terminated)
Oct 11 00:01:35 np0005480824 neutron-haproxy-ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6[297100]: [WARNING]  (297104) : All workers exited. Exiting... (0)
Oct 11 00:01:35 np0005480824 systemd[1]: libpod-28eb802eeccb0fccea7114db175620b1ddf41f76c8d64fae585edd913a4ff6a6.scope: Deactivated successfully.
Oct 11 00:01:35 np0005480824 conmon[297100]: conmon 28eb802eeccb0fccea71 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-28eb802eeccb0fccea7114db175620b1ddf41f76c8d64fae585edd913a4ff6a6.scope/container/memory.events
Oct 11 00:01:35 np0005480824 podman[298353]: 2025-10-11 04:01:35.707155308 +0000 UTC m=+0.042689503 container died 28eb802eeccb0fccea7114db175620b1ddf41f76c8d64fae585edd913a4ff6a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 11 00:01:35 np0005480824 nova_compute[260089]: 2025-10-11 04:01:35.732 2 INFO nova.virt.libvirt.driver [-] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] Instance destroyed successfully.#033[00m
Oct 11 00:01:35 np0005480824 nova_compute[260089]: 2025-10-11 04:01:35.732 2 DEBUG nova.objects.instance [None req-14e622dd-9d08-4615-810d-e18754df31f3 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Lazy-loading 'resources' on Instance uuid 42651a9c-7b98-4ad0-bf9d-430330b33968 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 00:01:35 np0005480824 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-28eb802eeccb0fccea7114db175620b1ddf41f76c8d64fae585edd913a4ff6a6-userdata-shm.mount: Deactivated successfully.
Oct 11 00:01:35 np0005480824 systemd[1]: var-lib-containers-storage-overlay-878c19085e3530befade517f046119ac39a8b06e9f1bc3a674acdfdc6d24de9e-merged.mount: Deactivated successfully.
Oct 11 00:01:35 np0005480824 nova_compute[260089]: 2025-10-11 04:01:35.748 2 DEBUG nova.virt.libvirt.vif [None req-14e622dd-9d08-4615-810d-e18754df31f3 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T04:00:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1118806847',display_name='tempest-TransferEncryptedVolumeTest-server-1118806847',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1118806847',id=25,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDSd/imVrPUoZZ0pPNaeX2vqRyFwUZkYkGtRIGLvkZ+JmbpGCVAFlpb2xMevRN2guCRk7QItwPxlNbBPPCGkv6m7D9V9P6ik2vYr9GNZ8E+yfq+aSt3aD3tvswV1nTE1Iw==',key_name='tempest-TransferEncryptedVolumeTest-1221687079',keypairs=<?>,launch_index=0,launched_at=2025-10-11T04:00:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0e73ded2f2ee46b4a7485c01ef1b73e9',ramdisk_id='',reservation_id='r-0mk5ma9v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TransferEncryptedVolumeTest-1815435088',owner_user_name='tempest-TransferEncryptedVolumeTest-1815435088-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T04:00:59Z,user_data=None,user_id='eccc3f574d354840901d28dad2488bf4',uuid=42651a9c-7b98-4ad0-bf9d-430330b33968,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "189ca3df-84bb-4d92-8c0e-5ded5a7cdaa3", "address": "fa:16:3e:4f:8c:b6", "network": {"id": "15a62ee0-8e34-4e49-990e-246b4ef9e0c6", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1498494916-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0e73ded2f2ee46b4a7485c01ef1b73e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap189ca3df-84", "ovs_interfaceid": "189ca3df-84bb-4d92-8c0e-5ded5a7cdaa3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 00:01:35 np0005480824 nova_compute[260089]: 2025-10-11 04:01:35.748 2 DEBUG nova.network.os_vif_util [None req-14e622dd-9d08-4615-810d-e18754df31f3 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Converting VIF {"id": "189ca3df-84bb-4d92-8c0e-5ded5a7cdaa3", "address": "fa:16:3e:4f:8c:b6", "network": {"id": "15a62ee0-8e34-4e49-990e-246b4ef9e0c6", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1498494916-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0e73ded2f2ee46b4a7485c01ef1b73e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap189ca3df-84", "ovs_interfaceid": "189ca3df-84bb-4d92-8c0e-5ded5a7cdaa3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 00:01:35 np0005480824 nova_compute[260089]: 2025-10-11 04:01:35.749 2 DEBUG nova.network.os_vif_util [None req-14e622dd-9d08-4615-810d-e18754df31f3 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4f:8c:b6,bridge_name='br-int',has_traffic_filtering=True,id=189ca3df-84bb-4d92-8c0e-5ded5a7cdaa3,network=Network(15a62ee0-8e34-4e49-990e-246b4ef9e0c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap189ca3df-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 00:01:35 np0005480824 nova_compute[260089]: 2025-10-11 04:01:35.749 2 DEBUG os_vif [None req-14e622dd-9d08-4615-810d-e18754df31f3 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4f:8c:b6,bridge_name='br-int',has_traffic_filtering=True,id=189ca3df-84bb-4d92-8c0e-5ded5a7cdaa3,network=Network(15a62ee0-8e34-4e49-990e-246b4ef9e0c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap189ca3df-84') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 00:01:35 np0005480824 nova_compute[260089]: 2025-10-11 04:01:35.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:01:35 np0005480824 nova_compute[260089]: 2025-10-11 04:01:35.751 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap189ca3df-84, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 00:01:35 np0005480824 nova_compute[260089]: 2025-10-11 04:01:35.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:01:35 np0005480824 nova_compute[260089]: 2025-10-11 04:01:35.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 00:01:35 np0005480824 nova_compute[260089]: 2025-10-11 04:01:35.757 2 INFO os_vif [None req-14e622dd-9d08-4615-810d-e18754df31f3 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4f:8c:b6,bridge_name='br-int',has_traffic_filtering=True,id=189ca3df-84bb-4d92-8c0e-5ded5a7cdaa3,network=Network(15a62ee0-8e34-4e49-990e-246b4ef9e0c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap189ca3df-84')#033[00m
Oct 11 00:01:35 np0005480824 podman[298353]: 2025-10-11 04:01:35.758612481 +0000 UTC m=+0.094146676 container cleanup 28eb802eeccb0fccea7114db175620b1ddf41f76c8d64fae585edd913a4ff6a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 11 00:01:35 np0005480824 systemd[1]: libpod-conmon-28eb802eeccb0fccea7114db175620b1ddf41f76c8d64fae585edd913a4ff6a6.scope: Deactivated successfully.
Oct 11 00:01:35 np0005480824 podman[298402]: 2025-10-11 04:01:35.862366148 +0000 UTC m=+0.068466325 container remove 28eb802eeccb0fccea7114db175620b1ddf41f76c8d64fae585edd913a4ff6a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 11 00:01:35 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:01:35.868 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[837a8370-9cdd-47ea-8ae0-5f800dc2e7e2]: (4, ('Sat Oct 11 04:01:35 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6 (28eb802eeccb0fccea7114db175620b1ddf41f76c8d64fae585edd913a4ff6a6)\n28eb802eeccb0fccea7114db175620b1ddf41f76c8d64fae585edd913a4ff6a6\nSat Oct 11 04:01:35 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6 (28eb802eeccb0fccea7114db175620b1ddf41f76c8d64fae585edd913a4ff6a6)\n28eb802eeccb0fccea7114db175620b1ddf41f76c8d64fae585edd913a4ff6a6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:01:35 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:01:35.870 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[4a3c993b-0cb0-4b81-8e49-8ad2e74f7712]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:01:35 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:01:35.871 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap15a62ee0-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 00:01:35 np0005480824 kernel: tap15a62ee0-80: left promiscuous mode
Oct 11 00:01:35 np0005480824 nova_compute[260089]: 2025-10-11 04:01:35.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:01:35 np0005480824 nova_compute[260089]: 2025-10-11 04:01:35.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:01:35 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:01:35.891 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[32cc0f47-471b-41bc-bee8-b818203107c2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:01:35 np0005480824 nova_compute[260089]: 2025-10-11 04:01:35.905 2 DEBUG nova.compute.manager [req-983e9184-5140-4fea-a224-ec5f08047036 req-540ff72c-897c-4c84-b93a-d2015d4970dd 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] Received event network-vif-unplugged-189ca3df-84bb-4d92-8c0e-5ded5a7cdaa3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 00:01:35 np0005480824 nova_compute[260089]: 2025-10-11 04:01:35.906 2 DEBUG oslo_concurrency.lockutils [req-983e9184-5140-4fea-a224-ec5f08047036 req-540ff72c-897c-4c84-b93a-d2015d4970dd 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "42651a9c-7b98-4ad0-bf9d-430330b33968-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:01:35 np0005480824 nova_compute[260089]: 2025-10-11 04:01:35.906 2 DEBUG oslo_concurrency.lockutils [req-983e9184-5140-4fea-a224-ec5f08047036 req-540ff72c-897c-4c84-b93a-d2015d4970dd 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "42651a9c-7b98-4ad0-bf9d-430330b33968-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:01:35 np0005480824 nova_compute[260089]: 2025-10-11 04:01:35.906 2 DEBUG oslo_concurrency.lockutils [req-983e9184-5140-4fea-a224-ec5f08047036 req-540ff72c-897c-4c84-b93a-d2015d4970dd 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "42651a9c-7b98-4ad0-bf9d-430330b33968-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:01:35 np0005480824 nova_compute[260089]: 2025-10-11 04:01:35.907 2 DEBUG nova.compute.manager [req-983e9184-5140-4fea-a224-ec5f08047036 req-540ff72c-897c-4c84-b93a-d2015d4970dd 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] No waiting events found dispatching network-vif-unplugged-189ca3df-84bb-4d92-8c0e-5ded5a7cdaa3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 00:01:35 np0005480824 nova_compute[260089]: 2025-10-11 04:01:35.907 2 DEBUG nova.compute.manager [req-983e9184-5140-4fea-a224-ec5f08047036 req-540ff72c-897c-4c84-b93a-d2015d4970dd 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] Received event network-vif-unplugged-189ca3df-84bb-4d92-8c0e-5ded5a7cdaa3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 00:01:35 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:01:35.918 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[c4085285-f692-429f-a919-2170c0d4d73f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:01:35 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:01:35.920 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[bd202861-00d5-493f-b441-9235069a4b80]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:01:35 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:01:35.937 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[be6dc343-0455-4b20-9db7-4ab8e897cefd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 472388, 'reachable_time': 15529, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298428, 'error': None, 'target': 'ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:01:35 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:01:35.940 162666 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 00:01:35 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:01:35.940 162666 DEBUG oslo.privsep.daemon [-] privsep: reply[72c33ad0-2ccd-4712-9b09-942362512e64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:01:35 np0005480824 systemd[1]: run-netns-ovnmeta\x2d15a62ee0\x2d8e34\x2d4e49\x2d990e\x2d246b4ef9e0c6.mount: Deactivated successfully.
Oct 11 00:01:36 np0005480824 nova_compute[260089]: 2025-10-11 04:01:36.002 2 INFO nova.virt.libvirt.driver [None req-14e622dd-9d08-4615-810d-e18754df31f3 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] Deleting instance files /var/lib/nova/instances/42651a9c-7b98-4ad0-bf9d-430330b33968_del#033[00m
Oct 11 00:01:36 np0005480824 nova_compute[260089]: 2025-10-11 04:01:36.002 2 INFO nova.virt.libvirt.driver [None req-14e622dd-9d08-4615-810d-e18754df31f3 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] Deletion of /var/lib/nova/instances/42651a9c-7b98-4ad0-bf9d-430330b33968_del complete#033[00m
Oct 11 00:01:36 np0005480824 nova_compute[260089]: 2025-10-11 04:01:36.062 2 INFO nova.compute.manager [None req-14e622dd-9d08-4615-810d-e18754df31f3 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] Took 0.57 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 00:01:36 np0005480824 nova_compute[260089]: 2025-10-11 04:01:36.062 2 DEBUG oslo.service.loopingcall [None req-14e622dd-9d08-4615-810d-e18754df31f3 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 00:01:36 np0005480824 nova_compute[260089]: 2025-10-11 04:01:36.063 2 DEBUG nova.compute.manager [-] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 00:01:36 np0005480824 nova_compute[260089]: 2025-10-11 04:01:36.063 2 DEBUG nova.network.neutron [-] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 00:01:36 np0005480824 nova_compute[260089]: 2025-10-11 04:01:36.297 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:01:36 np0005480824 nova_compute[260089]: 2025-10-11 04:01:36.732 2 DEBUG nova.network.neutron [-] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 00:01:36 np0005480824 nova_compute[260089]: 2025-10-11 04:01:36.749 2 INFO nova.compute.manager [-] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] Took 0.69 seconds to deallocate network for instance.#033[00m
Oct 11 00:01:36 np0005480824 nova_compute[260089]: 2025-10-11 04:01:36.931 2 INFO nova.compute.manager [None req-14e622dd-9d08-4615-810d-e18754df31f3 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] Took 0.18 seconds to detach 1 volumes for instance.#033[00m
Oct 11 00:01:36 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1691: 321 pgs: 321 active+clean; 270 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 98 KiB/s rd, 12 KiB/s wr, 48 op/s
Oct 11 00:01:36 np0005480824 nova_compute[260089]: 2025-10-11 04:01:36.998 2 DEBUG oslo_concurrency.lockutils [None req-14e622dd-9d08-4615-810d-e18754df31f3 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:01:36 np0005480824 nova_compute[260089]: 2025-10-11 04:01:36.999 2 DEBUG oslo_concurrency.lockutils [None req-14e622dd-9d08-4615-810d-e18754df31f3 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:01:37 np0005480824 nova_compute[260089]: 2025-10-11 04:01:37.036 2 DEBUG nova.scheduler.client.report [None req-14e622dd-9d08-4615-810d-e18754df31f3 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Refreshing inventories for resource provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct 11 00:01:37 np0005480824 nova_compute[260089]: 2025-10-11 04:01:37.057 2 DEBUG nova.scheduler.client.report [None req-14e622dd-9d08-4615-810d-e18754df31f3 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Updating ProviderTree inventory for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct 11 00:01:37 np0005480824 nova_compute[260089]: 2025-10-11 04:01:37.058 2 DEBUG nova.compute.provider_tree [None req-14e622dd-9d08-4615-810d-e18754df31f3 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Updating inventory in ProviderTree for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 11 00:01:37 np0005480824 nova_compute[260089]: 2025-10-11 04:01:37.075 2 DEBUG nova.scheduler.client.report [None req-14e622dd-9d08-4615-810d-e18754df31f3 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Refreshing aggregate associations for resource provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct 11 00:01:37 np0005480824 nova_compute[260089]: 2025-10-11 04:01:37.095 2 DEBUG nova.scheduler.client.report [None req-14e622dd-9d08-4615-810d-e18754df31f3 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Refreshing trait associations for resource provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72, traits: COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE,HW_CPU_X86_SSE41,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_MMX,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_DEVICE_TAGGING,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_AVX,HW_CPU_X86_SHA,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSSE3,HW_CPU_X86_BMI2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_FDC,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_CLMUL,HW_CPU_X86_AVX2,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_PCNET _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct 11 00:01:37 np0005480824 nova_compute[260089]: 2025-10-11 04:01:37.139 2 DEBUG oslo_concurrency.processutils [None req-14e622dd-9d08-4615-810d-e18754df31f3 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:01:37 np0005480824 nova_compute[260089]: 2025-10-11 04:01:37.292 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:01:37 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 00:01:37 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4079073520' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 00:01:37 np0005480824 nova_compute[260089]: 2025-10-11 04:01:37.596 2 DEBUG oslo_concurrency.processutils [None req-14e622dd-9d08-4615-810d-e18754df31f3 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:01:37 np0005480824 nova_compute[260089]: 2025-10-11 04:01:37.604 2 DEBUG nova.compute.provider_tree [None req-14e622dd-9d08-4615-810d-e18754df31f3 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 00:01:37 np0005480824 nova_compute[260089]: 2025-10-11 04:01:37.624 2 DEBUG nova.scheduler.client.report [None req-14e622dd-9d08-4615-810d-e18754df31f3 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 00:01:37 np0005480824 nova_compute[260089]: 2025-10-11 04:01:37.650 2 DEBUG oslo_concurrency.lockutils [None req-14e622dd-9d08-4615-810d-e18754df31f3 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:01:37 np0005480824 nova_compute[260089]: 2025-10-11 04:01:37.676 2 INFO nova.scheduler.client.report [None req-14e622dd-9d08-4615-810d-e18754df31f3 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Deleted allocations for instance 42651a9c-7b98-4ad0-bf9d-430330b33968#033[00m
Oct 11 00:01:37 np0005480824 nova_compute[260089]: 2025-10-11 04:01:37.729 2 DEBUG oslo_concurrency.lockutils [None req-14e622dd-9d08-4615-810d-e18754df31f3 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Lock "42651a9c-7b98-4ad0-bf9d-430330b33968" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.238s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:01:37 np0005480824 nova_compute[260089]: 2025-10-11 04:01:37.997 2 DEBUG nova.compute.manager [req-309ad5f2-c05f-4f82-899a-fb4f0b5c6ee8 req-fbefbc80-33b5-45c7-95c0-e347fd0b2cbd 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] Received event network-vif-plugged-189ca3df-84bb-4d92-8c0e-5ded5a7cdaa3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 00:01:37 np0005480824 nova_compute[260089]: 2025-10-11 04:01:37.997 2 DEBUG oslo_concurrency.lockutils [req-309ad5f2-c05f-4f82-899a-fb4f0b5c6ee8 req-fbefbc80-33b5-45c7-95c0-e347fd0b2cbd 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "42651a9c-7b98-4ad0-bf9d-430330b33968-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:01:37 np0005480824 nova_compute[260089]: 2025-10-11 04:01:37.998 2 DEBUG oslo_concurrency.lockutils [req-309ad5f2-c05f-4f82-899a-fb4f0b5c6ee8 req-fbefbc80-33b5-45c7-95c0-e347fd0b2cbd 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "42651a9c-7b98-4ad0-bf9d-430330b33968-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:01:37 np0005480824 nova_compute[260089]: 2025-10-11 04:01:37.998 2 DEBUG oslo_concurrency.lockutils [req-309ad5f2-c05f-4f82-899a-fb4f0b5c6ee8 req-fbefbc80-33b5-45c7-95c0-e347fd0b2cbd 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "42651a9c-7b98-4ad0-bf9d-430330b33968-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:01:37 np0005480824 nova_compute[260089]: 2025-10-11 04:01:37.999 2 DEBUG nova.compute.manager [req-309ad5f2-c05f-4f82-899a-fb4f0b5c6ee8 req-fbefbc80-33b5-45c7-95c0-e347fd0b2cbd 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] No waiting events found dispatching network-vif-plugged-189ca3df-84bb-4d92-8c0e-5ded5a7cdaa3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 00:01:38 np0005480824 nova_compute[260089]: 2025-10-11 04:01:37.999 2 WARNING nova.compute.manager [req-309ad5f2-c05f-4f82-899a-fb4f0b5c6ee8 req-fbefbc80-33b5-45c7-95c0-e347fd0b2cbd 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] Received unexpected event network-vif-plugged-189ca3df-84bb-4d92-8c0e-5ded5a7cdaa3 for instance with vm_state deleted and task_state None.#033[00m
Oct 11 00:01:38 np0005480824 nova_compute[260089]: 2025-10-11 04:01:38.000 2 DEBUG nova.compute.manager [req-309ad5f2-c05f-4f82-899a-fb4f0b5c6ee8 req-fbefbc80-33b5-45c7-95c0-e347fd0b2cbd 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] Received event network-vif-deleted-189ca3df-84bb-4d92-8c0e-5ded5a7cdaa3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 00:01:38 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 00:01:38 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4248237371' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 00:01:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 00:01:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:01:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 00:01:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:01:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.480037605000977e-06 of space, bias 1.0, pg target 0.0007440112815002931 quantized to 32 (current 32)
Oct 11 00:01:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:01:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0028901340797356256 of space, bias 1.0, pg target 0.8670402239206877 quantized to 32 (current 32)
Oct 11 00:01:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:01:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 11 00:01:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:01:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Oct 11 00:01:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:01:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 00:01:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:01:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 00:01:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:01:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 00:01:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:01:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 00:01:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:01:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 00:01:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:01:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 00:01:38 np0005480824 nova_compute[260089]: 2025-10-11 04:01:38.296 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:01:38 np0005480824 nova_compute[260089]: 2025-10-11 04:01:38.317 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:01:38 np0005480824 nova_compute[260089]: 2025-10-11 04:01:38.317 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:01:38 np0005480824 nova_compute[260089]: 2025-10-11 04:01:38.318 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:01:38 np0005480824 nova_compute[260089]: 2025-10-11 04:01:38.318 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 00:01:38 np0005480824 nova_compute[260089]: 2025-10-11 04:01:38.319 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:01:38 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 00:01:38 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2835898138' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 00:01:38 np0005480824 nova_compute[260089]: 2025-10-11 04:01:38.766 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:01:38 np0005480824 podman[298476]: 2025-10-11 04:01:38.886540741 +0000 UTC m=+0.066367747 container health_status f2b19cad22d0fbdd185b264c3c8b6443d09a02319dc9fb0585dc69a0f24a758d (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, container_name=iscsid)
Oct 11 00:01:38 np0005480824 podman[298475]: 2025-10-11 04:01:38.912009898 +0000 UTC m=+0.084310281 container health_status 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.build-date=20251009)
Oct 11 00:01:38 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1692: 321 pgs: 321 active+clean; 270 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 276 KiB/s rd, 5.0 KiB/s wr, 39 op/s
Oct 11 00:01:38 np0005480824 nova_compute[260089]: 2025-10-11 04:01:38.982 2 WARNING nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 00:01:38 np0005480824 nova_compute[260089]: 2025-10-11 04:01:38.983 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4357MB free_disk=59.98813247680664GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 00:01:38 np0005480824 nova_compute[260089]: 2025-10-11 04:01:38.983 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:01:38 np0005480824 nova_compute[260089]: 2025-10-11 04:01:38.983 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:01:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e436 do_prune osdmap full prune enabled
Oct 11 00:01:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e437 e437: 3 total, 3 up, 3 in
Oct 11 00:01:39 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e437: 3 total, 3 up, 3 in
Oct 11 00:01:39 np0005480824 nova_compute[260089]: 2025-10-11 04:01:39.049 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 00:01:39 np0005480824 nova_compute[260089]: 2025-10-11 04:01:39.050 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 00:01:39 np0005480824 nova_compute[260089]: 2025-10-11 04:01:39.084 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:01:39 np0005480824 nova_compute[260089]: 2025-10-11 04:01:39.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:01:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 00:01:39 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2659260019' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 00:01:39 np0005480824 nova_compute[260089]: 2025-10-11 04:01:39.528 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:01:39 np0005480824 nova_compute[260089]: 2025-10-11 04:01:39.537 2 DEBUG nova.compute.provider_tree [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 00:01:39 np0005480824 nova_compute[260089]: 2025-10-11 04:01:39.555 2 DEBUG nova.scheduler.client.report [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 00:01:39 np0005480824 nova_compute[260089]: 2025-10-11 04:01:39.578 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 00:01:39 np0005480824 nova_compute[260089]: 2025-10-11 04:01:39.579 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.596s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:01:40 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e437 do_prune osdmap full prune enabled
Oct 11 00:01:40 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e438 e438: 3 total, 3 up, 3 in
Oct 11 00:01:40 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e438: 3 total, 3 up, 3 in
Oct 11 00:01:40 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 00:01:40 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/728006950' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 00:01:40 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 00:01:40 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/728006950' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 00:01:40 np0005480824 nova_compute[260089]: 2025-10-11 04:01:40.580 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:01:40 np0005480824 nova_compute[260089]: 2025-10-11 04:01:40.580 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:01:40 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:01:40 np0005480824 ceph-mon[74326]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Oct 11 00:01:40 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:01:40.669213) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 00:01:40 np0005480824 ceph-mon[74326]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Oct 11 00:01:40 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155300669237, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 519, "num_deletes": 251, "total_data_size": 440932, "memory_usage": 450128, "flush_reason": "Manual Compaction"}
Oct 11 00:01:40 np0005480824 ceph-mon[74326]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Oct 11 00:01:40 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155300672721, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 356532, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34054, "largest_seqno": 34572, "table_properties": {"data_size": 353753, "index_size": 751, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7443, "raw_average_key_size": 20, "raw_value_size": 348089, "raw_average_value_size": 969, "num_data_blocks": 33, "num_entries": 359, "num_filter_entries": 359, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760155276, "oldest_key_time": 1760155276, "file_creation_time": 1760155300, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc2c00b6-74ab-4bd1-957a-6c6a75fb61ca", "db_session_id": "RJ9TM4FJNNQ2AWDFT4YB", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Oct 11 00:01:40 np0005480824 ceph-mon[74326]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 3535 microseconds, and 1447 cpu microseconds.
Oct 11 00:01:40 np0005480824 ceph-mon[74326]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 00:01:40 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:01:40.672748) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 356532 bytes OK
Oct 11 00:01:40 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:01:40.672761) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Oct 11 00:01:40 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:01:40.674330) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Oct 11 00:01:40 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:01:40.674341) EVENT_LOG_v1 {"time_micros": 1760155300674337, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 00:01:40 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:01:40.674354) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 00:01:40 np0005480824 ceph-mon[74326]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 437924, prev total WAL file size 437924, number of live WAL files 2.
Oct 11 00:01:40 np0005480824 ceph-mon[74326]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 00:01:40 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:01:40.674738) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303031' seq:72057594037927935, type:22 .. '6D6772737461740031323532' seq:0, type:0; will stop at (end)
Oct 11 00:01:40 np0005480824 ceph-mon[74326]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 00:01:40 np0005480824 ceph-mon[74326]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(348KB)], [68(11MB)]
Oct 11 00:01:40 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155300674781, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 12480289, "oldest_snapshot_seqno": -1}
Oct 11 00:01:40 np0005480824 ceph-mon[74326]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 6454 keys, 9265483 bytes, temperature: kUnknown
Oct 11 00:01:40 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155300743241, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 9265483, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9218366, "index_size": 29855, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16197, "raw_key_size": 162601, "raw_average_key_size": 25, "raw_value_size": 9098377, "raw_average_value_size": 1409, "num_data_blocks": 1196, "num_entries": 6454, "num_filter_entries": 6454, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760152715, "oldest_key_time": 0, "file_creation_time": 1760155300, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc2c00b6-74ab-4bd1-957a-6c6a75fb61ca", "db_session_id": "RJ9TM4FJNNQ2AWDFT4YB", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Oct 11 00:01:40 np0005480824 ceph-mon[74326]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 00:01:40 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:01:40.743674) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 9265483 bytes
Oct 11 00:01:40 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:01:40.745485) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 182.0 rd, 135.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 11.6 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(61.0) write-amplify(26.0) OK, records in: 6961, records dropped: 507 output_compression: NoCompression
Oct 11 00:01:40 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:01:40.745517) EVENT_LOG_v1 {"time_micros": 1760155300745503, "job": 38, "event": "compaction_finished", "compaction_time_micros": 68574, "compaction_time_cpu_micros": 22567, "output_level": 6, "num_output_files": 1, "total_output_size": 9265483, "num_input_records": 6961, "num_output_records": 6454, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 00:01:40 np0005480824 ceph-mon[74326]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 00:01:40 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155300745816, "job": 38, "event": "table_file_deletion", "file_number": 70}
Oct 11 00:01:40 np0005480824 ceph-mon[74326]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 00:01:40 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155300749631, "job": 38, "event": "table_file_deletion", "file_number": 68}
Oct 11 00:01:40 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:01:40.674684) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 00:01:40 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:01:40.749685) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 00:01:40 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:01:40.749689) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 00:01:40 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:01:40.749690) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 00:01:40 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:01:40.749691) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 00:01:40 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:01:40.749693) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 00:01:40 np0005480824 nova_compute[260089]: 2025-10-11 04:01:40.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:01:40 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1695: 321 pgs: 321 active+clean; 270 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 330 KiB/s rd, 1.4 KiB/s wr, 29 op/s
Oct 11 00:01:41 np0005480824 nova_compute[260089]: 2025-10-11 04:01:41.296 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:01:41 np0005480824 nova_compute[260089]: 2025-10-11 04:01:41.297 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 00:01:41 np0005480824 nova_compute[260089]: 2025-10-11 04:01:41.297 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 11 00:01:41 np0005480824 nova_compute[260089]: 2025-10-11 04:01:41.317 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 11 00:01:41 np0005480824 nova_compute[260089]: 2025-10-11 04:01:41.318 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:01:41 np0005480824 nova_compute[260089]: 2025-10-11 04:01:41.319 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:01:41 np0005480824 nova_compute[260089]: 2025-10-11 04:01:41.319 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 00:01:41 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 00:01:41 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2003056157' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 00:01:42 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e438 do_prune osdmap full prune enabled
Oct 11 00:01:42 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e439 e439: 3 total, 3 up, 3 in
Oct 11 00:01:42 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e439: 3 total, 3 up, 3 in
Oct 11 00:01:42 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1697: 321 pgs: 321 active+clean; 88 MiB data, 454 MiB used, 60 GiB / 60 GiB avail; 387 KiB/s rd, 6.2 KiB/s wr, 113 op/s
Oct 11 00:01:43 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e439 do_prune osdmap full prune enabled
Oct 11 00:01:43 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e440 e440: 3 total, 3 up, 3 in
Oct 11 00:01:43 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e440: 3 total, 3 up, 3 in
Oct 11 00:01:44 np0005480824 nova_compute[260089]: 2025-10-11 04:01:44.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:01:44 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e440 do_prune osdmap full prune enabled
Oct 11 00:01:44 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e441 e441: 3 total, 3 up, 3 in
Oct 11 00:01:44 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e441: 3 total, 3 up, 3 in
Oct 11 00:01:44 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1700: 321 pgs: 321 active+clean; 88 MiB data, 454 MiB used, 60 GiB / 60 GiB avail; 62 KiB/s rd, 5.3 KiB/s wr, 95 op/s
Oct 11 00:01:45 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e441 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:01:45 np0005480824 nova_compute[260089]: 2025-10-11 04:01:45.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:01:46 np0005480824 nova_compute[260089]: 2025-10-11 04:01:46.297 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:01:46 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 00:01:46 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3993751678' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 00:01:46 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1701: 321 pgs: 321 active+clean; 88 MiB data, 454 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 5.3 KiB/s wr, 93 op/s
Oct 11 00:01:47 np0005480824 podman[298536]: 2025-10-11 04:01:47.047658225 +0000 UTC m=+0.103864181 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 11 00:01:47 np0005480824 nova_compute[260089]: 2025-10-11 04:01:47.293 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:01:47 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e441 do_prune osdmap full prune enabled
Oct 11 00:01:47 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e442 e442: 3 total, 3 up, 3 in
Oct 11 00:01:47 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e442: 3 total, 3 up, 3 in
Oct 11 00:01:48 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e442 do_prune osdmap full prune enabled
Oct 11 00:01:48 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e443 e443: 3 total, 3 up, 3 in
Oct 11 00:01:48 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e443: 3 total, 3 up, 3 in
Oct 11 00:01:48 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1704: 321 pgs: 321 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 3.8 KiB/s wr, 78 op/s
Oct 11 00:01:49 np0005480824 nova_compute[260089]: 2025-10-11 04:01:49.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:01:50 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e443 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:01:50 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e443 do_prune osdmap full prune enabled
Oct 11 00:01:50 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e444 e444: 3 total, 3 up, 3 in
Oct 11 00:01:50 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e444: 3 total, 3 up, 3 in
Oct 11 00:01:50 np0005480824 nova_compute[260089]: 2025-10-11 04:01:50.731 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760155295.7294996, 42651a9c-7b98-4ad0-bf9d-430330b33968 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 00:01:50 np0005480824 nova_compute[260089]: 2025-10-11 04:01:50.732 2 INFO nova.compute.manager [-] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] VM Stopped (Lifecycle Event)#033[00m
Oct 11 00:01:50 np0005480824 nova_compute[260089]: 2025-10-11 04:01:50.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:01:50 np0005480824 nova_compute[260089]: 2025-10-11 04:01:50.764 2 DEBUG nova.compute.manager [None req-e3474610-e47b-428d-a951-2eb233ff74ab - - - - - -] [instance: 42651a9c-7b98-4ad0-bf9d-430330b33968] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 00:01:50 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1706: 321 pgs: 321 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 3.3 KiB/s wr, 69 op/s
Oct 11 00:01:51 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 00:01:51 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/671498683' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 00:01:51 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 00:01:51 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/671498683' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 00:01:52 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1707: 321 pgs: 321 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 6.5 KiB/s wr, 164 op/s
Oct 11 00:01:54 np0005480824 podman[298564]: 2025-10-11 04:01:54.03136676 +0000 UTC m=+0.082227013 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 11 00:01:54 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 00:01:54 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130646541' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 00:01:54 np0005480824 nova_compute[260089]: 2025-10-11 04:01:54.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:01:54 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1708: 321 pgs: 321 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 3.5 KiB/s wr, 92 op/s
Oct 11 00:01:55 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e444 do_prune osdmap full prune enabled
Oct 11 00:01:55 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e445 e445: 3 total, 3 up, 3 in
Oct 11 00:01:55 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e445: 3 total, 3 up, 3 in
Oct 11 00:01:55 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e445 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:01:55 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e445 do_prune osdmap full prune enabled
Oct 11 00:01:55 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e446 e446: 3 total, 3 up, 3 in
Oct 11 00:01:55 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e446: 3 total, 3 up, 3 in
Oct 11 00:01:55 np0005480824 nova_compute[260089]: 2025-10-11 04:01:55.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:01:56 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1711: 321 pgs: 321 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 46 KiB/s wr, 127 op/s
Oct 11 00:01:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 00:01:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 00:01:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 00:01:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 00:01:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 00:01:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 00:01:58 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1712: 321 pgs: 321 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 38 KiB/s wr, 117 op/s
Oct 11 00:01:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e446 do_prune osdmap full prune enabled
Oct 11 00:01:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e447 e447: 3 total, 3 up, 3 in
Oct 11 00:01:59 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e447: 3 total, 3 up, 3 in
Oct 11 00:01:59 np0005480824 nova_compute[260089]: 2025-10-11 04:01:59.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:01:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 00:01:59 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2214326820' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 00:01:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 00:01:59 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2214326820' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 00:02:00 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e447 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:02:00 np0005480824 nova_compute[260089]: 2025-10-11 04:02:00.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:02:00 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1714: 321 pgs: 321 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 46 KiB/s wr, 46 op/s
Oct 11 00:02:02 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 00:02:02 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1704557753' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 00:02:02 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1715: 321 pgs: 321 active+clean; 88 MiB data, 446 MiB used, 60 GiB / 60 GiB avail; 97 KiB/s rd, 36 KiB/s wr, 149 op/s
Oct 11 00:02:03 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e447 do_prune osdmap full prune enabled
Oct 11 00:02:03 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e448 e448: 3 total, 3 up, 3 in
Oct 11 00:02:03 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e448: 3 total, 3 up, 3 in
Oct 11 00:02:04 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e448 do_prune osdmap full prune enabled
Oct 11 00:02:04 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e449 e449: 3 total, 3 up, 3 in
Oct 11 00:02:04 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e449: 3 total, 3 up, 3 in
Oct 11 00:02:04 np0005480824 nova_compute[260089]: 2025-10-11 04:02:04.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:02:04 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1718: 321 pgs: 321 active+clean; 88 MiB data, 446 MiB used, 60 GiB / 60 GiB avail; 96 KiB/s rd, 2.2 KiB/s wr, 150 op/s
Oct 11 00:02:05 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e449 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:02:05 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e449 do_prune osdmap full prune enabled
Oct 11 00:02:05 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e450 e450: 3 total, 3 up, 3 in
Oct 11 00:02:05 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e450: 3 total, 3 up, 3 in
Oct 11 00:02:05 np0005480824 nova_compute[260089]: 2025-10-11 04:02:05.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:02:06 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 00:02:06 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1006397989' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 00:02:06 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 00:02:06 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1006397989' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 00:02:06 np0005480824 ovn_controller[152667]: 2025-10-11T04:02:06Z|00238|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Oct 11 00:02:06 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1720: 321 pgs: 321 active+clean; 88 MiB data, 446 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 3.5 KiB/s wr, 169 op/s
Oct 11 00:02:08 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 00:02:08 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3203562199' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 00:02:08 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1721: 321 pgs: 321 active+clean; 88 MiB data, 446 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 3.5 KiB/s wr, 74 op/s
Oct 11 00:02:09 np0005480824 podman[298583]: 2025-10-11 04:02:09.021837897 +0000 UTC m=+0.068535818 container health_status f2b19cad22d0fbdd185b264c3c8b6443d09a02319dc9fb0585dc69a0f24a758d (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 11 00:02:09 np0005480824 podman[298584]: 2025-10-11 04:02:09.044646613 +0000 UTC m=+0.084412224 container health_status 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 11 00:02:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e450 do_prune osdmap full prune enabled
Oct 11 00:02:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e451 e451: 3 total, 3 up, 3 in
Oct 11 00:02:09 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e451: 3 total, 3 up, 3 in
Oct 11 00:02:09 np0005480824 nova_compute[260089]: 2025-10-11 04:02:09.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:02:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:02:10.507 162245 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:02:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:02:10.507 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:02:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:02:10.508 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:02:10 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e451 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:02:10 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e451 do_prune osdmap full prune enabled
Oct 11 00:02:10 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e452 e452: 3 total, 3 up, 3 in
Oct 11 00:02:10 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e452: 3 total, 3 up, 3 in
Oct 11 00:02:10 np0005480824 nova_compute[260089]: 2025-10-11 04:02:10.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:02:10 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1724: 321 pgs: 321 active+clean; 88 MiB data, 446 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 3.5 KiB/s wr, 74 op/s
Oct 11 00:02:11 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 11 00:02:11 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 11 00:02:11 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 00:02:11 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 00:02:11 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 00:02:11 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 00:02:11 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 00:02:11 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 11 00:02:11 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev f8214e92-d738-4532-a748-9c7714a9466e does not exist
Oct 11 00:02:11 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 57f8b4c6-a817-47a1-8999-cb61eeb27037 does not exist
Oct 11 00:02:11 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 47b51b60-1602-4253-81bc-ae1c2043c900 does not exist
Oct 11 00:02:11 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 00:02:11 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 00:02:11 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 00:02:11 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 00:02:11 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 00:02:11 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 00:02:11 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e452 do_prune osdmap full prune enabled
Oct 11 00:02:11 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e453 e453: 3 total, 3 up, 3 in
Oct 11 00:02:11 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e453: 3 total, 3 up, 3 in
Oct 11 00:02:11 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 11 00:02:11 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 00:02:11 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 11 00:02:11 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 00:02:12 np0005480824 podman[298894]: 2025-10-11 04:02:12.014376604 +0000 UTC m=+0.047737789 container create 57f888522fb887bfd004a15c53c2ed49b8820a9058e30d89173695c47950ddf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 00:02:12 np0005480824 systemd[1]: Started libpod-conmon-57f888522fb887bfd004a15c53c2ed49b8820a9058e30d89173695c47950ddf3.scope.
Oct 11 00:02:12 np0005480824 podman[298894]: 2025-10-11 04:02:11.990508665 +0000 UTC m=+0.023869780 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 00:02:12 np0005480824 systemd[1]: Started libcrun container.
Oct 11 00:02:12 np0005480824 podman[298894]: 2025-10-11 04:02:12.133709499 +0000 UTC m=+0.167070614 container init 57f888522fb887bfd004a15c53c2ed49b8820a9058e30d89173695c47950ddf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 00:02:12 np0005480824 podman[298894]: 2025-10-11 04:02:12.145733006 +0000 UTC m=+0.179094131 container start 57f888522fb887bfd004a15c53c2ed49b8820a9058e30d89173695c47950ddf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bohr, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 00:02:12 np0005480824 fervent_bohr[298910]: 167 167
Oct 11 00:02:12 np0005480824 systemd[1]: libpod-57f888522fb887bfd004a15c53c2ed49b8820a9058e30d89173695c47950ddf3.scope: Deactivated successfully.
Oct 11 00:02:12 np0005480824 podman[298894]: 2025-10-11 04:02:12.154116669 +0000 UTC m=+0.187477774 container attach 57f888522fb887bfd004a15c53c2ed49b8820a9058e30d89173695c47950ddf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bohr, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default)
Oct 11 00:02:12 np0005480824 podman[298894]: 2025-10-11 04:02:12.155576202 +0000 UTC m=+0.188937297 container died 57f888522fb887bfd004a15c53c2ed49b8820a9058e30d89173695c47950ddf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Oct 11 00:02:12 np0005480824 systemd[1]: var-lib-containers-storage-overlay-21102787b3b7488c9c718286cc6a2350170d6798f563a27955b2eeb9126538fc-merged.mount: Deactivated successfully.
Oct 11 00:02:12 np0005480824 podman[298894]: 2025-10-11 04:02:12.222629094 +0000 UTC m=+0.255990189 container remove 57f888522fb887bfd004a15c53c2ed49b8820a9058e30d89173695c47950ddf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bohr, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 00:02:12 np0005480824 systemd[1]: libpod-conmon-57f888522fb887bfd004a15c53c2ed49b8820a9058e30d89173695c47950ddf3.scope: Deactivated successfully.
Oct 11 00:02:12 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 00:02:12 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4267242037' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 00:02:12 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 00:02:12 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4267242037' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 00:02:12 np0005480824 podman[298936]: 2025-10-11 04:02:12.399445283 +0000 UTC m=+0.044201529 container create eeca7ed77b8e557ffb3c2df445e0454c625272d1225a8b8a3ce2a184810c403b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 00:02:12 np0005480824 systemd[1]: Started libpod-conmon-eeca7ed77b8e557ffb3c2df445e0454c625272d1225a8b8a3ce2a184810c403b.scope.
Oct 11 00:02:12 np0005480824 podman[298936]: 2025-10-11 04:02:12.377824455 +0000 UTC m=+0.022580741 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 00:02:12 np0005480824 systemd[1]: Started libcrun container.
Oct 11 00:02:12 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1c811187893b46b0e4b07cec45efce991f2e7e59080bd72dd9efcf450175b22/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 00:02:12 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1c811187893b46b0e4b07cec45efce991f2e7e59080bd72dd9efcf450175b22/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 00:02:12 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1c811187893b46b0e4b07cec45efce991f2e7e59080bd72dd9efcf450175b22/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 00:02:12 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1c811187893b46b0e4b07cec45efce991f2e7e59080bd72dd9efcf450175b22/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 00:02:12 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1c811187893b46b0e4b07cec45efce991f2e7e59080bd72dd9efcf450175b22/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 00:02:12 np0005480824 podman[298936]: 2025-10-11 04:02:12.501226693 +0000 UTC m=+0.145982979 container init eeca7ed77b8e557ffb3c2df445e0454c625272d1225a8b8a3ce2a184810c403b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_brahmagupta, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Oct 11 00:02:12 np0005480824 podman[298936]: 2025-10-11 04:02:12.511249435 +0000 UTC m=+0.156005681 container start eeca7ed77b8e557ffb3c2df445e0454c625272d1225a8b8a3ce2a184810c403b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_brahmagupta, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 00:02:12 np0005480824 podman[298936]: 2025-10-11 04:02:12.514817946 +0000 UTC m=+0.159574222 container attach eeca7ed77b8e557ffb3c2df445e0454c625272d1225a8b8a3ce2a184810c403b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_brahmagupta, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 11 00:02:12 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e453 do_prune osdmap full prune enabled
Oct 11 00:02:12 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e454 e454: 3 total, 3 up, 3 in
Oct 11 00:02:12 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e454: 3 total, 3 up, 3 in
Oct 11 00:02:12 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1727: 321 pgs: 4 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 308 active+clean; 202 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 154 KiB/s rd, 28 MiB/s wr, 222 op/s
Oct 11 00:02:13 np0005480824 heuristic_brahmagupta[298952]: --> passed data devices: 0 physical, 3 LVM
Oct 11 00:02:13 np0005480824 heuristic_brahmagupta[298952]: --> relative data size: 1.0
Oct 11 00:02:13 np0005480824 heuristic_brahmagupta[298952]: --> All data devices are unavailable
Oct 11 00:02:13 np0005480824 systemd[1]: libpod-eeca7ed77b8e557ffb3c2df445e0454c625272d1225a8b8a3ce2a184810c403b.scope: Deactivated successfully.
Oct 11 00:02:13 np0005480824 podman[298936]: 2025-10-11 04:02:13.67334056 +0000 UTC m=+1.318096816 container died eeca7ed77b8e557ffb3c2df445e0454c625272d1225a8b8a3ce2a184810c403b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_brahmagupta, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 11 00:02:13 np0005480824 systemd[1]: libpod-eeca7ed77b8e557ffb3c2df445e0454c625272d1225a8b8a3ce2a184810c403b.scope: Consumed 1.051s CPU time.
Oct 11 00:02:13 np0005480824 systemd[1]: var-lib-containers-storage-overlay-d1c811187893b46b0e4b07cec45efce991f2e7e59080bd72dd9efcf450175b22-merged.mount: Deactivated successfully.
Oct 11 00:02:13 np0005480824 podman[298936]: 2025-10-11 04:02:13.753760669 +0000 UTC m=+1.398516925 container remove eeca7ed77b8e557ffb3c2df445e0454c625272d1225a8b8a3ce2a184810c403b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 11 00:02:13 np0005480824 systemd[1]: libpod-conmon-eeca7ed77b8e557ffb3c2df445e0454c625272d1225a8b8a3ce2a184810c403b.scope: Deactivated successfully.
Oct 11 00:02:14 np0005480824 podman[299135]: 2025-10-11 04:02:14.409086076 +0000 UTC m=+0.043022291 container create 2bdc829ffdda8bca3089f0d44e3157787743dc979bafcb9f51116211c465d54a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_banach, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 11 00:02:14 np0005480824 systemd[1]: Started libpod-conmon-2bdc829ffdda8bca3089f0d44e3157787743dc979bafcb9f51116211c465d54a.scope.
Oct 11 00:02:14 np0005480824 systemd[1]: Started libcrun container.
Oct 11 00:02:14 np0005480824 podman[299135]: 2025-10-11 04:02:14.389059575 +0000 UTC m=+0.022995780 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 00:02:14 np0005480824 podman[299135]: 2025-10-11 04:02:14.500943539 +0000 UTC m=+0.134879734 container init 2bdc829ffdda8bca3089f0d44e3157787743dc979bafcb9f51116211c465d54a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 11 00:02:14 np0005480824 podman[299135]: 2025-10-11 04:02:14.512340091 +0000 UTC m=+0.146276296 container start 2bdc829ffdda8bca3089f0d44e3157787743dc979bafcb9f51116211c465d54a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_banach, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 00:02:14 np0005480824 podman[299135]: 2025-10-11 04:02:14.51534811 +0000 UTC m=+0.149284305 container attach 2bdc829ffdda8bca3089f0d44e3157787743dc979bafcb9f51116211c465d54a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 11 00:02:14 np0005480824 nova_compute[260089]: 2025-10-11 04:02:14.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:02:14 np0005480824 sleepy_banach[299151]: 167 167
Oct 11 00:02:14 np0005480824 systemd[1]: libpod-2bdc829ffdda8bca3089f0d44e3157787743dc979bafcb9f51116211c465d54a.scope: Deactivated successfully.
Oct 11 00:02:14 np0005480824 podman[299135]: 2025-10-11 04:02:14.520726754 +0000 UTC m=+0.154662949 container died 2bdc829ffdda8bca3089f0d44e3157787743dc979bafcb9f51116211c465d54a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_banach, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 00:02:14 np0005480824 systemd[1]: var-lib-containers-storage-overlay-abc192181608713dcac70a99548d3352da8dc32cec6baa6001756b88319ef9fe-merged.mount: Deactivated successfully.
Oct 11 00:02:14 np0005480824 podman[299135]: 2025-10-11 04:02:14.560220023 +0000 UTC m=+0.194156228 container remove 2bdc829ffdda8bca3089f0d44e3157787743dc979bafcb9f51116211c465d54a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_banach, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 00:02:14 np0005480824 nova_compute[260089]: 2025-10-11 04:02:14.564 2 DEBUG oslo_concurrency.lockutils [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Acquiring lock "00f85c39-8b23-4556-9fbf-d806c690135c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:02:14 np0005480824 nova_compute[260089]: 2025-10-11 04:02:14.565 2 DEBUG oslo_concurrency.lockutils [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Lock "00f85c39-8b23-4556-9fbf-d806c690135c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:02:14 np0005480824 systemd[1]: libpod-conmon-2bdc829ffdda8bca3089f0d44e3157787743dc979bafcb9f51116211c465d54a.scope: Deactivated successfully.
Oct 11 00:02:14 np0005480824 nova_compute[260089]: 2025-10-11 04:02:14.578 2 DEBUG nova.compute.manager [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 00:02:14 np0005480824 nova_compute[260089]: 2025-10-11 04:02:14.666 2 DEBUG oslo_concurrency.lockutils [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:02:14 np0005480824 nova_compute[260089]: 2025-10-11 04:02:14.666 2 DEBUG oslo_concurrency.lockutils [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:02:14 np0005480824 nova_compute[260089]: 2025-10-11 04:02:14.674 2 DEBUG nova.virt.hardware [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 00:02:14 np0005480824 nova_compute[260089]: 2025-10-11 04:02:14.675 2 INFO nova.compute.claims [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 00:02:14 np0005480824 podman[299174]: 2025-10-11 04:02:14.730795137 +0000 UTC m=+0.040211286 container create 8985dfd8a0661c1696d846871dfce5e6304779c9189a283fbf221bcd3aee8267 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_thompson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 00:02:14 np0005480824 nova_compute[260089]: 2025-10-11 04:02:14.769 2 DEBUG oslo_concurrency.processutils [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:02:14 np0005480824 systemd[1]: Started libpod-conmon-8985dfd8a0661c1696d846871dfce5e6304779c9189a283fbf221bcd3aee8267.scope.
Oct 11 00:02:14 np0005480824 systemd[1]: Started libcrun container.
Oct 11 00:02:14 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb84ed39d625a7d974df2ae75b768fafb866feed538f53ea842d580e1f70268c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 00:02:14 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb84ed39d625a7d974df2ae75b768fafb866feed538f53ea842d580e1f70268c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 00:02:14 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb84ed39d625a7d974df2ae75b768fafb866feed538f53ea842d580e1f70268c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 00:02:14 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb84ed39d625a7d974df2ae75b768fafb866feed538f53ea842d580e1f70268c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 00:02:14 np0005480824 podman[299174]: 2025-10-11 04:02:14.712556978 +0000 UTC m=+0.021973127 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 00:02:14 np0005480824 podman[299174]: 2025-10-11 04:02:14.821056543 +0000 UTC m=+0.130472692 container init 8985dfd8a0661c1696d846871dfce5e6304779c9189a283fbf221bcd3aee8267 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_thompson, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 11 00:02:14 np0005480824 podman[299174]: 2025-10-11 04:02:14.829217801 +0000 UTC m=+0.138633940 container start 8985dfd8a0661c1696d846871dfce5e6304779c9189a283fbf221bcd3aee8267 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_thompson, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 11 00:02:14 np0005480824 podman[299174]: 2025-10-11 04:02:14.837619854 +0000 UTC m=+0.147036033 container attach 8985dfd8a0661c1696d846871dfce5e6304779c9189a283fbf221bcd3aee8267 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_thompson, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 11 00:02:14 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e454 do_prune osdmap full prune enabled
Oct 11 00:02:14 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e455 e455: 3 total, 3 up, 3 in
Oct 11 00:02:14 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e455: 3 total, 3 up, 3 in
Oct 11 00:02:14 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1729: 321 pgs: 4 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 308 active+clean; 202 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 144 KiB/s rd, 26 MiB/s wr, 207 op/s
Oct 11 00:02:15 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 00:02:15 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3365890380' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 00:02:15 np0005480824 nova_compute[260089]: 2025-10-11 04:02:15.206 2 DEBUG oslo_concurrency.processutils [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:02:15 np0005480824 nova_compute[260089]: 2025-10-11 04:02:15.216 2 DEBUG nova.compute.provider_tree [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 00:02:15 np0005480824 nova_compute[260089]: 2025-10-11 04:02:15.233 2 DEBUG nova.scheduler.client.report [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 00:02:15 np0005480824 nova_compute[260089]: 2025-10-11 04:02:15.254 2 DEBUG oslo_concurrency.lockutils [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.587s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:02:15 np0005480824 nova_compute[260089]: 2025-10-11 04:02:15.254 2 DEBUG nova.compute.manager [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 00:02:15 np0005480824 nova_compute[260089]: 2025-10-11 04:02:15.306 2 DEBUG nova.compute.manager [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 00:02:15 np0005480824 nova_compute[260089]: 2025-10-11 04:02:15.307 2 DEBUG nova.network.neutron [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 00:02:15 np0005480824 nova_compute[260089]: 2025-10-11 04:02:15.328 2 INFO nova.virt.libvirt.driver [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 00:02:15 np0005480824 nova_compute[260089]: 2025-10-11 04:02:15.350 2 DEBUG nova.compute.manager [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 00:02:15 np0005480824 nova_compute[260089]: 2025-10-11 04:02:15.406 2 INFO nova.virt.block_device [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Booting with volume add0a15d-17c4-4d18-981c-95d26fc9243b at /dev/vda#033[00m
Oct 11 00:02:15 np0005480824 nova_compute[260089]: 2025-10-11 04:02:15.533 2 DEBUG nova.policy [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'eccc3f574d354840901d28dad2488bf4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0e73ded2f2ee46b4a7485c01ef1b73e9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 00:02:15 np0005480824 nova_compute[260089]: 2025-10-11 04:02:15.556 2 DEBUG os_brick.utils [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct 11 00:02:15 np0005480824 nova_compute[260089]: 2025-10-11 04:02:15.557 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:02:15 np0005480824 nova_compute[260089]: 2025-10-11 04:02:15.572 676 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:02:15 np0005480824 nova_compute[260089]: 2025-10-11 04:02:15.573 676 DEBUG oslo.privsep.daemon [-] privsep: reply[8cb5ff39-c3d2-4056-9677-796410325d28]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:02:15 np0005480824 nova_compute[260089]: 2025-10-11 04:02:15.576 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:02:15 np0005480824 nova_compute[260089]: 2025-10-11 04:02:15.586 676 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:02:15 np0005480824 nova_compute[260089]: 2025-10-11 04:02:15.586 676 DEBUG oslo.privsep.daemon [-] privsep: reply[f4f29c6c-f7f6-48ac-81d3-b9bbeca2ccdf]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d5d671ddab5a', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:02:15 np0005480824 nova_compute[260089]: 2025-10-11 04:02:15.587 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:02:15 np0005480824 nova_compute[260089]: 2025-10-11 04:02:15.597 676 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:02:15 np0005480824 nova_compute[260089]: 2025-10-11 04:02:15.597 676 DEBUG oslo.privsep.daemon [-] privsep: reply[f5ba38e8-e00d-47dd-8eb4-977ddfa9ee67]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:02:15 np0005480824 nova_compute[260089]: 2025-10-11 04:02:15.598 676 DEBUG oslo.privsep.daemon [-] privsep: reply[8046a826-742d-4cd4-ac3f-c96c26cbf211]: (4, 'fb3a2fb1-9efa-43f0-a057-bf422ac6b8d7') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:02:15 np0005480824 nova_compute[260089]: 2025-10-11 04:02:15.599 2 DEBUG oslo_concurrency.processutils [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]: {
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:    "0": [
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:        {
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:            "devices": [
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:                "/dev/loop3"
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:            ],
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:            "lv_name": "ceph_lv0",
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:            "lv_size": "21470642176",
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0d82ce-20ea-470d-959e-f67202028a60,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:            "lv_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:            "name": "ceph_lv0",
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:            "tags": {
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:                "ceph.block_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:                "ceph.cephx_lockbox_secret": "",
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:                "ceph.cluster_name": "ceph",
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:                "ceph.crush_device_class": "",
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:                "ceph.encrypted": "0",
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:                "ceph.osd_fsid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:                "ceph.osd_id": "0",
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:                "ceph.type": "block",
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:                "ceph.vdo": "0"
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:            },
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:            "type": "block",
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:            "vg_name": "ceph_vg0"
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:        }
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:    ],
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:    "1": [
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:        {
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:            "devices": [
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:                "/dev/loop4"
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:            ],
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:            "lv_name": "ceph_lv1",
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:            "lv_size": "21470642176",
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6875119e-c210-4ad1-aca9-6a8084a5ecc8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:            "lv_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:            "name": "ceph_lv1",
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:            "tags": {
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:                "ceph.block_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:                "ceph.cephx_lockbox_secret": "",
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:                "ceph.cluster_name": "ceph",
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:                "ceph.crush_device_class": "",
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:                "ceph.encrypted": "0",
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:                "ceph.osd_fsid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:                "ceph.osd_id": "1",
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:                "ceph.type": "block",
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:                "ceph.vdo": "0"
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:            },
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:            "type": "block",
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:            "vg_name": "ceph_vg1"
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:        }
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:    ],
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:    "2": [
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:        {
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:            "devices": [
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:                "/dev/loop5"
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:            ],
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:            "lv_name": "ceph_lv2",
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:            "lv_size": "21470642176",
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e86945e8-6909-4584-9098-cee0dfe9add4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:            "lv_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:            "name": "ceph_lv2",
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:            "tags": {
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:                "ceph.block_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:                "ceph.cephx_lockbox_secret": "",
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:                "ceph.cluster_name": "ceph",
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:                "ceph.crush_device_class": "",
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:                "ceph.encrypted": "0",
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:                "ceph.osd_fsid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:                "ceph.osd_id": "2",
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:                "ceph.type": "block",
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:                "ceph.vdo": "0"
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:            },
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:            "type": "block",
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:            "vg_name": "ceph_vg2"
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:        }
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]:    ]
Oct 11 00:02:15 np0005480824 sharp_thompson[299191]: }
Oct 11 00:02:15 np0005480824 nova_compute[260089]: 2025-10-11 04:02:15.622 2 DEBUG oslo_concurrency.processutils [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] CMD "nvme version" returned: 0 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:02:15 np0005480824 nova_compute[260089]: 2025-10-11 04:02:15.625 2 DEBUG os_brick.initiator.connectors.lightos [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct 11 00:02:15 np0005480824 nova_compute[260089]: 2025-10-11 04:02:15.625 2 DEBUG os_brick.initiator.connectors.lightos [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct 11 00:02:15 np0005480824 nova_compute[260089]: 2025-10-11 04:02:15.625 2 DEBUG os_brick.initiator.connectors.lightos [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct 11 00:02:15 np0005480824 nova_compute[260089]: 2025-10-11 04:02:15.626 2 DEBUG os_brick.utils [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] <== get_connector_properties: return (69ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d5d671ddab5a', 'do_local_attach': False, 'nvme_hostid': '83042a20-0f72-4c47-8453-e72ead378624', 'system uuid': 'fb3a2fb1-9efa-43f0-a057-bf422ac6b8d7', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct 11 00:02:15 np0005480824 nova_compute[260089]: 2025-10-11 04:02:15.626 2 DEBUG nova.virt.block_device [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Updating existing volume attachment record: 86d94988-63ec-488c-9c8b-a438711d75ab _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct 11 00:02:15 np0005480824 systemd[1]: libpod-8985dfd8a0661c1696d846871dfce5e6304779c9189a283fbf221bcd3aee8267.scope: Deactivated successfully.
Oct 11 00:02:15 np0005480824 podman[299174]: 2025-10-11 04:02:15.633807572 +0000 UTC m=+0.943223711 container died 8985dfd8a0661c1696d846871dfce5e6304779c9189a283fbf221bcd3aee8267 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 00:02:15 np0005480824 systemd[1]: var-lib-containers-storage-overlay-eb84ed39d625a7d974df2ae75b768fafb866feed538f53ea842d580e1f70268c-merged.mount: Deactivated successfully.
Oct 11 00:02:15 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e455 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:02:15 np0005480824 podman[299174]: 2025-10-11 04:02:15.68895457 +0000 UTC m=+0.998370719 container remove 8985dfd8a0661c1696d846871dfce5e6304779c9189a283fbf221bcd3aee8267 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_thompson, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 00:02:15 np0005480824 systemd[1]: libpod-conmon-8985dfd8a0661c1696d846871dfce5e6304779c9189a283fbf221bcd3aee8267.scope: Deactivated successfully.
Oct 11 00:02:15 np0005480824 nova_compute[260089]: 2025-10-11 04:02:15.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:02:16 np0005480824 podman[299380]: 2025-10-11 04:02:16.397425069 +0000 UTC m=+0.047598786 container create d067f73970bd652c315fa6f94a7d40e396fdcbba4b9273f11cf73762bae95de5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_colden, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 11 00:02:16 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 00:02:16 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/907609713' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 00:02:16 np0005480824 systemd[1]: Started libpod-conmon-d067f73970bd652c315fa6f94a7d40e396fdcbba4b9273f11cf73762bae95de5.scope.
Oct 11 00:02:16 np0005480824 systemd[1]: Started libcrun container.
Oct 11 00:02:16 np0005480824 podman[299380]: 2025-10-11 04:02:16.38309089 +0000 UTC m=+0.033264627 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 00:02:16 np0005480824 podman[299380]: 2025-10-11 04:02:16.478528945 +0000 UTC m=+0.128702732 container init d067f73970bd652c315fa6f94a7d40e396fdcbba4b9273f11cf73762bae95de5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 11 00:02:16 np0005480824 podman[299380]: 2025-10-11 04:02:16.484343389 +0000 UTC m=+0.134517146 container start d067f73970bd652c315fa6f94a7d40e396fdcbba4b9273f11cf73762bae95de5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_colden, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 00:02:16 np0005480824 podman[299380]: 2025-10-11 04:02:16.488006373 +0000 UTC m=+0.138180180 container attach d067f73970bd652c315fa6f94a7d40e396fdcbba4b9273f11cf73762bae95de5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_colden, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 11 00:02:16 np0005480824 nova_compute[260089]: 2025-10-11 04:02:16.488 2 DEBUG nova.network.neutron [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Successfully created port: ec74708b-8329-48e4-b5f9-09af33e086f9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 00:02:16 np0005480824 gracious_colden[299397]: 167 167
Oct 11 00:02:16 np0005480824 systemd[1]: libpod-d067f73970bd652c315fa6f94a7d40e396fdcbba4b9273f11cf73762bae95de5.scope: Deactivated successfully.
Oct 11 00:02:16 np0005480824 podman[299380]: 2025-10-11 04:02:16.491252678 +0000 UTC m=+0.141426425 container died d067f73970bd652c315fa6f94a7d40e396fdcbba4b9273f11cf73762bae95de5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_colden, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 11 00:02:16 np0005480824 systemd[1]: var-lib-containers-storage-overlay-2104df80c0fb3bbd2c46972d604ad410c2ce1e292b065fff9c8d55ad399dfa31-merged.mount: Deactivated successfully.
Oct 11 00:02:16 np0005480824 podman[299380]: 2025-10-11 04:02:16.529302913 +0000 UTC m=+0.179476630 container remove d067f73970bd652c315fa6f94a7d40e396fdcbba4b9273f11cf73762bae95de5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_colden, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 11 00:02:16 np0005480824 systemd[1]: libpod-conmon-d067f73970bd652c315fa6f94a7d40e396fdcbba4b9273f11cf73762bae95de5.scope: Deactivated successfully.
Oct 11 00:02:16 np0005480824 podman[299419]: 2025-10-11 04:02:16.72085846 +0000 UTC m=+0.051711161 container create d6bc1cebb406200cb1c889f4cac6f6d2656422e37f135bc15f755ecaecf410ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lamport, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 11 00:02:16 np0005480824 nova_compute[260089]: 2025-10-11 04:02:16.734 2 DEBUG nova.compute.manager [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 00:02:16 np0005480824 nova_compute[260089]: 2025-10-11 04:02:16.737 2 DEBUG nova.virt.libvirt.driver [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 00:02:16 np0005480824 nova_compute[260089]: 2025-10-11 04:02:16.738 2 INFO nova.virt.libvirt.driver [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Creating image(s)#033[00m
Oct 11 00:02:16 np0005480824 nova_compute[260089]: 2025-10-11 04:02:16.739 2 DEBUG nova.virt.libvirt.driver [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Oct 11 00:02:16 np0005480824 nova_compute[260089]: 2025-10-11 04:02:16.740 2 DEBUG nova.virt.libvirt.driver [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Ensure instance console log exists: /var/lib/nova/instances/00f85c39-8b23-4556-9fbf-d806c690135c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 00:02:16 np0005480824 nova_compute[260089]: 2025-10-11 04:02:16.741 2 DEBUG oslo_concurrency.lockutils [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:02:16 np0005480824 nova_compute[260089]: 2025-10-11 04:02:16.743 2 DEBUG oslo_concurrency.lockutils [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:02:16 np0005480824 nova_compute[260089]: 2025-10-11 04:02:16.744 2 DEBUG oslo_concurrency.lockutils [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:02:16 np0005480824 systemd[1]: Started libpod-conmon-d6bc1cebb406200cb1c889f4cac6f6d2656422e37f135bc15f755ecaecf410ef.scope.
Oct 11 00:02:16 np0005480824 podman[299419]: 2025-10-11 04:02:16.698527986 +0000 UTC m=+0.029380717 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 00:02:16 np0005480824 systemd[1]: Started libcrun container.
Oct 11 00:02:16 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1aa6be53c4ed47cdc41ebe56847956ec9046e0c87d39cb879eb8f79e200653a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 00:02:16 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1aa6be53c4ed47cdc41ebe56847956ec9046e0c87d39cb879eb8f79e200653a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 00:02:16 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1aa6be53c4ed47cdc41ebe56847956ec9046e0c87d39cb879eb8f79e200653a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 00:02:16 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1aa6be53c4ed47cdc41ebe56847956ec9046e0c87d39cb879eb8f79e200653a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 00:02:16 np0005480824 podman[299419]: 2025-10-11 04:02:16.825130819 +0000 UTC m=+0.155983550 container init d6bc1cebb406200cb1c889f4cac6f6d2656422e37f135bc15f755ecaecf410ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lamport, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 00:02:16 np0005480824 podman[299419]: 2025-10-11 04:02:16.83299209 +0000 UTC m=+0.163844781 container start d6bc1cebb406200cb1c889f4cac6f6d2656422e37f135bc15f755ecaecf410ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lamport, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 00:02:16 np0005480824 podman[299419]: 2025-10-11 04:02:16.841134157 +0000 UTC m=+0.171986928 container attach d6bc1cebb406200cb1c889f4cac6f6d2656422e37f135bc15f755ecaecf410ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 00:02:16 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e455 do_prune osdmap full prune enabled
Oct 11 00:02:16 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e456 e456: 3 total, 3 up, 3 in
Oct 11 00:02:16 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e456: 3 total, 3 up, 3 in
Oct 11 00:02:16 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1731: 321 pgs: 2 active+clean+snaptrim, 2 active+clean+snaptrim_wait, 317 active+clean; 202 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 148 KiB/s rd, 22 MiB/s wr, 212 op/s
Oct 11 00:02:17 np0005480824 nova_compute[260089]: 2025-10-11 04:02:17.093 2 DEBUG nova.network.neutron [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Successfully updated port: ec74708b-8329-48e4-b5f9-09af33e086f9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 00:02:17 np0005480824 nova_compute[260089]: 2025-10-11 04:02:17.111 2 DEBUG oslo_concurrency.lockutils [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Acquiring lock "refresh_cache-00f85c39-8b23-4556-9fbf-d806c690135c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 00:02:17 np0005480824 nova_compute[260089]: 2025-10-11 04:02:17.112 2 DEBUG oslo_concurrency.lockutils [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Acquired lock "refresh_cache-00f85c39-8b23-4556-9fbf-d806c690135c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 00:02:17 np0005480824 nova_compute[260089]: 2025-10-11 04:02:17.113 2 DEBUG nova.network.neutron [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 00:02:17 np0005480824 nova_compute[260089]: 2025-10-11 04:02:17.202 2 DEBUG nova.compute.manager [req-83e02d4e-9296-4991-bdef-fab94c7cd4d8 req-83e9ac43-0230-4a70-87ab-3c0b2a92398d 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Received event network-changed-ec74708b-8329-48e4-b5f9-09af33e086f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 00:02:17 np0005480824 nova_compute[260089]: 2025-10-11 04:02:17.203 2 DEBUG nova.compute.manager [req-83e02d4e-9296-4991-bdef-fab94c7cd4d8 req-83e9ac43-0230-4a70-87ab-3c0b2a92398d 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Refreshing instance network info cache due to event network-changed-ec74708b-8329-48e4-b5f9-09af33e086f9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 00:02:17 np0005480824 nova_compute[260089]: 2025-10-11 04:02:17.204 2 DEBUG oslo_concurrency.lockutils [req-83e02d4e-9296-4991-bdef-fab94c7cd4d8 req-83e9ac43-0230-4a70-87ab-3c0b2a92398d 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "refresh_cache-00f85c39-8b23-4556-9fbf-d806c690135c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 00:02:17 np0005480824 nova_compute[260089]: 2025-10-11 04:02:17.308 2 DEBUG nova.network.neutron [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 00:02:17 np0005480824 hopeful_lamport[299435]: {
Oct 11 00:02:17 np0005480824 hopeful_lamport[299435]:    "1d0d82ce-20ea-470d-959e-f67202028a60": {
Oct 11 00:02:17 np0005480824 hopeful_lamport[299435]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 11 00:02:17 np0005480824 hopeful_lamport[299435]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 00:02:17 np0005480824 hopeful_lamport[299435]:        "osd_id": 0,
Oct 11 00:02:17 np0005480824 hopeful_lamport[299435]:        "osd_uuid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 11 00:02:17 np0005480824 hopeful_lamport[299435]:        "type": "bluestore"
Oct 11 00:02:17 np0005480824 hopeful_lamport[299435]:    },
Oct 11 00:02:17 np0005480824 hopeful_lamport[299435]:    "6875119e-c210-4ad1-aca9-6a8084a5ecc8": {
Oct 11 00:02:17 np0005480824 hopeful_lamport[299435]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 11 00:02:17 np0005480824 hopeful_lamport[299435]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 00:02:17 np0005480824 hopeful_lamport[299435]:        "osd_id": 1,
Oct 11 00:02:17 np0005480824 hopeful_lamport[299435]:        "osd_uuid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 11 00:02:17 np0005480824 hopeful_lamport[299435]:        "type": "bluestore"
Oct 11 00:02:17 np0005480824 hopeful_lamport[299435]:    },
Oct 11 00:02:17 np0005480824 hopeful_lamport[299435]:    "e86945e8-6909-4584-9098-cee0dfe9add4": {
Oct 11 00:02:17 np0005480824 hopeful_lamport[299435]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 11 00:02:17 np0005480824 hopeful_lamport[299435]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 00:02:17 np0005480824 hopeful_lamport[299435]:        "osd_id": 2,
Oct 11 00:02:17 np0005480824 hopeful_lamport[299435]:        "osd_uuid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 11 00:02:17 np0005480824 hopeful_lamport[299435]:        "type": "bluestore"
Oct 11 00:02:17 np0005480824 hopeful_lamport[299435]:    }
Oct 11 00:02:17 np0005480824 hopeful_lamport[299435]: }
Oct 11 00:02:17 np0005480824 systemd[1]: libpod-d6bc1cebb406200cb1c889f4cac6f6d2656422e37f135bc15f755ecaecf410ef.scope: Deactivated successfully.
Oct 11 00:02:17 np0005480824 podman[299419]: 2025-10-11 04:02:17.856919697 +0000 UTC m=+1.187772388 container died d6bc1cebb406200cb1c889f4cac6f6d2656422e37f135bc15f755ecaecf410ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 00:02:17 np0005480824 systemd[1]: libpod-d6bc1cebb406200cb1c889f4cac6f6d2656422e37f135bc15f755ecaecf410ef.scope: Consumed 1.019s CPU time.
Oct 11 00:02:17 np0005480824 systemd[1]: var-lib-containers-storage-overlay-c1aa6be53c4ed47cdc41ebe56847956ec9046e0c87d39cb879eb8f79e200653a-merged.mount: Deactivated successfully.
Oct 11 00:02:17 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e456 do_prune osdmap full prune enabled
Oct 11 00:02:17 np0005480824 podman[299419]: 2025-10-11 04:02:17.921334278 +0000 UTC m=+1.252186979 container remove d6bc1cebb406200cb1c889f4cac6f6d2656422e37f135bc15f755ecaecf410ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lamport, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 11 00:02:17 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e457 e457: 3 total, 3 up, 3 in
Oct 11 00:02:17 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e457: 3 total, 3 up, 3 in
Oct 11 00:02:17 np0005480824 systemd[1]: libpod-conmon-d6bc1cebb406200cb1c889f4cac6f6d2656422e37f135bc15f755ecaecf410ef.scope: Deactivated successfully.
Oct 11 00:02:17 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 00:02:17 np0005480824 nova_compute[260089]: 2025-10-11 04:02:17.957 2 DEBUG nova.network.neutron [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Updating instance_info_cache with network_info: [{"id": "ec74708b-8329-48e4-b5f9-09af33e086f9", "address": "fa:16:3e:71:61:be", "network": {"id": "15a62ee0-8e34-4e49-990e-246b4ef9e0c6", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1498494916-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0e73ded2f2ee46b4a7485c01ef1b73e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec74708b-83", "ovs_interfaceid": "ec74708b-8329-48e4-b5f9-09af33e086f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 00:02:17 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 11 00:02:17 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 00:02:17 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 11 00:02:17 np0005480824 nova_compute[260089]: 2025-10-11 04:02:17.982 2 DEBUG oslo_concurrency.lockutils [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Releasing lock "refresh_cache-00f85c39-8b23-4556-9fbf-d806c690135c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 00:02:17 np0005480824 nova_compute[260089]: 2025-10-11 04:02:17.983 2 DEBUG nova.compute.manager [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Instance network_info: |[{"id": "ec74708b-8329-48e4-b5f9-09af33e086f9", "address": "fa:16:3e:71:61:be", "network": {"id": "15a62ee0-8e34-4e49-990e-246b4ef9e0c6", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1498494916-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0e73ded2f2ee46b4a7485c01ef1b73e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec74708b-83", "ovs_interfaceid": "ec74708b-8329-48e4-b5f9-09af33e086f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 00:02:17 np0005480824 nova_compute[260089]: 2025-10-11 04:02:17.984 2 DEBUG oslo_concurrency.lockutils [req-83e02d4e-9296-4991-bdef-fab94c7cd4d8 req-83e9ac43-0230-4a70-87ab-3c0b2a92398d 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquired lock "refresh_cache-00f85c39-8b23-4556-9fbf-d806c690135c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 00:02:17 np0005480824 nova_compute[260089]: 2025-10-11 04:02:17.984 2 DEBUG nova.network.neutron [req-83e02d4e-9296-4991-bdef-fab94c7cd4d8 req-83e9ac43-0230-4a70-87ab-3c0b2a92398d 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Refreshing network info cache for port ec74708b-8329-48e4-b5f9-09af33e086f9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 00:02:17 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 51fd3642-f625-40ec-b7d1-baf0a98c4db5 does not exist
Oct 11 00:02:17 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev efe864dd-8fa0-4a58-a1d4-afb747f9e43a does not exist
Oct 11 00:02:17 np0005480824 nova_compute[260089]: 2025-10-11 04:02:17.990 2 DEBUG nova.virt.libvirt.driver [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Start _get_guest_xml network_info=[{"id": "ec74708b-8329-48e4-b5f9-09af33e086f9", "address": "fa:16:3e:71:61:be", "network": {"id": "15a62ee0-8e34-4e49-990e-246b4ef9e0c6", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1498494916-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0e73ded2f2ee46b4a7485c01ef1b73e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec74708b-83", "ovs_interfaceid": "ec74708b-8329-48e4-b5f9-09af33e086f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'attachment_id': '86d94988-63ec-488c-9c8b-a438711d75ab', 'mount_device': '/dev/vda', 'delete_on_termination': False, 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-add0a15d-17c4-4d18-981c-95d26fc9243b', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'add0a15d-17c4-4d18-981c-95d26fc9243b', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '00f85c39-8b23-4556-9fbf-d806c690135c', 'attached_at': '', 'detached_at': '', 'volume_id': 'add0a15d-17c4-4d18-981c-95d26fc9243b', 'serial': 'add0a15d-17c4-4d18-981c-95d26fc9243b'}, 'device_type': 'disk', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 00:02:17 np0005480824 nova_compute[260089]: 2025-10-11 04:02:17.998 2 WARNING nova.virt.libvirt.driver [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 00:02:18 np0005480824 nova_compute[260089]: 2025-10-11 04:02:18.004 2 DEBUG nova.virt.libvirt.host [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 00:02:18 np0005480824 podman[299468]: 2025-10-11 04:02:18.005885423 +0000 UTC m=+0.118716142 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_controller)
Oct 11 00:02:18 np0005480824 nova_compute[260089]: 2025-10-11 04:02:18.004 2 DEBUG nova.virt.libvirt.host [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 00:02:18 np0005480824 nova_compute[260089]: 2025-10-11 04:02:18.016 2 DEBUG nova.virt.libvirt.host [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 00:02:18 np0005480824 nova_compute[260089]: 2025-10-11 04:02:18.017 2 DEBUG nova.virt.libvirt.host [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 00:02:18 np0005480824 nova_compute[260089]: 2025-10-11 04:02:18.018 2 DEBUG nova.virt.libvirt.driver [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 00:02:18 np0005480824 nova_compute[260089]: 2025-10-11 04:02:18.018 2 DEBUG nova.virt.hardware [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T03:44:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6707ecae-2ae2-4c2d-86dc-409bac38f6a5',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 00:02:18 np0005480824 nova_compute[260089]: 2025-10-11 04:02:18.018 2 DEBUG nova.virt.hardware [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 00:02:18 np0005480824 nova_compute[260089]: 2025-10-11 04:02:18.019 2 DEBUG nova.virt.hardware [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 00:02:18 np0005480824 nova_compute[260089]: 2025-10-11 04:02:18.019 2 DEBUG nova.virt.hardware [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 00:02:18 np0005480824 nova_compute[260089]: 2025-10-11 04:02:18.019 2 DEBUG nova.virt.hardware [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 00:02:18 np0005480824 nova_compute[260089]: 2025-10-11 04:02:18.020 2 DEBUG nova.virt.hardware [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 00:02:18 np0005480824 nova_compute[260089]: 2025-10-11 04:02:18.020 2 DEBUG nova.virt.hardware [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 00:02:18 np0005480824 nova_compute[260089]: 2025-10-11 04:02:18.020 2 DEBUG nova.virt.hardware [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 00:02:18 np0005480824 nova_compute[260089]: 2025-10-11 04:02:18.021 2 DEBUG nova.virt.hardware [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 00:02:18 np0005480824 nova_compute[260089]: 2025-10-11 04:02:18.021 2 DEBUG nova.virt.hardware [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 00:02:18 np0005480824 nova_compute[260089]: 2025-10-11 04:02:18.021 2 DEBUG nova.virt.hardware [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 00:02:18 np0005480824 nova_compute[260089]: 2025-10-11 04:02:18.051 2 DEBUG nova.storage.rbd_utils [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] rbd image 00f85c39-8b23-4556-9fbf-d806c690135c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 00:02:18 np0005480824 nova_compute[260089]: 2025-10-11 04:02:18.058 2 DEBUG oslo_concurrency.processutils [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:02:18 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 00:02:18 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3185820982' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 00:02:18 np0005480824 nova_compute[260089]: 2025-10-11 04:02:18.485 2 DEBUG oslo_concurrency.processutils [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:02:18 np0005480824 nova_compute[260089]: 2025-10-11 04:02:18.606 2 DEBUG os_brick.encryptors [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Using volume encryption metadata '{'encryption_key_id': '21378181-dcbc-49eb-92af-642bc5ab1b97', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-add0a15d-17c4-4d18-981c-95d26fc9243b', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'add0a15d-17c4-4d18-981c-95d26fc9243b', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '00f85c39-8b23-4556-9fbf-d806c690135c', 'attached_at': '', 'detached_at': '', 'volume_id': 'add0a15d-17c4-4d18-981c-95d26fc9243b', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Oct 11 00:02:18 np0005480824 nova_compute[260089]: 2025-10-11 04:02:18.609 2 DEBUG barbicanclient.client [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163#033[00m
Oct 11 00:02:18 np0005480824 nova_compute[260089]: 2025-10-11 04:02:18.627 2 DEBUG barbicanclient.v1.secrets [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/21378181-dcbc-49eb-92af-642bc5ab1b97 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514#033[00m
Oct 11 00:02:18 np0005480824 nova_compute[260089]: 2025-10-11 04:02:18.628 2 INFO barbicanclient.base [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Calculated Secrets uuid ref: secrets/21378181-dcbc-49eb-92af-642bc5ab1b97#033[00m
Oct 11 00:02:18 np0005480824 nova_compute[260089]: 2025-10-11 04:02:18.649 2 DEBUG barbicanclient.client [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:02:18 np0005480824 nova_compute[260089]: 2025-10-11 04:02:18.649 2 INFO barbicanclient.base [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Calculated Secrets uuid ref: secrets/21378181-dcbc-49eb-92af-642bc5ab1b97#033[00m
Oct 11 00:02:18 np0005480824 nova_compute[260089]: 2025-10-11 04:02:18.675 2 DEBUG barbicanclient.client [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:02:18 np0005480824 nova_compute[260089]: 2025-10-11 04:02:18.676 2 INFO barbicanclient.base [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Calculated Secrets uuid ref: secrets/21378181-dcbc-49eb-92af-642bc5ab1b97#033[00m
Oct 11 00:02:18 np0005480824 nova_compute[260089]: 2025-10-11 04:02:18.700 2 DEBUG barbicanclient.client [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:02:18 np0005480824 nova_compute[260089]: 2025-10-11 04:02:18.701 2 INFO barbicanclient.base [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Calculated Secrets uuid ref: secrets/21378181-dcbc-49eb-92af-642bc5ab1b97#033[00m
Oct 11 00:02:18 np0005480824 nova_compute[260089]: 2025-10-11 04:02:18.729 2 DEBUG barbicanclient.client [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:02:18 np0005480824 nova_compute[260089]: 2025-10-11 04:02:18.730 2 INFO barbicanclient.base [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Calculated Secrets uuid ref: secrets/21378181-dcbc-49eb-92af-642bc5ab1b97#033[00m
Oct 11 00:02:18 np0005480824 nova_compute[260089]: 2025-10-11 04:02:18.758 2 DEBUG barbicanclient.client [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:02:18 np0005480824 nova_compute[260089]: 2025-10-11 04:02:18.759 2 INFO barbicanclient.base [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Calculated Secrets uuid ref: secrets/21378181-dcbc-49eb-92af-642bc5ab1b97#033[00m
Oct 11 00:02:18 np0005480824 nova_compute[260089]: 2025-10-11 04:02:18.790 2 DEBUG barbicanclient.client [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:02:18 np0005480824 nova_compute[260089]: 2025-10-11 04:02:18.791 2 INFO barbicanclient.base [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Calculated Secrets uuid ref: secrets/21378181-dcbc-49eb-92af-642bc5ab1b97#033[00m
Oct 11 00:02:18 np0005480824 nova_compute[260089]: 2025-10-11 04:02:18.815 2 DEBUG barbicanclient.client [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:02:18 np0005480824 nova_compute[260089]: 2025-10-11 04:02:18.816 2 INFO barbicanclient.base [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Calculated Secrets uuid ref: secrets/21378181-dcbc-49eb-92af-642bc5ab1b97#033[00m
Oct 11 00:02:18 np0005480824 nova_compute[260089]: 2025-10-11 04:02:18.840 2 DEBUG barbicanclient.client [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:02:18 np0005480824 nova_compute[260089]: 2025-10-11 04:02:18.842 2 INFO barbicanclient.base [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Calculated Secrets uuid ref: secrets/21378181-dcbc-49eb-92af-642bc5ab1b97#033[00m
Oct 11 00:02:18 np0005480824 nova_compute[260089]: 2025-10-11 04:02:18.862 2 DEBUG barbicanclient.client [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:02:18 np0005480824 nova_compute[260089]: 2025-10-11 04:02:18.863 2 INFO barbicanclient.base [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Calculated Secrets uuid ref: secrets/21378181-dcbc-49eb-92af-642bc5ab1b97#033[00m
Oct 11 00:02:18 np0005480824 nova_compute[260089]: 2025-10-11 04:02:18.887 2 DEBUG barbicanclient.client [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:02:18 np0005480824 nova_compute[260089]: 2025-10-11 04:02:18.888 2 INFO barbicanclient.base [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Calculated Secrets uuid ref: secrets/21378181-dcbc-49eb-92af-642bc5ab1b97#033[00m
Oct 11 00:02:18 np0005480824 nova_compute[260089]: 2025-10-11 04:02:18.908 2 DEBUG barbicanclient.client [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:02:18 np0005480824 nova_compute[260089]: 2025-10-11 04:02:18.908 2 INFO barbicanclient.base [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Calculated Secrets uuid ref: secrets/21378181-dcbc-49eb-92af-642bc5ab1b97#033[00m
Oct 11 00:02:18 np0005480824 nova_compute[260089]: 2025-10-11 04:02:18.926 2 DEBUG barbicanclient.client [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:02:18 np0005480824 nova_compute[260089]: 2025-10-11 04:02:18.927 2 INFO barbicanclient.base [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Calculated Secrets uuid ref: secrets/21378181-dcbc-49eb-92af-642bc5ab1b97#033[00m
Oct 11 00:02:18 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 11 00:02:18 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 11 00:02:18 np0005480824 nova_compute[260089]: 2025-10-11 04:02:18.943 2 DEBUG barbicanclient.client [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:02:18 np0005480824 nova_compute[260089]: 2025-10-11 04:02:18.943 2 INFO barbicanclient.base [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Calculated Secrets uuid ref: secrets/21378181-dcbc-49eb-92af-642bc5ab1b97#033[00m
Oct 11 00:02:18 np0005480824 nova_compute[260089]: 2025-10-11 04:02:18.965 2 DEBUG barbicanclient.client [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:02:18 np0005480824 nova_compute[260089]: 2025-10-11 04:02:18.966 2 INFO barbicanclient.base [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Calculated Secrets uuid ref: secrets/21378181-dcbc-49eb-92af-642bc5ab1b97#033[00m
Oct 11 00:02:18 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1733: 321 pgs: 321 active+clean; 202 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 88 KiB/s rd, 3.8 KiB/s wr, 118 op/s
Oct 11 00:02:18 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e457 do_prune osdmap full prune enabled
Oct 11 00:02:18 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e458 e458: 3 total, 3 up, 3 in
Oct 11 00:02:18 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e458: 3 total, 3 up, 3 in
Oct 11 00:02:18 np0005480824 nova_compute[260089]: 2025-10-11 04:02:18.993 2 DEBUG barbicanclient.client [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:02:18 np0005480824 nova_compute[260089]: 2025-10-11 04:02:18.994 2 DEBUG nova.virt.libvirt.host [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Secret XML: <secret ephemeral="no" private="no">
Oct 11 00:02:18 np0005480824 nova_compute[260089]:  <usage type="volume">
Oct 11 00:02:18 np0005480824 nova_compute[260089]:    <volume>add0a15d-17c4-4d18-981c-95d26fc9243b</volume>
Oct 11 00:02:18 np0005480824 nova_compute[260089]:  </usage>
Oct 11 00:02:18 np0005480824 nova_compute[260089]: </secret>
Oct 11 00:02:18 np0005480824 nova_compute[260089]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131#033[00m
Oct 11 00:02:19 np0005480824 nova_compute[260089]: 2025-10-11 04:02:19.040 2 DEBUG nova.virt.libvirt.vif [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T04:02:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-513759231',display_name='tempest-TransferEncryptedVolumeTest-server-513759231',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-513759231',id=26,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDBxc/yNqNR+6hcns3uIK5nByp3y7/Z4QylmLciPhq6XKUnS3cE8WBipiedmC1KXbIrzQin+vEhjglj/GGa46YEcBJkij9tDpZ0nSurHoQgQFYWBhIhwD65l+TbXzKNAAg==',key_name='tempest-TransferEncryptedVolumeTest-1457663695',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0e73ded2f2ee46b4a7485c01ef1b73e9',ramdisk_id='',reservation_id='r-1rv0mwa2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1815435088',owner_user_name='tempest-TransferEncryptedVolumeTest-1815435088-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T04:02:15Z,user_data=None,user_id='eccc3f574d354840901d28dad2488bf4',uuid=00f85c39-8b23-4556-9fbf-d806c690135c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ec74708b-8329-48e4-b5f9-09af33e086f9", "address": "fa:16:3e:71:61:be", "network": {"id": "15a62ee0-8e34-4e49-990e-246b4ef9e0c6", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1498494916-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "0e73ded2f2ee46b4a7485c01ef1b73e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec74708b-83", "ovs_interfaceid": "ec74708b-8329-48e4-b5f9-09af33e086f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 00:02:19 np0005480824 nova_compute[260089]: 2025-10-11 04:02:19.041 2 DEBUG nova.network.os_vif_util [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Converting VIF {"id": "ec74708b-8329-48e4-b5f9-09af33e086f9", "address": "fa:16:3e:71:61:be", "network": {"id": "15a62ee0-8e34-4e49-990e-246b4ef9e0c6", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1498494916-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0e73ded2f2ee46b4a7485c01ef1b73e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec74708b-83", "ovs_interfaceid": "ec74708b-8329-48e4-b5f9-09af33e086f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 00:02:19 np0005480824 nova_compute[260089]: 2025-10-11 04:02:19.042 2 DEBUG nova.network.os_vif_util [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:71:61:be,bridge_name='br-int',has_traffic_filtering=True,id=ec74708b-8329-48e4-b5f9-09af33e086f9,network=Network(15a62ee0-8e34-4e49-990e-246b4ef9e0c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec74708b-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 00:02:19 np0005480824 nova_compute[260089]: 2025-10-11 04:02:19.044 2 DEBUG nova.objects.instance [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 00f85c39-8b23-4556-9fbf-d806c690135c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 00:02:19 np0005480824 nova_compute[260089]: 2025-10-11 04:02:19.061 2 DEBUG nova.virt.libvirt.driver [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] End _get_guest_xml xml=<domain type="kvm">
Oct 11 00:02:19 np0005480824 nova_compute[260089]:  <uuid>00f85c39-8b23-4556-9fbf-d806c690135c</uuid>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:  <name>instance-0000001a</name>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:  <memory>131072</memory>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:  <vcpu>1</vcpu>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:  <metadata>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 00:02:19 np0005480824 nova_compute[260089]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:      <nova:name>tempest-TransferEncryptedVolumeTest-server-513759231</nova:name>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:      <nova:creationTime>2025-10-11 04:02:17</nova:creationTime>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:      <nova:flavor name="m1.nano">
Oct 11 00:02:19 np0005480824 nova_compute[260089]:        <nova:memory>128</nova:memory>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:        <nova:disk>1</nova:disk>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:        <nova:swap>0</nova:swap>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:        <nova:vcpus>1</nova:vcpus>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:      </nova:flavor>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:      <nova:owner>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:        <nova:user uuid="eccc3f574d354840901d28dad2488bf4">tempest-TransferEncryptedVolumeTest-1815435088-project-member</nova:user>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:        <nova:project uuid="0e73ded2f2ee46b4a7485c01ef1b73e9">tempest-TransferEncryptedVolumeTest-1815435088</nova:project>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:      </nova:owner>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:      <nova:ports>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:        <nova:port uuid="ec74708b-8329-48e4-b5f9-09af33e086f9">
Oct 11 00:02:19 np0005480824 nova_compute[260089]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:        </nova:port>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:      </nova:ports>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:    </nova:instance>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:  </metadata>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:  <sysinfo type="smbios">
Oct 11 00:02:19 np0005480824 nova_compute[260089]:    <system>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:      <entry name="manufacturer">RDO</entry>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:      <entry name="product">OpenStack Compute</entry>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:      <entry name="serial">00f85c39-8b23-4556-9fbf-d806c690135c</entry>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:      <entry name="uuid">00f85c39-8b23-4556-9fbf-d806c690135c</entry>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:      <entry name="family">Virtual Machine</entry>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:    </system>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:  </sysinfo>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:  <os>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:    <boot dev="hd"/>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:    <smbios mode="sysinfo"/>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:  </os>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:  <features>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:    <acpi/>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:    <apic/>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:    <vmcoreinfo/>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:  </features>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:  <clock offset="utc">
Oct 11 00:02:19 np0005480824 nova_compute[260089]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:    <timer name="hpet" present="no"/>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:  </clock>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:  <cpu mode="host-model" match="exact">
Oct 11 00:02:19 np0005480824 nova_compute[260089]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:  </cpu>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:  <devices>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:    <disk type="network" device="cdrom">
Oct 11 00:02:19 np0005480824 nova_compute[260089]:      <driver type="raw" cache="none"/>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:      <source protocol="rbd" name="vms/00f85c39-8b23-4556-9fbf-d806c690135c_disk.config">
Oct 11 00:02:19 np0005480824 nova_compute[260089]:        <host name="192.168.122.100" port="6789"/>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:      </source>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:      <auth username="openstack">
Oct 11 00:02:19 np0005480824 nova_compute[260089]:        <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:      </auth>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:      <target dev="sda" bus="sata"/>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:    </disk>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:    <disk type="network" device="disk">
Oct 11 00:02:19 np0005480824 nova_compute[260089]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:      <source protocol="rbd" name="volumes/volume-add0a15d-17c4-4d18-981c-95d26fc9243b">
Oct 11 00:02:19 np0005480824 nova_compute[260089]:        <host name="192.168.122.100" port="6789"/>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:      </source>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:      <auth username="openstack">
Oct 11 00:02:19 np0005480824 nova_compute[260089]:        <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:      </auth>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:      <target dev="vda" bus="virtio"/>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:      <serial>add0a15d-17c4-4d18-981c-95d26fc9243b</serial>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:      <encryption format="luks">
Oct 11 00:02:19 np0005480824 nova_compute[260089]:        <secret type="passphrase" uuid="db91fa97-1db0-41d3-8383-1bb5ea74c480"/>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:      </encryption>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:    </disk>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:    <interface type="ethernet">
Oct 11 00:02:19 np0005480824 nova_compute[260089]:      <mac address="fa:16:3e:71:61:be"/>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:      <model type="virtio"/>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:      <mtu size="1442"/>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:      <target dev="tapec74708b-83"/>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:    </interface>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:    <serial type="pty">
Oct 11 00:02:19 np0005480824 nova_compute[260089]:      <log file="/var/lib/nova/instances/00f85c39-8b23-4556-9fbf-d806c690135c/console.log" append="off"/>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:    </serial>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:    <video>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:      <model type="virtio"/>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:    </video>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:    <input type="tablet" bus="usb"/>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:    <rng model="virtio">
Oct 11 00:02:19 np0005480824 nova_compute[260089]:      <backend model="random">/dev/urandom</backend>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:    </rng>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root"/>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:    <controller type="usb" index="0"/>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:    <memballoon model="virtio">
Oct 11 00:02:19 np0005480824 nova_compute[260089]:      <stats period="10"/>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:    </memballoon>
Oct 11 00:02:19 np0005480824 nova_compute[260089]:  </devices>
Oct 11 00:02:19 np0005480824 nova_compute[260089]: </domain>
Oct 11 00:02:19 np0005480824 nova_compute[260089]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 00:02:19 np0005480824 nova_compute[260089]: 2025-10-11 04:02:19.065 2 DEBUG nova.compute.manager [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Preparing to wait for external event network-vif-plugged-ec74708b-8329-48e4-b5f9-09af33e086f9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 00:02:19 np0005480824 nova_compute[260089]: 2025-10-11 04:02:19.066 2 DEBUG oslo_concurrency.lockutils [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Acquiring lock "00f85c39-8b23-4556-9fbf-d806c690135c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:02:19 np0005480824 nova_compute[260089]: 2025-10-11 04:02:19.066 2 DEBUG oslo_concurrency.lockutils [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Lock "00f85c39-8b23-4556-9fbf-d806c690135c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:02:19 np0005480824 nova_compute[260089]: 2025-10-11 04:02:19.067 2 DEBUG oslo_concurrency.lockutils [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Lock "00f85c39-8b23-4556-9fbf-d806c690135c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:02:19 np0005480824 nova_compute[260089]: 2025-10-11 04:02:19.068 2 DEBUG nova.virt.libvirt.vif [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T04:02:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-513759231',display_name='tempest-TransferEncryptedVolumeTest-server-513759231',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-513759231',id=26,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDBxc/yNqNR+6hcns3uIK5nByp3y7/Z4QylmLciPhq6XKUnS3cE8WBipiedmC1KXbIrzQin+vEhjglj/GGa46YEcBJkij9tDpZ0nSurHoQgQFYWBhIhwD65l+TbXzKNAAg==',key_name='tempest-TransferEncryptedVolumeTest-1457663695',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0e73ded2f2ee46b4a7485c01ef1b73e9',ramdisk_id='',reservation_id='r-1rv0mwa2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1815435088',owner_user_name='tempest-TransferEncryptedVolumeTest-1815435088-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T04:02:15Z,user_data=None,user_id='eccc3f574d354840901d28dad2488bf4',uuid=00f85c39-8b23-4556-9fbf-d806c690135c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ec74708b-8329-48e4-b5f9-09af33e086f9", "address": "fa:16:3e:71:61:be", "network": {"id": "15a62ee0-8e34-4e49-990e-246b4ef9e0c6", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1498494916-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "0e73ded2f2ee46b4a7485c01ef1b73e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec74708b-83", "ovs_interfaceid": "ec74708b-8329-48e4-b5f9-09af33e086f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 00:02:19 np0005480824 nova_compute[260089]: 2025-10-11 04:02:19.069 2 DEBUG nova.network.os_vif_util [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Converting VIF {"id": "ec74708b-8329-48e4-b5f9-09af33e086f9", "address": "fa:16:3e:71:61:be", "network": {"id": "15a62ee0-8e34-4e49-990e-246b4ef9e0c6", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1498494916-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0e73ded2f2ee46b4a7485c01ef1b73e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec74708b-83", "ovs_interfaceid": "ec74708b-8329-48e4-b5f9-09af33e086f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 00:02:19 np0005480824 nova_compute[260089]: 2025-10-11 04:02:19.070 2 DEBUG nova.network.os_vif_util [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:71:61:be,bridge_name='br-int',has_traffic_filtering=True,id=ec74708b-8329-48e4-b5f9-09af33e086f9,network=Network(15a62ee0-8e34-4e49-990e-246b4ef9e0c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec74708b-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 00:02:19 np0005480824 nova_compute[260089]: 2025-10-11 04:02:19.071 2 DEBUG os_vif [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:71:61:be,bridge_name='br-int',has_traffic_filtering=True,id=ec74708b-8329-48e4-b5f9-09af33e086f9,network=Network(15a62ee0-8e34-4e49-990e-246b4ef9e0c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec74708b-83') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 00:02:19 np0005480824 nova_compute[260089]: 2025-10-11 04:02:19.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:02:19 np0005480824 nova_compute[260089]: 2025-10-11 04:02:19.073 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 00:02:19 np0005480824 nova_compute[260089]: 2025-10-11 04:02:19.074 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 00:02:19 np0005480824 nova_compute[260089]: 2025-10-11 04:02:19.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:02:19 np0005480824 nova_compute[260089]: 2025-10-11 04:02:19.081 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec74708b-83, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 00:02:19 np0005480824 nova_compute[260089]: 2025-10-11 04:02:19.082 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapec74708b-83, col_values=(('external_ids', {'iface-id': 'ec74708b-8329-48e4-b5f9-09af33e086f9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:71:61:be', 'vm-uuid': '00f85c39-8b23-4556-9fbf-d806c690135c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 00:02:19 np0005480824 nova_compute[260089]: 2025-10-11 04:02:19.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:02:19 np0005480824 NetworkManager[44969]: <info>  [1760155339.0875] manager: (tapec74708b-83): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/128)
Oct 11 00:02:19 np0005480824 nova_compute[260089]: 2025-10-11 04:02:19.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 00:02:19 np0005480824 nova_compute[260089]: 2025-10-11 04:02:19.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:02:19 np0005480824 nova_compute[260089]: 2025-10-11 04:02:19.101 2 INFO os_vif [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:71:61:be,bridge_name='br-int',has_traffic_filtering=True,id=ec74708b-8329-48e4-b5f9-09af33e086f9,network=Network(15a62ee0-8e34-4e49-990e-246b4ef9e0c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec74708b-83')#033[00m
Oct 11 00:02:19 np0005480824 nova_compute[260089]: 2025-10-11 04:02:19.111 2 DEBUG nova.network.neutron [req-83e02d4e-9296-4991-bdef-fab94c7cd4d8 req-83e9ac43-0230-4a70-87ab-3c0b2a92398d 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Updated VIF entry in instance network info cache for port ec74708b-8329-48e4-b5f9-09af33e086f9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 00:02:19 np0005480824 nova_compute[260089]: 2025-10-11 04:02:19.111 2 DEBUG nova.network.neutron [req-83e02d4e-9296-4991-bdef-fab94c7cd4d8 req-83e9ac43-0230-4a70-87ab-3c0b2a92398d 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Updating instance_info_cache with network_info: [{"id": "ec74708b-8329-48e4-b5f9-09af33e086f9", "address": "fa:16:3e:71:61:be", "network": {"id": "15a62ee0-8e34-4e49-990e-246b4ef9e0c6", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1498494916-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0e73ded2f2ee46b4a7485c01ef1b73e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec74708b-83", "ovs_interfaceid": "ec74708b-8329-48e4-b5f9-09af33e086f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 00:02:19 np0005480824 nova_compute[260089]: 2025-10-11 04:02:19.133 2 DEBUG oslo_concurrency.lockutils [req-83e02d4e-9296-4991-bdef-fab94c7cd4d8 req-83e9ac43-0230-4a70-87ab-3c0b2a92398d 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Releasing lock "refresh_cache-00f85c39-8b23-4556-9fbf-d806c690135c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 00:02:19 np0005480824 nova_compute[260089]: 2025-10-11 04:02:19.152 2 DEBUG nova.virt.libvirt.driver [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 00:02:19 np0005480824 nova_compute[260089]: 2025-10-11 04:02:19.152 2 DEBUG nova.virt.libvirt.driver [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 00:02:19 np0005480824 nova_compute[260089]: 2025-10-11 04:02:19.153 2 DEBUG nova.virt.libvirt.driver [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] No VIF found with MAC fa:16:3e:71:61:be, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 00:02:19 np0005480824 nova_compute[260089]: 2025-10-11 04:02:19.153 2 INFO nova.virt.libvirt.driver [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Using config drive#033[00m
Oct 11 00:02:19 np0005480824 nova_compute[260089]: 2025-10-11 04:02:19.179 2 DEBUG nova.storage.rbd_utils [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] rbd image 00f85c39-8b23-4556-9fbf-d806c690135c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 00:02:19 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 00:02:19 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3521747060' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 00:02:19 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 00:02:19 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3521747060' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 00:02:19 np0005480824 nova_compute[260089]: 2025-10-11 04:02:19.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:02:19 np0005480824 nova_compute[260089]: 2025-10-11 04:02:19.764 2 INFO nova.virt.libvirt.driver [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Creating config drive at /var/lib/nova/instances/00f85c39-8b23-4556-9fbf-d806c690135c/disk.config#033[00m
Oct 11 00:02:19 np0005480824 nova_compute[260089]: 2025-10-11 04:02:19.773 2 DEBUG oslo_concurrency.processutils [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/00f85c39-8b23-4556-9fbf-d806c690135c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg18k4fxu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:02:19 np0005480824 nova_compute[260089]: 2025-10-11 04:02:19.925 2 DEBUG oslo_concurrency.processutils [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/00f85c39-8b23-4556-9fbf-d806c690135c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg18k4fxu" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:02:19 np0005480824 nova_compute[260089]: 2025-10-11 04:02:19.959 2 DEBUG nova.storage.rbd_utils [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] rbd image 00f85c39-8b23-4556-9fbf-d806c690135c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 00:02:19 np0005480824 nova_compute[260089]: 2025-10-11 04:02:19.963 2 DEBUG oslo_concurrency.processutils [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/00f85c39-8b23-4556-9fbf-d806c690135c/disk.config 00f85c39-8b23-4556-9fbf-d806c690135c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:02:20 np0005480824 nova_compute[260089]: 2025-10-11 04:02:20.130 2 DEBUG oslo_concurrency.processutils [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/00f85c39-8b23-4556-9fbf-d806c690135c/disk.config 00f85c39-8b23-4556-9fbf-d806c690135c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:02:20 np0005480824 nova_compute[260089]: 2025-10-11 04:02:20.131 2 INFO nova.virt.libvirt.driver [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Deleting local config drive /var/lib/nova/instances/00f85c39-8b23-4556-9fbf-d806c690135c/disk.config because it was imported into RBD.#033[00m
Oct 11 00:02:20 np0005480824 kernel: tapec74708b-83: entered promiscuous mode
Oct 11 00:02:20 np0005480824 NetworkManager[44969]: <info>  [1760155340.1869] manager: (tapec74708b-83): new Tun device (/org/freedesktop/NetworkManager/Devices/129)
Oct 11 00:02:20 np0005480824 ovn_controller[152667]: 2025-10-11T04:02:20Z|00239|binding|INFO|Claiming lport ec74708b-8329-48e4-b5f9-09af33e086f9 for this chassis.
Oct 11 00:02:20 np0005480824 nova_compute[260089]: 2025-10-11 04:02:20.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:02:20 np0005480824 ovn_controller[152667]: 2025-10-11T04:02:20Z|00240|binding|INFO|ec74708b-8329-48e4-b5f9-09af33e086f9: Claiming fa:16:3e:71:61:be 10.100.0.8
Oct 11 00:02:20 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:02:20.197 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:71:61:be 10.100.0.8'], port_security=['fa:16:3e:71:61:be 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '00f85c39-8b23-4556-9fbf-d806c690135c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-15a62ee0-8e34-4e49-990e-246b4ef9e0c6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0e73ded2f2ee46b4a7485c01ef1b73e9', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b0a0daf4-5fac-406b-b8da-5df24a392041', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3f608fb9-f693-4a11-9617-6172f3d025df, chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], logical_port=ec74708b-8329-48e4-b5f9-09af33e086f9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 00:02:20 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:02:20.198 162245 INFO neutron.agent.ovn.metadata.agent [-] Port ec74708b-8329-48e4-b5f9-09af33e086f9 in datapath 15a62ee0-8e34-4e49-990e-246b4ef9e0c6 bound to our chassis#033[00m
Oct 11 00:02:20 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:02:20.199 162245 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 15a62ee0-8e34-4e49-990e-246b4ef9e0c6#033[00m
Oct 11 00:02:20 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:02:20.217 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[2ab207ea-6a7b-4bc0-b100-b6a1326b2f85]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:02:20 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:02:20.218 162245 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap15a62ee0-81 in ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 00:02:20 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:02:20.221 267859 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap15a62ee0-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 00:02:20 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:02:20.221 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[72480c07-6f62-431d-bc0c-1b4f8189b90b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:02:20 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:02:20.223 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[ea8a1f49-c76a-494e-ac9d-a8dae5ac503e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:02:20 np0005480824 ovn_controller[152667]: 2025-10-11T04:02:20Z|00241|binding|INFO|Setting lport ec74708b-8329-48e4-b5f9-09af33e086f9 ovn-installed in OVS
Oct 11 00:02:20 np0005480824 ovn_controller[152667]: 2025-10-11T04:02:20Z|00242|binding|INFO|Setting lport ec74708b-8329-48e4-b5f9-09af33e086f9 up in Southbound
Oct 11 00:02:20 np0005480824 nova_compute[260089]: 2025-10-11 04:02:20.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:02:20 np0005480824 nova_compute[260089]: 2025-10-11 04:02:20.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:02:20 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:02:20.244 162666 DEBUG oslo.privsep.daemon [-] privsep: reply[cee62080-c5d2-4199-8999-c9696e66000f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:02:20 np0005480824 systemd-machined[215071]: New machine qemu-26-instance-0000001a.
Oct 11 00:02:20 np0005480824 systemd[1]: Started Virtual Machine qemu-26-instance-0000001a.
Oct 11 00:02:20 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:02:20.275 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[6cdbbc0d-82b1-47ef-bd50-10583527f71d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:02:20 np0005480824 systemd-udevd[299670]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 00:02:20 np0005480824 NetworkManager[44969]: <info>  [1760155340.2968] device (tapec74708b-83): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 00:02:20 np0005480824 NetworkManager[44969]: <info>  [1760155340.3000] device (tapec74708b-83): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 00:02:20 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:02:20.315 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[8a24ca01-167b-431b-816e-3c5672ef9a83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:02:20 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:02:20.321 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[45732c6f-e074-48b5-b13d-cdf9c51f3e10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:02:20 np0005480824 NetworkManager[44969]: <info>  [1760155340.3234] manager: (tap15a62ee0-80): new Veth device (/org/freedesktop/NetworkManager/Devices/130)
Oct 11 00:02:20 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:02:20.366 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[00d03584-52ce-4959-8ccb-453e8a1594e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:02:20 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:02:20.371 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[c0ed6fb0-f45f-4a74-a600-13818cf1ad02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:02:20 np0005480824 NetworkManager[44969]: <info>  [1760155340.4054] device (tap15a62ee0-80): carrier: link connected
Oct 11 00:02:20 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:02:20.415 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[82772d8f-6cf0-47c1-939b-98a4292ebc60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:02:20 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:02:20.444 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[399c91ba-1145-48a2-9cfb-254aa7b25569]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap15a62ee0-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a6:91:d9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 84], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 480806, 'reachable_time': 20309, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299700, 'error': None, 'target': 'ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:02:20 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:02:20.471 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[9e3bb56e-e527-498d-8695-2d4e2eea3e7f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea6:91d9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 480806, 'tstamp': 480806}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299716, 'error': None, 'target': 'ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:02:20 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:02:20.501 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[ddceeae3-b2f9-4495-991f-714177430dc9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap15a62ee0-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a6:91:d9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 84], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 480806, 'reachable_time': 20309, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 299720, 'error': None, 'target': 'ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:02:20 np0005480824 nova_compute[260089]: 2025-10-11 04:02:20.543 2 DEBUG nova.compute.manager [req-498dbd73-4336-4473-a3eb-274ad7bcf157 req-04e500e5-6790-4795-84b3-212dcc450172 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Received event network-vif-plugged-ec74708b-8329-48e4-b5f9-09af33e086f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 00:02:20 np0005480824 nova_compute[260089]: 2025-10-11 04:02:20.544 2 DEBUG oslo_concurrency.lockutils [req-498dbd73-4336-4473-a3eb-274ad7bcf157 req-04e500e5-6790-4795-84b3-212dcc450172 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "00f85c39-8b23-4556-9fbf-d806c690135c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:02:20 np0005480824 nova_compute[260089]: 2025-10-11 04:02:20.544 2 DEBUG oslo_concurrency.lockutils [req-498dbd73-4336-4473-a3eb-274ad7bcf157 req-04e500e5-6790-4795-84b3-212dcc450172 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "00f85c39-8b23-4556-9fbf-d806c690135c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:02:20 np0005480824 nova_compute[260089]: 2025-10-11 04:02:20.545 2 DEBUG oslo_concurrency.lockutils [req-498dbd73-4336-4473-a3eb-274ad7bcf157 req-04e500e5-6790-4795-84b3-212dcc450172 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "00f85c39-8b23-4556-9fbf-d806c690135c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:02:20 np0005480824 nova_compute[260089]: 2025-10-11 04:02:20.545 2 DEBUG nova.compute.manager [req-498dbd73-4336-4473-a3eb-274ad7bcf157 req-04e500e5-6790-4795-84b3-212dcc450172 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Processing event network-vif-plugged-ec74708b-8329-48e4-b5f9-09af33e086f9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 00:02:20 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:02:20.555 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[0416ed49-8d0f-400e-88d5-433890a00ff1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:02:20 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:02:20.627 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[dcb48076-e37f-4293-8c7e-e5be7a67e67d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:02:20 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:02:20.629 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap15a62ee0-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 00:02:20 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:02:20.629 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 00:02:20 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:02:20.630 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap15a62ee0-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 00:02:20 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e458 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:02:20 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e458 do_prune osdmap full prune enabled
Oct 11 00:02:20 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e459 e459: 3 total, 3 up, 3 in
Oct 11 00:02:20 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e459: 3 total, 3 up, 3 in
Oct 11 00:02:20 np0005480824 nova_compute[260089]: 2025-10-11 04:02:20.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:02:20 np0005480824 kernel: tap15a62ee0-80: entered promiscuous mode
Oct 11 00:02:20 np0005480824 NetworkManager[44969]: <info>  [1760155340.7021] manager: (tap15a62ee0-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/131)
Oct 11 00:02:20 np0005480824 nova_compute[260089]: 2025-10-11 04:02:20.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:02:20 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:02:20.703 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap15a62ee0-80, col_values=(('external_ids', {'iface-id': '182275c4-a015-4f7a-8877-9961b2382f67'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 00:02:20 np0005480824 nova_compute[260089]: 2025-10-11 04:02:20.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:02:20 np0005480824 ovn_controller[152667]: 2025-10-11T04:02:20Z|00243|binding|INFO|Releasing lport 182275c4-a015-4f7a-8877-9961b2382f67 from this chassis (sb_readonly=0)
Oct 11 00:02:20 np0005480824 nova_compute[260089]: 2025-10-11 04:02:20.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:02:20 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:02:20.732 162245 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/15a62ee0-8e34-4e49-990e-246b4ef9e0c6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/15a62ee0-8e34-4e49-990e-246b4ef9e0c6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 00:02:20 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:02:20.733 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[4d761330-43e6-4bdd-8956-2c039457fcc0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:02:20 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:02:20.734 162245 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 00:02:20 np0005480824 ovn_metadata_agent[162240]: global
Oct 11 00:02:20 np0005480824 ovn_metadata_agent[162240]:    log         /dev/log local0 debug
Oct 11 00:02:20 np0005480824 ovn_metadata_agent[162240]:    log-tag     haproxy-metadata-proxy-15a62ee0-8e34-4e49-990e-246b4ef9e0c6
Oct 11 00:02:20 np0005480824 ovn_metadata_agent[162240]:    user        root
Oct 11 00:02:20 np0005480824 ovn_metadata_agent[162240]:    group       root
Oct 11 00:02:20 np0005480824 ovn_metadata_agent[162240]:    maxconn     1024
Oct 11 00:02:20 np0005480824 ovn_metadata_agent[162240]:    pidfile     /var/lib/neutron/external/pids/15a62ee0-8e34-4e49-990e-246b4ef9e0c6.pid.haproxy
Oct 11 00:02:20 np0005480824 ovn_metadata_agent[162240]:    daemon
Oct 11 00:02:20 np0005480824 ovn_metadata_agent[162240]: 
Oct 11 00:02:20 np0005480824 ovn_metadata_agent[162240]: defaults
Oct 11 00:02:20 np0005480824 ovn_metadata_agent[162240]:    log global
Oct 11 00:02:20 np0005480824 ovn_metadata_agent[162240]:    mode http
Oct 11 00:02:20 np0005480824 ovn_metadata_agent[162240]:    option httplog
Oct 11 00:02:20 np0005480824 ovn_metadata_agent[162240]:    option dontlognull
Oct 11 00:02:20 np0005480824 ovn_metadata_agent[162240]:    option http-server-close
Oct 11 00:02:20 np0005480824 ovn_metadata_agent[162240]:    option forwardfor
Oct 11 00:02:20 np0005480824 ovn_metadata_agent[162240]:    retries                 3
Oct 11 00:02:20 np0005480824 ovn_metadata_agent[162240]:    timeout http-request    30s
Oct 11 00:02:20 np0005480824 ovn_metadata_agent[162240]:    timeout connect         30s
Oct 11 00:02:20 np0005480824 ovn_metadata_agent[162240]:    timeout client          32s
Oct 11 00:02:20 np0005480824 ovn_metadata_agent[162240]:    timeout server          32s
Oct 11 00:02:20 np0005480824 ovn_metadata_agent[162240]:    timeout http-keep-alive 30s
Oct 11 00:02:20 np0005480824 ovn_metadata_agent[162240]: 
Oct 11 00:02:20 np0005480824 ovn_metadata_agent[162240]: 
Oct 11 00:02:20 np0005480824 ovn_metadata_agent[162240]: listen listener
Oct 11 00:02:20 np0005480824 ovn_metadata_agent[162240]:    bind 169.254.169.254:80
Oct 11 00:02:20 np0005480824 ovn_metadata_agent[162240]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 00:02:20 np0005480824 ovn_metadata_agent[162240]:    http-request add-header X-OVN-Network-ID 15a62ee0-8e34-4e49-990e-246b4ef9e0c6
Oct 11 00:02:20 np0005480824 ovn_metadata_agent[162240]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 00:02:20 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:02:20.737 162245 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6', 'env', 'PROCESS_TAG=haproxy-15a62ee0-8e34-4e49-990e-246b4ef9e0c6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/15a62ee0-8e34-4e49-990e-246b4ef9e0c6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 00:02:20 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1736: 321 pgs: 321 active+clean; 202 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 93 KiB/s rd, 2.7 KiB/s wr, 125 op/s
Oct 11 00:02:21 np0005480824 podman[299770]: 2025-10-11 04:02:21.177276174 +0000 UTC m=+0.062352275 container create c8b6edf67204e198fa45105bb48993f47c2000791bdb828e5f6e988d45b646b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 11 00:02:21 np0005480824 systemd[1]: Started libpod-conmon-c8b6edf67204e198fa45105bb48993f47c2000791bdb828e5f6e988d45b646b6.scope.
Oct 11 00:02:21 np0005480824 podman[299770]: 2025-10-11 04:02:21.149401402 +0000 UTC m=+0.034477543 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 11 00:02:21 np0005480824 systemd[1]: Started libcrun container.
Oct 11 00:02:21 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1b73024168e1dc657a293ca31d3b770ed29516602183c0e67c43363586fc683/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 00:02:21 np0005480824 podman[299770]: 2025-10-11 04:02:21.303062068 +0000 UTC m=+0.188138169 container init c8b6edf67204e198fa45105bb48993f47c2000791bdb828e5f6e988d45b646b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 11 00:02:21 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:02:21.307 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:30:f4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'fe:89:7c:57:3f:71'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 00:02:21 np0005480824 nova_compute[260089]: 2025-10-11 04:02:21.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:02:21 np0005480824 podman[299770]: 2025-10-11 04:02:21.313741664 +0000 UTC m=+0.198817765 container start c8b6edf67204e198fa45105bb48993f47c2000791bdb828e5f6e988d45b646b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 11 00:02:21 np0005480824 neutron-haproxy-ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6[299785]: [NOTICE]   (299790) : New worker (299792) forked
Oct 11 00:02:21 np0005480824 neutron-haproxy-ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6[299785]: [NOTICE]   (299790) : Loading success.
Oct 11 00:02:21 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:02:21.393 162245 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 11 00:02:22 np0005480824 nova_compute[260089]: 2025-10-11 04:02:22.708 2 DEBUG nova.compute.manager [req-b60ff348-37f9-415e-b3ee-84f7a4ae550e req-51046282-3b9c-4167-a765-853d1aee9f67 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Received event network-vif-plugged-ec74708b-8329-48e4-b5f9-09af33e086f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 00:02:22 np0005480824 nova_compute[260089]: 2025-10-11 04:02:22.709 2 DEBUG oslo_concurrency.lockutils [req-b60ff348-37f9-415e-b3ee-84f7a4ae550e req-51046282-3b9c-4167-a765-853d1aee9f67 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "00f85c39-8b23-4556-9fbf-d806c690135c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:02:22 np0005480824 nova_compute[260089]: 2025-10-11 04:02:22.710 2 DEBUG oslo_concurrency.lockutils [req-b60ff348-37f9-415e-b3ee-84f7a4ae550e req-51046282-3b9c-4167-a765-853d1aee9f67 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "00f85c39-8b23-4556-9fbf-d806c690135c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:02:22 np0005480824 nova_compute[260089]: 2025-10-11 04:02:22.711 2 DEBUG oslo_concurrency.lockutils [req-b60ff348-37f9-415e-b3ee-84f7a4ae550e req-51046282-3b9c-4167-a765-853d1aee9f67 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "00f85c39-8b23-4556-9fbf-d806c690135c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:02:22 np0005480824 nova_compute[260089]: 2025-10-11 04:02:22.711 2 DEBUG nova.compute.manager [req-b60ff348-37f9-415e-b3ee-84f7a4ae550e req-51046282-3b9c-4167-a765-853d1aee9f67 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] No waiting events found dispatching network-vif-plugged-ec74708b-8329-48e4-b5f9-09af33e086f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 00:02:22 np0005480824 nova_compute[260089]: 2025-10-11 04:02:22.712 2 WARNING nova.compute.manager [req-b60ff348-37f9-415e-b3ee-84f7a4ae550e req-51046282-3b9c-4167-a765-853d1aee9f67 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Received unexpected event network-vif-plugged-ec74708b-8329-48e4-b5f9-09af33e086f9 for instance with vm_state building and task_state spawning.#033[00m
Oct 11 00:02:22 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1737: 321 pgs: 321 active+clean; 202 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 121 KiB/s rd, 29 KiB/s wr, 161 op/s
Oct 11 00:02:23 np0005480824 nova_compute[260089]: 2025-10-11 04:02:23.340 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760155343.3388672, 00f85c39-8b23-4556-9fbf-d806c690135c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 00:02:23 np0005480824 nova_compute[260089]: 2025-10-11 04:02:23.340 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] VM Started (Lifecycle Event)#033[00m
Oct 11 00:02:23 np0005480824 nova_compute[260089]: 2025-10-11 04:02:23.342 2 DEBUG nova.compute.manager [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 00:02:23 np0005480824 nova_compute[260089]: 2025-10-11 04:02:23.348 2 DEBUG nova.virt.libvirt.driver [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 00:02:23 np0005480824 nova_compute[260089]: 2025-10-11 04:02:23.351 2 INFO nova.virt.libvirt.driver [-] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Instance spawned successfully.#033[00m
Oct 11 00:02:23 np0005480824 nova_compute[260089]: 2025-10-11 04:02:23.351 2 DEBUG nova.virt.libvirt.driver [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 00:02:23 np0005480824 nova_compute[260089]: 2025-10-11 04:02:23.379 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 00:02:23 np0005480824 nova_compute[260089]: 2025-10-11 04:02:23.385 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 00:02:23 np0005480824 nova_compute[260089]: 2025-10-11 04:02:23.388 2 DEBUG nova.virt.libvirt.driver [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 00:02:23 np0005480824 nova_compute[260089]: 2025-10-11 04:02:23.388 2 DEBUG nova.virt.libvirt.driver [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 00:02:23 np0005480824 nova_compute[260089]: 2025-10-11 04:02:23.388 2 DEBUG nova.virt.libvirt.driver [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 00:02:23 np0005480824 nova_compute[260089]: 2025-10-11 04:02:23.389 2 DEBUG nova.virt.libvirt.driver [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 00:02:23 np0005480824 nova_compute[260089]: 2025-10-11 04:02:23.389 2 DEBUG nova.virt.libvirt.driver [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 00:02:23 np0005480824 nova_compute[260089]: 2025-10-11 04:02:23.389 2 DEBUG nova.virt.libvirt.driver [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 00:02:23 np0005480824 nova_compute[260089]: 2025-10-11 04:02:23.433 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 00:02:23 np0005480824 nova_compute[260089]: 2025-10-11 04:02:23.433 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760155343.3395271, 00f85c39-8b23-4556-9fbf-d806c690135c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 00:02:23 np0005480824 nova_compute[260089]: 2025-10-11 04:02:23.434 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] VM Paused (Lifecycle Event)#033[00m
Oct 11 00:02:23 np0005480824 nova_compute[260089]: 2025-10-11 04:02:23.452 2 INFO nova.compute.manager [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Took 6.72 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 00:02:23 np0005480824 nova_compute[260089]: 2025-10-11 04:02:23.452 2 DEBUG nova.compute.manager [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 00:02:23 np0005480824 nova_compute[260089]: 2025-10-11 04:02:23.501 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 00:02:23 np0005480824 nova_compute[260089]: 2025-10-11 04:02:23.506 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760155343.3477428, 00f85c39-8b23-4556-9fbf-d806c690135c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 00:02:23 np0005480824 nova_compute[260089]: 2025-10-11 04:02:23.506 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] VM Resumed (Lifecycle Event)#033[00m
Oct 11 00:02:23 np0005480824 nova_compute[260089]: 2025-10-11 04:02:23.648 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 00:02:23 np0005480824 nova_compute[260089]: 2025-10-11 04:02:23.664 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 00:02:23 np0005480824 nova_compute[260089]: 2025-10-11 04:02:23.674 2 INFO nova.compute.manager [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Took 9.05 seconds to build instance.#033[00m
Oct 11 00:02:23 np0005480824 nova_compute[260089]: 2025-10-11 04:02:23.695 2 DEBUG oslo_concurrency.lockutils [None req-31ade6c0-5613-48fe-9881-6ac270443510 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Lock "00f85c39-8b23-4556-9fbf-d806c690135c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.130s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:02:24 np0005480824 nova_compute[260089]: 2025-10-11 04:02:24.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:02:24 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:02:24.395 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=14b06507-d00b-4e27-a47d-46a5c2644635, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 00:02:24 np0005480824 nova_compute[260089]: 2025-10-11 04:02:24.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:02:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 00:02:24 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/786145711' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 00:02:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 00:02:24 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/786145711' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 00:02:24 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1738: 321 pgs: 321 active+clean; 202 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 23 KiB/s wr, 65 op/s
Oct 11 00:02:25 np0005480824 podman[299807]: 2025-10-11 04:02:25.011283508 +0000 UTC m=+0.057316160 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 11 00:02:25 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e459 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:02:25 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e459 do_prune osdmap full prune enabled
Oct 11 00:02:25 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e460 e460: 3 total, 3 up, 3 in
Oct 11 00:02:25 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e460: 3 total, 3 up, 3 in
Oct 11 00:02:26 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1740: 321 pgs: 321 active+clean; 202 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 993 KiB/s rd, 20 KiB/s wr, 89 op/s
Oct 11 00:02:27 np0005480824 nova_compute[260089]: 2025-10-11 04:02:27.639 2 DEBUG nova.compute.manager [req-64f1c8d6-5a4e-4f35-b3d7-8b70d8b55fc9 req-1cbff18e-31a5-4576-922c-bb398c2a99df 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Received event network-changed-ec74708b-8329-48e4-b5f9-09af33e086f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 00:02:27 np0005480824 nova_compute[260089]: 2025-10-11 04:02:27.639 2 DEBUG nova.compute.manager [req-64f1c8d6-5a4e-4f35-b3d7-8b70d8b55fc9 req-1cbff18e-31a5-4576-922c-bb398c2a99df 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Refreshing instance network info cache due to event network-changed-ec74708b-8329-48e4-b5f9-09af33e086f9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 00:02:27 np0005480824 nova_compute[260089]: 2025-10-11 04:02:27.640 2 DEBUG oslo_concurrency.lockutils [req-64f1c8d6-5a4e-4f35-b3d7-8b70d8b55fc9 req-1cbff18e-31a5-4576-922c-bb398c2a99df 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "refresh_cache-00f85c39-8b23-4556-9fbf-d806c690135c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 00:02:27 np0005480824 nova_compute[260089]: 2025-10-11 04:02:27.640 2 DEBUG oslo_concurrency.lockutils [req-64f1c8d6-5a4e-4f35-b3d7-8b70d8b55fc9 req-1cbff18e-31a5-4576-922c-bb398c2a99df 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquired lock "refresh_cache-00f85c39-8b23-4556-9fbf-d806c690135c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 00:02:27 np0005480824 nova_compute[260089]: 2025-10-11 04:02:27.640 2 DEBUG nova.network.neutron [req-64f1c8d6-5a4e-4f35-b3d7-8b70d8b55fc9 req-1cbff18e-31a5-4576-922c-bb398c2a99df 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Refreshing network info cache for port ec74708b-8329-48e4-b5f9-09af33e086f9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 00:02:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 00:02:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 00:02:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 00:02:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 00:02:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 00:02:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 00:02:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Optimize plan auto_2025-10-11_04:02:27
Oct 11 00:02:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 00:02:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] do_upmap
Oct 11 00:02:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] pools ['default.rgw.meta', 'images', 'vms', '.rgw.root', 'volumes', 'backups', 'default.rgw.log', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.control', 'cephfs.cephfs.data']
Oct 11 00:02:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] prepared 0/10 changes
Oct 11 00:02:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 00:02:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 00:02:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 00:02:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 00:02:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 00:02:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 00:02:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 00:02:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 00:02:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 00:02:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 00:02:28 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 00:02:28 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/987794459' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 00:02:28 np0005480824 nova_compute[260089]: 2025-10-11 04:02:28.724 2 DEBUG nova.network.neutron [req-64f1c8d6-5a4e-4f35-b3d7-8b70d8b55fc9 req-1cbff18e-31a5-4576-922c-bb398c2a99df 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Updated VIF entry in instance network info cache for port ec74708b-8329-48e4-b5f9-09af33e086f9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 00:02:28 np0005480824 nova_compute[260089]: 2025-10-11 04:02:28.725 2 DEBUG nova.network.neutron [req-64f1c8d6-5a4e-4f35-b3d7-8b70d8b55fc9 req-1cbff18e-31a5-4576-922c-bb398c2a99df 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Updating instance_info_cache with network_info: [{"id": "ec74708b-8329-48e4-b5f9-09af33e086f9", "address": "fa:16:3e:71:61:be", "network": {"id": "15a62ee0-8e34-4e49-990e-246b4ef9e0c6", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1498494916-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0e73ded2f2ee46b4a7485c01ef1b73e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec74708b-83", "ovs_interfaceid": "ec74708b-8329-48e4-b5f9-09af33e086f9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 00:02:28 np0005480824 nova_compute[260089]: 2025-10-11 04:02:28.744 2 DEBUG oslo_concurrency.lockutils [req-64f1c8d6-5a4e-4f35-b3d7-8b70d8b55fc9 req-1cbff18e-31a5-4576-922c-bb398c2a99df 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Releasing lock "refresh_cache-00f85c39-8b23-4556-9fbf-d806c690135c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 00:02:28 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1741: 321 pgs: 321 active+clean; 202 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 20 KiB/s wr, 149 op/s
Oct 11 00:02:29 np0005480824 nova_compute[260089]: 2025-10-11 04:02:29.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:02:29 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e460 do_prune osdmap full prune enabled
Oct 11 00:02:29 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e461 e461: 3 total, 3 up, 3 in
Oct 11 00:02:29 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e461: 3 total, 3 up, 3 in
Oct 11 00:02:29 np0005480824 nova_compute[260089]: 2025-10-11 04:02:29.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:02:30 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e461 do_prune osdmap full prune enabled
Oct 11 00:02:30 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e462 e462: 3 total, 3 up, 3 in
Oct 11 00:02:30 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e462: 3 total, 3 up, 3 in
Oct 11 00:02:30 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1744: 321 pgs: 321 active+clean; 202 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 682 B/s wr, 129 op/s
Oct 11 00:02:32 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e462 do_prune osdmap full prune enabled
Oct 11 00:02:32 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e463 e463: 3 total, 3 up, 3 in
Oct 11 00:02:32 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e463: 3 total, 3 up, 3 in
Oct 11 00:02:32 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1746: 321 pgs: 321 active+clean; 202 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.2 KiB/s wr, 124 op/s
Oct 11 00:02:34 np0005480824 nova_compute[260089]: 2025-10-11 04:02:34.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:02:34 np0005480824 nova_compute[260089]: 2025-10-11 04:02:34.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:02:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e463 do_prune osdmap full prune enabled
Oct 11 00:02:34 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e464 e464: 3 total, 3 up, 3 in
Oct 11 00:02:34 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e464: 3 total, 3 up, 3 in
Oct 11 00:02:34 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1748: 321 pgs: 321 active+clean; 202 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 2.6 KiB/s wr, 39 op/s
Oct 11 00:02:34 np0005480824 ceph-osd[89401]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Oct 11 00:02:35 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 00:02:35 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3285059326' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 00:02:35 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 00:02:35 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3285059326' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 00:02:35 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e464 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:02:36 np0005480824 nova_compute[260089]: 2025-10-11 04:02:36.296 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:02:36 np0005480824 ovn_controller[152667]: 2025-10-11T04:02:36Z|00062|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:71:61:be 10.100.0.8
Oct 11 00:02:36 np0005480824 ovn_controller[152667]: 2025-10-11T04:02:36Z|00063|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:71:61:be 10.100.0.8
Oct 11 00:02:36 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1749: 321 pgs: 321 active+clean; 227 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 506 KiB/s rd, 3.3 MiB/s wr, 82 op/s
Oct 11 00:02:37 np0005480824 nova_compute[260089]: 2025-10-11 04:02:37.293 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:02:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 00:02:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:02:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 00:02:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:02:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.480037605000977e-06 of space, bias 1.0, pg target 0.0007440112815002931 quantized to 32 (current 32)
Oct 11 00:02:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:02:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002391264976883506 of space, bias 1.0, pg target 0.7173794930650518 quantized to 32 (current 32)
Oct 11 00:02:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:02:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Oct 11 00:02:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:02:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Oct 11 00:02:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:02:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 00:02:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:02:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 00:02:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:02:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 00:02:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:02:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 00:02:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:02:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 00:02:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:02:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 00:02:38 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1750: 321 pgs: 321 active+clean; 269 MiB data, 620 MiB used, 59 GiB / 60 GiB avail; 870 KiB/s rd, 8.5 MiB/s wr, 190 op/s
Oct 11 00:02:39 np0005480824 nova_compute[260089]: 2025-10-11 04:02:39.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:02:39 np0005480824 nova_compute[260089]: 2025-10-11 04:02:39.296 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:02:39 np0005480824 nova_compute[260089]: 2025-10-11 04:02:39.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:02:40 np0005480824 podman[299826]: 2025-10-11 04:02:40.008644764 +0000 UTC m=+0.067080144 container health_status 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd)
Oct 11 00:02:40 np0005480824 podman[299827]: 2025-10-11 04:02:40.034071479 +0000 UTC m=+0.083946302 container health_status f2b19cad22d0fbdd185b264c3c8b6443d09a02319dc9fb0585dc69a0f24a758d (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid)
Oct 11 00:02:40 np0005480824 nova_compute[260089]: 2025-10-11 04:02:40.297 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:02:40 np0005480824 nova_compute[260089]: 2025-10-11 04:02:40.395 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:02:40 np0005480824 nova_compute[260089]: 2025-10-11 04:02:40.396 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:02:40 np0005480824 nova_compute[260089]: 2025-10-11 04:02:40.396 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:02:40 np0005480824 nova_compute[260089]: 2025-10-11 04:02:40.396 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 00:02:40 np0005480824 nova_compute[260089]: 2025-10-11 04:02:40.397 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:02:40 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e464 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:02:40 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e464 do_prune osdmap full prune enabled
Oct 11 00:02:40 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e465 e465: 3 total, 3 up, 3 in
Oct 11 00:02:40 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e465: 3 total, 3 up, 3 in
Oct 11 00:02:40 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 00:02:40 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/596455924' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 00:02:40 np0005480824 nova_compute[260089]: 2025-10-11 04:02:40.841 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:02:40 np0005480824 nova_compute[260089]: 2025-10-11 04:02:40.912 2 DEBUG nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 00:02:40 np0005480824 nova_compute[260089]: 2025-10-11 04:02:40.913 2 DEBUG nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 00:02:40 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1752: 321 pgs: 321 active+clean; 269 MiB data, 620 MiB used, 59 GiB / 60 GiB avail; 850 KiB/s rd, 8.5 MiB/s wr, 162 op/s
Oct 11 00:02:41 np0005480824 nova_compute[260089]: 2025-10-11 04:02:41.164 2 WARNING nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 00:02:41 np0005480824 nova_compute[260089]: 2025-10-11 04:02:41.167 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4148MB free_disk=59.98813247680664GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 00:02:41 np0005480824 nova_compute[260089]: 2025-10-11 04:02:41.167 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:02:41 np0005480824 nova_compute[260089]: 2025-10-11 04:02:41.168 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:02:41 np0005480824 nova_compute[260089]: 2025-10-11 04:02:41.257 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Instance 00f85c39-8b23-4556-9fbf-d806c690135c actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 00:02:41 np0005480824 nova_compute[260089]: 2025-10-11 04:02:41.258 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 00:02:41 np0005480824 nova_compute[260089]: 2025-10-11 04:02:41.258 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 00:02:41 np0005480824 nova_compute[260089]: 2025-10-11 04:02:41.294 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:02:41 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 00:02:41 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2231427547' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 00:02:41 np0005480824 nova_compute[260089]: 2025-10-11 04:02:41.767 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:02:41 np0005480824 nova_compute[260089]: 2025-10-11 04:02:41.778 2 DEBUG nova.compute.provider_tree [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 00:02:41 np0005480824 nova_compute[260089]: 2025-10-11 04:02:41.912 2 DEBUG nova.scheduler.client.report [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 00:02:42 np0005480824 nova_compute[260089]: 2025-10-11 04:02:42.043 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 00:02:42 np0005480824 nova_compute[260089]: 2025-10-11 04:02:42.044 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.876s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:02:42 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e465 do_prune osdmap full prune enabled
Oct 11 00:02:42 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e466 e466: 3 total, 3 up, 3 in
Oct 11 00:02:42 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e466: 3 total, 3 up, 3 in
Oct 11 00:02:42 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1754: 321 pgs: 321 active+clean; 270 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 850 KiB/s rd, 8.7 MiB/s wr, 165 op/s
Oct 11 00:02:43 np0005480824 nova_compute[260089]: 2025-10-11 04:02:43.044 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:02:43 np0005480824 nova_compute[260089]: 2025-10-11 04:02:43.045 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:02:43 np0005480824 nova_compute[260089]: 2025-10-11 04:02:43.045 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 00:02:43 np0005480824 nova_compute[260089]: 2025-10-11 04:02:43.297 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:02:43 np0005480824 nova_compute[260089]: 2025-10-11 04:02:43.298 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 00:02:43 np0005480824 nova_compute[260089]: 2025-10-11 04:02:43.298 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 11 00:02:43 np0005480824 nova_compute[260089]: 2025-10-11 04:02:43.836 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "refresh_cache-00f85c39-8b23-4556-9fbf-d806c690135c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 00:02:43 np0005480824 nova_compute[260089]: 2025-10-11 04:02:43.836 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquired lock "refresh_cache-00f85c39-8b23-4556-9fbf-d806c690135c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 00:02:43 np0005480824 nova_compute[260089]: 2025-10-11 04:02:43.837 2 DEBUG nova.network.neutron [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 11 00:02:43 np0005480824 nova_compute[260089]: 2025-10-11 04:02:43.837 2 DEBUG nova.objects.instance [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 00f85c39-8b23-4556-9fbf-d806c690135c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 00:02:43 np0005480824 nova_compute[260089]: 2025-10-11 04:02:43.964 2 DEBUG oslo_concurrency.lockutils [None req-4569c58f-d3aa-42b4-8ba3-7506172ddc6f eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Acquiring lock "00f85c39-8b23-4556-9fbf-d806c690135c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:02:43 np0005480824 nova_compute[260089]: 2025-10-11 04:02:43.965 2 DEBUG oslo_concurrency.lockutils [None req-4569c58f-d3aa-42b4-8ba3-7506172ddc6f eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Lock "00f85c39-8b23-4556-9fbf-d806c690135c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:02:43 np0005480824 nova_compute[260089]: 2025-10-11 04:02:43.965 2 DEBUG oslo_concurrency.lockutils [None req-4569c58f-d3aa-42b4-8ba3-7506172ddc6f eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Acquiring lock "00f85c39-8b23-4556-9fbf-d806c690135c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:02:43 np0005480824 nova_compute[260089]: 2025-10-11 04:02:43.966 2 DEBUG oslo_concurrency.lockutils [None req-4569c58f-d3aa-42b4-8ba3-7506172ddc6f eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Lock "00f85c39-8b23-4556-9fbf-d806c690135c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:02:43 np0005480824 nova_compute[260089]: 2025-10-11 04:02:43.966 2 DEBUG oslo_concurrency.lockutils [None req-4569c58f-d3aa-42b4-8ba3-7506172ddc6f eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Lock "00f85c39-8b23-4556-9fbf-d806c690135c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:02:43 np0005480824 nova_compute[260089]: 2025-10-11 04:02:43.967 2 INFO nova.compute.manager [None req-4569c58f-d3aa-42b4-8ba3-7506172ddc6f eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Terminating instance#033[00m
Oct 11 00:02:43 np0005480824 nova_compute[260089]: 2025-10-11 04:02:43.968 2 DEBUG nova.compute.manager [None req-4569c58f-d3aa-42b4-8ba3-7506172ddc6f eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 00:02:44 np0005480824 kernel: tapec74708b-83 (unregistering): left promiscuous mode
Oct 11 00:02:44 np0005480824 NetworkManager[44969]: <info>  [1760155364.0263] device (tapec74708b-83): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 00:02:44 np0005480824 nova_compute[260089]: 2025-10-11 04:02:44.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:02:44 np0005480824 nova_compute[260089]: 2025-10-11 04:02:44.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:02:44 np0005480824 ovn_controller[152667]: 2025-10-11T04:02:44Z|00244|binding|INFO|Releasing lport ec74708b-8329-48e4-b5f9-09af33e086f9 from this chassis (sb_readonly=0)
Oct 11 00:02:44 np0005480824 ovn_controller[152667]: 2025-10-11T04:02:44Z|00245|binding|INFO|Setting lport ec74708b-8329-48e4-b5f9-09af33e086f9 down in Southbound
Oct 11 00:02:44 np0005480824 ovn_controller[152667]: 2025-10-11T04:02:44Z|00246|binding|INFO|Removing iface tapec74708b-83 ovn-installed in OVS
Oct 11 00:02:44 np0005480824 nova_compute[260089]: 2025-10-11 04:02:44.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:02:44 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:02:44.069 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:71:61:be 10.100.0.8'], port_security=['fa:16:3e:71:61:be 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '00f85c39-8b23-4556-9fbf-d806c690135c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-15a62ee0-8e34-4e49-990e-246b4ef9e0c6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0e73ded2f2ee46b4a7485c01ef1b73e9', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b0a0daf4-5fac-406b-b8da-5df24a392041', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.218'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3f608fb9-f693-4a11-9617-6172f3d025df, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], logical_port=ec74708b-8329-48e4-b5f9-09af33e086f9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 00:02:44 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:02:44.072 162245 INFO neutron.agent.ovn.metadata.agent [-] Port ec74708b-8329-48e4-b5f9-09af33e086f9 in datapath 15a62ee0-8e34-4e49-990e-246b4ef9e0c6 unbound from our chassis#033[00m
Oct 11 00:02:44 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:02:44.075 162245 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 15a62ee0-8e34-4e49-990e-246b4ef9e0c6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 00:02:44 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:02:44.077 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[848ae35d-d168-4309-853e-a9ab15dc0140]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:02:44 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:02:44.078 162245 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6 namespace which is not needed anymore#033[00m
Oct 11 00:02:44 np0005480824 nova_compute[260089]: 2025-10-11 04:02:44.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:02:44 np0005480824 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d0000001a.scope: Deactivated successfully.
Oct 11 00:02:44 np0005480824 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d0000001a.scope: Consumed 16.466s CPU time.
Oct 11 00:02:44 np0005480824 nova_compute[260089]: 2025-10-11 04:02:44.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:02:44 np0005480824 systemd-machined[215071]: Machine qemu-26-instance-0000001a terminated.
Oct 11 00:02:44 np0005480824 nova_compute[260089]: 2025-10-11 04:02:44.210 2 INFO nova.virt.libvirt.driver [-] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Instance destroyed successfully.#033[00m
Oct 11 00:02:44 np0005480824 nova_compute[260089]: 2025-10-11 04:02:44.210 2 DEBUG nova.objects.instance [None req-4569c58f-d3aa-42b4-8ba3-7506172ddc6f eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Lazy-loading 'resources' on Instance uuid 00f85c39-8b23-4556-9fbf-d806c690135c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 00:02:44 np0005480824 nova_compute[260089]: 2025-10-11 04:02:44.224 2 DEBUG nova.virt.libvirt.vif [None req-4569c58f-d3aa-42b4-8ba3-7506172ddc6f eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T04:02:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-513759231',display_name='tempest-TransferEncryptedVolumeTest-server-513759231',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-513759231',id=26,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDBxc/yNqNR+6hcns3uIK5nByp3y7/Z4QylmLciPhq6XKUnS3cE8WBipiedmC1KXbIrzQin+vEhjglj/GGa46YEcBJkij9tDpZ0nSurHoQgQFYWBhIhwD65l+TbXzKNAAg==',key_name='tempest-TransferEncryptedVolumeTest-1457663695',keypairs=<?>,launch_index=0,launched_at=2025-10-11T04:02:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0e73ded2f2ee46b4a7485c01ef1b73e9',ramdisk_id='',reservation_id='r-1rv0mwa2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TransferEncryptedVolumeTest-1815435088',owner_user_name='tempest-TransferEncryptedVolumeTest-1815435088-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T04:02:23Z,user_data=None,user_id='eccc3f574d354840901d28dad2488bf4',uuid=00f85c39-8b23-4556-9fbf-d806c690135c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ec74708b-8329-48e4-b5f9-09af33e086f9", "address": "fa:16:3e:71:61:be", "network": {"id": "15a62ee0-8e34-4e49-990e-246b4ef9e0c6", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1498494916-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0e73ded2f2ee46b4a7485c01ef1b73e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec74708b-83", "ovs_interfaceid": "ec74708b-8329-48e4-b5f9-09af33e086f9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 00:02:44 np0005480824 nova_compute[260089]: 2025-10-11 04:02:44.225 2 DEBUG nova.network.os_vif_util [None req-4569c58f-d3aa-42b4-8ba3-7506172ddc6f eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Converting VIF {"id": "ec74708b-8329-48e4-b5f9-09af33e086f9", "address": "fa:16:3e:71:61:be", "network": {"id": "15a62ee0-8e34-4e49-990e-246b4ef9e0c6", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1498494916-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0e73ded2f2ee46b4a7485c01ef1b73e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec74708b-83", "ovs_interfaceid": "ec74708b-8329-48e4-b5f9-09af33e086f9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 00:02:44 np0005480824 nova_compute[260089]: 2025-10-11 04:02:44.226 2 DEBUG nova.network.os_vif_util [None req-4569c58f-d3aa-42b4-8ba3-7506172ddc6f eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:71:61:be,bridge_name='br-int',has_traffic_filtering=True,id=ec74708b-8329-48e4-b5f9-09af33e086f9,network=Network(15a62ee0-8e34-4e49-990e-246b4ef9e0c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec74708b-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 00:02:44 np0005480824 nova_compute[260089]: 2025-10-11 04:02:44.226 2 DEBUG os_vif [None req-4569c58f-d3aa-42b4-8ba3-7506172ddc6f eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:71:61:be,bridge_name='br-int',has_traffic_filtering=True,id=ec74708b-8329-48e4-b5f9-09af33e086f9,network=Network(15a62ee0-8e34-4e49-990e-246b4ef9e0c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec74708b-83') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 00:02:44 np0005480824 nova_compute[260089]: 2025-10-11 04:02:44.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:02:44 np0005480824 nova_compute[260089]: 2025-10-11 04:02:44.228 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec74708b-83, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 00:02:44 np0005480824 nova_compute[260089]: 2025-10-11 04:02:44.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:02:44 np0005480824 nova_compute[260089]: 2025-10-11 04:02:44.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 00:02:44 np0005480824 nova_compute[260089]: 2025-10-11 04:02:44.235 2 INFO os_vif [None req-4569c58f-d3aa-42b4-8ba3-7506172ddc6f eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:71:61:be,bridge_name='br-int',has_traffic_filtering=True,id=ec74708b-8329-48e4-b5f9-09af33e086f9,network=Network(15a62ee0-8e34-4e49-990e-246b4ef9e0c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec74708b-83')#033[00m
Oct 11 00:02:44 np0005480824 nova_compute[260089]: 2025-10-11 04:02:44.272 2 DEBUG nova.compute.manager [req-3b223b76-8a22-4651-ad6a-3210bde164ea req-fefb7a58-8cbc-47d1-a728-0c3aa56647e3 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Received event network-vif-unplugged-ec74708b-8329-48e4-b5f9-09af33e086f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 00:02:44 np0005480824 nova_compute[260089]: 2025-10-11 04:02:44.273 2 DEBUG oslo_concurrency.lockutils [req-3b223b76-8a22-4651-ad6a-3210bde164ea req-fefb7a58-8cbc-47d1-a728-0c3aa56647e3 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "00f85c39-8b23-4556-9fbf-d806c690135c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:02:44 np0005480824 nova_compute[260089]: 2025-10-11 04:02:44.273 2 DEBUG oslo_concurrency.lockutils [req-3b223b76-8a22-4651-ad6a-3210bde164ea req-fefb7a58-8cbc-47d1-a728-0c3aa56647e3 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "00f85c39-8b23-4556-9fbf-d806c690135c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:02:44 np0005480824 nova_compute[260089]: 2025-10-11 04:02:44.273 2 DEBUG oslo_concurrency.lockutils [req-3b223b76-8a22-4651-ad6a-3210bde164ea req-fefb7a58-8cbc-47d1-a728-0c3aa56647e3 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "00f85c39-8b23-4556-9fbf-d806c690135c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:02:44 np0005480824 nova_compute[260089]: 2025-10-11 04:02:44.274 2 DEBUG nova.compute.manager [req-3b223b76-8a22-4651-ad6a-3210bde164ea req-fefb7a58-8cbc-47d1-a728-0c3aa56647e3 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] No waiting events found dispatching network-vif-unplugged-ec74708b-8329-48e4-b5f9-09af33e086f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 00:02:44 np0005480824 nova_compute[260089]: 2025-10-11 04:02:44.274 2 DEBUG nova.compute.manager [req-3b223b76-8a22-4651-ad6a-3210bde164ea req-fefb7a58-8cbc-47d1-a728-0c3aa56647e3 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Received event network-vif-unplugged-ec74708b-8329-48e4-b5f9-09af33e086f9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 00:02:44 np0005480824 neutron-haproxy-ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6[299785]: [NOTICE]   (299790) : haproxy version is 2.8.14-c23fe91
Oct 11 00:02:44 np0005480824 neutron-haproxy-ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6[299785]: [NOTICE]   (299790) : path to executable is /usr/sbin/haproxy
Oct 11 00:02:44 np0005480824 neutron-haproxy-ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6[299785]: [WARNING]  (299790) : Exiting Master process...
Oct 11 00:02:44 np0005480824 neutron-haproxy-ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6[299785]: [ALERT]    (299790) : Current worker (299792) exited with code 143 (Terminated)
Oct 11 00:02:44 np0005480824 neutron-haproxy-ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6[299785]: [WARNING]  (299790) : All workers exited. Exiting... (0)
Oct 11 00:02:44 np0005480824 systemd[1]: libpod-c8b6edf67204e198fa45105bb48993f47c2000791bdb828e5f6e988d45b646b6.scope: Deactivated successfully.
Oct 11 00:02:44 np0005480824 podman[299943]: 2025-10-11 04:02:44.306820997 +0000 UTC m=+0.082642383 container died c8b6edf67204e198fa45105bb48993f47c2000791bdb828e5f6e988d45b646b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 00:02:44 np0005480824 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c8b6edf67204e198fa45105bb48993f47c2000791bdb828e5f6e988d45b646b6-userdata-shm.mount: Deactivated successfully.
Oct 11 00:02:44 np0005480824 systemd[1]: var-lib-containers-storage-overlay-a1b73024168e1dc657a293ca31d3b770ed29516602183c0e67c43363586fc683-merged.mount: Deactivated successfully.
Oct 11 00:02:44 np0005480824 podman[299943]: 2025-10-11 04:02:44.353801008 +0000 UTC m=+0.129622354 container cleanup c8b6edf67204e198fa45105bb48993f47c2000791bdb828e5f6e988d45b646b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 11 00:02:44 np0005480824 systemd[1]: libpod-conmon-c8b6edf67204e198fa45105bb48993f47c2000791bdb828e5f6e988d45b646b6.scope: Deactivated successfully.
Oct 11 00:02:44 np0005480824 podman[299996]: 2025-10-11 04:02:44.433914531 +0000 UTC m=+0.053267097 container remove c8b6edf67204e198fa45105bb48993f47c2000791bdb828e5f6e988d45b646b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 00:02:44 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:02:44.442 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[7ad16b61-911e-428f-8bd9-2c847060ee23]: (4, ('Sat Oct 11 04:02:44 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6 (c8b6edf67204e198fa45105bb48993f47c2000791bdb828e5f6e988d45b646b6)\nc8b6edf67204e198fa45105bb48993f47c2000791bdb828e5f6e988d45b646b6\nSat Oct 11 04:02:44 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6 (c8b6edf67204e198fa45105bb48993f47c2000791bdb828e5f6e988d45b646b6)\nc8b6edf67204e198fa45105bb48993f47c2000791bdb828e5f6e988d45b646b6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:02:44 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:02:44.444 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[d9c4d1e6-f9e7-4a2f-bba8-1640e6fc699e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:02:44 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:02:44.445 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap15a62ee0-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 00:02:44 np0005480824 kernel: tap15a62ee0-80: left promiscuous mode
Oct 11 00:02:44 np0005480824 nova_compute[260089]: 2025-10-11 04:02:44.447 2 INFO nova.virt.libvirt.driver [None req-4569c58f-d3aa-42b4-8ba3-7506172ddc6f eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Deleting instance files /var/lib/nova/instances/00f85c39-8b23-4556-9fbf-d806c690135c_del#033[00m
Oct 11 00:02:44 np0005480824 nova_compute[260089]: 2025-10-11 04:02:44.450 2 INFO nova.virt.libvirt.driver [None req-4569c58f-d3aa-42b4-8ba3-7506172ddc6f eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Deletion of /var/lib/nova/instances/00f85c39-8b23-4556-9fbf-d806c690135c_del complete#033[00m
Oct 11 00:02:44 np0005480824 nova_compute[260089]: 2025-10-11 04:02:44.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:02:44 np0005480824 nova_compute[260089]: 2025-10-11 04:02:44.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:02:44 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:02:44.470 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[e780afaa-48f1-4955-bf29-7b3e8817c6a3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:02:44 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:02:44.495 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[530dbd99-d33e-4c79-999a-3d6509113000]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:02:44 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:02:44.496 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[f61c380a-cd3c-4359-9d76-689704d4a207]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:02:44 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:02:44.512 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[ba57c3e6-14ca-4ad5-b270-1f120094b1ee]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 480796, 'reachable_time': 42940, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300012, 'error': None, 'target': 'ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:02:44 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:02:44.514 162666 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 00:02:44 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:02:44.514 162666 DEBUG oslo.privsep.daemon [-] privsep: reply[1d2d597f-e8d8-4716-a023-045ad680ce58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:02:44 np0005480824 systemd[1]: run-netns-ovnmeta\x2d15a62ee0\x2d8e34\x2d4e49\x2d990e\x2d246b4ef9e0c6.mount: Deactivated successfully.
Oct 11 00:02:44 np0005480824 nova_compute[260089]: 2025-10-11 04:02:44.518 2 INFO nova.compute.manager [None req-4569c58f-d3aa-42b4-8ba3-7506172ddc6f eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Took 0.55 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 00:02:44 np0005480824 nova_compute[260089]: 2025-10-11 04:02:44.518 2 DEBUG oslo.service.loopingcall [None req-4569c58f-d3aa-42b4-8ba3-7506172ddc6f eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 00:02:44 np0005480824 nova_compute[260089]: 2025-10-11 04:02:44.519 2 DEBUG nova.compute.manager [-] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 00:02:44 np0005480824 nova_compute[260089]: 2025-10-11 04:02:44.519 2 DEBUG nova.network.neutron [-] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 00:02:44 np0005480824 nova_compute[260089]: 2025-10-11 04:02:44.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:02:44 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e466 do_prune osdmap full prune enabled
Oct 11 00:02:44 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e467 e467: 3 total, 3 up, 3 in
Oct 11 00:02:44 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e467: 3 total, 3 up, 3 in
Oct 11 00:02:44 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1756: 321 pgs: 321 active+clean; 270 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 1023 B/s rd, 226 KiB/s wr, 4 op/s
Oct 11 00:02:45 np0005480824 nova_compute[260089]: 2025-10-11 04:02:45.121 2 DEBUG nova.network.neutron [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Updating instance_info_cache with network_info: [{"id": "ec74708b-8329-48e4-b5f9-09af33e086f9", "address": "fa:16:3e:71:61:be", "network": {"id": "15a62ee0-8e34-4e49-990e-246b4ef9e0c6", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1498494916-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0e73ded2f2ee46b4a7485c01ef1b73e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec74708b-83", "ovs_interfaceid": "ec74708b-8329-48e4-b5f9-09af33e086f9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 00:02:45 np0005480824 nova_compute[260089]: 2025-10-11 04:02:45.138 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Releasing lock "refresh_cache-00f85c39-8b23-4556-9fbf-d806c690135c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 00:02:45 np0005480824 nova_compute[260089]: 2025-10-11 04:02:45.139 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 11 00:02:45 np0005480824 nova_compute[260089]: 2025-10-11 04:02:45.139 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:02:45 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e467 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:02:46 np0005480824 nova_compute[260089]: 2025-10-11 04:02:46.057 2 DEBUG nova.network.neutron [-] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 00:02:46 np0005480824 nova_compute[260089]: 2025-10-11 04:02:46.079 2 INFO nova.compute.manager [-] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Took 1.56 seconds to deallocate network for instance.#033[00m
Oct 11 00:02:46 np0005480824 nova_compute[260089]: 2025-10-11 04:02:46.283 2 INFO nova.compute.manager [None req-4569c58f-d3aa-42b4-8ba3-7506172ddc6f eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Took 0.20 seconds to detach 1 volumes for instance.#033[00m
Oct 11 00:02:46 np0005480824 nova_compute[260089]: 2025-10-11 04:02:46.296 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:02:46 np0005480824 nova_compute[260089]: 2025-10-11 04:02:46.336 2 DEBUG oslo_concurrency.lockutils [None req-4569c58f-d3aa-42b4-8ba3-7506172ddc6f eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:02:46 np0005480824 nova_compute[260089]: 2025-10-11 04:02:46.337 2 DEBUG oslo_concurrency.lockutils [None req-4569c58f-d3aa-42b4-8ba3-7506172ddc6f eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:02:46 np0005480824 nova_compute[260089]: 2025-10-11 04:02:46.357 2 DEBUG nova.compute.manager [req-9cfa6ad9-166d-4e9e-97d8-518cb873382d req-fd23f97b-f736-4aff-bda5-291b9900fc98 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Received event network-vif-plugged-ec74708b-8329-48e4-b5f9-09af33e086f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 00:02:46 np0005480824 nova_compute[260089]: 2025-10-11 04:02:46.358 2 DEBUG oslo_concurrency.lockutils [req-9cfa6ad9-166d-4e9e-97d8-518cb873382d req-fd23f97b-f736-4aff-bda5-291b9900fc98 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "00f85c39-8b23-4556-9fbf-d806c690135c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:02:46 np0005480824 nova_compute[260089]: 2025-10-11 04:02:46.358 2 DEBUG oslo_concurrency.lockutils [req-9cfa6ad9-166d-4e9e-97d8-518cb873382d req-fd23f97b-f736-4aff-bda5-291b9900fc98 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "00f85c39-8b23-4556-9fbf-d806c690135c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:02:46 np0005480824 nova_compute[260089]: 2025-10-11 04:02:46.359 2 DEBUG oslo_concurrency.lockutils [req-9cfa6ad9-166d-4e9e-97d8-518cb873382d req-fd23f97b-f736-4aff-bda5-291b9900fc98 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "00f85c39-8b23-4556-9fbf-d806c690135c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:02:46 np0005480824 nova_compute[260089]: 2025-10-11 04:02:46.359 2 DEBUG nova.compute.manager [req-9cfa6ad9-166d-4e9e-97d8-518cb873382d req-fd23f97b-f736-4aff-bda5-291b9900fc98 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] No waiting events found dispatching network-vif-plugged-ec74708b-8329-48e4-b5f9-09af33e086f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 00:02:46 np0005480824 nova_compute[260089]: 2025-10-11 04:02:46.359 2 WARNING nova.compute.manager [req-9cfa6ad9-166d-4e9e-97d8-518cb873382d req-fd23f97b-f736-4aff-bda5-291b9900fc98 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Received unexpected event network-vif-plugged-ec74708b-8329-48e4-b5f9-09af33e086f9 for instance with vm_state deleted and task_state None.#033[00m
Oct 11 00:02:46 np0005480824 nova_compute[260089]: 2025-10-11 04:02:46.360 2 DEBUG nova.compute.manager [req-9cfa6ad9-166d-4e9e-97d8-518cb873382d req-fd23f97b-f736-4aff-bda5-291b9900fc98 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Received event network-vif-deleted-ec74708b-8329-48e4-b5f9-09af33e086f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 00:02:46 np0005480824 nova_compute[260089]: 2025-10-11 04:02:46.403 2 DEBUG oslo_concurrency.processutils [None req-4569c58f-d3aa-42b4-8ba3-7506172ddc6f eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:02:46 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 00:02:46 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1294850623' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 00:02:46 np0005480824 nova_compute[260089]: 2025-10-11 04:02:46.883 2 DEBUG oslo_concurrency.processutils [None req-4569c58f-d3aa-42b4-8ba3-7506172ddc6f eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:02:46 np0005480824 nova_compute[260089]: 2025-10-11 04:02:46.892 2 DEBUG nova.compute.provider_tree [None req-4569c58f-d3aa-42b4-8ba3-7506172ddc6f eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 00:02:46 np0005480824 nova_compute[260089]: 2025-10-11 04:02:46.979 2 DEBUG nova.scheduler.client.report [None req-4569c58f-d3aa-42b4-8ba3-7506172ddc6f eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 00:02:46 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1757: 321 pgs: 321 active+clean; 270 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 219 KiB/s wr, 45 op/s
Oct 11 00:02:47 np0005480824 nova_compute[260089]: 2025-10-11 04:02:47.060 2 DEBUG oslo_concurrency.lockutils [None req-4569c58f-d3aa-42b4-8ba3-7506172ddc6f eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.724s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:02:47 np0005480824 nova_compute[260089]: 2025-10-11 04:02:47.168 2 INFO nova.scheduler.client.report [None req-4569c58f-d3aa-42b4-8ba3-7506172ddc6f eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Deleted allocations for instance 00f85c39-8b23-4556-9fbf-d806c690135c#033[00m
Oct 11 00:02:47 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 00:02:47 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2305159284' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 00:02:47 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 00:02:47 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2305159284' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 00:02:47 np0005480824 nova_compute[260089]: 2025-10-11 04:02:47.413 2 DEBUG oslo_concurrency.lockutils [None req-4569c58f-d3aa-42b4-8ba3-7506172ddc6f eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Lock "00f85c39-8b23-4556-9fbf-d806c690135c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.448s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:02:48 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1758: 321 pgs: 321 active+clean; 270 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 193 KiB/s wr, 58 op/s
Oct 11 00:02:49 np0005480824 podman[300035]: 2025-10-11 04:02:49.051829679 +0000 UTC m=+0.116185534 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 00:02:49 np0005480824 nova_compute[260089]: 2025-10-11 04:02:49.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:02:49 np0005480824 nova_compute[260089]: 2025-10-11 04:02:49.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:02:50 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e467 do_prune osdmap full prune enabled
Oct 11 00:02:50 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e468 e468: 3 total, 3 up, 3 in
Oct 11 00:02:50 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e468: 3 total, 3 up, 3 in
Oct 11 00:02:50 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e468 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:02:50 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e468 do_prune osdmap full prune enabled
Oct 11 00:02:50 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e469 e469: 3 total, 3 up, 3 in
Oct 11 00:02:50 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e469: 3 total, 3 up, 3 in
Oct 11 00:02:50 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1761: 321 pgs: 321 active+clean; 270 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 30 KiB/s wr, 71 op/s
Oct 11 00:02:52 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e469 do_prune osdmap full prune enabled
Oct 11 00:02:52 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e470 e470: 3 total, 3 up, 3 in
Oct 11 00:02:52 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e470: 3 total, 3 up, 3 in
Oct 11 00:02:52 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1763: 321 pgs: 321 active+clean; 270 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 57 KiB/s rd, 31 KiB/s wr, 76 op/s
Oct 11 00:02:54 np0005480824 nova_compute[260089]: 2025-10-11 04:02:54.194 2 DEBUG oslo_concurrency.lockutils [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Acquiring lock "1364751a-4bbf-49e1-abe3-f702f03be8e3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:02:54 np0005480824 nova_compute[260089]: 2025-10-11 04:02:54.194 2 DEBUG oslo_concurrency.lockutils [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Lock "1364751a-4bbf-49e1-abe3-f702f03be8e3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:02:54 np0005480824 nova_compute[260089]: 2025-10-11 04:02:54.222 2 DEBUG nova.compute.manager [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 00:02:54 np0005480824 nova_compute[260089]: 2025-10-11 04:02:54.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:02:54 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 00:02:54 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/686979045' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 00:02:54 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 00:02:54 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/686979045' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 00:02:54 np0005480824 nova_compute[260089]: 2025-10-11 04:02:54.438 2 DEBUG oslo_concurrency.lockutils [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:02:54 np0005480824 nova_compute[260089]: 2025-10-11 04:02:54.439 2 DEBUG oslo_concurrency.lockutils [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:02:54 np0005480824 nova_compute[260089]: 2025-10-11 04:02:54.447 2 DEBUG nova.virt.hardware [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 00:02:54 np0005480824 nova_compute[260089]: 2025-10-11 04:02:54.448 2 INFO nova.compute.claims [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 00:02:54 np0005480824 nova_compute[260089]: 2025-10-11 04:02:54.581 2 DEBUG oslo_concurrency.processutils [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:02:54 np0005480824 nova_compute[260089]: 2025-10-11 04:02:54.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:02:54 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1764: 321 pgs: 321 active+clean; 270 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 2.7 KiB/s wr, 46 op/s
Oct 11 00:02:55 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 00:02:55 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1339412898' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 00:02:55 np0005480824 nova_compute[260089]: 2025-10-11 04:02:55.039 2 DEBUG oslo_concurrency.processutils [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:02:55 np0005480824 nova_compute[260089]: 2025-10-11 04:02:55.047 2 DEBUG nova.compute.provider_tree [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 00:02:55 np0005480824 nova_compute[260089]: 2025-10-11 04:02:55.084 2 DEBUG nova.scheduler.client.report [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 00:02:55 np0005480824 nova_compute[260089]: 2025-10-11 04:02:55.282 2 DEBUG oslo_concurrency.lockutils [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.842s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:02:55 np0005480824 nova_compute[260089]: 2025-10-11 04:02:55.282 2 DEBUG nova.compute.manager [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 00:02:55 np0005480824 nova_compute[260089]: 2025-10-11 04:02:55.520 2 DEBUG nova.compute.manager [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 00:02:55 np0005480824 nova_compute[260089]: 2025-10-11 04:02:55.521 2 DEBUG nova.network.neutron [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 00:02:55 np0005480824 nova_compute[260089]: 2025-10-11 04:02:55.564 2 INFO nova.virt.libvirt.driver [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 00:02:55 np0005480824 nova_compute[260089]: 2025-10-11 04:02:55.616 2 DEBUG nova.compute.manager [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 00:02:55 np0005480824 nova_compute[260089]: 2025-10-11 04:02:55.701 2 INFO nova.virt.block_device [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] Booting with volume add0a15d-17c4-4d18-981c-95d26fc9243b at /dev/vda#033[00m
Oct 11 00:02:55 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e470 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:02:55 np0005480824 nova_compute[260089]: 2025-10-11 04:02:55.719 2 DEBUG nova.policy [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'eccc3f574d354840901d28dad2488bf4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0e73ded2f2ee46b4a7485c01ef1b73e9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 00:02:55 np0005480824 nova_compute[260089]: 2025-10-11 04:02:55.855 2 DEBUG os_brick.utils [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct 11 00:02:55 np0005480824 nova_compute[260089]: 2025-10-11 04:02:55.858 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:02:55 np0005480824 nova_compute[260089]: 2025-10-11 04:02:55.874 676 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:02:55 np0005480824 nova_compute[260089]: 2025-10-11 04:02:55.874 676 DEBUG oslo.privsep.daemon [-] privsep: reply[bc94dc1b-66aa-43a8-8837-52a88dd383fb]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:02:55 np0005480824 nova_compute[260089]: 2025-10-11 04:02:55.876 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:02:55 np0005480824 nova_compute[260089]: 2025-10-11 04:02:55.888 676 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:02:55 np0005480824 nova_compute[260089]: 2025-10-11 04:02:55.889 676 DEBUG oslo.privsep.daemon [-] privsep: reply[6580497e-3192-4e5e-b465-7c9293f3053a]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d5d671ddab5a', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:02:55 np0005480824 nova_compute[260089]: 2025-10-11 04:02:55.891 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:02:55 np0005480824 nova_compute[260089]: 2025-10-11 04:02:55.908 676 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:02:55 np0005480824 nova_compute[260089]: 2025-10-11 04:02:55.909 676 DEBUG oslo.privsep.daemon [-] privsep: reply[f24d4df8-ba9d-47b8-b7d2-ebe3e7f5b093]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:02:55 np0005480824 nova_compute[260089]: 2025-10-11 04:02:55.912 676 DEBUG oslo.privsep.daemon [-] privsep: reply[1155eaf6-6723-4431-b484-17cc9e4fee19]: (4, 'fb3a2fb1-9efa-43f0-a057-bf422ac6b8d7') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:02:55 np0005480824 nova_compute[260089]: 2025-10-11 04:02:55.913 2 DEBUG oslo_concurrency.processutils [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:02:55 np0005480824 nova_compute[260089]: 2025-10-11 04:02:55.958 2 DEBUG oslo_concurrency.processutils [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] CMD "nvme version" returned: 0 in 0.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:02:55 np0005480824 nova_compute[260089]: 2025-10-11 04:02:55.964 2 DEBUG os_brick.initiator.connectors.lightos [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 11 00:02:55 np0005480824 nova_compute[260089]: 2025-10-11 04:02:55.965 2 DEBUG os_brick.initiator.connectors.lightos [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 11 00:02:55 np0005480824 nova_compute[260089]: 2025-10-11 04:02:55.965 2 DEBUG os_brick.initiator.connectors.lightos [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 11 00:02:55 np0005480824 nova_compute[260089]: 2025-10-11 04:02:55.966 2 DEBUG os_brick.utils [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] <== get_connector_properties: return (109ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d5d671ddab5a', 'do_local_attach': False, 'nvme_hostid': '83042a20-0f72-4c47-8453-e72ead378624', 'system uuid': 'fb3a2fb1-9efa-43f0-a057-bf422ac6b8d7', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 11 00:02:55 np0005480824 nova_compute[260089]: 2025-10-11 04:02:55.967 2 DEBUG nova.virt.block_device [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] Updating existing volume attachment record: e4e31d3f-1ce7-4704-ab36-441570c269dc _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 11 00:02:56 np0005480824 podman[300090]: 2025-10-11 04:02:56.019133358 +0000 UTC m=+0.077933984 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 00:02:56 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 00:02:56 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/103949296' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 00:02:56 np0005480824 nova_compute[260089]: 2025-10-11 04:02:56.970 2 DEBUG nova.network.neutron [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] Successfully created port: e1ac33cf-472c-41ba-b3ed-459749e87ead _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 00:02:56 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1765: 321 pgs: 321 active+clean; 270 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 3.0 KiB/s wr, 67 op/s
Oct 11 00:02:57 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e470 do_prune osdmap full prune enabled
Oct 11 00:02:57 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e471 e471: 3 total, 3 up, 3 in
Oct 11 00:02:57 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e471: 3 total, 3 up, 3 in
Oct 11 00:02:57 np0005480824 nova_compute[260089]: 2025-10-11 04:02:57.505 2 DEBUG nova.compute.manager [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 00:02:57 np0005480824 nova_compute[260089]: 2025-10-11 04:02:57.507 2 DEBUG nova.virt.libvirt.driver [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 00:02:57 np0005480824 nova_compute[260089]: 2025-10-11 04:02:57.507 2 INFO nova.virt.libvirt.driver [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] Creating image(s)
Oct 11 00:02:57 np0005480824 nova_compute[260089]: 2025-10-11 04:02:57.507 2 DEBUG nova.virt.libvirt.driver [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Oct 11 00:02:57 np0005480824 nova_compute[260089]: 2025-10-11 04:02:57.508 2 DEBUG nova.virt.libvirt.driver [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] Ensure instance console log exists: /var/lib/nova/instances/1364751a-4bbf-49e1-abe3-f702f03be8e3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 00:02:57 np0005480824 nova_compute[260089]: 2025-10-11 04:02:57.508 2 DEBUG oslo_concurrency.lockutils [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 00:02:57 np0005480824 nova_compute[260089]: 2025-10-11 04:02:57.508 2 DEBUG oslo_concurrency.lockutils [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 00:02:57 np0005480824 nova_compute[260089]: 2025-10-11 04:02:57.509 2 DEBUG oslo_concurrency.lockutils [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 00:02:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 00:02:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 00:02:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 00:02:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 00:02:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 00:02:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 00:02:58 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e471 do_prune osdmap full prune enabled
Oct 11 00:02:58 np0005480824 nova_compute[260089]: 2025-10-11 04:02:58.212 2 DEBUG nova.network.neutron [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] Successfully updated port: e1ac33cf-472c-41ba-b3ed-459749e87ead _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 00:02:58 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e472 e472: 3 total, 3 up, 3 in
Oct 11 00:02:58 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e472: 3 total, 3 up, 3 in
Oct 11 00:02:58 np0005480824 nova_compute[260089]: 2025-10-11 04:02:58.232 2 DEBUG oslo_concurrency.lockutils [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Acquiring lock "refresh_cache-1364751a-4bbf-49e1-abe3-f702f03be8e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 00:02:58 np0005480824 nova_compute[260089]: 2025-10-11 04:02:58.233 2 DEBUG oslo_concurrency.lockutils [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Acquired lock "refresh_cache-1364751a-4bbf-49e1-abe3-f702f03be8e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 00:02:58 np0005480824 nova_compute[260089]: 2025-10-11 04:02:58.233 2 DEBUG nova.network.neutron [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 00:02:58 np0005480824 nova_compute[260089]: 2025-10-11 04:02:58.303 2 DEBUG nova.compute.manager [req-00a167b1-a7a5-4319-8472-98db93c081aa req-ff06135b-82bd-4a38-9c99-4014525bbc10 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] Received event network-changed-e1ac33cf-472c-41ba-b3ed-459749e87ead external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 00:02:58 np0005480824 nova_compute[260089]: 2025-10-11 04:02:58.304 2 DEBUG nova.compute.manager [req-00a167b1-a7a5-4319-8472-98db93c081aa req-ff06135b-82bd-4a38-9c99-4014525bbc10 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] Refreshing instance network info cache due to event network-changed-e1ac33cf-472c-41ba-b3ed-459749e87ead. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 00:02:58 np0005480824 nova_compute[260089]: 2025-10-11 04:02:58.304 2 DEBUG oslo_concurrency.lockutils [req-00a167b1-a7a5-4319-8472-98db93c081aa req-ff06135b-82bd-4a38-9c99-4014525bbc10 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "refresh_cache-1364751a-4bbf-49e1-abe3-f702f03be8e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 00:02:58 np0005480824 nova_compute[260089]: 2025-10-11 04:02:58.375 2 DEBUG nova.network.neutron [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 00:02:58 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1768: 321 pgs: 321 active+clean; 270 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.6 KiB/s wr, 35 op/s
Oct 11 00:02:59 np0005480824 nova_compute[260089]: 2025-10-11 04:02:59.094 2 DEBUG nova.network.neutron [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] Updating instance_info_cache with network_info: [{"id": "e1ac33cf-472c-41ba-b3ed-459749e87ead", "address": "fa:16:3e:da:72:de", "network": {"id": "15a62ee0-8e34-4e49-990e-246b4ef9e0c6", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1498494916-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0e73ded2f2ee46b4a7485c01ef1b73e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1ac33cf-47", "ovs_interfaceid": "e1ac33cf-472c-41ba-b3ed-459749e87ead", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 00:02:59 np0005480824 nova_compute[260089]: 2025-10-11 04:02:59.115 2 DEBUG oslo_concurrency.lockutils [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Releasing lock "refresh_cache-1364751a-4bbf-49e1-abe3-f702f03be8e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 00:02:59 np0005480824 nova_compute[260089]: 2025-10-11 04:02:59.116 2 DEBUG nova.compute.manager [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] Instance network_info: |[{"id": "e1ac33cf-472c-41ba-b3ed-459749e87ead", "address": "fa:16:3e:da:72:de", "network": {"id": "15a62ee0-8e34-4e49-990e-246b4ef9e0c6", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1498494916-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0e73ded2f2ee46b4a7485c01ef1b73e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1ac33cf-47", "ovs_interfaceid": "e1ac33cf-472c-41ba-b3ed-459749e87ead", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 00:02:59 np0005480824 nova_compute[260089]: 2025-10-11 04:02:59.116 2 DEBUG oslo_concurrency.lockutils [req-00a167b1-a7a5-4319-8472-98db93c081aa req-ff06135b-82bd-4a38-9c99-4014525bbc10 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquired lock "refresh_cache-1364751a-4bbf-49e1-abe3-f702f03be8e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 00:02:59 np0005480824 nova_compute[260089]: 2025-10-11 04:02:59.117 2 DEBUG nova.network.neutron [req-00a167b1-a7a5-4319-8472-98db93c081aa req-ff06135b-82bd-4a38-9c99-4014525bbc10 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] Refreshing network info cache for port e1ac33cf-472c-41ba-b3ed-459749e87ead _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 00:02:59 np0005480824 nova_compute[260089]: 2025-10-11 04:02:59.120 2 DEBUG nova.virt.libvirt.driver [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] Start _get_guest_xml network_info=[{"id": "e1ac33cf-472c-41ba-b3ed-459749e87ead", "address": "fa:16:3e:da:72:de", "network": {"id": "15a62ee0-8e34-4e49-990e-246b4ef9e0c6", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1498494916-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0e73ded2f2ee46b4a7485c01ef1b73e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1ac33cf-47", "ovs_interfaceid": "e1ac33cf-472c-41ba-b3ed-459749e87ead", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'attachment_id': 'e4e31d3f-1ce7-4704-ab36-441570c269dc', 'mount_device': '/dev/vda', 'delete_on_termination': False, 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-add0a15d-17c4-4d18-981c-95d26fc9243b', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'add0a15d-17c4-4d18-981c-95d26fc9243b', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '1364751a-4bbf-49e1-abe3-f702f03be8e3', 'attached_at': '', 'detached_at': '', 'volume_id': 'add0a15d-17c4-4d18-981c-95d26fc9243b', 'serial': 'add0a15d-17c4-4d18-981c-95d26fc9243b'}, 'device_type': 'disk', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 00:02:59 np0005480824 nova_compute[260089]: 2025-10-11 04:02:59.128 2 WARNING nova.virt.libvirt.driver [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 00:02:59 np0005480824 nova_compute[260089]: 2025-10-11 04:02:59.143 2 DEBUG nova.virt.libvirt.host [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 00:02:59 np0005480824 nova_compute[260089]: 2025-10-11 04:02:59.145 2 DEBUG nova.virt.libvirt.host [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 00:02:59 np0005480824 nova_compute[260089]: 2025-10-11 04:02:59.149 2 DEBUG nova.virt.libvirt.host [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 00:02:59 np0005480824 nova_compute[260089]: 2025-10-11 04:02:59.149 2 DEBUG nova.virt.libvirt.host [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 00:02:59 np0005480824 nova_compute[260089]: 2025-10-11 04:02:59.151 2 DEBUG nova.virt.libvirt.driver [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 00:02:59 np0005480824 nova_compute[260089]: 2025-10-11 04:02:59.151 2 DEBUG nova.virt.hardware [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T03:44:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6707ecae-2ae2-4c2d-86dc-409bac38f6a5',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 00:02:59 np0005480824 nova_compute[260089]: 2025-10-11 04:02:59.152 2 DEBUG nova.virt.hardware [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 00:02:59 np0005480824 nova_compute[260089]: 2025-10-11 04:02:59.153 2 DEBUG nova.virt.hardware [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 00:02:59 np0005480824 nova_compute[260089]: 2025-10-11 04:02:59.153 2 DEBUG nova.virt.hardware [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 00:02:59 np0005480824 nova_compute[260089]: 2025-10-11 04:02:59.154 2 DEBUG nova.virt.hardware [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 00:02:59 np0005480824 nova_compute[260089]: 2025-10-11 04:02:59.154 2 DEBUG nova.virt.hardware [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 00:02:59 np0005480824 nova_compute[260089]: 2025-10-11 04:02:59.155 2 DEBUG nova.virt.hardware [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 00:02:59 np0005480824 nova_compute[260089]: 2025-10-11 04:02:59.155 2 DEBUG nova.virt.hardware [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 00:02:59 np0005480824 nova_compute[260089]: 2025-10-11 04:02:59.156 2 DEBUG nova.virt.hardware [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 00:02:59 np0005480824 nova_compute[260089]: 2025-10-11 04:02:59.156 2 DEBUG nova.virt.hardware [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 00:02:59 np0005480824 nova_compute[260089]: 2025-10-11 04:02:59.157 2 DEBUG nova.virt.hardware [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 00:02:59 np0005480824 nova_compute[260089]: 2025-10-11 04:02:59.199 2 DEBUG nova.storage.rbd_utils [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] rbd image 1364751a-4bbf-49e1-abe3-f702f03be8e3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 00:02:59 np0005480824 nova_compute[260089]: 2025-10-11 04:02:59.208 2 DEBUG oslo_concurrency.processutils [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 00:02:59 np0005480824 nova_compute[260089]: 2025-10-11 04:02:59.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 00:02:59 np0005480824 nova_compute[260089]: 2025-10-11 04:02:59.266 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760155364.2073023, 00f85c39-8b23-4556-9fbf-d806c690135c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 00:02:59 np0005480824 nova_compute[260089]: 2025-10-11 04:02:59.266 2 INFO nova.compute.manager [-] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] VM Stopped (Lifecycle Event)
Oct 11 00:02:59 np0005480824 nova_compute[260089]: 2025-10-11 04:02:59.288 2 DEBUG nova.compute.manager [None req-36225bc1-a424-4b1f-b676-66940011f91f - - - - - -] [instance: 00f85c39-8b23-4556-9fbf-d806c690135c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 00:02:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 00:02:59 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2934944768' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 00:02:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 00:02:59 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2934944768' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 00:02:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 00:02:59 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1714847430' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 00:02:59 np0005480824 nova_compute[260089]: 2025-10-11 04:02:59.686 2 DEBUG oslo_concurrency.processutils [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 00:02:59 np0005480824 nova_compute[260089]: 2025-10-11 04:02:59.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 00:02:59 np0005480824 nova_compute[260089]: 2025-10-11 04:02:59.846 2 DEBUG os_brick.encryptors [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Using volume encryption metadata '{'encryption_key_id': 'f6af2d3c-807b-4d78-8e85-77aeb70a7240', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-add0a15d-17c4-4d18-981c-95d26fc9243b', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'add0a15d-17c4-4d18-981c-95d26fc9243b', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '1364751a-4bbf-49e1-abe3-f702f03be8e3', 'attached_at': '', 'detached_at': '', 'volume_id': 'add0a15d-17c4-4d18-981c-95d26fc9243b', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Oct 11 00:02:59 np0005480824 nova_compute[260089]: 2025-10-11 04:02:59.849 2 DEBUG barbicanclient.client [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Oct 11 00:02:59 np0005480824 nova_compute[260089]: 2025-10-11 04:02:59.864 2 DEBUG barbicanclient.v1.secrets [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/f6af2d3c-807b-4d78-8e85-77aeb70a7240 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Oct 11 00:02:59 np0005480824 nova_compute[260089]: 2025-10-11 04:02:59.865 2 INFO barbicanclient.base [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Calculated Secrets uuid ref: secrets/f6af2d3c-807b-4d78-8e85-77aeb70a7240
Oct 11 00:02:59 np0005480824 nova_compute[260089]: 2025-10-11 04:02:59.883 2 DEBUG barbicanclient.client [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 00:02:59 np0005480824 nova_compute[260089]: 2025-10-11 04:02:59.884 2 INFO barbicanclient.base [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Calculated Secrets uuid ref: secrets/f6af2d3c-807b-4d78-8e85-77aeb70a7240
Oct 11 00:02:59 np0005480824 nova_compute[260089]: 2025-10-11 04:02:59.904 2 DEBUG barbicanclient.client [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 00:02:59 np0005480824 nova_compute[260089]: 2025-10-11 04:02:59.905 2 INFO barbicanclient.base [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Calculated Secrets uuid ref: secrets/f6af2d3c-807b-4d78-8e85-77aeb70a7240
Oct 11 00:02:59 np0005480824 nova_compute[260089]: 2025-10-11 04:02:59.923 2 DEBUG barbicanclient.client [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Oct 11 00:02:59 np0005480824 nova_compute[260089]: 2025-10-11 04:02:59.923 2 INFO barbicanclient.base [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Calculated Secrets uuid ref: secrets/f6af2d3c-807b-4d78-8e85-77aeb70a7240
Oct 11 00:02:59 np0005480824 nova_compute[260089]: 2025-10-11 04:02:59.948 2 DEBUG barbicanclient.client [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:02:59 np0005480824 nova_compute[260089]: 2025-10-11 04:02:59.949 2 INFO barbicanclient.base [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Calculated Secrets uuid ref: secrets/f6af2d3c-807b-4d78-8e85-77aeb70a7240#033[00m
Oct 11 00:02:59 np0005480824 nova_compute[260089]: 2025-10-11 04:02:59.973 2 DEBUG barbicanclient.client [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:02:59 np0005480824 nova_compute[260089]: 2025-10-11 04:02:59.974 2 INFO barbicanclient.base [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Calculated Secrets uuid ref: secrets/f6af2d3c-807b-4d78-8e85-77aeb70a7240#033[00m
Oct 11 00:03:00 np0005480824 nova_compute[260089]: 2025-10-11 04:03:00.001 2 DEBUG barbicanclient.client [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:03:00 np0005480824 nova_compute[260089]: 2025-10-11 04:03:00.002 2 INFO barbicanclient.base [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Calculated Secrets uuid ref: secrets/f6af2d3c-807b-4d78-8e85-77aeb70a7240#033[00m
Oct 11 00:03:00 np0005480824 nova_compute[260089]: 2025-10-11 04:03:00.025 2 DEBUG barbicanclient.client [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:03:00 np0005480824 nova_compute[260089]: 2025-10-11 04:03:00.026 2 INFO barbicanclient.base [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Calculated Secrets uuid ref: secrets/f6af2d3c-807b-4d78-8e85-77aeb70a7240#033[00m
Oct 11 00:03:00 np0005480824 nova_compute[260089]: 2025-10-11 04:03:00.049 2 DEBUG barbicanclient.client [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:03:00 np0005480824 nova_compute[260089]: 2025-10-11 04:03:00.050 2 INFO barbicanclient.base [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Calculated Secrets uuid ref: secrets/f6af2d3c-807b-4d78-8e85-77aeb70a7240#033[00m
Oct 11 00:03:00 np0005480824 nova_compute[260089]: 2025-10-11 04:03:00.073 2 DEBUG barbicanclient.client [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:03:00 np0005480824 nova_compute[260089]: 2025-10-11 04:03:00.074 2 INFO barbicanclient.base [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Calculated Secrets uuid ref: secrets/f6af2d3c-807b-4d78-8e85-77aeb70a7240#033[00m
Oct 11 00:03:00 np0005480824 nova_compute[260089]: 2025-10-11 04:03:00.098 2 DEBUG barbicanclient.client [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:03:00 np0005480824 nova_compute[260089]: 2025-10-11 04:03:00.098 2 INFO barbicanclient.base [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Calculated Secrets uuid ref: secrets/f6af2d3c-807b-4d78-8e85-77aeb70a7240#033[00m
Oct 11 00:03:00 np0005480824 nova_compute[260089]: 2025-10-11 04:03:00.120 2 DEBUG barbicanclient.client [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:03:00 np0005480824 nova_compute[260089]: 2025-10-11 04:03:00.121 2 INFO barbicanclient.base [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Calculated Secrets uuid ref: secrets/f6af2d3c-807b-4d78-8e85-77aeb70a7240#033[00m
Oct 11 00:03:00 np0005480824 nova_compute[260089]: 2025-10-11 04:03:00.148 2 DEBUG barbicanclient.client [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:03:00 np0005480824 nova_compute[260089]: 2025-10-11 04:03:00.149 2 INFO barbicanclient.base [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Calculated Secrets uuid ref: secrets/f6af2d3c-807b-4d78-8e85-77aeb70a7240#033[00m
Oct 11 00:03:00 np0005480824 nova_compute[260089]: 2025-10-11 04:03:00.172 2 DEBUG barbicanclient.client [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:03:00 np0005480824 nova_compute[260089]: 2025-10-11 04:03:00.173 2 INFO barbicanclient.base [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Calculated Secrets uuid ref: secrets/f6af2d3c-807b-4d78-8e85-77aeb70a7240#033[00m
Oct 11 00:03:00 np0005480824 nova_compute[260089]: 2025-10-11 04:03:00.195 2 DEBUG barbicanclient.client [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:03:00 np0005480824 nova_compute[260089]: 2025-10-11 04:03:00.196 2 INFO barbicanclient.base [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Calculated Secrets uuid ref: secrets/f6af2d3c-807b-4d78-8e85-77aeb70a7240#033[00m
Oct 11 00:03:00 np0005480824 nova_compute[260089]: 2025-10-11 04:03:00.216 2 DEBUG barbicanclient.client [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:03:00 np0005480824 nova_compute[260089]: 2025-10-11 04:03:00.217 2 DEBUG nova.virt.libvirt.host [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Secret XML: <secret ephemeral="no" private="no">
Oct 11 00:03:00 np0005480824 nova_compute[260089]:  <usage type="volume">
Oct 11 00:03:00 np0005480824 nova_compute[260089]:    <volume>add0a15d-17c4-4d18-981c-95d26fc9243b</volume>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:  </usage>
Oct 11 00:03:00 np0005480824 nova_compute[260089]: </secret>
Oct 11 00:03:00 np0005480824 nova_compute[260089]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131#033[00m
Oct 11 00:03:00 np0005480824 nova_compute[260089]: 2025-10-11 04:03:00.261 2 DEBUG nova.virt.libvirt.vif [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T04:02:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1319088656',display_name='tempest-TransferEncryptedVolumeTest-server-1319088656',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1319088656',id=27,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDBxc/yNqNR+6hcns3uIK5nByp3y7/Z4QylmLciPhq6XKUnS3cE8WBipiedmC1KXbIrzQin+vEhjglj/GGa46YEcBJkij9tDpZ0nSurHoQgQFYWBhIhwD65l+TbXzKNAAg==',key_name='tempest-TransferEncryptedVolumeTest-1457663695',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0e73ded2f2ee46b4a7485c01ef1b73e9',ramdisk_id='',reservation_id='r-atzkzh09',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1815435088',owner_user_name='tempest-TransferEncryptedVolumeTest-1815435088-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T04:02:55Z,user_data=None,user_id='eccc3f574d354840901d28dad2488bf4',uuid=1364751a-4bbf-49e1-abe3-f702f03be8e3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e1ac33cf-472c-41ba-b3ed-459749e87ead", "address": "fa:16:3e:da:72:de", "network": {"id": "15a62ee0-8e34-4e49-990e-246b4ef9e0c6", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1498494916-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "0e73ded2f2ee46b4a7485c01ef1b73e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1ac33cf-47", "ovs_interfaceid": "e1ac33cf-472c-41ba-b3ed-459749e87ead", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 00:03:00 np0005480824 nova_compute[260089]: 2025-10-11 04:03:00.261 2 DEBUG nova.network.os_vif_util [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Converting VIF {"id": "e1ac33cf-472c-41ba-b3ed-459749e87ead", "address": "fa:16:3e:da:72:de", "network": {"id": "15a62ee0-8e34-4e49-990e-246b4ef9e0c6", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1498494916-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0e73ded2f2ee46b4a7485c01ef1b73e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1ac33cf-47", "ovs_interfaceid": "e1ac33cf-472c-41ba-b3ed-459749e87ead", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 00:03:00 np0005480824 nova_compute[260089]: 2025-10-11 04:03:00.262 2 DEBUG nova.network.os_vif_util [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:72:de,bridge_name='br-int',has_traffic_filtering=True,id=e1ac33cf-472c-41ba-b3ed-459749e87ead,network=Network(15a62ee0-8e34-4e49-990e-246b4ef9e0c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1ac33cf-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 00:03:00 np0005480824 nova_compute[260089]: 2025-10-11 04:03:00.263 2 DEBUG nova.objects.instance [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1364751a-4bbf-49e1-abe3-f702f03be8e3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 00:03:00 np0005480824 nova_compute[260089]: 2025-10-11 04:03:00.279 2 DEBUG nova.virt.libvirt.driver [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] End _get_guest_xml xml=<domain type="kvm">
Oct 11 00:03:00 np0005480824 nova_compute[260089]:  <uuid>1364751a-4bbf-49e1-abe3-f702f03be8e3</uuid>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:  <name>instance-0000001b</name>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:  <memory>131072</memory>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:  <vcpu>1</vcpu>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:  <metadata>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 00:03:00 np0005480824 nova_compute[260089]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:      <nova:name>tempest-TransferEncryptedVolumeTest-server-1319088656</nova:name>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:      <nova:creationTime>2025-10-11 04:02:59</nova:creationTime>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:      <nova:flavor name="m1.nano">
Oct 11 00:03:00 np0005480824 nova_compute[260089]:        <nova:memory>128</nova:memory>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:        <nova:disk>1</nova:disk>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:        <nova:swap>0</nova:swap>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:        <nova:vcpus>1</nova:vcpus>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:      </nova:flavor>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:      <nova:owner>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:        <nova:user uuid="eccc3f574d354840901d28dad2488bf4">tempest-TransferEncryptedVolumeTest-1815435088-project-member</nova:user>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:        <nova:project uuid="0e73ded2f2ee46b4a7485c01ef1b73e9">tempest-TransferEncryptedVolumeTest-1815435088</nova:project>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:      </nova:owner>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:      <nova:ports>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:        <nova:port uuid="e1ac33cf-472c-41ba-b3ed-459749e87ead">
Oct 11 00:03:00 np0005480824 nova_compute[260089]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:        </nova:port>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:      </nova:ports>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:    </nova:instance>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:  </metadata>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:  <sysinfo type="smbios">
Oct 11 00:03:00 np0005480824 nova_compute[260089]:    <system>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:      <entry name="manufacturer">RDO</entry>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:      <entry name="product">OpenStack Compute</entry>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:      <entry name="serial">1364751a-4bbf-49e1-abe3-f702f03be8e3</entry>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:      <entry name="uuid">1364751a-4bbf-49e1-abe3-f702f03be8e3</entry>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:      <entry name="family">Virtual Machine</entry>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:    </system>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:  </sysinfo>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:  <os>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:    <boot dev="hd"/>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:    <smbios mode="sysinfo"/>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:  </os>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:  <features>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:    <acpi/>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:    <apic/>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:    <vmcoreinfo/>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:  </features>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:  <clock offset="utc">
Oct 11 00:03:00 np0005480824 nova_compute[260089]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:    <timer name="hpet" present="no"/>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:  </clock>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:  <cpu mode="host-model" match="exact">
Oct 11 00:03:00 np0005480824 nova_compute[260089]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:  </cpu>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:  <devices>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:    <disk type="network" device="cdrom">
Oct 11 00:03:00 np0005480824 nova_compute[260089]:      <driver type="raw" cache="none"/>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:      <source protocol="rbd" name="vms/1364751a-4bbf-49e1-abe3-f702f03be8e3_disk.config">
Oct 11 00:03:00 np0005480824 nova_compute[260089]:        <host name="192.168.122.100" port="6789"/>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:      </source>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:      <auth username="openstack">
Oct 11 00:03:00 np0005480824 nova_compute[260089]:        <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:      </auth>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:      <target dev="sda" bus="sata"/>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:    </disk>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:    <disk type="network" device="disk">
Oct 11 00:03:00 np0005480824 nova_compute[260089]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:      <source protocol="rbd" name="volumes/volume-add0a15d-17c4-4d18-981c-95d26fc9243b">
Oct 11 00:03:00 np0005480824 nova_compute[260089]:        <host name="192.168.122.100" port="6789"/>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:      </source>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:      <auth username="openstack">
Oct 11 00:03:00 np0005480824 nova_compute[260089]:        <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:      </auth>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:      <target dev="vda" bus="virtio"/>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:      <serial>add0a15d-17c4-4d18-981c-95d26fc9243b</serial>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:      <encryption format="luks">
Oct 11 00:03:00 np0005480824 nova_compute[260089]:        <secret type="passphrase" uuid="9de57ca4-51db-451a-8909-829fe5e96de2"/>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:      </encryption>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:    </disk>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:    <interface type="ethernet">
Oct 11 00:03:00 np0005480824 nova_compute[260089]:      <mac address="fa:16:3e:da:72:de"/>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:      <model type="virtio"/>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:      <mtu size="1442"/>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:      <target dev="tape1ac33cf-47"/>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:    </interface>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:    <serial type="pty">
Oct 11 00:03:00 np0005480824 nova_compute[260089]:      <log file="/var/lib/nova/instances/1364751a-4bbf-49e1-abe3-f702f03be8e3/console.log" append="off"/>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:    </serial>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:    <video>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:      <model type="virtio"/>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:    </video>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:    <input type="tablet" bus="usb"/>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:    <rng model="virtio">
Oct 11 00:03:00 np0005480824 nova_compute[260089]:      <backend model="random">/dev/urandom</backend>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:    </rng>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root"/>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:    <controller type="usb" index="0"/>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:    <memballoon model="virtio">
Oct 11 00:03:00 np0005480824 nova_compute[260089]:      <stats period="10"/>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:    </memballoon>
Oct 11 00:03:00 np0005480824 nova_compute[260089]:  </devices>
Oct 11 00:03:00 np0005480824 nova_compute[260089]: </domain>
Oct 11 00:03:00 np0005480824 nova_compute[260089]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 00:03:00 np0005480824 nova_compute[260089]: 2025-10-11 04:03:00.280 2 DEBUG nova.compute.manager [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] Preparing to wait for external event network-vif-plugged-e1ac33cf-472c-41ba-b3ed-459749e87ead prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 00:03:00 np0005480824 nova_compute[260089]: 2025-10-11 04:03:00.280 2 DEBUG oslo_concurrency.lockutils [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Acquiring lock "1364751a-4bbf-49e1-abe3-f702f03be8e3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:03:00 np0005480824 nova_compute[260089]: 2025-10-11 04:03:00.280 2 DEBUG oslo_concurrency.lockutils [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Lock "1364751a-4bbf-49e1-abe3-f702f03be8e3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:03:00 np0005480824 nova_compute[260089]: 2025-10-11 04:03:00.281 2 DEBUG oslo_concurrency.lockutils [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Lock "1364751a-4bbf-49e1-abe3-f702f03be8e3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:03:00 np0005480824 nova_compute[260089]: 2025-10-11 04:03:00.281 2 DEBUG nova.virt.libvirt.vif [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T04:02:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1319088656',display_name='tempest-TransferEncryptedVolumeTest-server-1319088656',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1319088656',id=27,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDBxc/yNqNR+6hcns3uIK5nByp3y7/Z4QylmLciPhq6XKUnS3cE8WBipiedmC1KXbIrzQin+vEhjglj/GGa46YEcBJkij9tDpZ0nSurHoQgQFYWBhIhwD65l+TbXzKNAAg==',key_name='tempest-TransferEncryptedVolumeTest-1457663695',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0e73ded2f2ee46b4a7485c01ef1b73e9',ramdisk_id='',reservation_id='r-atzkzh09',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1815435088',owner_user_name='tempest-TransferEncryptedVolumeTest-1815435088-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T04:02:55Z,user_data=None,user_id='eccc3f574d354840901d28dad2488bf4',uuid=1364751a-4bbf-49e1-abe3-f702f03be8e3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e1ac33cf-472c-41ba-b3ed-459749e87ead", "address": "fa:16:3e:da:72:de", "network": {"id": "15a62ee0-8e34-4e49-990e-246b4ef9e0c6", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1498494916-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "0e73ded2f2ee46b4a7485c01ef1b73e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1ac33cf-47", "ovs_interfaceid": "e1ac33cf-472c-41ba-b3ed-459749e87ead", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 00:03:00 np0005480824 nova_compute[260089]: 2025-10-11 04:03:00.282 2 DEBUG nova.network.os_vif_util [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Converting VIF {"id": "e1ac33cf-472c-41ba-b3ed-459749e87ead", "address": "fa:16:3e:da:72:de", "network": {"id": "15a62ee0-8e34-4e49-990e-246b4ef9e0c6", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1498494916-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0e73ded2f2ee46b4a7485c01ef1b73e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1ac33cf-47", "ovs_interfaceid": "e1ac33cf-472c-41ba-b3ed-459749e87ead", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 00:03:00 np0005480824 nova_compute[260089]: 2025-10-11 04:03:00.282 2 DEBUG nova.network.os_vif_util [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:72:de,bridge_name='br-int',has_traffic_filtering=True,id=e1ac33cf-472c-41ba-b3ed-459749e87ead,network=Network(15a62ee0-8e34-4e49-990e-246b4ef9e0c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1ac33cf-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 00:03:00 np0005480824 nova_compute[260089]: 2025-10-11 04:03:00.282 2 DEBUG os_vif [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:72:de,bridge_name='br-int',has_traffic_filtering=True,id=e1ac33cf-472c-41ba-b3ed-459749e87ead,network=Network(15a62ee0-8e34-4e49-990e-246b4ef9e0c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1ac33cf-47') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 00:03:00 np0005480824 nova_compute[260089]: 2025-10-11 04:03:00.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:03:00 np0005480824 nova_compute[260089]: 2025-10-11 04:03:00.283 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 00:03:00 np0005480824 nova_compute[260089]: 2025-10-11 04:03:00.284 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 00:03:00 np0005480824 nova_compute[260089]: 2025-10-11 04:03:00.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:03:00 np0005480824 nova_compute[260089]: 2025-10-11 04:03:00.286 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape1ac33cf-47, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 00:03:00 np0005480824 nova_compute[260089]: 2025-10-11 04:03:00.287 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape1ac33cf-47, col_values=(('external_ids', {'iface-id': 'e1ac33cf-472c-41ba-b3ed-459749e87ead', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:da:72:de', 'vm-uuid': '1364751a-4bbf-49e1-abe3-f702f03be8e3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 00:03:00 np0005480824 nova_compute[260089]: 2025-10-11 04:03:00.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:03:00 np0005480824 nova_compute[260089]: 2025-10-11 04:03:00.290 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 00:03:00 np0005480824 NetworkManager[44969]: <info>  [1760155380.2909] manager: (tape1ac33cf-47): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/132)
Oct 11 00:03:00 np0005480824 nova_compute[260089]: 2025-10-11 04:03:00.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:03:00 np0005480824 nova_compute[260089]: 2025-10-11 04:03:00.295 2 INFO os_vif [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:72:de,bridge_name='br-int',has_traffic_filtering=True,id=e1ac33cf-472c-41ba-b3ed-459749e87ead,network=Network(15a62ee0-8e34-4e49-990e-246b4ef9e0c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1ac33cf-47')#033[00m
Oct 11 00:03:00 np0005480824 nova_compute[260089]: 2025-10-11 04:03:00.342 2 DEBUG nova.virt.libvirt.driver [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 00:03:00 np0005480824 nova_compute[260089]: 2025-10-11 04:03:00.343 2 DEBUG nova.virt.libvirt.driver [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 00:03:00 np0005480824 nova_compute[260089]: 2025-10-11 04:03:00.343 2 DEBUG nova.virt.libvirt.driver [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] No VIF found with MAC fa:16:3e:da:72:de, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 00:03:00 np0005480824 nova_compute[260089]: 2025-10-11 04:03:00.343 2 INFO nova.virt.libvirt.driver [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] Using config drive#033[00m
Oct 11 00:03:00 np0005480824 nova_compute[260089]: 2025-10-11 04:03:00.367 2 DEBUG nova.storage.rbd_utils [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] rbd image 1364751a-4bbf-49e1-abe3-f702f03be8e3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 00:03:00 np0005480824 nova_compute[260089]: 2025-10-11 04:03:00.400 2 DEBUG nova.network.neutron [req-00a167b1-a7a5-4319-8472-98db93c081aa req-ff06135b-82bd-4a38-9c99-4014525bbc10 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] Updated VIF entry in instance network info cache for port e1ac33cf-472c-41ba-b3ed-459749e87ead. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 00:03:00 np0005480824 nova_compute[260089]: 2025-10-11 04:03:00.400 2 DEBUG nova.network.neutron [req-00a167b1-a7a5-4319-8472-98db93c081aa req-ff06135b-82bd-4a38-9c99-4014525bbc10 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] Updating instance_info_cache with network_info: [{"id": "e1ac33cf-472c-41ba-b3ed-459749e87ead", "address": "fa:16:3e:da:72:de", "network": {"id": "15a62ee0-8e34-4e49-990e-246b4ef9e0c6", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1498494916-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0e73ded2f2ee46b4a7485c01ef1b73e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1ac33cf-47", "ovs_interfaceid": "e1ac33cf-472c-41ba-b3ed-459749e87ead", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 00:03:00 np0005480824 nova_compute[260089]: 2025-10-11 04:03:00.428 2 DEBUG oslo_concurrency.lockutils [req-00a167b1-a7a5-4319-8472-98db93c081aa req-ff06135b-82bd-4a38-9c99-4014525bbc10 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Releasing lock "refresh_cache-1364751a-4bbf-49e1-abe3-f702f03be8e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 00:03:00 np0005480824 nova_compute[260089]: 2025-10-11 04:03:00.705 2 INFO nova.virt.libvirt.driver [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] Creating config drive at /var/lib/nova/instances/1364751a-4bbf-49e1-abe3-f702f03be8e3/disk.config#033[00m
Oct 11 00:03:00 np0005480824 nova_compute[260089]: 2025-10-11 04:03:00.712 2 DEBUG oslo_concurrency.processutils [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1364751a-4bbf-49e1-abe3-f702f03be8e3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9ps8417y execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:03:00 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e472 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:03:00 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e472 do_prune osdmap full prune enabled
Oct 11 00:03:00 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e473 e473: 3 total, 3 up, 3 in
Oct 11 00:03:00 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e473: 3 total, 3 up, 3 in
Oct 11 00:03:00 np0005480824 nova_compute[260089]: 2025-10-11 04:03:00.845 2 DEBUG oslo_concurrency.processutils [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1364751a-4bbf-49e1-abe3-f702f03be8e3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9ps8417y" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:03:00 np0005480824 nova_compute[260089]: 2025-10-11 04:03:00.876 2 DEBUG nova.storage.rbd_utils [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] rbd image 1364751a-4bbf-49e1-abe3-f702f03be8e3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 00:03:00 np0005480824 nova_compute[260089]: 2025-10-11 04:03:00.880 2 DEBUG oslo_concurrency.processutils [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1364751a-4bbf-49e1-abe3-f702f03be8e3/disk.config 1364751a-4bbf-49e1-abe3-f702f03be8e3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:03:00 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1770: 321 pgs: 321 active+clean; 270 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 1.5 KiB/s wr, 37 op/s
Oct 11 00:03:01 np0005480824 nova_compute[260089]: 2025-10-11 04:03:01.037 2 DEBUG oslo_concurrency.processutils [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1364751a-4bbf-49e1-abe3-f702f03be8e3/disk.config 1364751a-4bbf-49e1-abe3-f702f03be8e3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:03:01 np0005480824 nova_compute[260089]: 2025-10-11 04:03:01.038 2 INFO nova.virt.libvirt.driver [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] Deleting local config drive /var/lib/nova/instances/1364751a-4bbf-49e1-abe3-f702f03be8e3/disk.config because it was imported into RBD.#033[00m
Oct 11 00:03:01 np0005480824 kernel: tape1ac33cf-47: entered promiscuous mode
Oct 11 00:03:01 np0005480824 NetworkManager[44969]: <info>  [1760155381.0984] manager: (tape1ac33cf-47): new Tun device (/org/freedesktop/NetworkManager/Devices/133)
Oct 11 00:03:01 np0005480824 nova_compute[260089]: 2025-10-11 04:03:01.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:03:01 np0005480824 ovn_controller[152667]: 2025-10-11T04:03:01Z|00247|binding|INFO|Claiming lport e1ac33cf-472c-41ba-b3ed-459749e87ead for this chassis.
Oct 11 00:03:01 np0005480824 ovn_controller[152667]: 2025-10-11T04:03:01Z|00248|binding|INFO|e1ac33cf-472c-41ba-b3ed-459749e87ead: Claiming fa:16:3e:da:72:de 10.100.0.6
Oct 11 00:03:01 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:03:01.109 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:72:de 10.100.0.6'], port_security=['fa:16:3e:da:72:de 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '1364751a-4bbf-49e1-abe3-f702f03be8e3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-15a62ee0-8e34-4e49-990e-246b4ef9e0c6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0e73ded2f2ee46b4a7485c01ef1b73e9', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b0a0daf4-5fac-406b-b8da-5df24a392041', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3f608fb9-f693-4a11-9617-6172f3d025df, chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], logical_port=e1ac33cf-472c-41ba-b3ed-459749e87ead) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 00:03:01 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:03:01.112 162245 INFO neutron.agent.ovn.metadata.agent [-] Port e1ac33cf-472c-41ba-b3ed-459749e87ead in datapath 15a62ee0-8e34-4e49-990e-246b4ef9e0c6 bound to our chassis#033[00m
Oct 11 00:03:01 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:03:01.114 162245 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 15a62ee0-8e34-4e49-990e-246b4ef9e0c6#033[00m
Oct 11 00:03:01 np0005480824 ovn_controller[152667]: 2025-10-11T04:03:01Z|00249|binding|INFO|Setting lport e1ac33cf-472c-41ba-b3ed-459749e87ead ovn-installed in OVS
Oct 11 00:03:01 np0005480824 ovn_controller[152667]: 2025-10-11T04:03:01Z|00250|binding|INFO|Setting lport e1ac33cf-472c-41ba-b3ed-459749e87ead up in Southbound
Oct 11 00:03:01 np0005480824 nova_compute[260089]: 2025-10-11 04:03:01.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:03:01 np0005480824 nova_compute[260089]: 2025-10-11 04:03:01.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:03:01 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:03:01.126 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[032c5e01-bc85-471c-9c0d-67adc1eb47bd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:03:01 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:03:01.127 162245 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap15a62ee0-81 in ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 00:03:01 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:03:01.131 267859 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap15a62ee0-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 00:03:01 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:03:01.131 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[e3aa06fc-c539-4952-9f7a-ecb762e1d2a5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:03:01 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:03:01.132 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[3f0de353-b8b3-46dc-979a-0517ad7a7b6b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:03:01 np0005480824 systemd-udevd[300223]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 00:03:01 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:03:01.146 162666 DEBUG oslo.privsep.daemon [-] privsep: reply[71c776d5-266c-43b3-ad2f-330db5b6abce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:03:01 np0005480824 systemd-machined[215071]: New machine qemu-27-instance-0000001b.
Oct 11 00:03:01 np0005480824 NetworkManager[44969]: <info>  [1760155381.1573] device (tape1ac33cf-47): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 00:03:01 np0005480824 NetworkManager[44969]: <info>  [1760155381.1580] device (tape1ac33cf-47): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 00:03:01 np0005480824 systemd[1]: Started Virtual Machine qemu-27-instance-0000001b.
Oct 11 00:03:01 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:03:01.165 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[5ddfa9c8-ce84-4394-9563-1c8e147eaa01]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:03:01 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:03:01.210 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[03ce88fe-c2ed-4138-9029-e74892108798]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:03:01 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:03:01.216 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[d7735056-316e-4692-b9b5-e8685460155b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:03:01 np0005480824 NetworkManager[44969]: <info>  [1760155381.2185] manager: (tap15a62ee0-80): new Veth device (/org/freedesktop/NetworkManager/Devices/134)
Oct 11 00:03:01 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:03:01.250 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[4e9e779c-7d7b-4c74-a49d-19e64753dbcf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:03:01 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:03:01.255 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[4a683aee-8ccc-4894-8b6f-4b412ff127e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:03:01 np0005480824 NetworkManager[44969]: <info>  [1760155381.2816] device (tap15a62ee0-80): carrier: link connected
Oct 11 00:03:01 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:03:01.290 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[6f973944-cca3-49a4-8a95-1d8d743d2c8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:03:01 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:03:01.316 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[59274093-b8ba-401b-95e4-3ab668388773]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap15a62ee0-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a6:91:d9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 87], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 484894, 'reachable_time': 33970, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300256, 'error': None, 'target': 'ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:03:01 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:03:01.332 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[de95551f-48df-4be5-8d0b-4f52b5a00ff5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea6:91d9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 484894, 'tstamp': 484894}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300257, 'error': None, 'target': 'ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:03:01 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:03:01.352 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[efe98a0a-1818-4bdb-9460-913a22abda19]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap15a62ee0-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a6:91:d9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 87], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 484894, 'reachable_time': 33970, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 300258, 'error': None, 'target': 'ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:03:01 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:03:01.386 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[d0fca7a2-89fd-460f-9d73-7ae2a6a7dcad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:03:01 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:03:01.461 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[e2206277-3af5-42a6-9e89-fce81b5c5e7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:03:01 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:03:01.462 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap15a62ee0-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 00:03:01 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:03:01.463 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 00:03:01 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:03:01.463 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap15a62ee0-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 00:03:01 np0005480824 nova_compute[260089]: 2025-10-11 04:03:01.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:03:01 np0005480824 kernel: tap15a62ee0-80: entered promiscuous mode
Oct 11 00:03:01 np0005480824 NetworkManager[44969]: <info>  [1760155381.4666] manager: (tap15a62ee0-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/135)
Oct 11 00:03:01 np0005480824 nova_compute[260089]: 2025-10-11 04:03:01.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:03:01 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:03:01.472 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap15a62ee0-80, col_values=(('external_ids', {'iface-id': '182275c4-a015-4f7a-8877-9961b2382f67'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 00:03:01 np0005480824 nova_compute[260089]: 2025-10-11 04:03:01.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:03:01 np0005480824 ovn_controller[152667]: 2025-10-11T04:03:01Z|00251|binding|INFO|Releasing lport 182275c4-a015-4f7a-8877-9961b2382f67 from this chassis (sb_readonly=0)
Oct 11 00:03:01 np0005480824 nova_compute[260089]: 2025-10-11 04:03:01.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:03:01 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:03:01.497 162245 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/15a62ee0-8e34-4e49-990e-246b4ef9e0c6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/15a62ee0-8e34-4e49-990e-246b4ef9e0c6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 00:03:01 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:03:01.498 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[d165b94f-8418-4f57-ab22-25203bc72855]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:03:01 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:03:01.499 162245 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 00:03:01 np0005480824 ovn_metadata_agent[162240]: global
Oct 11 00:03:01 np0005480824 ovn_metadata_agent[162240]:    log         /dev/log local0 debug
Oct 11 00:03:01 np0005480824 ovn_metadata_agent[162240]:    log-tag     haproxy-metadata-proxy-15a62ee0-8e34-4e49-990e-246b4ef9e0c6
Oct 11 00:03:01 np0005480824 ovn_metadata_agent[162240]:    user        root
Oct 11 00:03:01 np0005480824 ovn_metadata_agent[162240]:    group       root
Oct 11 00:03:01 np0005480824 ovn_metadata_agent[162240]:    maxconn     1024
Oct 11 00:03:01 np0005480824 ovn_metadata_agent[162240]:    pidfile     /var/lib/neutron/external/pids/15a62ee0-8e34-4e49-990e-246b4ef9e0c6.pid.haproxy
Oct 11 00:03:01 np0005480824 ovn_metadata_agent[162240]:    daemon
Oct 11 00:03:01 np0005480824 ovn_metadata_agent[162240]: 
Oct 11 00:03:01 np0005480824 ovn_metadata_agent[162240]: defaults
Oct 11 00:03:01 np0005480824 ovn_metadata_agent[162240]:    log global
Oct 11 00:03:01 np0005480824 ovn_metadata_agent[162240]:    mode http
Oct 11 00:03:01 np0005480824 ovn_metadata_agent[162240]:    option httplog
Oct 11 00:03:01 np0005480824 ovn_metadata_agent[162240]:    option dontlognull
Oct 11 00:03:01 np0005480824 ovn_metadata_agent[162240]:    option http-server-close
Oct 11 00:03:01 np0005480824 ovn_metadata_agent[162240]:    option forwardfor
Oct 11 00:03:01 np0005480824 ovn_metadata_agent[162240]:    retries                 3
Oct 11 00:03:01 np0005480824 ovn_metadata_agent[162240]:    timeout http-request    30s
Oct 11 00:03:01 np0005480824 ovn_metadata_agent[162240]:    timeout connect         30s
Oct 11 00:03:01 np0005480824 ovn_metadata_agent[162240]:    timeout client          32s
Oct 11 00:03:01 np0005480824 ovn_metadata_agent[162240]:    timeout server          32s
Oct 11 00:03:01 np0005480824 ovn_metadata_agent[162240]:    timeout http-keep-alive 30s
Oct 11 00:03:01 np0005480824 ovn_metadata_agent[162240]: 
Oct 11 00:03:01 np0005480824 ovn_metadata_agent[162240]: 
Oct 11 00:03:01 np0005480824 ovn_metadata_agent[162240]: listen listener
Oct 11 00:03:01 np0005480824 ovn_metadata_agent[162240]:    bind 169.254.169.254:80
Oct 11 00:03:01 np0005480824 ovn_metadata_agent[162240]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 00:03:01 np0005480824 ovn_metadata_agent[162240]:    http-request add-header X-OVN-Network-ID 15a62ee0-8e34-4e49-990e-246b4ef9e0c6
Oct 11 00:03:01 np0005480824 ovn_metadata_agent[162240]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 00:03:01 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:03:01.500 162245 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6', 'env', 'PROCESS_TAG=haproxy-15a62ee0-8e34-4e49-990e-246b4ef9e0c6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/15a62ee0-8e34-4e49-990e-246b4ef9e0c6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 00:03:01 np0005480824 nova_compute[260089]: 2025-10-11 04:03:01.799 2 DEBUG nova.compute.manager [req-12e397e2-e767-4e6b-8206-c5635af56ed8 req-60f0193c-faa8-4663-8ac9-c259509bf8d2 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] Received event network-vif-plugged-e1ac33cf-472c-41ba-b3ed-459749e87ead external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 00:03:01 np0005480824 nova_compute[260089]: 2025-10-11 04:03:01.801 2 DEBUG oslo_concurrency.lockutils [req-12e397e2-e767-4e6b-8206-c5635af56ed8 req-60f0193c-faa8-4663-8ac9-c259509bf8d2 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "1364751a-4bbf-49e1-abe3-f702f03be8e3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:03:01 np0005480824 nova_compute[260089]: 2025-10-11 04:03:01.801 2 DEBUG oslo_concurrency.lockutils [req-12e397e2-e767-4e6b-8206-c5635af56ed8 req-60f0193c-faa8-4663-8ac9-c259509bf8d2 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "1364751a-4bbf-49e1-abe3-f702f03be8e3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:03:01 np0005480824 nova_compute[260089]: 2025-10-11 04:03:01.802 2 DEBUG oslo_concurrency.lockutils [req-12e397e2-e767-4e6b-8206-c5635af56ed8 req-60f0193c-faa8-4663-8ac9-c259509bf8d2 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "1364751a-4bbf-49e1-abe3-f702f03be8e3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:03:01 np0005480824 nova_compute[260089]: 2025-10-11 04:03:01.802 2 DEBUG nova.compute.manager [req-12e397e2-e767-4e6b-8206-c5635af56ed8 req-60f0193c-faa8-4663-8ac9-c259509bf8d2 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] Processing event network-vif-plugged-e1ac33cf-472c-41ba-b3ed-459749e87ead _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 00:03:01 np0005480824 podman[300326]: 2025-10-11 04:03:01.962367366 +0000 UTC m=+0.077335290 container create 3af8ab12b52647b9eed613e64ac34f1de64efd24c16060996adc64b80669b7fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team)
Oct 11 00:03:02 np0005480824 systemd[1]: Started libpod-conmon-3af8ab12b52647b9eed613e64ac34f1de64efd24c16060996adc64b80669b7fa.scope.
Oct 11 00:03:02 np0005480824 podman[300326]: 2025-10-11 04:03:01.932882398 +0000 UTC m=+0.047850362 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 11 00:03:02 np0005480824 systemd[1]: Started libcrun container.
Oct 11 00:03:02 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86fd47f0de7e18fc6da552c8f519eb3296e83d9ed08b6ff49114aab0045b397c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 00:03:02 np0005480824 podman[300326]: 2025-10-11 04:03:02.052887059 +0000 UTC m=+0.167855013 container init 3af8ab12b52647b9eed613e64ac34f1de64efd24c16060996adc64b80669b7fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009)
Oct 11 00:03:02 np0005480824 podman[300326]: 2025-10-11 04:03:02.058230942 +0000 UTC m=+0.173198866 container start 3af8ab12b52647b9eed613e64ac34f1de64efd24c16060996adc64b80669b7fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009)
Oct 11 00:03:02 np0005480824 neutron-haproxy-ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6[300342]: [NOTICE]   (300346) : New worker (300348) forked
Oct 11 00:03:02 np0005480824 neutron-haproxy-ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6[300342]: [NOTICE]   (300346) : Loading success.
Oct 11 00:03:02 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e473 do_prune osdmap full prune enabled
Oct 11 00:03:02 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e474 e474: 3 total, 3 up, 3 in
Oct 11 00:03:02 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e474: 3 total, 3 up, 3 in
Oct 11 00:03:02 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1772: 321 pgs: 321 active+clean; 270 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 64 KiB/s rd, 29 KiB/s wr, 88 op/s
Oct 11 00:03:03 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e474 do_prune osdmap full prune enabled
Oct 11 00:03:03 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e475 e475: 3 total, 3 up, 3 in
Oct 11 00:03:03 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e475: 3 total, 3 up, 3 in
Oct 11 00:03:03 np0005480824 nova_compute[260089]: 2025-10-11 04:03:03.897 2 DEBUG nova.compute.manager [req-696dd983-1257-4f28-b2ea-711bf9defb58 req-6e9dc449-4d53-456d-83a9-b15bc304cfb7 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] Received event network-vif-plugged-e1ac33cf-472c-41ba-b3ed-459749e87ead external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 00:03:03 np0005480824 nova_compute[260089]: 2025-10-11 04:03:03.898 2 DEBUG oslo_concurrency.lockutils [req-696dd983-1257-4f28-b2ea-711bf9defb58 req-6e9dc449-4d53-456d-83a9-b15bc304cfb7 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "1364751a-4bbf-49e1-abe3-f702f03be8e3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:03:03 np0005480824 nova_compute[260089]: 2025-10-11 04:03:03.899 2 DEBUG oslo_concurrency.lockutils [req-696dd983-1257-4f28-b2ea-711bf9defb58 req-6e9dc449-4d53-456d-83a9-b15bc304cfb7 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "1364751a-4bbf-49e1-abe3-f702f03be8e3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:03:03 np0005480824 nova_compute[260089]: 2025-10-11 04:03:03.899 2 DEBUG oslo_concurrency.lockutils [req-696dd983-1257-4f28-b2ea-711bf9defb58 req-6e9dc449-4d53-456d-83a9-b15bc304cfb7 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "1364751a-4bbf-49e1-abe3-f702f03be8e3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:03:03 np0005480824 nova_compute[260089]: 2025-10-11 04:03:03.899 2 DEBUG nova.compute.manager [req-696dd983-1257-4f28-b2ea-711bf9defb58 req-6e9dc449-4d53-456d-83a9-b15bc304cfb7 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] No waiting events found dispatching network-vif-plugged-e1ac33cf-472c-41ba-b3ed-459749e87ead pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 00:03:03 np0005480824 nova_compute[260089]: 2025-10-11 04:03:03.900 2 WARNING nova.compute.manager [req-696dd983-1257-4f28-b2ea-711bf9defb58 req-6e9dc449-4d53-456d-83a9-b15bc304cfb7 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] Received unexpected event network-vif-plugged-e1ac33cf-472c-41ba-b3ed-459749e87ead for instance with vm_state building and task_state spawning.#033[00m
Oct 11 00:03:04 np0005480824 nova_compute[260089]: 2025-10-11 04:03:04.459 2 DEBUG nova.compute.manager [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 00:03:04 np0005480824 nova_compute[260089]: 2025-10-11 04:03:04.460 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760155384.4589174, 1364751a-4bbf-49e1-abe3-f702f03be8e3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 00:03:04 np0005480824 nova_compute[260089]: 2025-10-11 04:03:04.461 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] VM Started (Lifecycle Event)#033[00m
Oct 11 00:03:04 np0005480824 nova_compute[260089]: 2025-10-11 04:03:04.464 2 DEBUG nova.virt.libvirt.driver [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 00:03:04 np0005480824 nova_compute[260089]: 2025-10-11 04:03:04.467 2 INFO nova.virt.libvirt.driver [-] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] Instance spawned successfully.#033[00m
Oct 11 00:03:04 np0005480824 nova_compute[260089]: 2025-10-11 04:03:04.467 2 DEBUG nova.virt.libvirt.driver [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 00:03:04 np0005480824 nova_compute[260089]: 2025-10-11 04:03:04.512 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 00:03:04 np0005480824 nova_compute[260089]: 2025-10-11 04:03:04.517 2 DEBUG nova.virt.libvirt.driver [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 00:03:04 np0005480824 nova_compute[260089]: 2025-10-11 04:03:04.518 2 DEBUG nova.virt.libvirt.driver [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 00:03:04 np0005480824 nova_compute[260089]: 2025-10-11 04:03:04.518 2 DEBUG nova.virt.libvirt.driver [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 00:03:04 np0005480824 nova_compute[260089]: 2025-10-11 04:03:04.518 2 DEBUG nova.virt.libvirt.driver [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 00:03:04 np0005480824 nova_compute[260089]: 2025-10-11 04:03:04.519 2 DEBUG nova.virt.libvirt.driver [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 00:03:04 np0005480824 nova_compute[260089]: 2025-10-11 04:03:04.519 2 DEBUG nova.virt.libvirt.driver [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 00:03:04 np0005480824 nova_compute[260089]: 2025-10-11 04:03:04.522 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 00:03:04 np0005480824 nova_compute[260089]: 2025-10-11 04:03:04.622 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 00:03:04 np0005480824 nova_compute[260089]: 2025-10-11 04:03:04.623 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760155384.459082, 1364751a-4bbf-49e1-abe3-f702f03be8e3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 00:03:04 np0005480824 nova_compute[260089]: 2025-10-11 04:03:04.623 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] VM Paused (Lifecycle Event)#033[00m
Oct 11 00:03:04 np0005480824 nova_compute[260089]: 2025-10-11 04:03:04.681 2 INFO nova.compute.manager [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] Took 7.18 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 00:03:04 np0005480824 nova_compute[260089]: 2025-10-11 04:03:04.682 2 DEBUG nova.compute.manager [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 00:03:04 np0005480824 nova_compute[260089]: 2025-10-11 04:03:04.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:03:04 np0005480824 nova_compute[260089]: 2025-10-11 04:03:04.699 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 00:03:04 np0005480824 nova_compute[260089]: 2025-10-11 04:03:04.702 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760155384.4629042, 1364751a-4bbf-49e1-abe3-f702f03be8e3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 00:03:04 np0005480824 nova_compute[260089]: 2025-10-11 04:03:04.703 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] VM Resumed (Lifecycle Event)#033[00m
Oct 11 00:03:04 np0005480824 nova_compute[260089]: 2025-10-11 04:03:04.823 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 00:03:04 np0005480824 nova_compute[260089]: 2025-10-11 04:03:04.831 2 INFO nova.compute.manager [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] Took 10.41 seconds to build instance.#033[00m
Oct 11 00:03:04 np0005480824 nova_compute[260089]: 2025-10-11 04:03:04.834 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 00:03:04 np0005480824 nova_compute[260089]: 2025-10-11 04:03:04.871 2 DEBUG oslo_concurrency.lockutils [None req-b884fd2b-3ef8-40df-a046-be242100206a eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Lock "1364751a-4bbf-49e1-abe3-f702f03be8e3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:03:04 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 00:03:04 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/829016310' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 00:03:04 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 00:03:04 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/829016310' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 00:03:04 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1774: 321 pgs: 321 active+clean; 270 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 58 KiB/s rd, 28 KiB/s wr, 78 op/s
Oct 11 00:03:05 np0005480824 nova_compute[260089]: 2025-10-11 04:03:05.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:03:05 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e475 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:03:05 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e475 do_prune osdmap full prune enabled
Oct 11 00:03:05 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e476 e476: 3 total, 3 up, 3 in
Oct 11 00:03:05 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e476: 3 total, 3 up, 3 in
Oct 11 00:03:06 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1776: 321 pgs: 321 active+clean; 270 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 30 KiB/s wr, 163 op/s
Oct 11 00:03:08 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1777: 321 pgs: 321 active+clean; 270 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 2.7 KiB/s wr, 175 op/s
Oct 11 00:03:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e476 do_prune osdmap full prune enabled
Oct 11 00:03:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e477 e477: 3 total, 3 up, 3 in
Oct 11 00:03:09 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e477: 3 total, 3 up, 3 in
Oct 11 00:03:09 np0005480824 nova_compute[260089]: 2025-10-11 04:03:09.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:03:09 np0005480824 nova_compute[260089]: 2025-10-11 04:03:09.975 2 DEBUG nova.compute.manager [req-a4b1990b-e402-43fd-8e94-3a1775fb0785 req-9a332ffd-7d35-4135-a947-42b34bd52ef8 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] Received event network-changed-e1ac33cf-472c-41ba-b3ed-459749e87ead external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 00:03:09 np0005480824 nova_compute[260089]: 2025-10-11 04:03:09.976 2 DEBUG nova.compute.manager [req-a4b1990b-e402-43fd-8e94-3a1775fb0785 req-9a332ffd-7d35-4135-a947-42b34bd52ef8 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] Refreshing instance network info cache due to event network-changed-e1ac33cf-472c-41ba-b3ed-459749e87ead. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 00:03:09 np0005480824 nova_compute[260089]: 2025-10-11 04:03:09.977 2 DEBUG oslo_concurrency.lockutils [req-a4b1990b-e402-43fd-8e94-3a1775fb0785 req-9a332ffd-7d35-4135-a947-42b34bd52ef8 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "refresh_cache-1364751a-4bbf-49e1-abe3-f702f03be8e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 00:03:09 np0005480824 nova_compute[260089]: 2025-10-11 04:03:09.978 2 DEBUG oslo_concurrency.lockutils [req-a4b1990b-e402-43fd-8e94-3a1775fb0785 req-9a332ffd-7d35-4135-a947-42b34bd52ef8 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquired lock "refresh_cache-1364751a-4bbf-49e1-abe3-f702f03be8e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 00:03:09 np0005480824 nova_compute[260089]: 2025-10-11 04:03:09.978 2 DEBUG nova.network.neutron [req-a4b1990b-e402-43fd-8e94-3a1775fb0785 req-9a332ffd-7d35-4135-a947-42b34bd52ef8 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] Refreshing network info cache for port e1ac33cf-472c-41ba-b3ed-459749e87ead _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 00:03:10 np0005480824 nova_compute[260089]: 2025-10-11 04:03:10.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:03:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:03:10.508 162245 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:03:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:03:10.509 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:03:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:03:10.510 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:03:10 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e477 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:03:10 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e477 do_prune osdmap full prune enabled
Oct 11 00:03:10 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e478 e478: 3 total, 3 up, 3 in
Oct 11 00:03:10 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e478: 3 total, 3 up, 3 in
Oct 11 00:03:10 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1780: 321 pgs: 321 active+clean; 270 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.8 KiB/s wr, 182 op/s
Oct 11 00:03:11 np0005480824 podman[300364]: 2025-10-11 04:03:11.057728872 +0000 UTC m=+0.099777056 container health_status f2b19cad22d0fbdd185b264c3c8b6443d09a02319dc9fb0585dc69a0f24a758d (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, org.label-schema.license=GPLv2, container_name=iscsid, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 11 00:03:11 np0005480824 podman[300363]: 2025-10-11 04:03:11.095647285 +0000 UTC m=+0.138420105 container health_status 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_id=multipathd)
Oct 11 00:03:11 np0005480824 nova_compute[260089]: 2025-10-11 04:03:11.256 2 DEBUG nova.network.neutron [req-a4b1990b-e402-43fd-8e94-3a1775fb0785 req-9a332ffd-7d35-4135-a947-42b34bd52ef8 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] Updated VIF entry in instance network info cache for port e1ac33cf-472c-41ba-b3ed-459749e87ead. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 00:03:11 np0005480824 nova_compute[260089]: 2025-10-11 04:03:11.257 2 DEBUG nova.network.neutron [req-a4b1990b-e402-43fd-8e94-3a1775fb0785 req-9a332ffd-7d35-4135-a947-42b34bd52ef8 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] Updating instance_info_cache with network_info: [{"id": "e1ac33cf-472c-41ba-b3ed-459749e87ead", "address": "fa:16:3e:da:72:de", "network": {"id": "15a62ee0-8e34-4e49-990e-246b4ef9e0c6", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1498494916-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0e73ded2f2ee46b4a7485c01ef1b73e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1ac33cf-47", "ovs_interfaceid": "e1ac33cf-472c-41ba-b3ed-459749e87ead", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 00:03:11 np0005480824 nova_compute[260089]: 2025-10-11 04:03:11.422 2 DEBUG oslo_concurrency.lockutils [req-a4b1990b-e402-43fd-8e94-3a1775fb0785 req-9a332ffd-7d35-4135-a947-42b34bd52ef8 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Releasing lock "refresh_cache-1364751a-4bbf-49e1-abe3-f702f03be8e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 00:03:12 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1781: 321 pgs: 321 active+clean; 270 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.7 KiB/s wr, 191 op/s
Oct 11 00:03:13 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 00:03:13 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3726937900' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 00:03:13 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 00:03:13 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3726937900' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 00:03:14 np0005480824 nova_compute[260089]: 2025-10-11 04:03:14.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:03:14 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1782: 321 pgs: 321 active+clean; 270 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.7 KiB/s wr, 110 op/s
Oct 11 00:03:15 np0005480824 nova_compute[260089]: 2025-10-11 04:03:15.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:03:15 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e478 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:03:16 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e478 do_prune osdmap full prune enabled
Oct 11 00:03:16 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e479 e479: 3 total, 3 up, 3 in
Oct 11 00:03:16 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e479: 3 total, 3 up, 3 in
Oct 11 00:03:17 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1784: 321 pgs: 321 active+clean; 270 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 346 KiB/s rd, 3.5 KiB/s wr, 69 op/s
Oct 11 00:03:17 np0005480824 ovn_controller[152667]: 2025-10-11T04:03:17Z|00064|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.8 does not match offer 10.100.0.6
Oct 11 00:03:17 np0005480824 ovn_controller[152667]: 2025-10-11T04:03:17Z|00065|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:da:72:de 10.100.0.6
Oct 11 00:03:18 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e479 do_prune osdmap full prune enabled
Oct 11 00:03:18 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e480 e480: 3 total, 3 up, 3 in
Oct 11 00:03:18 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e480: 3 total, 3 up, 3 in
Oct 11 00:03:18 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 00:03:18 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/126524679' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 00:03:18 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 00:03:18 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/126524679' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 00:03:19 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1786: 321 pgs: 321 active+clean; 270 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 946 KiB/s rd, 14 KiB/s wr, 122 op/s
Oct 11 00:03:19 np0005480824 podman[300574]: 2025-10-11 04:03:19.150272918 +0000 UTC m=+0.074360562 container exec a848fe58749db588a5a4b8471e0c9916b9e4a1ccc899f04343e6491a43c45c05 (image=quay.io/ceph/ceph:v18, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 11 00:03:19 np0005480824 podman[300574]: 2025-10-11 04:03:19.273991165 +0000 UTC m=+0.198078809 container exec_died a848fe58749db588a5a4b8471e0c9916b9e4a1ccc899f04343e6491a43c45c05 (image=quay.io/ceph/ceph:v18, name=ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mon-compute-0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 00:03:19 np0005480824 podman[300606]: 2025-10-11 04:03:19.509525413 +0000 UTC m=+0.159089941 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team)
Oct 11 00:03:19 np0005480824 nova_compute[260089]: 2025-10-11 04:03:19.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:03:20 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 00:03:20 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 11 00:03:20 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 00:03:20 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 11 00:03:20 np0005480824 nova_compute[260089]: 2025-10-11 04:03:20.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:03:20 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e480 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:03:20 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e480 do_prune osdmap full prune enabled
Oct 11 00:03:20 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e481 e481: 3 total, 3 up, 3 in
Oct 11 00:03:20 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e481: 3 total, 3 up, 3 in
Oct 11 00:03:21 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1788: 321 pgs: 321 active+clean; 270 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 17 KiB/s wr, 113 op/s
Oct 11 00:03:21 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 00:03:21 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 00:03:21 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 00:03:21 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 00:03:21 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 00:03:21 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 11 00:03:21 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev f383cce0-0b01-428b-b18f-7d336746d843 does not exist
Oct 11 00:03:21 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev ff4fd1ca-f5fe-4831-a0d8-9c9506f44dae does not exist
Oct 11 00:03:21 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 43b706d3-2a4d-4cd5-baf7-609ac9fed978 does not exist
Oct 11 00:03:21 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 00:03:21 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 00:03:21 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 00:03:21 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 00:03:21 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 00:03:21 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 00:03:21 np0005480824 ovn_controller[152667]: 2025-10-11T04:03:21Z|00066|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.8 does not match offer 10.100.0.6
Oct 11 00:03:21 np0005480824 ovn_controller[152667]: 2025-10-11T04:03:21Z|00067|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:da:72:de 10.100.0.6
Oct 11 00:03:21 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 11 00:03:21 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 11 00:03:21 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 00:03:21 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 11 00:03:21 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 00:03:21 np0005480824 podman[301033]: 2025-10-11 04:03:21.74623073 +0000 UTC m=+0.037931314 container create c4f08ddcd631bcc39bf84da7825d0df66fd00f7339b2f2c36d8cb808212b2100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_rubin, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 00:03:21 np0005480824 systemd[1]: Started libpod-conmon-c4f08ddcd631bcc39bf84da7825d0df66fd00f7339b2f2c36d8cb808212b2100.scope.
Oct 11 00:03:21 np0005480824 systemd[1]: Started libcrun container.
Oct 11 00:03:21 np0005480824 podman[301033]: 2025-10-11 04:03:21.730002867 +0000 UTC m=+0.021703451 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 00:03:21 np0005480824 podman[301033]: 2025-10-11 04:03:21.837237003 +0000 UTC m=+0.128937627 container init c4f08ddcd631bcc39bf84da7825d0df66fd00f7339b2f2c36d8cb808212b2100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_rubin, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 11 00:03:21 np0005480824 podman[301033]: 2025-10-11 04:03:21.847800847 +0000 UTC m=+0.139501451 container start c4f08ddcd631bcc39bf84da7825d0df66fd00f7339b2f2c36d8cb808212b2100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 11 00:03:21 np0005480824 podman[301033]: 2025-10-11 04:03:21.851755468 +0000 UTC m=+0.143456072 container attach c4f08ddcd631bcc39bf84da7825d0df66fd00f7339b2f2c36d8cb808212b2100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_rubin, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 00:03:21 np0005480824 laughing_rubin[301048]: 167 167
Oct 11 00:03:21 np0005480824 systemd[1]: libpod-c4f08ddcd631bcc39bf84da7825d0df66fd00f7339b2f2c36d8cb808212b2100.scope: Deactivated successfully.
Oct 11 00:03:21 np0005480824 podman[301033]: 2025-10-11 04:03:21.854551422 +0000 UTC m=+0.146252016 container died c4f08ddcd631bcc39bf84da7825d0df66fd00f7339b2f2c36d8cb808212b2100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_rubin, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 00:03:21 np0005480824 systemd[1]: var-lib-containers-storage-overlay-ec89a0b261675d035f873bf33233509d1578af487f8a37b9575b494d62180621-merged.mount: Deactivated successfully.
Oct 11 00:03:21 np0005480824 podman[301033]: 2025-10-11 04:03:21.904118512 +0000 UTC m=+0.195819096 container remove c4f08ddcd631bcc39bf84da7825d0df66fd00f7339b2f2c36d8cb808212b2100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 00:03:21 np0005480824 systemd[1]: libpod-conmon-c4f08ddcd631bcc39bf84da7825d0df66fd00f7339b2f2c36d8cb808212b2100.scope: Deactivated successfully.
Oct 11 00:03:22 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e481 do_prune osdmap full prune enabled
Oct 11 00:03:22 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e482 e482: 3 total, 3 up, 3 in
Oct 11 00:03:22 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e482: 3 total, 3 up, 3 in
Oct 11 00:03:22 np0005480824 podman[301073]: 2025-10-11 04:03:22.094998403 +0000 UTC m=+0.082751424 container create 9cab0a126458dd91cee964acd7421fdd33017e2373366ff5f450d6d16b23311f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_cray, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 11 00:03:22 np0005480824 podman[301073]: 2025-10-11 04:03:22.055460664 +0000 UTC m=+0.043213735 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 00:03:22 np0005480824 systemd[1]: Started libpod-conmon-9cab0a126458dd91cee964acd7421fdd33017e2373366ff5f450d6d16b23311f.scope.
Oct 11 00:03:22 np0005480824 systemd[1]: Started libcrun container.
Oct 11 00:03:22 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8a72a0f9ad1d2e5bf602781c743a3f97107740849fe792c027045e0f32f23f5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 00:03:22 np0005480824 ovn_controller[152667]: 2025-10-11T04:03:22Z|00068|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:da:72:de 10.100.0.6
Oct 11 00:03:22 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8a72a0f9ad1d2e5bf602781c743a3f97107740849fe792c027045e0f32f23f5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 00:03:22 np0005480824 ovn_controller[152667]: 2025-10-11T04:03:22Z|00069|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:da:72:de 10.100.0.6
Oct 11 00:03:22 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8a72a0f9ad1d2e5bf602781c743a3f97107740849fe792c027045e0f32f23f5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 00:03:22 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8a72a0f9ad1d2e5bf602781c743a3f97107740849fe792c027045e0f32f23f5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 00:03:22 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8a72a0f9ad1d2e5bf602781c743a3f97107740849fe792c027045e0f32f23f5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 00:03:22 np0005480824 podman[301073]: 2025-10-11 04:03:22.232513948 +0000 UTC m=+0.220266939 container init 9cab0a126458dd91cee964acd7421fdd33017e2373366ff5f450d6d16b23311f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_cray, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 11 00:03:22 np0005480824 podman[301073]: 2025-10-11 04:03:22.240176944 +0000 UTC m=+0.227929915 container start 9cab0a126458dd91cee964acd7421fdd33017e2373366ff5f450d6d16b23311f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_cray, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 11 00:03:22 np0005480824 podman[301073]: 2025-10-11 04:03:22.244049193 +0000 UTC m=+0.231802194 container attach 9cab0a126458dd91cee964acd7421fdd33017e2373366ff5f450d6d16b23311f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_cray, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 00:03:23 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1790: 321 pgs: 321 active+clean; 270 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 847 KiB/s rd, 44 KiB/s wr, 131 op/s
Oct 11 00:03:23 np0005480824 frosty_cray[301090]: --> passed data devices: 0 physical, 3 LVM
Oct 11 00:03:23 np0005480824 frosty_cray[301090]: --> relative data size: 1.0
Oct 11 00:03:23 np0005480824 frosty_cray[301090]: --> All data devices are unavailable
Oct 11 00:03:23 np0005480824 systemd[1]: libpod-9cab0a126458dd91cee964acd7421fdd33017e2373366ff5f450d6d16b23311f.scope: Deactivated successfully.
Oct 11 00:03:23 np0005480824 systemd[1]: libpod-9cab0a126458dd91cee964acd7421fdd33017e2373366ff5f450d6d16b23311f.scope: Consumed 1.034s CPU time.
Oct 11 00:03:23 np0005480824 podman[301073]: 2025-10-11 04:03:23.327230202 +0000 UTC m=+1.314983193 container died 9cab0a126458dd91cee964acd7421fdd33017e2373366ff5f450d6d16b23311f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_cray, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 00:03:23 np0005480824 systemd[1]: var-lib-containers-storage-overlay-b8a72a0f9ad1d2e5bf602781c743a3f97107740849fe792c027045e0f32f23f5-merged.mount: Deactivated successfully.
Oct 11 00:03:23 np0005480824 podman[301073]: 2025-10-11 04:03:23.393316223 +0000 UTC m=+1.381069204 container remove 9cab0a126458dd91cee964acd7421fdd33017e2373366ff5f450d6d16b23311f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_cray, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 00:03:23 np0005480824 systemd[1]: libpod-conmon-9cab0a126458dd91cee964acd7421fdd33017e2373366ff5f450d6d16b23311f.scope: Deactivated successfully.
Oct 11 00:03:24 np0005480824 podman[301272]: 2025-10-11 04:03:24.149587411 +0000 UTC m=+0.051224020 container create d0e2ecb4d634b2e06ee9acdc91b9bea095a1cb78983e1f08a70fa58b9b872c36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_mestorf, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 11 00:03:24 np0005480824 systemd[1]: Started libpod-conmon-d0e2ecb4d634b2e06ee9acdc91b9bea095a1cb78983e1f08a70fa58b9b872c36.scope.
Oct 11 00:03:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e482 do_prune osdmap full prune enabled
Oct 11 00:03:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e483 e483: 3 total, 3 up, 3 in
Oct 11 00:03:24 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e483: 3 total, 3 up, 3 in
Oct 11 00:03:24 np0005480824 systemd[1]: Started libcrun container.
Oct 11 00:03:24 np0005480824 podman[301272]: 2025-10-11 04:03:24.21475448 +0000 UTC m=+0.116391119 container init d0e2ecb4d634b2e06ee9acdc91b9bea095a1cb78983e1f08a70fa58b9b872c36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_mestorf, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 00:03:24 np0005480824 podman[301272]: 2025-10-11 04:03:24.220610906 +0000 UTC m=+0.122247525 container start d0e2ecb4d634b2e06ee9acdc91b9bea095a1cb78983e1f08a70fa58b9b872c36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_mestorf, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 00:03:24 np0005480824 podman[301272]: 2025-10-11 04:03:24.129713674 +0000 UTC m=+0.031350313 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 00:03:24 np0005480824 podman[301272]: 2025-10-11 04:03:24.223989213 +0000 UTC m=+0.125625842 container attach d0e2ecb4d634b2e06ee9acdc91b9bea095a1cb78983e1f08a70fa58b9b872c36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 00:03:24 np0005480824 flamboyant_mestorf[301288]: 167 167
Oct 11 00:03:24 np0005480824 podman[301272]: 2025-10-11 04:03:24.22561647 +0000 UTC m=+0.127253089 container died d0e2ecb4d634b2e06ee9acdc91b9bea095a1cb78983e1f08a70fa58b9b872c36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Oct 11 00:03:24 np0005480824 systemd[1]: libpod-d0e2ecb4d634b2e06ee9acdc91b9bea095a1cb78983e1f08a70fa58b9b872c36.scope: Deactivated successfully.
Oct 11 00:03:24 np0005480824 systemd[1]: var-lib-containers-storage-overlay-eef43846d53b0e31e4b83572395b1a66db9df133aaa23ac0165f3dbcc9312e9e-merged.mount: Deactivated successfully.
Oct 11 00:03:24 np0005480824 podman[301272]: 2025-10-11 04:03:24.260976563 +0000 UTC m=+0.162613182 container remove d0e2ecb4d634b2e06ee9acdc91b9bea095a1cb78983e1f08a70fa58b9b872c36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 00:03:24 np0005480824 systemd[1]: libpod-conmon-d0e2ecb4d634b2e06ee9acdc91b9bea095a1cb78983e1f08a70fa58b9b872c36.scope: Deactivated successfully.
Oct 11 00:03:24 np0005480824 podman[301310]: 2025-10-11 04:03:24.434358122 +0000 UTC m=+0.059988241 container create d7ea6eb16bcaf7ce692a87e33a148717c1427832be6cea684517ab777db6fef3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lalande, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 00:03:24 np0005480824 systemd[1]: Started libpod-conmon-d7ea6eb16bcaf7ce692a87e33a148717c1427832be6cea684517ab777db6fef3.scope.
Oct 11 00:03:24 np0005480824 podman[301310]: 2025-10-11 04:03:24.415465488 +0000 UTC m=+0.041095637 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 00:03:24 np0005480824 systemd[1]: Started libcrun container.
Oct 11 00:03:24 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92eb79d6cd76a51ba561f5ef6e432d6f700b3d60bf68206b9a815e2120338a9d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 00:03:24 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92eb79d6cd76a51ba561f5ef6e432d6f700b3d60bf68206b9a815e2120338a9d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 00:03:24 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92eb79d6cd76a51ba561f5ef6e432d6f700b3d60bf68206b9a815e2120338a9d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 00:03:24 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92eb79d6cd76a51ba561f5ef6e432d6f700b3d60bf68206b9a815e2120338a9d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 00:03:24 np0005480824 podman[301310]: 2025-10-11 04:03:24.549386979 +0000 UTC m=+0.175017188 container init d7ea6eb16bcaf7ce692a87e33a148717c1427832be6cea684517ab777db6fef3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lalande, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 11 00:03:24 np0005480824 podman[301310]: 2025-10-11 04:03:24.562419599 +0000 UTC m=+0.188049758 container start d7ea6eb16bcaf7ce692a87e33a148717c1427832be6cea684517ab777db6fef3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lalande, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 11 00:03:24 np0005480824 podman[301310]: 2025-10-11 04:03:24.567467405 +0000 UTC m=+0.193097614 container attach d7ea6eb16bcaf7ce692a87e33a148717c1427832be6cea684517ab777db6fef3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 11 00:03:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 00:03:24 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1652885514' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 00:03:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 00:03:24 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1652885514' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 00:03:24 np0005480824 nova_compute[260089]: 2025-10-11 04:03:24.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:03:25 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1792: 321 pgs: 321 active+clean; 270 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 30 KiB/s wr, 59 op/s
Oct 11 00:03:25 np0005480824 nova_compute[260089]: 2025-10-11 04:03:25.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:03:25 np0005480824 brave_lalande[301327]: {
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:    "0": [
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:        {
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:            "devices": [
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:                "/dev/loop3"
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:            ],
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:            "lv_name": "ceph_lv0",
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:            "lv_size": "21470642176",
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0d82ce-20ea-470d-959e-f67202028a60,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:            "lv_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:            "name": "ceph_lv0",
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:            "tags": {
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:                "ceph.block_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:                "ceph.cephx_lockbox_secret": "",
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:                "ceph.cluster_name": "ceph",
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:                "ceph.crush_device_class": "",
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:                "ceph.encrypted": "0",
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:                "ceph.osd_fsid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:                "ceph.osd_id": "0",
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:                "ceph.type": "block",
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:                "ceph.vdo": "0"
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:            },
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:            "type": "block",
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:            "vg_name": "ceph_vg0"
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:        }
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:    ],
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:    "1": [
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:        {
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:            "devices": [
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:                "/dev/loop4"
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:            ],
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:            "lv_name": "ceph_lv1",
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:            "lv_size": "21470642176",
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6875119e-c210-4ad1-aca9-6a8084a5ecc8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:            "lv_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:            "name": "ceph_lv1",
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:            "tags": {
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:                "ceph.block_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:                "ceph.cephx_lockbox_secret": "",
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:                "ceph.cluster_name": "ceph",
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:                "ceph.crush_device_class": "",
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:                "ceph.encrypted": "0",
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:                "ceph.osd_fsid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:                "ceph.osd_id": "1",
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:                "ceph.type": "block",
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:                "ceph.vdo": "0"
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:            },
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:            "type": "block",
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:            "vg_name": "ceph_vg1"
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:        }
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:    ],
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:    "2": [
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:        {
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:            "devices": [
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:                "/dev/loop5"
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:            ],
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:            "lv_name": "ceph_lv2",
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:            "lv_size": "21470642176",
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e86945e8-6909-4584-9098-cee0dfe9add4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:            "lv_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:            "name": "ceph_lv2",
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:            "tags": {
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:                "ceph.block_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:                "ceph.cephx_lockbox_secret": "",
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:                "ceph.cluster_name": "ceph",
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:                "ceph.crush_device_class": "",
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:                "ceph.encrypted": "0",
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:                "ceph.osd_fsid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:                "ceph.osd_id": "2",
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:                "ceph.type": "block",
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:                "ceph.vdo": "0"
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:            },
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:            "type": "block",
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:            "vg_name": "ceph_vg2"
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:        }
Oct 11 00:03:25 np0005480824 brave_lalande[301327]:    ]
Oct 11 00:03:25 np0005480824 brave_lalande[301327]: }
Oct 11 00:03:25 np0005480824 systemd[1]: libpod-d7ea6eb16bcaf7ce692a87e33a148717c1427832be6cea684517ab777db6fef3.scope: Deactivated successfully.
Oct 11 00:03:25 np0005480824 podman[301310]: 2025-10-11 04:03:25.450416728 +0000 UTC m=+1.076046847 container died d7ea6eb16bcaf7ce692a87e33a148717c1427832be6cea684517ab777db6fef3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 00:03:25 np0005480824 systemd[1]: var-lib-containers-storage-overlay-92eb79d6cd76a51ba561f5ef6e432d6f700b3d60bf68206b9a815e2120338a9d-merged.mount: Deactivated successfully.
Oct 11 00:03:25 np0005480824 podman[301310]: 2025-10-11 04:03:25.517408039 +0000 UTC m=+1.143038168 container remove d7ea6eb16bcaf7ce692a87e33a148717c1427832be6cea684517ab777db6fef3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lalande, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 00:03:25 np0005480824 systemd[1]: libpod-conmon-d7ea6eb16bcaf7ce692a87e33a148717c1427832be6cea684517ab777db6fef3.scope: Deactivated successfully.
Oct 11 00:03:25 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e483 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:03:25 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e483 do_prune osdmap full prune enabled
Oct 11 00:03:25 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e484 e484: 3 total, 3 up, 3 in
Oct 11 00:03:25 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e484: 3 total, 3 up, 3 in
Oct 11 00:03:25 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 00:03:25 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1720395943' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 00:03:25 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 00:03:25 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1720395943' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 00:03:26 np0005480824 podman[301491]: 2025-10-11 04:03:26.239245305 +0000 UTC m=+0.075757974 container create dc24f50f0bbb70dfbd738167335c7e95c08813a6b482b2ebcf0e97fb3cfdb9ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_kapitsa, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 00:03:26 np0005480824 systemd[1]: Started libpod-conmon-dc24f50f0bbb70dfbd738167335c7e95c08813a6b482b2ebcf0e97fb3cfdb9ff.scope.
Oct 11 00:03:26 np0005480824 podman[301491]: 2025-10-11 04:03:26.20686263 +0000 UTC m=+0.043375349 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 00:03:26 np0005480824 systemd[1]: Started libcrun container.
Oct 11 00:03:26 np0005480824 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 00:03:26 np0005480824 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 00:03:26 np0005480824 podman[301491]: 2025-10-11 04:03:26.318722354 +0000 UTC m=+0.155235023 container init dc24f50f0bbb70dfbd738167335c7e95c08813a6b482b2ebcf0e97fb3cfdb9ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_kapitsa, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 00:03:26 np0005480824 podman[301491]: 2025-10-11 04:03:26.328295294 +0000 UTC m=+0.164807963 container start dc24f50f0bbb70dfbd738167335c7e95c08813a6b482b2ebcf0e97fb3cfdb9ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 11 00:03:26 np0005480824 podman[301491]: 2025-10-11 04:03:26.332345307 +0000 UTC m=+0.168857986 container attach dc24f50f0bbb70dfbd738167335c7e95c08813a6b482b2ebcf0e97fb3cfdb9ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 00:03:26 np0005480824 focused_kapitsa[301508]: 167 167
Oct 11 00:03:26 np0005480824 podman[301491]: 2025-10-11 04:03:26.335409478 +0000 UTC m=+0.171922147 container died dc24f50f0bbb70dfbd738167335c7e95c08813a6b482b2ebcf0e97fb3cfdb9ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_kapitsa, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 11 00:03:26 np0005480824 systemd[1]: libpod-dc24f50f0bbb70dfbd738167335c7e95c08813a6b482b2ebcf0e97fb3cfdb9ff.scope: Deactivated successfully.
Oct 11 00:03:26 np0005480824 podman[301505]: 2025-10-11 04:03:26.359942842 +0000 UTC m=+0.079253304 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 11 00:03:26 np0005480824 systemd[1]: var-lib-containers-storage-overlay-1667614a71949bfbabd9876d63bcc9118e9c05a2fdd289f9105a858b751612c1-merged.mount: Deactivated successfully.
Oct 11 00:03:26 np0005480824 podman[301491]: 2025-10-11 04:03:26.390635398 +0000 UTC m=+0.227148067 container remove dc24f50f0bbb70dfbd738167335c7e95c08813a6b482b2ebcf0e97fb3cfdb9ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_kapitsa, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 00:03:26 np0005480824 systemd[1]: libpod-conmon-dc24f50f0bbb70dfbd738167335c7e95c08813a6b482b2ebcf0e97fb3cfdb9ff.scope: Deactivated successfully.
Oct 11 00:03:26 np0005480824 podman[301549]: 2025-10-11 04:03:26.596921634 +0000 UTC m=+0.049228813 container create 938477fe5170f60ff00aa6bcc31427baae162599df33a45446b51f8d6bc3839f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 00:03:26 np0005480824 systemd[1]: Started libpod-conmon-938477fe5170f60ff00aa6bcc31427baae162599df33a45446b51f8d6bc3839f.scope.
Oct 11 00:03:26 np0005480824 podman[301549]: 2025-10-11 04:03:26.576181347 +0000 UTC m=+0.028488566 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 00:03:26 np0005480824 systemd[1]: Started libcrun container.
Oct 11 00:03:26 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/571767dc9c0ed120de09e558027f4934afdd12849823195ff9f6ecfe6780e0a0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 00:03:26 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/571767dc9c0ed120de09e558027f4934afdd12849823195ff9f6ecfe6780e0a0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 00:03:26 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/571767dc9c0ed120de09e558027f4934afdd12849823195ff9f6ecfe6780e0a0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 00:03:26 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/571767dc9c0ed120de09e558027f4934afdd12849823195ff9f6ecfe6780e0a0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 00:03:26 np0005480824 podman[301549]: 2025-10-11 04:03:26.687788255 +0000 UTC m=+0.140095444 container init 938477fe5170f60ff00aa6bcc31427baae162599df33a45446b51f8d6bc3839f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_cohen, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 00:03:26 np0005480824 podman[301549]: 2025-10-11 04:03:26.695312018 +0000 UTC m=+0.147619187 container start 938477fe5170f60ff00aa6bcc31427baae162599df33a45446b51f8d6bc3839f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_cohen, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 00:03:26 np0005480824 podman[301549]: 2025-10-11 04:03:26.697930418 +0000 UTC m=+0.150237607 container attach 938477fe5170f60ff00aa6bcc31427baae162599df33a45446b51f8d6bc3839f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 11 00:03:27 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1794: 321 pgs: 321 active+clean; 270 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 31 KiB/s wr, 63 op/s
Oct 11 00:03:27 np0005480824 flamboyant_cohen[301566]: {
Oct 11 00:03:27 np0005480824 flamboyant_cohen[301566]:    "1d0d82ce-20ea-470d-959e-f67202028a60": {
Oct 11 00:03:27 np0005480824 flamboyant_cohen[301566]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 11 00:03:27 np0005480824 flamboyant_cohen[301566]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 00:03:27 np0005480824 flamboyant_cohen[301566]:        "osd_id": 0,
Oct 11 00:03:27 np0005480824 flamboyant_cohen[301566]:        "osd_uuid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 11 00:03:27 np0005480824 flamboyant_cohen[301566]:        "type": "bluestore"
Oct 11 00:03:27 np0005480824 flamboyant_cohen[301566]:    },
Oct 11 00:03:27 np0005480824 flamboyant_cohen[301566]:    "6875119e-c210-4ad1-aca9-6a8084a5ecc8": {
Oct 11 00:03:27 np0005480824 flamboyant_cohen[301566]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 11 00:03:27 np0005480824 flamboyant_cohen[301566]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 00:03:27 np0005480824 flamboyant_cohen[301566]:        "osd_id": 1,
Oct 11 00:03:27 np0005480824 flamboyant_cohen[301566]:        "osd_uuid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 11 00:03:27 np0005480824 flamboyant_cohen[301566]:        "type": "bluestore"
Oct 11 00:03:27 np0005480824 flamboyant_cohen[301566]:    },
Oct 11 00:03:27 np0005480824 flamboyant_cohen[301566]:    "e86945e8-6909-4584-9098-cee0dfe9add4": {
Oct 11 00:03:27 np0005480824 flamboyant_cohen[301566]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 11 00:03:27 np0005480824 flamboyant_cohen[301566]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 00:03:27 np0005480824 flamboyant_cohen[301566]:        "osd_id": 2,
Oct 11 00:03:27 np0005480824 flamboyant_cohen[301566]:        "osd_uuid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 11 00:03:27 np0005480824 flamboyant_cohen[301566]:        "type": "bluestore"
Oct 11 00:03:27 np0005480824 flamboyant_cohen[301566]:    }
Oct 11 00:03:27 np0005480824 flamboyant_cohen[301566]: }
Oct 11 00:03:27 np0005480824 systemd[1]: libpod-938477fe5170f60ff00aa6bcc31427baae162599df33a45446b51f8d6bc3839f.scope: Deactivated successfully.
Oct 11 00:03:27 np0005480824 systemd[1]: libpod-938477fe5170f60ff00aa6bcc31427baae162599df33a45446b51f8d6bc3839f.scope: Consumed 1.043s CPU time.
Oct 11 00:03:27 np0005480824 podman[301549]: 2025-10-11 04:03:27.736077571 +0000 UTC m=+1.188384780 container died 938477fe5170f60ff00aa6bcc31427baae162599df33a45446b51f8d6bc3839f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_cohen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 00:03:27 np0005480824 systemd[1]: var-lib-containers-storage-overlay-571767dc9c0ed120de09e558027f4934afdd12849823195ff9f6ecfe6780e0a0-merged.mount: Deactivated successfully.
Oct 11 00:03:27 np0005480824 podman[301549]: 2025-10-11 04:03:27.812875018 +0000 UTC m=+1.265182227 container remove 938477fe5170f60ff00aa6bcc31427baae162599df33a45446b51f8d6bc3839f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_cohen, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 11 00:03:27 np0005480824 systemd[1]: libpod-conmon-938477fe5170f60ff00aa6bcc31427baae162599df33a45446b51f8d6bc3839f.scope: Deactivated successfully.
Oct 11 00:03:27 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 00:03:27 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 11 00:03:27 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 00:03:27 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 11 00:03:27 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev a988751f-32fc-4901-bf10-8f00fa5ac69b does not exist
Oct 11 00:03:27 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 1a2bfbff-477b-49ac-b1fd-9765ba46956f does not exist
Oct 11 00:03:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 00:03:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 00:03:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 00:03:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 00:03:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 00:03:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 00:03:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Optimize plan auto_2025-10-11_04:03:27
Oct 11 00:03:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 00:03:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] do_upmap
Oct 11 00:03:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.rgw.root', 'images', 'default.rgw.meta', 'default.rgw.log', '.mgr', 'volumes', 'backups', 'default.rgw.control', 'vms', 'cephfs.cephfs.data']
Oct 11 00:03:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] prepared 0/10 changes
Oct 11 00:03:28 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 11 00:03:28 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 11 00:03:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 00:03:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 00:03:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 00:03:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 00:03:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 00:03:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 00:03:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 00:03:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 00:03:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 00:03:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 00:03:29 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1795: 321 pgs: 321 active+clean; 270 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 32 KiB/s wr, 58 op/s
Oct 11 00:03:29 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e484 do_prune osdmap full prune enabled
Oct 11 00:03:29 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e485 e485: 3 total, 3 up, 3 in
Oct 11 00:03:29 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e485: 3 total, 3 up, 3 in
Oct 11 00:03:29 np0005480824 nova_compute[260089]: 2025-10-11 04:03:29.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:03:30 np0005480824 nova_compute[260089]: 2025-10-11 04:03:30.300 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:03:30 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:03:30 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e485 do_prune osdmap full prune enabled
Oct 11 00:03:30 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e486 e486: 3 total, 3 up, 3 in
Oct 11 00:03:30 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e486: 3 total, 3 up, 3 in
Oct 11 00:03:30 np0005480824 ceph-mon[74326]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Oct 11 00:03:30 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:03:30.756840) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 00:03:30 np0005480824 ceph-mon[74326]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Oct 11 00:03:30 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155410756893, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 1870, "num_deletes": 275, "total_data_size": 2529023, "memory_usage": 2577744, "flush_reason": "Manual Compaction"}
Oct 11 00:03:30 np0005480824 ceph-mon[74326]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Oct 11 00:03:30 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155410774422, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 2471881, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34573, "largest_seqno": 36442, "table_properties": {"data_size": 2462895, "index_size": 5608, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 19071, "raw_average_key_size": 21, "raw_value_size": 2444899, "raw_average_value_size": 2716, "num_data_blocks": 244, "num_entries": 900, "num_filter_entries": 900, "num_deletions": 275, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760155301, "oldest_key_time": 1760155301, "file_creation_time": 1760155410, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc2c00b6-74ab-4bd1-957a-6c6a75fb61ca", "db_session_id": "RJ9TM4FJNNQ2AWDFT4YB", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Oct 11 00:03:30 np0005480824 ceph-mon[74326]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 17627 microseconds, and 10218 cpu microseconds.
Oct 11 00:03:30 np0005480824 ceph-mon[74326]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 00:03:30 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:03:30.774473) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 2471881 bytes OK
Oct 11 00:03:30 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:03:30.774495) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Oct 11 00:03:30 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:03:30.776891) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Oct 11 00:03:30 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:03:30.776908) EVENT_LOG_v1 {"time_micros": 1760155410776903, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 00:03:30 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:03:30.776930) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 00:03:30 np0005480824 ceph-mon[74326]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 2520626, prev total WAL file size 2520626, number of live WAL files 2.
Oct 11 00:03:30 np0005480824 ceph-mon[74326]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 00:03:30 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:03:30.778235) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303034' seq:72057594037927935, type:22 .. '6C6F676D0031323535' seq:0, type:0; will stop at (end)
Oct 11 00:03:30 np0005480824 ceph-mon[74326]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 00:03:30 np0005480824 ceph-mon[74326]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(2413KB)], [71(9048KB)]
Oct 11 00:03:30 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155410778275, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 11737364, "oldest_snapshot_seqno": -1}
Oct 11 00:03:30 np0005480824 ceph-mon[74326]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 6806 keys, 11582091 bytes, temperature: kUnknown
Oct 11 00:03:30 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155410876352, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 11582091, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11529102, "index_size": 34893, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17029, "raw_key_size": 171420, "raw_average_key_size": 25, "raw_value_size": 11399344, "raw_average_value_size": 1674, "num_data_blocks": 1404, "num_entries": 6806, "num_filter_entries": 6806, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760152715, "oldest_key_time": 0, "file_creation_time": 1760155410, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc2c00b6-74ab-4bd1-957a-6c6a75fb61ca", "db_session_id": "RJ9TM4FJNNQ2AWDFT4YB", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Oct 11 00:03:30 np0005480824 ceph-mon[74326]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 00:03:30 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:03:30.876745) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 11582091 bytes
Oct 11 00:03:30 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:03:30.878484) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 119.5 rd, 117.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 8.8 +0.0 blob) out(11.0 +0.0 blob), read-write-amplify(9.4) write-amplify(4.7) OK, records in: 7354, records dropped: 548 output_compression: NoCompression
Oct 11 00:03:30 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:03:30.878505) EVENT_LOG_v1 {"time_micros": 1760155410878494, "job": 40, "event": "compaction_finished", "compaction_time_micros": 98211, "compaction_time_cpu_micros": 38281, "output_level": 6, "num_output_files": 1, "total_output_size": 11582091, "num_input_records": 7354, "num_output_records": 6806, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 00:03:30 np0005480824 ceph-mon[74326]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 00:03:30 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155410879433, "job": 40, "event": "table_file_deletion", "file_number": 73}
Oct 11 00:03:30 np0005480824 ceph-mon[74326]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 00:03:30 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155410881838, "job": 40, "event": "table_file_deletion", "file_number": 71}
Oct 11 00:03:30 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:03:30.778119) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 00:03:30 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:03:30.882037) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 00:03:30 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:03:30.882045) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 00:03:30 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:03:30.882049) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 00:03:30 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:03:30.882052) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 00:03:30 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:03:30.882056) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 00:03:31 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1798: 321 pgs: 321 active+clean; 270 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 8.5 KiB/s wr, 46 op/s
Oct 11 00:03:31 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e486 do_prune osdmap full prune enabled
Oct 11 00:03:31 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e487 e487: 3 total, 3 up, 3 in
Oct 11 00:03:31 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e487: 3 total, 3 up, 3 in
Oct 11 00:03:33 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1800: 321 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 315 active+clean; 270 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 31 KiB/s wr, 78 op/s
Oct 11 00:03:33 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 00:03:33 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3729448400' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 00:03:33 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 00:03:33 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3729448400' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 00:03:34 np0005480824 nova_compute[260089]: 2025-10-11 04:03:34.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:03:35 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1801: 321 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 315 active+clean; 270 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 23 KiB/s wr, 34 op/s
Oct 11 00:03:35 np0005480824 nova_compute[260089]: 2025-10-11 04:03:35.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:03:35 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e487 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:03:37 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1802: 321 pgs: 321 active+clean; 270 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 18 KiB/s wr, 43 op/s
Oct 11 00:03:37 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e487 do_prune osdmap full prune enabled
Oct 11 00:03:37 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e488 e488: 3 total, 3 up, 3 in
Oct 11 00:03:37 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e488: 3 total, 3 up, 3 in
Oct 11 00:03:37 np0005480824 nova_compute[260089]: 2025-10-11 04:03:37.297 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:03:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 00:03:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:03:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 00:03:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:03:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.480037605000977e-06 of space, bias 1.0, pg target 0.0007440112815002931 quantized to 32 (current 32)
Oct 11 00:03:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:03:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002890197670443446 of space, bias 1.0, pg target 0.8670593011330339 quantized to 32 (current 32)
Oct 11 00:03:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:03:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 11 00:03:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:03:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Oct 11 00:03:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:03:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 00:03:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:03:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 00:03:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:03:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 00:03:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:03:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 00:03:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:03:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 00:03:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:03:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 00:03:39 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1804: 321 pgs: 321 active+clean; 270 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 19 KiB/s wr, 57 op/s
Oct 11 00:03:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e488 do_prune osdmap full prune enabled
Oct 11 00:03:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e489 e489: 3 total, 3 up, 3 in
Oct 11 00:03:39 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e489: 3 total, 3 up, 3 in
Oct 11 00:03:39 np0005480824 ovn_controller[152667]: 2025-10-11T04:03:39Z|00252|memory_trim|INFO|Detected inactivity (last active 30020 ms ago): trimming memory
Oct 11 00:03:39 np0005480824 nova_compute[260089]: 2025-10-11 04:03:39.293 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:03:39 np0005480824 nova_compute[260089]: 2025-10-11 04:03:39.296 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:03:39 np0005480824 nova_compute[260089]: 2025-10-11 04:03:39.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:03:40 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 00:03:40 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1103559339' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 00:03:40 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 00:03:40 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1103559339' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 00:03:40 np0005480824 nova_compute[260089]: 2025-10-11 04:03:40.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:03:40 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e489 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:03:40 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e489 do_prune osdmap full prune enabled
Oct 11 00:03:40 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e490 e490: 3 total, 3 up, 3 in
Oct 11 00:03:40 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e490: 3 total, 3 up, 3 in
Oct 11 00:03:41 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1807: 321 pgs: 321 active+clean; 270 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 1.7 KiB/s wr, 41 op/s
Oct 11 00:03:41 np0005480824 nova_compute[260089]: 2025-10-11 04:03:41.296 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:03:41 np0005480824 nova_compute[260089]: 2025-10-11 04:03:41.297 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:03:41 np0005480824 nova_compute[260089]: 2025-10-11 04:03:41.335 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:03:41 np0005480824 nova_compute[260089]: 2025-10-11 04:03:41.336 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:03:41 np0005480824 nova_compute[260089]: 2025-10-11 04:03:41.336 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:03:41 np0005480824 nova_compute[260089]: 2025-10-11 04:03:41.336 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 00:03:41 np0005480824 nova_compute[260089]: 2025-10-11 04:03:41.337 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:03:41 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 00:03:41 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/497018608' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 00:03:41 np0005480824 nova_compute[260089]: 2025-10-11 04:03:41.804 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:03:41 np0005480824 nova_compute[260089]: 2025-10-11 04:03:41.886 2 DEBUG nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 00:03:41 np0005480824 nova_compute[260089]: 2025-10-11 04:03:41.887 2 DEBUG nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 00:03:41 np0005480824 podman[301687]: 2025-10-11 04:03:41.95116624 +0000 UTC m=+0.074254519 container health_status f2b19cad22d0fbdd185b264c3c8b6443d09a02319dc9fb0585dc69a0f24a758d (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 11 00:03:41 np0005480824 podman[301685]: 2025-10-11 04:03:41.985740476 +0000 UTC m=+0.116804519 container health_status 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 11 00:03:42 np0005480824 nova_compute[260089]: 2025-10-11 04:03:42.012 2 DEBUG oslo_concurrency.lockutils [None req-afd167d3-bbc4-4425-9dfe-45af714cdb85 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Acquiring lock "1364751a-4bbf-49e1-abe3-f702f03be8e3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:03:42 np0005480824 nova_compute[260089]: 2025-10-11 04:03:42.012 2 DEBUG oslo_concurrency.lockutils [None req-afd167d3-bbc4-4425-9dfe-45af714cdb85 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Lock "1364751a-4bbf-49e1-abe3-f702f03be8e3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:03:42 np0005480824 nova_compute[260089]: 2025-10-11 04:03:42.012 2 DEBUG oslo_concurrency.lockutils [None req-afd167d3-bbc4-4425-9dfe-45af714cdb85 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Acquiring lock "1364751a-4bbf-49e1-abe3-f702f03be8e3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:03:42 np0005480824 nova_compute[260089]: 2025-10-11 04:03:42.013 2 DEBUG oslo_concurrency.lockutils [None req-afd167d3-bbc4-4425-9dfe-45af714cdb85 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Lock "1364751a-4bbf-49e1-abe3-f702f03be8e3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:03:42 np0005480824 nova_compute[260089]: 2025-10-11 04:03:42.013 2 DEBUG oslo_concurrency.lockutils [None req-afd167d3-bbc4-4425-9dfe-45af714cdb85 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Lock "1364751a-4bbf-49e1-abe3-f702f03be8e3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:03:42 np0005480824 nova_compute[260089]: 2025-10-11 04:03:42.014 2 INFO nova.compute.manager [None req-afd167d3-bbc4-4425-9dfe-45af714cdb85 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] Terminating instance#033[00m
Oct 11 00:03:42 np0005480824 nova_compute[260089]: 2025-10-11 04:03:42.015 2 DEBUG nova.compute.manager [None req-afd167d3-bbc4-4425-9dfe-45af714cdb85 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 00:03:42 np0005480824 kernel: tape1ac33cf-47 (unregistering): left promiscuous mode
Oct 11 00:03:42 np0005480824 NetworkManager[44969]: <info>  [1760155422.0622] device (tape1ac33cf-47): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 00:03:42 np0005480824 ovn_controller[152667]: 2025-10-11T04:03:42Z|00253|binding|INFO|Releasing lport e1ac33cf-472c-41ba-b3ed-459749e87ead from this chassis (sb_readonly=0)
Oct 11 00:03:42 np0005480824 ovn_controller[152667]: 2025-10-11T04:03:42Z|00254|binding|INFO|Setting lport e1ac33cf-472c-41ba-b3ed-459749e87ead down in Southbound
Oct 11 00:03:42 np0005480824 ovn_controller[152667]: 2025-10-11T04:03:42Z|00255|binding|INFO|Removing iface tape1ac33cf-47 ovn-installed in OVS
Oct 11 00:03:42 np0005480824 nova_compute[260089]: 2025-10-11 04:03:42.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:03:42 np0005480824 nova_compute[260089]: 2025-10-11 04:03:42.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:03:42 np0005480824 nova_compute[260089]: 2025-10-11 04:03:42.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:03:42 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:03:42.146 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:72:de 10.100.0.6'], port_security=['fa:16:3e:da:72:de 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '1364751a-4bbf-49e1-abe3-f702f03be8e3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-15a62ee0-8e34-4e49-990e-246b4ef9e0c6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0e73ded2f2ee46b4a7485c01ef1b73e9', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b0a0daf4-5fac-406b-b8da-5df24a392041', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.221'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3f608fb9-f693-4a11-9617-6172f3d025df, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], logical_port=e1ac33cf-472c-41ba-b3ed-459749e87ead) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 00:03:42 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:03:42.148 162245 INFO neutron.agent.ovn.metadata.agent [-] Port e1ac33cf-472c-41ba-b3ed-459749e87ead in datapath 15a62ee0-8e34-4e49-990e-246b4ef9e0c6 unbound from our chassis#033[00m
Oct 11 00:03:42 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:03:42.151 162245 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 15a62ee0-8e34-4e49-990e-246b4ef9e0c6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 00:03:42 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:03:42.152 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[78973c11-a782-42b6-9e51-74a7297dad3b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:03:42 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:03:42.154 162245 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6 namespace which is not needed anymore#033[00m
Oct 11 00:03:42 np0005480824 nova_compute[260089]: 2025-10-11 04:03:42.172 2 WARNING nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 00:03:42 np0005480824 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d0000001b.scope: Deactivated successfully.
Oct 11 00:03:42 np0005480824 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d0000001b.scope: Consumed 16.524s CPU time.
Oct 11 00:03:42 np0005480824 nova_compute[260089]: 2025-10-11 04:03:42.174 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4115MB free_disk=59.98813247680664GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 00:03:42 np0005480824 nova_compute[260089]: 2025-10-11 04:03:42.174 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:03:42 np0005480824 nova_compute[260089]: 2025-10-11 04:03:42.175 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:03:42 np0005480824 systemd-machined[215071]: Machine qemu-27-instance-0000001b terminated.
Oct 11 00:03:42 np0005480824 nova_compute[260089]: 2025-10-11 04:03:42.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:03:42 np0005480824 nova_compute[260089]: 2025-10-11 04:03:42.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:03:42 np0005480824 nova_compute[260089]: 2025-10-11 04:03:42.252 2 INFO nova.virt.libvirt.driver [-] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] Instance destroyed successfully.#033[00m
Oct 11 00:03:42 np0005480824 nova_compute[260089]: 2025-10-11 04:03:42.253 2 DEBUG nova.objects.instance [None req-afd167d3-bbc4-4425-9dfe-45af714cdb85 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Lazy-loading 'resources' on Instance uuid 1364751a-4bbf-49e1-abe3-f702f03be8e3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 00:03:42 np0005480824 nova_compute[260089]: 2025-10-11 04:03:42.354 2 DEBUG nova.virt.libvirt.vif [None req-afd167d3-bbc4-4425-9dfe-45af714cdb85 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T04:02:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1319088656',display_name='tempest-TransferEncryptedVolumeTest-server-1319088656',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1319088656',id=27,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDBxc/yNqNR+6hcns3uIK5nByp3y7/Z4QylmLciPhq6XKUnS3cE8WBipiedmC1KXbIrzQin+vEhjglj/GGa46YEcBJkij9tDpZ0nSurHoQgQFYWBhIhwD65l+TbXzKNAAg==',key_name='tempest-TransferEncryptedVolumeTest-1457663695',keypairs=<?>,launch_index=0,launched_at=2025-10-11T04:03:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0e73ded2f2ee46b4a7485c01ef1b73e9',ramdisk_id='',reservation_id='r-atzkzh09',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TransferEncryptedVolumeTest-1815435088',owner_user_name='tempest-TransferEncryptedVolumeTest-1815435088-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T04:03:04Z,user_data=None,user_id='eccc3f574d354840901d28dad2488bf4',uuid=1364751a-4bbf-49e1-abe3-f702f03be8e3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e1ac33cf-472c-41ba-b3ed-459749e87ead", "address": "fa:16:3e:da:72:de", "network": {"id": "15a62ee0-8e34-4e49-990e-246b4ef9e0c6", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1498494916-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0e73ded2f2ee46b4a7485c01ef1b73e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1ac33cf-47", "ovs_interfaceid": "e1ac33cf-472c-41ba-b3ed-459749e87ead", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 00:03:42 np0005480824 nova_compute[260089]: 2025-10-11 04:03:42.356 2 DEBUG nova.network.os_vif_util [None req-afd167d3-bbc4-4425-9dfe-45af714cdb85 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Converting VIF {"id": "e1ac33cf-472c-41ba-b3ed-459749e87ead", "address": "fa:16:3e:da:72:de", "network": {"id": "15a62ee0-8e34-4e49-990e-246b4ef9e0c6", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1498494916-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0e73ded2f2ee46b4a7485c01ef1b73e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1ac33cf-47", "ovs_interfaceid": "e1ac33cf-472c-41ba-b3ed-459749e87ead", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 00:03:42 np0005480824 nova_compute[260089]: 2025-10-11 04:03:42.357 2 DEBUG nova.network.os_vif_util [None req-afd167d3-bbc4-4425-9dfe-45af714cdb85 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:da:72:de,bridge_name='br-int',has_traffic_filtering=True,id=e1ac33cf-472c-41ba-b3ed-459749e87ead,network=Network(15a62ee0-8e34-4e49-990e-246b4ef9e0c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1ac33cf-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 00:03:42 np0005480824 nova_compute[260089]: 2025-10-11 04:03:42.357 2 DEBUG os_vif [None req-afd167d3-bbc4-4425-9dfe-45af714cdb85 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:da:72:de,bridge_name='br-int',has_traffic_filtering=True,id=e1ac33cf-472c-41ba-b3ed-459749e87ead,network=Network(15a62ee0-8e34-4e49-990e-246b4ef9e0c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1ac33cf-47') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 00:03:42 np0005480824 nova_compute[260089]: 2025-10-11 04:03:42.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:03:42 np0005480824 nova_compute[260089]: 2025-10-11 04:03:42.359 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape1ac33cf-47, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 00:03:42 np0005480824 nova_compute[260089]: 2025-10-11 04:03:42.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:03:42 np0005480824 nova_compute[260089]: 2025-10-11 04:03:42.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 00:03:42 np0005480824 nova_compute[260089]: 2025-10-11 04:03:42.370 2 INFO os_vif [None req-afd167d3-bbc4-4425-9dfe-45af714cdb85 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:da:72:de,bridge_name='br-int',has_traffic_filtering=True,id=e1ac33cf-472c-41ba-b3ed-459749e87ead,network=Network(15a62ee0-8e34-4e49-990e-246b4ef9e0c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1ac33cf-47')#033[00m
Oct 11 00:03:42 np0005480824 nova_compute[260089]: 2025-10-11 04:03:42.396 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Instance 1364751a-4bbf-49e1-abe3-f702f03be8e3 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 00:03:42 np0005480824 nova_compute[260089]: 2025-10-11 04:03:42.396 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 00:03:42 np0005480824 nova_compute[260089]: 2025-10-11 04:03:42.396 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 00:03:42 np0005480824 nova_compute[260089]: 2025-10-11 04:03:42.434 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:03:42 np0005480824 neutron-haproxy-ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6[300342]: [NOTICE]   (300346) : haproxy version is 2.8.14-c23fe91
Oct 11 00:03:42 np0005480824 neutron-haproxy-ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6[300342]: [NOTICE]   (300346) : path to executable is /usr/sbin/haproxy
Oct 11 00:03:42 np0005480824 neutron-haproxy-ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6[300342]: [WARNING]  (300346) : Exiting Master process...
Oct 11 00:03:42 np0005480824 neutron-haproxy-ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6[300342]: [ALERT]    (300346) : Current worker (300348) exited with code 143 (Terminated)
Oct 11 00:03:42 np0005480824 neutron-haproxy-ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6[300342]: [WARNING]  (300346) : All workers exited. Exiting... (0)
Oct 11 00:03:42 np0005480824 systemd[1]: libpod-3af8ab12b52647b9eed613e64ac34f1de64efd24c16060996adc64b80669b7fa.scope: Deactivated successfully.
Oct 11 00:03:42 np0005480824 conmon[300342]: conmon 3af8ab12b52647b9eed6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3af8ab12b52647b9eed613e64ac34f1de64efd24c16060996adc64b80669b7fa.scope/container/memory.events
Oct 11 00:03:42 np0005480824 podman[301751]: 2025-10-11 04:03:42.453891686 +0000 UTC m=+0.197580917 container died 3af8ab12b52647b9eed613e64ac34f1de64efd24c16060996adc64b80669b7fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 11 00:03:42 np0005480824 nova_compute[260089]: 2025-10-11 04:03:42.601 2 DEBUG nova.compute.manager [req-4bc5637a-c328-44c8-a07c-e6b84ae719e1 req-1c10c68d-1c7c-48d5-b493-bfe474d37735 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] Received event network-vif-unplugged-e1ac33cf-472c-41ba-b3ed-459749e87ead external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 00:03:42 np0005480824 nova_compute[260089]: 2025-10-11 04:03:42.603 2 DEBUG oslo_concurrency.lockutils [req-4bc5637a-c328-44c8-a07c-e6b84ae719e1 req-1c10c68d-1c7c-48d5-b493-bfe474d37735 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "1364751a-4bbf-49e1-abe3-f702f03be8e3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:03:42 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:03:42.604 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:30:f4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'fe:89:7c:57:3f:71'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 00:03:42 np0005480824 nova_compute[260089]: 2025-10-11 04:03:42.605 2 DEBUG oslo_concurrency.lockutils [req-4bc5637a-c328-44c8-a07c-e6b84ae719e1 req-1c10c68d-1c7c-48d5-b493-bfe474d37735 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "1364751a-4bbf-49e1-abe3-f702f03be8e3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:03:42 np0005480824 nova_compute[260089]: 2025-10-11 04:03:42.605 2 DEBUG oslo_concurrency.lockutils [req-4bc5637a-c328-44c8-a07c-e6b84ae719e1 req-1c10c68d-1c7c-48d5-b493-bfe474d37735 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "1364751a-4bbf-49e1-abe3-f702f03be8e3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:03:42 np0005480824 nova_compute[260089]: 2025-10-11 04:03:42.605 2 DEBUG nova.compute.manager [req-4bc5637a-c328-44c8-a07c-e6b84ae719e1 req-1c10c68d-1c7c-48d5-b493-bfe474d37735 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] No waiting events found dispatching network-vif-unplugged-e1ac33cf-472c-41ba-b3ed-459749e87ead pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 00:03:42 np0005480824 nova_compute[260089]: 2025-10-11 04:03:42.605 2 DEBUG nova.compute.manager [req-4bc5637a-c328-44c8-a07c-e6b84ae719e1 req-1c10c68d-1c7c-48d5-b493-bfe474d37735 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] Received event network-vif-unplugged-e1ac33cf-472c-41ba-b3ed-459749e87ead for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 00:03:42 np0005480824 nova_compute[260089]: 2025-10-11 04:03:42.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:03:42 np0005480824 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3af8ab12b52647b9eed613e64ac34f1de64efd24c16060996adc64b80669b7fa-userdata-shm.mount: Deactivated successfully.
Oct 11 00:03:42 np0005480824 systemd[1]: var-lib-containers-storage-overlay-86fd47f0de7e18fc6da552c8f519eb3296e83d9ed08b6ff49114aab0045b397c-merged.mount: Deactivated successfully.
Oct 11 00:03:42 np0005480824 podman[301751]: 2025-10-11 04:03:42.758559425 +0000 UTC m=+0.502248676 container cleanup 3af8ab12b52647b9eed613e64ac34f1de64efd24c16060996adc64b80669b7fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 11 00:03:42 np0005480824 podman[301826]: 2025-10-11 04:03:42.869330804 +0000 UTC m=+0.069715115 container remove 3af8ab12b52647b9eed613e64ac34f1de64efd24c16060996adc64b80669b7fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 11 00:03:42 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:03:42.882 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[941737b8-4f5b-4962-a8b5-dbcfa001c0cf]: (4, ('Sat Oct 11 04:03:42 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6 (3af8ab12b52647b9eed613e64ac34f1de64efd24c16060996adc64b80669b7fa)\n3af8ab12b52647b9eed613e64ac34f1de64efd24c16060996adc64b80669b7fa\nSat Oct 11 04:03:42 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6 (3af8ab12b52647b9eed613e64ac34f1de64efd24c16060996adc64b80669b7fa)\n3af8ab12b52647b9eed613e64ac34f1de64efd24c16060996adc64b80669b7fa\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:03:42 np0005480824 systemd[1]: libpod-conmon-3af8ab12b52647b9eed613e64ac34f1de64efd24c16060996adc64b80669b7fa.scope: Deactivated successfully.
Oct 11 00:03:42 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:03:42.886 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[4f1619eb-8046-4118-bcf0-f8fcab136064]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:03:42 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:03:42.887 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap15a62ee0-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 00:03:42 np0005480824 nova_compute[260089]: 2025-10-11 04:03:42.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:03:42 np0005480824 kernel: tap15a62ee0-80: left promiscuous mode
Oct 11 00:03:42 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:03:42.921 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[837b8aaa-768a-47b9-9146-af120aade489]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:03:42 np0005480824 nova_compute[260089]: 2025-10-11 04:03:42.930 2 INFO nova.virt.libvirt.driver [None req-afd167d3-bbc4-4425-9dfe-45af714cdb85 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] Deleting instance files /var/lib/nova/instances/1364751a-4bbf-49e1-abe3-f702f03be8e3_del#033[00m
Oct 11 00:03:42 np0005480824 nova_compute[260089]: 2025-10-11 04:03:42.932 2 INFO nova.virt.libvirt.driver [None req-afd167d3-bbc4-4425-9dfe-45af714cdb85 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] Deletion of /var/lib/nova/instances/1364751a-4bbf-49e1-abe3-f702f03be8e3_del complete#033[00m
Oct 11 00:03:42 np0005480824 nova_compute[260089]: 2025-10-11 04:03:42.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:03:42 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 00:03:42 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/796519187' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 00:03:42 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:03:42.949 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[a815deeb-0d69-4b5e-9d95-eb9668c1f132]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:03:42 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:03:42.951 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[beffb576-a2a5-4d26-b779-a89e9b9f7eac]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:03:42 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:03:42.967 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[62df05f4-00b7-4cf7-98bf-51dfd4da7f12]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 484886, 'reachable_time': 24743, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301845, 'error': None, 'target': 'ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:03:42 np0005480824 nova_compute[260089]: 2025-10-11 04:03:42.970 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:03:42 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:03:42.972 162666 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-15a62ee0-8e34-4e49-990e-246b4ef9e0c6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 00:03:42 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:03:42.972 162666 DEBUG oslo.privsep.daemon [-] privsep: reply[730d996c-fdb8-4cd2-bdd9-7656d1ea28d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:03:42 np0005480824 systemd[1]: run-netns-ovnmeta\x2d15a62ee0\x2d8e34\x2d4e49\x2d990e\x2d246b4ef9e0c6.mount: Deactivated successfully.
Oct 11 00:03:42 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:03:42.973 162245 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 11 00:03:42 np0005480824 nova_compute[260089]: 2025-10-11 04:03:42.979 2 DEBUG nova.compute.provider_tree [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 00:03:43 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1808: 321 pgs: 321 active+clean; 270 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 474 KiB/s rd, 2.8 KiB/s wr, 83 op/s
Oct 11 00:03:43 np0005480824 nova_compute[260089]: 2025-10-11 04:03:43.079 2 DEBUG nova.scheduler.client.report [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 00:03:43 np0005480824 nova_compute[260089]: 2025-10-11 04:03:43.085 2 INFO nova.compute.manager [None req-afd167d3-bbc4-4425-9dfe-45af714cdb85 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] Took 1.07 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 00:03:43 np0005480824 nova_compute[260089]: 2025-10-11 04:03:43.086 2 DEBUG oslo.service.loopingcall [None req-afd167d3-bbc4-4425-9dfe-45af714cdb85 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 00:03:43 np0005480824 nova_compute[260089]: 2025-10-11 04:03:43.086 2 DEBUG nova.compute.manager [-] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 00:03:43 np0005480824 nova_compute[260089]: 2025-10-11 04:03:43.086 2 DEBUG nova.network.neutron [-] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 00:03:43 np0005480824 nova_compute[260089]: 2025-10-11 04:03:43.140 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 00:03:43 np0005480824 nova_compute[260089]: 2025-10-11 04:03:43.140 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.966s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:03:44 np0005480824 nova_compute[260089]: 2025-10-11 04:03:44.384 2 DEBUG nova.network.neutron [-] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 00:03:44 np0005480824 nova_compute[260089]: 2025-10-11 04:03:44.407 2 INFO nova.compute.manager [-] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] Took 1.32 seconds to deallocate network for instance.#033[00m
Oct 11 00:03:44 np0005480824 nova_compute[260089]: 2025-10-11 04:03:44.714 2 DEBUG nova.compute.manager [req-540062d8-e8a5-4e87-a765-371584ab100d req-1b1ab8b1-8f93-4a97-94c8-45f1a75e5918 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] Received event network-vif-plugged-e1ac33cf-472c-41ba-b3ed-459749e87ead external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 00:03:44 np0005480824 nova_compute[260089]: 2025-10-11 04:03:44.715 2 DEBUG oslo_concurrency.lockutils [req-540062d8-e8a5-4e87-a765-371584ab100d req-1b1ab8b1-8f93-4a97-94c8-45f1a75e5918 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "1364751a-4bbf-49e1-abe3-f702f03be8e3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:03:44 np0005480824 nova_compute[260089]: 2025-10-11 04:03:44.715 2 DEBUG oslo_concurrency.lockutils [req-540062d8-e8a5-4e87-a765-371584ab100d req-1b1ab8b1-8f93-4a97-94c8-45f1a75e5918 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "1364751a-4bbf-49e1-abe3-f702f03be8e3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:03:44 np0005480824 nova_compute[260089]: 2025-10-11 04:03:44.715 2 DEBUG oslo_concurrency.lockutils [req-540062d8-e8a5-4e87-a765-371584ab100d req-1b1ab8b1-8f93-4a97-94c8-45f1a75e5918 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "1364751a-4bbf-49e1-abe3-f702f03be8e3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:03:44 np0005480824 nova_compute[260089]: 2025-10-11 04:03:44.715 2 DEBUG nova.compute.manager [req-540062d8-e8a5-4e87-a765-371584ab100d req-1b1ab8b1-8f93-4a97-94c8-45f1a75e5918 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] No waiting events found dispatching network-vif-plugged-e1ac33cf-472c-41ba-b3ed-459749e87ead pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 00:03:44 np0005480824 nova_compute[260089]: 2025-10-11 04:03:44.716 2 WARNING nova.compute.manager [req-540062d8-e8a5-4e87-a765-371584ab100d req-1b1ab8b1-8f93-4a97-94c8-45f1a75e5918 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] Received unexpected event network-vif-plugged-e1ac33cf-472c-41ba-b3ed-459749e87ead for instance with vm_state active and task_state deleting.#033[00m
Oct 11 00:03:44 np0005480824 nova_compute[260089]: 2025-10-11 04:03:44.716 2 DEBUG nova.compute.manager [req-540062d8-e8a5-4e87-a765-371584ab100d req-1b1ab8b1-8f93-4a97-94c8-45f1a75e5918 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] Received event network-vif-deleted-e1ac33cf-472c-41ba-b3ed-459749e87ead external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 00:03:44 np0005480824 nova_compute[260089]: 2025-10-11 04:03:44.738 2 INFO nova.compute.manager [None req-afd167d3-bbc4-4425-9dfe-45af714cdb85 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] Took 0.33 seconds to detach 1 volumes for instance.#033[00m
Oct 11 00:03:44 np0005480824 nova_compute[260089]: 2025-10-11 04:03:44.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:03:44 np0005480824 nova_compute[260089]: 2025-10-11 04:03:44.798 2 DEBUG oslo_concurrency.lockutils [None req-afd167d3-bbc4-4425-9dfe-45af714cdb85 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:03:44 np0005480824 nova_compute[260089]: 2025-10-11 04:03:44.798 2 DEBUG oslo_concurrency.lockutils [None req-afd167d3-bbc4-4425-9dfe-45af714cdb85 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:03:44 np0005480824 nova_compute[260089]: 2025-10-11 04:03:44.836 2 DEBUG oslo_concurrency.processutils [None req-afd167d3-bbc4-4425-9dfe-45af714cdb85 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:03:45 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1809: 321 pgs: 321 active+clean; 270 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 358 KiB/s rd, 2.1 KiB/s wr, 63 op/s
Oct 11 00:03:45 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 00:03:45 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3988885608' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 00:03:45 np0005480824 nova_compute[260089]: 2025-10-11 04:03:45.285 2 DEBUG oslo_concurrency.processutils [None req-afd167d3-bbc4-4425-9dfe-45af714cdb85 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:03:45 np0005480824 nova_compute[260089]: 2025-10-11 04:03:45.294 2 DEBUG nova.compute.provider_tree [None req-afd167d3-bbc4-4425-9dfe-45af714cdb85 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 00:03:45 np0005480824 nova_compute[260089]: 2025-10-11 04:03:45.314 2 DEBUG nova.scheduler.client.report [None req-afd167d3-bbc4-4425-9dfe-45af714cdb85 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 00:03:45 np0005480824 nova_compute[260089]: 2025-10-11 04:03:45.346 2 DEBUG oslo_concurrency.lockutils [None req-afd167d3-bbc4-4425-9dfe-45af714cdb85 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.547s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:03:45 np0005480824 nova_compute[260089]: 2025-10-11 04:03:45.379 2 INFO nova.scheduler.client.report [None req-afd167d3-bbc4-4425-9dfe-45af714cdb85 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Deleted allocations for instance 1364751a-4bbf-49e1-abe3-f702f03be8e3#033[00m
Oct 11 00:03:45 np0005480824 nova_compute[260089]: 2025-10-11 04:03:45.482 2 DEBUG oslo_concurrency.lockutils [None req-afd167d3-bbc4-4425-9dfe-45af714cdb85 eccc3f574d354840901d28dad2488bf4 0e73ded2f2ee46b4a7485c01ef1b73e9 - - default default] Lock "1364751a-4bbf-49e1-abe3-f702f03be8e3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.470s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:03:45 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:03:45 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e490 do_prune osdmap full prune enabled
Oct 11 00:03:45 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e491 e491: 3 total, 3 up, 3 in
Oct 11 00:03:45 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e491: 3 total, 3 up, 3 in
Oct 11 00:03:46 np0005480824 nova_compute[260089]: 2025-10-11 04:03:46.141 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:03:46 np0005480824 nova_compute[260089]: 2025-10-11 04:03:46.142 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 00:03:46 np0005480824 nova_compute[260089]: 2025-10-11 04:03:46.142 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 11 00:03:46 np0005480824 nova_compute[260089]: 2025-10-11 04:03:46.183 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 11 00:03:46 np0005480824 nova_compute[260089]: 2025-10-11 04:03:46.184 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:03:46 np0005480824 nova_compute[260089]: 2025-10-11 04:03:46.185 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:03:46 np0005480824 nova_compute[260089]: 2025-10-11 04:03:46.185 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 00:03:47 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1811: 321 pgs: 321 active+clean; 270 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 350 KiB/s rd, 1.3 KiB/s wr, 49 op/s
Oct 11 00:03:47 np0005480824 nova_compute[260089]: 2025-10-11 04:03:47.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:03:47 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 00:03:47 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1044218415' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 00:03:47 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 00:03:47 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1044218415' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 00:03:48 np0005480824 nova_compute[260089]: 2025-10-11 04:03:48.297 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:03:48 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:03:48.975 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=14b06507-d00b-4e27-a47d-46a5c2644635, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 00:03:49 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1812: 321 pgs: 321 active+clean; 242 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 2.1 KiB/s wr, 60 op/s
Oct 11 00:03:49 np0005480824 nova_compute[260089]: 2025-10-11 04:03:49.293 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:03:49 np0005480824 nova_compute[260089]: 2025-10-11 04:03:49.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:03:50 np0005480824 podman[301868]: 2025-10-11 04:03:50.057653566 +0000 UTC m=+0.113128373 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 11 00:03:50 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e491 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:03:51 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1813: 321 pgs: 321 active+clean; 242 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 284 KiB/s rd, 1.7 KiB/s wr, 50 op/s
Oct 11 00:03:52 np0005480824 nova_compute[260089]: 2025-10-11 04:03:52.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:03:52 np0005480824 nova_compute[260089]: 2025-10-11 04:03:52.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:03:52 np0005480824 nova_compute[260089]: 2025-10-11 04:03:52.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:03:53 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1814: 321 pgs: 321 active+clean; 88 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.4 KiB/s wr, 34 op/s
Oct 11 00:03:54 np0005480824 nova_compute[260089]: 2025-10-11 04:03:54.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:03:55 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1815: 321 pgs: 321 active+clean; 88 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.4 KiB/s wr, 34 op/s
Oct 11 00:03:55 np0005480824 ceph-mon[74326]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Oct 11 00:03:55 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:03:55.294630) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 00:03:55 np0005480824 ceph-mon[74326]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Oct 11 00:03:55 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155435294658, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 522, "num_deletes": 253, "total_data_size": 465235, "memory_usage": 474872, "flush_reason": "Manual Compaction"}
Oct 11 00:03:55 np0005480824 ceph-mon[74326]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Oct 11 00:03:55 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155435298926, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 460110, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36443, "largest_seqno": 36964, "table_properties": {"data_size": 457131, "index_size": 951, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7123, "raw_average_key_size": 19, "raw_value_size": 451195, "raw_average_value_size": 1232, "num_data_blocks": 42, "num_entries": 366, "num_filter_entries": 366, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760155411, "oldest_key_time": 1760155411, "file_creation_time": 1760155435, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc2c00b6-74ab-4bd1-957a-6c6a75fb61ca", "db_session_id": "RJ9TM4FJNNQ2AWDFT4YB", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Oct 11 00:03:55 np0005480824 ceph-mon[74326]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 4332 microseconds, and 1517 cpu microseconds.
Oct 11 00:03:55 np0005480824 ceph-mon[74326]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 00:03:55 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:03:55.298960) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 460110 bytes OK
Oct 11 00:03:55 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:03:55.298975) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Oct 11 00:03:55 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:03:55.301685) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Oct 11 00:03:55 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:03:55.301712) EVENT_LOG_v1 {"time_micros": 1760155435301703, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 00:03:55 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:03:55.301737) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 00:03:55 np0005480824 ceph-mon[74326]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 462204, prev total WAL file size 462204, number of live WAL files 2.
Oct 11 00:03:55 np0005480824 ceph-mon[74326]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 00:03:55 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:03:55.302315) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Oct 11 00:03:55 np0005480824 ceph-mon[74326]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 00:03:55 np0005480824 ceph-mon[74326]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(449KB)], [74(11MB)]
Oct 11 00:03:55 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155435302385, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 12042201, "oldest_snapshot_seqno": -1}
Oct 11 00:03:55 np0005480824 ceph-mon[74326]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 6656 keys, 10288937 bytes, temperature: kUnknown
Oct 11 00:03:55 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155435373753, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 10288937, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10238578, "index_size": 32647, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16645, "raw_key_size": 169075, "raw_average_key_size": 25, "raw_value_size": 10113022, "raw_average_value_size": 1519, "num_data_blocks": 1299, "num_entries": 6656, "num_filter_entries": 6656, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760152715, "oldest_key_time": 0, "file_creation_time": 1760155435, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc2c00b6-74ab-4bd1-957a-6c6a75fb61ca", "db_session_id": "RJ9TM4FJNNQ2AWDFT4YB", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Oct 11 00:03:55 np0005480824 ceph-mon[74326]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 00:03:55 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:03:55.374078) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 10288937 bytes
Oct 11 00:03:55 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:03:55.375761) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 168.5 rd, 144.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 11.0 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(48.5) write-amplify(22.4) OK, records in: 7172, records dropped: 516 output_compression: NoCompression
Oct 11 00:03:55 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:03:55.375791) EVENT_LOG_v1 {"time_micros": 1760155435375778, "job": 42, "event": "compaction_finished", "compaction_time_micros": 71456, "compaction_time_cpu_micros": 31811, "output_level": 6, "num_output_files": 1, "total_output_size": 10288937, "num_input_records": 7172, "num_output_records": 6656, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 00:03:55 np0005480824 ceph-mon[74326]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 00:03:55 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155435376099, "job": 42, "event": "table_file_deletion", "file_number": 76}
Oct 11 00:03:55 np0005480824 ceph-mon[74326]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 00:03:55 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155435380238, "job": 42, "event": "table_file_deletion", "file_number": 74}
Oct 11 00:03:55 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:03:55.302204) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 00:03:55 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:03:55.380352) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 00:03:55 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:03:55.380358) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 00:03:55 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:03:55.380360) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 00:03:55 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:03:55.380361) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 00:03:55 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:03:55.380363) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 00:03:55 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e491 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:03:56 np0005480824 podman[301897]: 2025-10-11 04:03:56.992236312 +0000 UTC m=+0.050889642 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 00:03:57 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1816: 321 pgs: 321 active+clean; 88 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 30 op/s
Oct 11 00:03:57 np0005480824 nova_compute[260089]: 2025-10-11 04:03:57.251 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760155422.2499707, 1364751a-4bbf-49e1-abe3-f702f03be8e3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 00:03:57 np0005480824 nova_compute[260089]: 2025-10-11 04:03:57.251 2 INFO nova.compute.manager [-] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] VM Stopped (Lifecycle Event)#033[00m
Oct 11 00:03:57 np0005480824 nova_compute[260089]: 2025-10-11 04:03:57.276 2 DEBUG nova.compute.manager [None req-0c71f0d4-2083-4551-aa4a-8acfef177b06 - - - - - -] [instance: 1364751a-4bbf-49e1-abe3-f702f03be8e3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 00:03:57 np0005480824 nova_compute[260089]: 2025-10-11 04:03:57.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:03:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 00:03:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 00:03:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 00:03:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 00:03:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 00:03:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 00:03:58 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 00:03:58 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3098096241' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 00:03:58 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 00:03:58 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3098096241' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 00:03:59 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1817: 321 pgs: 321 active+clean; 88 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.5 KiB/s wr, 28 op/s
Oct 11 00:03:59 np0005480824 nova_compute[260089]: 2025-10-11 04:03:59.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:04:00 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e491 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:04:01 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1818: 321 pgs: 321 active+clean; 88 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 938 B/s wr, 19 op/s
Oct 11 00:04:01 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 00:04:01 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/121806110' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 00:04:01 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 00:04:01 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/121806110' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 00:04:02 np0005480824 nova_compute[260089]: 2025-10-11 04:04:02.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:04:03 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1819: 321 pgs: 321 active+clean; 88 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 1.7 KiB/s wr, 47 op/s
Oct 11 00:04:04 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 00:04:04 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2776368634' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 00:04:04 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 00:04:04 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2776368634' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 00:04:04 np0005480824 nova_compute[260089]: 2025-10-11 04:04:04.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:04:05 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1820: 321 pgs: 321 active+clean; 88 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct 11 00:04:05 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e491 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:04:07 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1821: 321 pgs: 321 active+clean; 88 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 1.8 KiB/s wr, 44 op/s
Oct 11 00:04:07 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 00:04:07 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1924452283' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 00:04:07 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 00:04:07 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1924452283' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 00:04:07 np0005480824 nova_compute[260089]: 2025-10-11 04:04:07.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:04:09 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1822: 321 pgs: 321 active+clean; 88 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 2.1 KiB/s wr, 49 op/s
Oct 11 00:04:09 np0005480824 nova_compute[260089]: 2025-10-11 04:04:09.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:04:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:04:10.509 162245 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:04:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:04:10.510 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:04:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:04:10.510 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:04:10 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e491 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:04:11 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1823: 321 pgs: 321 active+clean; 88 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 1.7 KiB/s wr, 48 op/s
Oct 11 00:04:12 np0005480824 nova_compute[260089]: 2025-10-11 04:04:12.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:04:13 np0005480824 podman[301918]: 2025-10-11 04:04:13.025897235 +0000 UTC m=+0.048702590 container health_status 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=multipathd)
Oct 11 00:04:13 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1824: 321 pgs: 321 active+clean; 88 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 2.0 KiB/s wr, 56 op/s
Oct 11 00:04:13 np0005480824 podman[301919]: 2025-10-11 04:04:13.052528974 +0000 UTC m=+0.062287920 container health_status f2b19cad22d0fbdd185b264c3c8b6443d09a02319dc9fb0585dc69a0f24a758d (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 11 00:04:14 np0005480824 nova_compute[260089]: 2025-10-11 04:04:14.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:04:15 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1825: 321 pgs: 321 active+clean; 88 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct 11 00:04:15 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e491 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:04:17 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1826: 321 pgs: 321 active+clean; 88 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct 11 00:04:17 np0005480824 nova_compute[260089]: 2025-10-11 04:04:17.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:04:19 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1827: 321 pgs: 321 active+clean; 88 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 9.7 KiB/s rd, 511 B/s wr, 13 op/s
Oct 11 00:04:19 np0005480824 nova_compute[260089]: 2025-10-11 04:04:19.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:04:20 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e491 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:04:21 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1828: 321 pgs: 321 active+clean; 88 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 6.7 KiB/s rd, 255 B/s wr, 8 op/s
Oct 11 00:04:21 np0005480824 podman[301957]: 2025-10-11 04:04:21.081448748 +0000 UTC m=+0.137271299 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 11 00:04:22 np0005480824 nova_compute[260089]: 2025-10-11 04:04:22.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:04:23 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1829: 321 pgs: 321 active+clean; 88 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 6.7 KiB/s rd, 255 B/s wr, 8 op/s
Oct 11 00:04:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 00:04:24 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4109272249' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 00:04:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 00:04:24 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4109272249' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 00:04:24 np0005480824 nova_compute[260089]: 2025-10-11 04:04:24.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:04:25 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1830: 321 pgs: 321 active+clean; 88 MiB data, 457 MiB used, 60 GiB / 60 GiB avail
Oct 11 00:04:25 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e491 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:04:27 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1831: 321 pgs: 321 active+clean; 88 MiB data, 457 MiB used, 60 GiB / 60 GiB avail
Oct 11 00:04:27 np0005480824 nova_compute[260089]: 2025-10-11 04:04:27.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:04:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 00:04:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 00:04:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 00:04:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 00:04:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 00:04:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 00:04:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Optimize plan auto_2025-10-11_04:04:27
Oct 11 00:04:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 00:04:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] do_upmap
Oct 11 00:04:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] pools ['.mgr', 'backups', 'cephfs.cephfs.meta', '.rgw.root', 'volumes', 'cephfs.cephfs.data', 'vms', 'images', 'default.rgw.control', 'default.rgw.log', 'default.rgw.meta']
Oct 11 00:04:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] prepared 0/10 changes
Oct 11 00:04:28 np0005480824 podman[301986]: 2025-10-11 04:04:28.018504708 +0000 UTC m=+0.065834684 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 00:04:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 00:04:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 00:04:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 00:04:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 00:04:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 00:04:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 00:04:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 00:04:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 00:04:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 00:04:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 00:04:28 np0005480824 ceph-mgr[74617]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3841581780
Oct 11 00:04:28 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 00:04:28 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 00:04:28 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 00:04:28 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 00:04:28 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 00:04:28 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 11 00:04:28 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 51da7979-98f6-49e9-85dc-3d63393c8cca does not exist
Oct 11 00:04:28 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 0fa8eeab-5063-4b08-9b67-b458f04fc96f does not exist
Oct 11 00:04:28 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev c955ca84-114a-4a20-8257-98bd2805ccc5 does not exist
Oct 11 00:04:28 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 00:04:28 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 00:04:28 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 00:04:28 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 00:04:28 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 00:04:28 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 00:04:29 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1832: 321 pgs: 321 active+clean; 88 MiB data, 457 MiB used, 60 GiB / 60 GiB avail
Oct 11 00:04:29 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 00:04:29 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 11 00:04:29 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 00:04:29 np0005480824 podman[302279]: 2025-10-11 04:04:29.812330833 +0000 UTC m=+0.055811088 container create 95630c9027bdaae502284a808f5663af0107d0bc6e21d6f0ae1d53c754dfc3f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 11 00:04:29 np0005480824 nova_compute[260089]: 2025-10-11 04:04:29.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:04:29 np0005480824 systemd[1]: Started libpod-conmon-95630c9027bdaae502284a808f5663af0107d0bc6e21d6f0ae1d53c754dfc3f7.scope.
Oct 11 00:04:29 np0005480824 podman[302279]: 2025-10-11 04:04:29.787692903 +0000 UTC m=+0.031173148 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 00:04:29 np0005480824 systemd[1]: Started libcrun container.
Oct 11 00:04:29 np0005480824 podman[302279]: 2025-10-11 04:04:29.918514128 +0000 UTC m=+0.161994443 container init 95630c9027bdaae502284a808f5663af0107d0bc6e21d6f0ae1d53c754dfc3f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_thompson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 11 00:04:29 np0005480824 podman[302279]: 2025-10-11 04:04:29.931236348 +0000 UTC m=+0.174716583 container start 95630c9027bdaae502284a808f5663af0107d0bc6e21d6f0ae1d53c754dfc3f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_thompson, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 11 00:04:29 np0005480824 podman[302279]: 2025-10-11 04:04:29.935654052 +0000 UTC m=+0.179134317 container attach 95630c9027bdaae502284a808f5663af0107d0bc6e21d6f0ae1d53c754dfc3f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_thompson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 00:04:29 np0005480824 hungry_thompson[302295]: 167 167
Oct 11 00:04:29 np0005480824 systemd[1]: libpod-95630c9027bdaae502284a808f5663af0107d0bc6e21d6f0ae1d53c754dfc3f7.scope: Deactivated successfully.
Oct 11 00:04:29 np0005480824 conmon[302295]: conmon 95630c9027bdaae50228 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-95630c9027bdaae502284a808f5663af0107d0bc6e21d6f0ae1d53c754dfc3f7.scope/container/memory.events
Oct 11 00:04:29 np0005480824 podman[302279]: 2025-10-11 04:04:29.943290463 +0000 UTC m=+0.186770728 container died 95630c9027bdaae502284a808f5663af0107d0bc6e21d6f0ae1d53c754dfc3f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 11 00:04:29 np0005480824 systemd[1]: var-lib-containers-storage-overlay-cdbb9a25dc9bb10d0b37567b3091e345646b7c65d35e91fc5a6917b68320d8da-merged.mount: Deactivated successfully.
Oct 11 00:04:29 np0005480824 podman[302279]: 2025-10-11 04:04:29.998020234 +0000 UTC m=+0.241500459 container remove 95630c9027bdaae502284a808f5663af0107d0bc6e21d6f0ae1d53c754dfc3f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_thompson, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 00:04:30 np0005480824 systemd[1]: libpod-conmon-95630c9027bdaae502284a808f5663af0107d0bc6e21d6f0ae1d53c754dfc3f7.scope: Deactivated successfully.
Oct 11 00:04:30 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e491 do_prune osdmap full prune enabled
Oct 11 00:04:30 np0005480824 podman[302319]: 2025-10-11 04:04:30.297816916 +0000 UTC m=+0.125394990 container create 537bf642d2c0cffac55dc6501a57f0631ee39adc28298730d18c609066f44c3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_kilby, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 00:04:30 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e492 e492: 3 total, 3 up, 3 in
Oct 11 00:04:30 np0005480824 podman[302319]: 2025-10-11 04:04:30.214099521 +0000 UTC m=+0.041677605 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 00:04:30 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e492: 3 total, 3 up, 3 in
Oct 11 00:04:30 np0005480824 systemd[1]: Started libpod-conmon-537bf642d2c0cffac55dc6501a57f0631ee39adc28298730d18c609066f44c3d.scope.
Oct 11 00:04:30 np0005480824 systemd[1]: Started libcrun container.
Oct 11 00:04:30 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fc734062d6b9c688be119fa35a04e02ffed01b8d4c5b6abdc931a326358037f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 00:04:30 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fc734062d6b9c688be119fa35a04e02ffed01b8d4c5b6abdc931a326358037f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 00:04:30 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fc734062d6b9c688be119fa35a04e02ffed01b8d4c5b6abdc931a326358037f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 00:04:30 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fc734062d6b9c688be119fa35a04e02ffed01b8d4c5b6abdc931a326358037f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 00:04:30 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fc734062d6b9c688be119fa35a04e02ffed01b8d4c5b6abdc931a326358037f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 00:04:30 np0005480824 podman[302319]: 2025-10-11 04:04:30.396928624 +0000 UTC m=+0.224506668 container init 537bf642d2c0cffac55dc6501a57f0631ee39adc28298730d18c609066f44c3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_kilby, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 11 00:04:30 np0005480824 podman[302319]: 2025-10-11 04:04:30.402927855 +0000 UTC m=+0.230505889 container start 537bf642d2c0cffac55dc6501a57f0631ee39adc28298730d18c609066f44c3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 11 00:04:30 np0005480824 podman[302319]: 2025-10-11 04:04:30.406501099 +0000 UTC m=+0.234079123 container attach 537bf642d2c0cffac55dc6501a57f0631ee39adc28298730d18c609066f44c3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 11 00:04:30 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e492 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:04:31 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1834: 321 pgs: 321 active+clean; 88 MiB data, 457 MiB used, 60 GiB / 60 GiB avail
Oct 11 00:04:31 np0005480824 peaceful_kilby[302336]: --> passed data devices: 0 physical, 3 LVM
Oct 11 00:04:31 np0005480824 peaceful_kilby[302336]: --> relative data size: 1.0
Oct 11 00:04:31 np0005480824 peaceful_kilby[302336]: --> All data devices are unavailable
Oct 11 00:04:31 np0005480824 systemd[1]: libpod-537bf642d2c0cffac55dc6501a57f0631ee39adc28298730d18c609066f44c3d.scope: Deactivated successfully.
Oct 11 00:04:31 np0005480824 podman[302319]: 2025-10-11 04:04:31.607169282 +0000 UTC m=+1.434747326 container died 537bf642d2c0cffac55dc6501a57f0631ee39adc28298730d18c609066f44c3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_kilby, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 11 00:04:31 np0005480824 systemd[1]: libpod-537bf642d2c0cffac55dc6501a57f0631ee39adc28298730d18c609066f44c3d.scope: Consumed 1.157s CPU time.
Oct 11 00:04:31 np0005480824 systemd[1]: var-lib-containers-storage-overlay-4fc734062d6b9c688be119fa35a04e02ffed01b8d4c5b6abdc931a326358037f-merged.mount: Deactivated successfully.
Oct 11 00:04:31 np0005480824 podman[302319]: 2025-10-11 04:04:31.700082864 +0000 UTC m=+1.527660938 container remove 537bf642d2c0cffac55dc6501a57f0631ee39adc28298730d18c609066f44c3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_kilby, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 11 00:04:31 np0005480824 systemd[1]: libpod-conmon-537bf642d2c0cffac55dc6501a57f0631ee39adc28298730d18c609066f44c3d.scope: Deactivated successfully.
Oct 11 00:04:32 np0005480824 nova_compute[260089]: 2025-10-11 04:04:32.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:04:32 np0005480824 podman[302517]: 2025-10-11 04:04:32.53717109 +0000 UTC m=+0.036617684 container create c2d357620c526c0ddd0e7b8720c8bf260e4a8d3db02742549fe242c7a413b63b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_pascal, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 11 00:04:32 np0005480824 systemd[1]: Started libpod-conmon-c2d357620c526c0ddd0e7b8720c8bf260e4a8d3db02742549fe242c7a413b63b.scope.
Oct 11 00:04:32 np0005480824 systemd[1]: Started libcrun container.
Oct 11 00:04:32 np0005480824 podman[302517]: 2025-10-11 04:04:32.520788414 +0000 UTC m=+0.020234998 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 00:04:32 np0005480824 podman[302517]: 2025-10-11 04:04:32.616830379 +0000 UTC m=+0.116276983 container init c2d357620c526c0ddd0e7b8720c8bf260e4a8d3db02742549fe242c7a413b63b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_pascal, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 11 00:04:32 np0005480824 podman[302517]: 2025-10-11 04:04:32.622454302 +0000 UTC m=+0.121900866 container start c2d357620c526c0ddd0e7b8720c8bf260e4a8d3db02742549fe242c7a413b63b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_pascal, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 00:04:32 np0005480824 podman[302517]: 2025-10-11 04:04:32.625728549 +0000 UTC m=+0.125175173 container attach c2d357620c526c0ddd0e7b8720c8bf260e4a8d3db02742549fe242c7a413b63b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_pascal, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Oct 11 00:04:32 np0005480824 elated_pascal[302533]: 167 167
Oct 11 00:04:32 np0005480824 systemd[1]: libpod-c2d357620c526c0ddd0e7b8720c8bf260e4a8d3db02742549fe242c7a413b63b.scope: Deactivated successfully.
Oct 11 00:04:32 np0005480824 podman[302517]: 2025-10-11 04:04:32.627806329 +0000 UTC m=+0.127252893 container died c2d357620c526c0ddd0e7b8720c8bf260e4a8d3db02742549fe242c7a413b63b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_pascal, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Oct 11 00:04:32 np0005480824 systemd[1]: var-lib-containers-storage-overlay-cb40682c72006ba6c54f3a3773de987788793a9e73a419330a779ec59fa35af5-merged.mount: Deactivated successfully.
Oct 11 00:04:32 np0005480824 podman[302517]: 2025-10-11 04:04:32.667127786 +0000 UTC m=+0.166574360 container remove c2d357620c526c0ddd0e7b8720c8bf260e4a8d3db02742549fe242c7a413b63b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_pascal, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 00:04:32 np0005480824 systemd[1]: libpod-conmon-c2d357620c526c0ddd0e7b8720c8bf260e4a8d3db02742549fe242c7a413b63b.scope: Deactivated successfully.
Oct 11 00:04:32 np0005480824 podman[302556]: 2025-10-11 04:04:32.80800817 +0000 UTC m=+0.035297515 container create d8023987e95c48e49c0a908d5da9f147e571c3bd7173b1894c3804b4bcc9f93d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_cori, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 00:04:32 np0005480824 systemd[1]: Started libpod-conmon-d8023987e95c48e49c0a908d5da9f147e571c3bd7173b1894c3804b4bcc9f93d.scope.
Oct 11 00:04:32 np0005480824 systemd[1]: Started libcrun container.
Oct 11 00:04:32 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/917980626bf1a5a34b9da198b43df487c0b48b64b6d372611ee38697d0ece444/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 00:04:32 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/917980626bf1a5a34b9da198b43df487c0b48b64b6d372611ee38697d0ece444/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 00:04:32 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/917980626bf1a5a34b9da198b43df487c0b48b64b6d372611ee38697d0ece444/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 00:04:32 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/917980626bf1a5a34b9da198b43df487c0b48b64b6d372611ee38697d0ece444/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 00:04:32 np0005480824 podman[302556]: 2025-10-11 04:04:32.794147933 +0000 UTC m=+0.021437308 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 00:04:32 np0005480824 podman[302556]: 2025-10-11 04:04:32.891447757 +0000 UTC m=+0.118737112 container init d8023987e95c48e49c0a908d5da9f147e571c3bd7173b1894c3804b4bcc9f93d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 00:04:32 np0005480824 podman[302556]: 2025-10-11 04:04:32.897977872 +0000 UTC m=+0.125267227 container start d8023987e95c48e49c0a908d5da9f147e571c3bd7173b1894c3804b4bcc9f93d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_cori, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 00:04:32 np0005480824 podman[302556]: 2025-10-11 04:04:32.90171561 +0000 UTC m=+0.129004995 container attach d8023987e95c48e49c0a908d5da9f147e571c3bd7173b1894c3804b4bcc9f93d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_cori, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 11 00:04:33 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1835: 321 pgs: 321 active+clean; 88 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 1.8 KiB/s wr, 21 op/s
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]: {
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:    "0": [
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:        {
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:            "devices": [
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:                "/dev/loop3"
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:            ],
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:            "lv_name": "ceph_lv0",
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:            "lv_size": "21470642176",
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0d82ce-20ea-470d-959e-f67202028a60,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:            "lv_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:            "name": "ceph_lv0",
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:            "tags": {
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:                "ceph.block_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:                "ceph.cephx_lockbox_secret": "",
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:                "ceph.cluster_name": "ceph",
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:                "ceph.crush_device_class": "",
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:                "ceph.encrypted": "0",
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:                "ceph.osd_fsid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:                "ceph.osd_id": "0",
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:                "ceph.type": "block",
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:                "ceph.vdo": "0"
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:            },
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:            "type": "block",
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:            "vg_name": "ceph_vg0"
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:        }
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:    ],
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:    "1": [
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:        {
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:            "devices": [
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:                "/dev/loop4"
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:            ],
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:            "lv_name": "ceph_lv1",
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:            "lv_size": "21470642176",
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6875119e-c210-4ad1-aca9-6a8084a5ecc8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:            "lv_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:            "name": "ceph_lv1",
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:            "tags": {
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:                "ceph.block_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:                "ceph.cephx_lockbox_secret": "",
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:                "ceph.cluster_name": "ceph",
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:                "ceph.crush_device_class": "",
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:                "ceph.encrypted": "0",
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:                "ceph.osd_fsid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:                "ceph.osd_id": "1",
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:                "ceph.type": "block",
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:                "ceph.vdo": "0"
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:            },
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:            "type": "block",
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:            "vg_name": "ceph_vg1"
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:        }
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:    ],
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:    "2": [
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:        {
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:            "devices": [
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:                "/dev/loop5"
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:            ],
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:            "lv_name": "ceph_lv2",
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:            "lv_size": "21470642176",
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e86945e8-6909-4584-9098-cee0dfe9add4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:            "lv_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:            "name": "ceph_lv2",
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:            "tags": {
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:                "ceph.block_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:                "ceph.cephx_lockbox_secret": "",
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:                "ceph.cluster_name": "ceph",
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:                "ceph.crush_device_class": "",
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:                "ceph.encrypted": "0",
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:                "ceph.osd_fsid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:                "ceph.osd_id": "2",
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:                "ceph.type": "block",
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:                "ceph.vdo": "0"
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:            },
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:            "type": "block",
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:            "vg_name": "ceph_vg2"
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:        }
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]:    ]
Oct 11 00:04:33 np0005480824 wonderful_cori[302572]: }
Oct 11 00:04:33 np0005480824 systemd[1]: libpod-d8023987e95c48e49c0a908d5da9f147e571c3bd7173b1894c3804b4bcc9f93d.scope: Deactivated successfully.
Oct 11 00:04:33 np0005480824 podman[302556]: 2025-10-11 04:04:33.651038466 +0000 UTC m=+0.878327821 container died d8023987e95c48e49c0a908d5da9f147e571c3bd7173b1894c3804b4bcc9f93d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 11 00:04:33 np0005480824 systemd[1]: var-lib-containers-storage-overlay-917980626bf1a5a34b9da198b43df487c0b48b64b6d372611ee38697d0ece444-merged.mount: Deactivated successfully.
Oct 11 00:04:33 np0005480824 podman[302556]: 2025-10-11 04:04:33.724689073 +0000 UTC m=+0.951978438 container remove d8023987e95c48e49c0a908d5da9f147e571c3bd7173b1894c3804b4bcc9f93d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_cori, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 00:04:33 np0005480824 systemd[1]: libpod-conmon-d8023987e95c48e49c0a908d5da9f147e571c3bd7173b1894c3804b4bcc9f93d.scope: Deactivated successfully.
Oct 11 00:04:34 np0005480824 podman[302736]: 2025-10-11 04:04:34.337513999 +0000 UTC m=+0.037563287 container create 1ba0c76023239c1cfd67520d214fb12342010ab064c50a44721217dded27cfc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 11 00:04:34 np0005480824 systemd[1]: Started libpod-conmon-1ba0c76023239c1cfd67520d214fb12342010ab064c50a44721217dded27cfc0.scope.
Oct 11 00:04:34 np0005480824 systemd[1]: Started libcrun container.
Oct 11 00:04:34 np0005480824 podman[302736]: 2025-10-11 04:04:34.32103534 +0000 UTC m=+0.021084668 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 00:04:34 np0005480824 podman[302736]: 2025-10-11 04:04:34.418798116 +0000 UTC m=+0.118847434 container init 1ba0c76023239c1cfd67520d214fb12342010ab064c50a44721217dded27cfc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_rubin, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 00:04:34 np0005480824 podman[302736]: 2025-10-11 04:04:34.425644598 +0000 UTC m=+0.125693896 container start 1ba0c76023239c1cfd67520d214fb12342010ab064c50a44721217dded27cfc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_rubin, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 00:04:34 np0005480824 podman[302736]: 2025-10-11 04:04:34.429026537 +0000 UTC m=+0.129075845 container attach 1ba0c76023239c1cfd67520d214fb12342010ab064c50a44721217dded27cfc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_rubin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 11 00:04:34 np0005480824 tender_rubin[302753]: 167 167
Oct 11 00:04:34 np0005480824 systemd[1]: libpod-1ba0c76023239c1cfd67520d214fb12342010ab064c50a44721217dded27cfc0.scope: Deactivated successfully.
Oct 11 00:04:34 np0005480824 podman[302736]: 2025-10-11 04:04:34.432905469 +0000 UTC m=+0.132954767 container died 1ba0c76023239c1cfd67520d214fb12342010ab064c50a44721217dded27cfc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 00:04:34 np0005480824 systemd[1]: var-lib-containers-storage-overlay-2ff4e4e621dbea3f54691ba4dbf070183903f9d71efc1f65a0a6370c2aba5d95-merged.mount: Deactivated successfully.
Oct 11 00:04:34 np0005480824 podman[302736]: 2025-10-11 04:04:34.46427603 +0000 UTC m=+0.164325328 container remove 1ba0c76023239c1cfd67520d214fb12342010ab064c50a44721217dded27cfc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_rubin, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 00:04:34 np0005480824 systemd[1]: libpod-conmon-1ba0c76023239c1cfd67520d214fb12342010ab064c50a44721217dded27cfc0.scope: Deactivated successfully.
Oct 11 00:04:34 np0005480824 podman[302776]: 2025-10-11 04:04:34.650414731 +0000 UTC m=+0.045418093 container create 03ca1e18822035b6bc44c29b03ceb9215dba0f668832adddcfb6b590450aff49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 11 00:04:34 np0005480824 systemd[1]: Started libpod-conmon-03ca1e18822035b6bc44c29b03ceb9215dba0f668832adddcfb6b590450aff49.scope.
Oct 11 00:04:34 np0005480824 podman[302776]: 2025-10-11 04:04:34.627915769 +0000 UTC m=+0.022919111 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 00:04:34 np0005480824 systemd[1]: Started libcrun container.
Oct 11 00:04:34 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46149bc1d1208fe08242a865e8dc49549bd0de5014d881050a87fdeebad5b23a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 00:04:34 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46149bc1d1208fe08242a865e8dc49549bd0de5014d881050a87fdeebad5b23a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 00:04:34 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46149bc1d1208fe08242a865e8dc49549bd0de5014d881050a87fdeebad5b23a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 00:04:34 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46149bc1d1208fe08242a865e8dc49549bd0de5014d881050a87fdeebad5b23a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 00:04:34 np0005480824 podman[302776]: 2025-10-11 04:04:34.771332463 +0000 UTC m=+0.166335865 container init 03ca1e18822035b6bc44c29b03ceb9215dba0f668832adddcfb6b590450aff49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 11 00:04:34 np0005480824 podman[302776]: 2025-10-11 04:04:34.778769918 +0000 UTC m=+0.173773230 container start 03ca1e18822035b6bc44c29b03ceb9215dba0f668832adddcfb6b590450aff49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_swanson, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 00:04:34 np0005480824 podman[302776]: 2025-10-11 04:04:34.785257491 +0000 UTC m=+0.180260893 container attach 03ca1e18822035b6bc44c29b03ceb9215dba0f668832adddcfb6b590450aff49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 11 00:04:34 np0005480824 nova_compute[260089]: 2025-10-11 04:04:34.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:04:35 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1836: 321 pgs: 321 active+clean; 88 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 1.8 KiB/s wr, 21 op/s
Oct 11 00:04:35 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e492 do_prune osdmap full prune enabled
Oct 11 00:04:35 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e493 e493: 3 total, 3 up, 3 in
Oct 11 00:04:35 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e493: 3 total, 3 up, 3 in
Oct 11 00:04:35 np0005480824 musing_swanson[302792]: {
Oct 11 00:04:35 np0005480824 musing_swanson[302792]:    "1d0d82ce-20ea-470d-959e-f67202028a60": {
Oct 11 00:04:35 np0005480824 musing_swanson[302792]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 11 00:04:35 np0005480824 musing_swanson[302792]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 00:04:35 np0005480824 musing_swanson[302792]:        "osd_id": 0,
Oct 11 00:04:35 np0005480824 musing_swanson[302792]:        "osd_uuid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 11 00:04:35 np0005480824 musing_swanson[302792]:        "type": "bluestore"
Oct 11 00:04:35 np0005480824 musing_swanson[302792]:    },
Oct 11 00:04:35 np0005480824 musing_swanson[302792]:    "6875119e-c210-4ad1-aca9-6a8084a5ecc8": {
Oct 11 00:04:35 np0005480824 musing_swanson[302792]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 11 00:04:35 np0005480824 musing_swanson[302792]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 00:04:35 np0005480824 musing_swanson[302792]:        "osd_id": 1,
Oct 11 00:04:35 np0005480824 musing_swanson[302792]:        "osd_uuid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 11 00:04:35 np0005480824 musing_swanson[302792]:        "type": "bluestore"
Oct 11 00:04:35 np0005480824 musing_swanson[302792]:    },
Oct 11 00:04:35 np0005480824 musing_swanson[302792]:    "e86945e8-6909-4584-9098-cee0dfe9add4": {
Oct 11 00:04:35 np0005480824 musing_swanson[302792]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 11 00:04:35 np0005480824 musing_swanson[302792]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 00:04:35 np0005480824 musing_swanson[302792]:        "osd_id": 2,
Oct 11 00:04:35 np0005480824 musing_swanson[302792]:        "osd_uuid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 11 00:04:35 np0005480824 musing_swanson[302792]:        "type": "bluestore"
Oct 11 00:04:35 np0005480824 musing_swanson[302792]:    }
Oct 11 00:04:35 np0005480824 musing_swanson[302792]: }
Oct 11 00:04:35 np0005480824 systemd[1]: libpod-03ca1e18822035b6bc44c29b03ceb9215dba0f668832adddcfb6b590450aff49.scope: Deactivated successfully.
Oct 11 00:04:35 np0005480824 podman[302776]: 2025-10-11 04:04:35.725663724 +0000 UTC m=+1.120667036 container died 03ca1e18822035b6bc44c29b03ceb9215dba0f668832adddcfb6b590450aff49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_swanson, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 11 00:04:35 np0005480824 systemd[1]: var-lib-containers-storage-overlay-46149bc1d1208fe08242a865e8dc49549bd0de5014d881050a87fdeebad5b23a-merged.mount: Deactivated successfully.
Oct 11 00:04:35 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:04:35 np0005480824 podman[302776]: 2025-10-11 04:04:35.781792299 +0000 UTC m=+1.176795611 container remove 03ca1e18822035b6bc44c29b03ceb9215dba0f668832adddcfb6b590450aff49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 11 00:04:35 np0005480824 systemd[1]: libpod-conmon-03ca1e18822035b6bc44c29b03ceb9215dba0f668832adddcfb6b590450aff49.scope: Deactivated successfully.
Oct 11 00:04:35 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 00:04:35 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 11 00:04:35 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 00:04:35 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 11 00:04:35 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 138d642f-a632-45fd-8e8d-b62303008c8b does not exist
Oct 11 00:04:35 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev b27f2c46-7382-404f-b426-f48631c26023 does not exist
Oct 11 00:04:35 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 00:04:35 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2745018919' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 00:04:35 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 00:04:35 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2745018919' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 00:04:36 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 11 00:04:36 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 11 00:04:37 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1838: 321 pgs: 321 active+clean; 88 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.7 KiB/s wr, 30 op/s
Oct 11 00:04:37 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 00:04:37 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2744498868' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 00:04:37 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 00:04:37 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2744498868' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 00:04:37 np0005480824 nova_compute[260089]: 2025-10-11 04:04:37.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:04:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 00:04:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:04:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 00:04:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:04:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 11 00:04:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:04:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0003474596275314189 of space, bias 1.0, pg target 0.10423788825942568 quantized to 32 (current 32)
Oct 11 00:04:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:04:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 11 00:04:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:04:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Oct 11 00:04:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:04:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 00:04:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:04:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 00:04:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:04:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 00:04:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:04:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 00:04:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:04:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 00:04:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:04:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 00:04:39 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1839: 321 pgs: 321 active+clean; 88 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 77 KiB/s rd, 3.4 KiB/s wr, 101 op/s
Oct 11 00:04:39 np0005480824 nova_compute[260089]: 2025-10-11 04:04:39.297 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:04:39 np0005480824 nova_compute[260089]: 2025-10-11 04:04:39.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:04:40 np0005480824 nova_compute[260089]: 2025-10-11 04:04:40.291 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:04:40 np0005480824 nova_compute[260089]: 2025-10-11 04:04:40.295 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:04:40 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:04:40 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e493 do_prune osdmap full prune enabled
Oct 11 00:04:40 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e494 e494: 3 total, 3 up, 3 in
Oct 11 00:04:40 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e494: 3 total, 3 up, 3 in
Oct 11 00:04:41 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1841: 321 pgs: 321 active+clean; 88 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s rd, 1.5 KiB/s wr, 84 op/s
Oct 11 00:04:42 np0005480824 nova_compute[260089]: 2025-10-11 04:04:42.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:04:43 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1842: 321 pgs: 321 active+clean; 88 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s rd, 1.9 KiB/s wr, 85 op/s
Oct 11 00:04:43 np0005480824 nova_compute[260089]: 2025-10-11 04:04:43.296 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:04:43 np0005480824 nova_compute[260089]: 2025-10-11 04:04:43.297 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:04:43 np0005480824 nova_compute[260089]: 2025-10-11 04:04:43.330 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:04:43 np0005480824 nova_compute[260089]: 2025-10-11 04:04:43.331 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:04:43 np0005480824 nova_compute[260089]: 2025-10-11 04:04:43.331 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:04:43 np0005480824 nova_compute[260089]: 2025-10-11 04:04:43.332 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 00:04:43 np0005480824 nova_compute[260089]: 2025-10-11 04:04:43.332 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:04:43 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 00:04:43 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/696804502' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 00:04:43 np0005480824 nova_compute[260089]: 2025-10-11 04:04:43.808 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:04:43 np0005480824 podman[302912]: 2025-10-11 04:04:43.939084963 +0000 UTC m=+0.083256885 container health_status f2b19cad22d0fbdd185b264c3c8b6443d09a02319dc9fb0585dc69a0f24a758d (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 11 00:04:43 np0005480824 podman[302910]: 2025-10-11 04:04:43.945663378 +0000 UTC m=+0.091314545 container health_status 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd)
Oct 11 00:04:43 np0005480824 nova_compute[260089]: 2025-10-11 04:04:43.977 2 WARNING nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 00:04:43 np0005480824 nova_compute[260089]: 2025-10-11 04:04:43.978 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4325MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 00:04:43 np0005480824 nova_compute[260089]: 2025-10-11 04:04:43.978 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:04:43 np0005480824 nova_compute[260089]: 2025-10-11 04:04:43.978 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:04:44 np0005480824 nova_compute[260089]: 2025-10-11 04:04:44.053 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 00:04:44 np0005480824 nova_compute[260089]: 2025-10-11 04:04:44.053 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 00:04:44 np0005480824 nova_compute[260089]: 2025-10-11 04:04:44.068 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:04:44 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 00:04:44 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1625010071' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 00:04:44 np0005480824 nova_compute[260089]: 2025-10-11 04:04:44.480 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:04:44 np0005480824 nova_compute[260089]: 2025-10-11 04:04:44.485 2 DEBUG nova.compute.provider_tree [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 00:04:44 np0005480824 nova_compute[260089]: 2025-10-11 04:04:44.504 2 DEBUG nova.scheduler.client.report [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 00:04:44 np0005480824 nova_compute[260089]: 2025-10-11 04:04:44.526 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 00:04:44 np0005480824 nova_compute[260089]: 2025-10-11 04:04:44.527 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.548s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:04:44 np0005480824 nova_compute[260089]: 2025-10-11 04:04:44.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:04:45 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1843: 321 pgs: 321 active+clean; 88 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 1.5 KiB/s wr, 70 op/s
Oct 11 00:04:45 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e494 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:04:45 np0005480824 nova_compute[260089]: 2025-10-11 04:04:45.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:04:45 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:04:45.835 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:30:f4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'fe:89:7c:57:3f:71'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 00:04:45 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:04:45.836 162245 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 11 00:04:45 np0005480824 ovn_controller[152667]: 2025-10-11T04:04:45Z|00256|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Oct 11 00:04:46 np0005480824 nova_compute[260089]: 2025-10-11 04:04:46.527 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:04:46 np0005480824 nova_compute[260089]: 2025-10-11 04:04:46.528 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:04:46 np0005480824 nova_compute[260089]: 2025-10-11 04:04:46.528 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 00:04:47 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1844: 321 pgs: 321 active+clean; 88 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 1.1 KiB/s wr, 65 op/s
Oct 11 00:04:47 np0005480824 nova_compute[260089]: 2025-10-11 04:04:47.298 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:04:47 np0005480824 nova_compute[260089]: 2025-10-11 04:04:47.298 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 00:04:47 np0005480824 nova_compute[260089]: 2025-10-11 04:04:47.299 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 11 00:04:47 np0005480824 nova_compute[260089]: 2025-10-11 04:04:47.312 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 11 00:04:47 np0005480824 nova_compute[260089]: 2025-10-11 04:04:47.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:04:48 np0005480824 nova_compute[260089]: 2025-10-11 04:04:48.297 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:04:49 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1845: 321 pgs: 321 active+clean; 88 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 307 B/s wr, 1 op/s
Oct 11 00:04:49 np0005480824 nova_compute[260089]: 2025-10-11 04:04:49.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:04:50 np0005480824 nova_compute[260089]: 2025-10-11 04:04:50.319 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:04:50 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e494 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:04:51 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1846: 321 pgs: 321 active+clean; 88 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 298 B/s wr, 1 op/s
Oct 11 00:04:52 np0005480824 podman[302972]: 2025-10-11 04:04:52.050580207 +0000 UTC m=+0.107840715 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 11 00:04:52 np0005480824 nova_compute[260089]: 2025-10-11 04:04:52.297 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:04:52 np0005480824 nova_compute[260089]: 2025-10-11 04:04:52.297 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct 11 00:04:52 np0005480824 nova_compute[260089]: 2025-10-11 04:04:52.320 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct 11 00:04:52 np0005480824 nova_compute[260089]: 2025-10-11 04:04:52.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:04:52 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:04:52.839 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=14b06507-d00b-4e27-a47d-46a5c2644635, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 00:04:53 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1847: 321 pgs: 321 active+clean; 88 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 255 B/s wr, 0 op/s
Oct 11 00:04:54 np0005480824 nova_compute[260089]: 2025-10-11 04:04:54.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:04:55 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1848: 321 pgs: 321 active+clean; 88 MiB data, 457 MiB used, 60 GiB / 60 GiB avail
Oct 11 00:04:55 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e494 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:04:57 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1849: 321 pgs: 321 active+clean; 88 MiB data, 457 MiB used, 60 GiB / 60 GiB avail
Oct 11 00:04:57 np0005480824 nova_compute[260089]: 2025-10-11 04:04:57.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:04:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 00:04:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 00:04:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 00:04:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 00:04:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 00:04:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 00:04:59 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1850: 321 pgs: 321 active+clean; 88 MiB data, 457 MiB used, 60 GiB / 60 GiB avail
Oct 11 00:04:59 np0005480824 podman[302999]: 2025-10-11 04:04:59.05902155 +0000 UTC m=+0.108145002 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251009, container_name=ovn_metadata_agent, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 11 00:04:59 np0005480824 nova_compute[260089]: 2025-10-11 04:04:59.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:05:00 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e494 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:05:01 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1851: 321 pgs: 321 active+clean; 88 MiB data, 457 MiB used, 60 GiB / 60 GiB avail
Oct 11 00:05:02 np0005480824 nova_compute[260089]: 2025-10-11 04:05:02.297 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:05:02 np0005480824 nova_compute[260089]: 2025-10-11 04:05:02.298 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct 11 00:05:02 np0005480824 nova_compute[260089]: 2025-10-11 04:05:02.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:05:03 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1852: 321 pgs: 321 active+clean; 88 MiB data, 457 MiB used, 60 GiB / 60 GiB avail
Oct 11 00:05:04 np0005480824 nova_compute[260089]: 2025-10-11 04:05:04.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:05:05 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1853: 321 pgs: 321 active+clean; 88 MiB data, 457 MiB used, 60 GiB / 60 GiB avail
Oct 11 00:05:05 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e494 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:05:07 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1854: 321 pgs: 321 active+clean; 88 MiB data, 457 MiB used, 60 GiB / 60 GiB avail
Oct 11 00:05:07 np0005480824 nova_compute[260089]: 2025-10-11 04:05:07.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:05:09 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1855: 321 pgs: 321 active+clean; 88 MiB data, 457 MiB used, 60 GiB / 60 GiB avail
Oct 11 00:05:09 np0005480824 nova_compute[260089]: 2025-10-11 04:05:09.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:05:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:05:10.510 162245 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:05:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:05:10.511 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:05:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:05:10.511 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:05:10 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e494 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:05:11 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1856: 321 pgs: 321 active+clean; 88 MiB data, 457 MiB used, 60 GiB / 60 GiB avail
Oct 11 00:05:12 np0005480824 nova_compute[260089]: 2025-10-11 04:05:12.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:05:13 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1857: 321 pgs: 321 active+clean; 88 MiB data, 457 MiB used, 60 GiB / 60 GiB avail
Oct 11 00:05:14 np0005480824 nova_compute[260089]: 2025-10-11 04:05:14.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:05:15 np0005480824 podman[303019]: 2025-10-11 04:05:15.012564082 +0000 UTC m=+0.072943741 container health_status f2b19cad22d0fbdd185b264c3c8b6443d09a02319dc9fb0585dc69a0f24a758d (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, tcib_managed=true, config_id=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 11 00:05:15 np0005480824 podman[303018]: 2025-10-11 04:05:15.024105284 +0000 UTC m=+0.080440578 container health_status 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 11 00:05:15 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1858: 321 pgs: 321 active+clean; 88 MiB data, 457 MiB used, 60 GiB / 60 GiB avail
Oct 11 00:05:15 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e494 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:05:17 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1859: 321 pgs: 321 active+clean; 88 MiB data, 457 MiB used, 60 GiB / 60 GiB avail
Oct 11 00:05:17 np0005480824 nova_compute[260089]: 2025-10-11 04:05:17.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:05:19 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1860: 321 pgs: 321 active+clean; 88 MiB data, 457 MiB used, 60 GiB / 60 GiB avail
Oct 11 00:05:20 np0005480824 nova_compute[260089]: 2025-10-11 04:05:20.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:05:20 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e494 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:05:21 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1861: 321 pgs: 321 active+clean; 88 MiB data, 457 MiB used, 60 GiB / 60 GiB avail
Oct 11 00:05:22 np0005480824 nova_compute[260089]: 2025-10-11 04:05:22.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:05:23 np0005480824 podman[303057]: 2025-10-11 04:05:23.031424903 +0000 UTC m=+0.086439250 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 11 00:05:23 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1862: 321 pgs: 321 active+clean; 88 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Oct 11 00:05:23 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e494 do_prune osdmap full prune enabled
Oct 11 00:05:23 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e495 e495: 3 total, 3 up, 3 in
Oct 11 00:05:23 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e495: 3 total, 3 up, 3 in
Oct 11 00:05:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 00:05:24 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3011140265' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 00:05:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 00:05:24 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3011140265' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 00:05:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 00:05:24 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4263983216' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 00:05:25 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1864: 321 pgs: 321 active+clean; 88 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 26 KiB/s wr, 5 op/s
Oct 11 00:05:25 np0005480824 nova_compute[260089]: 2025-10-11 04:05:25.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:05:25 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e495 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:05:27 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1865: 321 pgs: 321 active+clean; 88 MiB data, 458 MiB used, 60 GiB / 60 GiB avail; 4.5 KiB/s rd, 26 KiB/s wr, 7 op/s
Oct 11 00:05:27 np0005480824 nova_compute[260089]: 2025-10-11 04:05:27.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:05:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 00:05:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 00:05:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 00:05:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 00:05:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 00:05:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 00:05:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Optimize plan auto_2025-10-11_04:05:27
Oct 11 00:05:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 00:05:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] do_upmap
Oct 11 00:05:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] pools ['images', 'default.rgw.log', '.mgr', 'volumes', '.rgw.root', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.meta', 'cephfs.cephfs.data', 'backups']
Oct 11 00:05:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] prepared 0/10 changes
Oct 11 00:05:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 00:05:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 00:05:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 00:05:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 00:05:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 00:05:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 00:05:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 00:05:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 00:05:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 00:05:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 00:05:29 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1866: 321 pgs: 321 active+clean; 88 MiB data, 458 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 28 KiB/s wr, 34 op/s
Oct 11 00:05:30 np0005480824 podman[303083]: 2025-10-11 04:05:30.009174674 +0000 UTC m=+0.065508006 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct 11 00:05:30 np0005480824 nova_compute[260089]: 2025-10-11 04:05:30.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:05:30 np0005480824 nova_compute[260089]: 2025-10-11 04:05:30.604 2 DEBUG oslo_concurrency.lockutils [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Acquiring lock "7377f851-2bfd-4f43-9a9f-e08c288708bb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:05:30 np0005480824 nova_compute[260089]: 2025-10-11 04:05:30.604 2 DEBUG oslo_concurrency.lockutils [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Lock "7377f851-2bfd-4f43-9a9f-e08c288708bb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:05:30 np0005480824 nova_compute[260089]: 2025-10-11 04:05:30.624 2 DEBUG nova.compute.manager [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 00:05:30 np0005480824 nova_compute[260089]: 2025-10-11 04:05:30.734 2 DEBUG oslo_concurrency.lockutils [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:05:30 np0005480824 nova_compute[260089]: 2025-10-11 04:05:30.734 2 DEBUG oslo_concurrency.lockutils [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:05:30 np0005480824 nova_compute[260089]: 2025-10-11 04:05:30.743 2 DEBUG nova.virt.hardware [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 00:05:30 np0005480824 nova_compute[260089]: 2025-10-11 04:05:30.743 2 INFO nova.compute.claims [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 00:05:30 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e495 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:05:30 np0005480824 nova_compute[260089]: 2025-10-11 04:05:30.913 2 DEBUG oslo_concurrency.processutils [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:05:31 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1867: 321 pgs: 321 active+clean; 88 MiB data, 458 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 28 KiB/s wr, 34 op/s
Oct 11 00:05:31 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 00:05:31 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/238768132' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 00:05:31 np0005480824 nova_compute[260089]: 2025-10-11 04:05:31.339 2 DEBUG oslo_concurrency.processutils [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:05:31 np0005480824 nova_compute[260089]: 2025-10-11 04:05:31.346 2 DEBUG nova.compute.provider_tree [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 00:05:31 np0005480824 nova_compute[260089]: 2025-10-11 04:05:31.365 2 DEBUG nova.scheduler.client.report [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 00:05:31 np0005480824 nova_compute[260089]: 2025-10-11 04:05:31.389 2 DEBUG oslo_concurrency.lockutils [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.655s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:05:31 np0005480824 nova_compute[260089]: 2025-10-11 04:05:31.390 2 DEBUG nova.compute.manager [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 00:05:31 np0005480824 nova_compute[260089]: 2025-10-11 04:05:31.458 2 DEBUG nova.compute.manager [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 00:05:31 np0005480824 nova_compute[260089]: 2025-10-11 04:05:31.462 2 DEBUG nova.network.neutron [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 00:05:31 np0005480824 nova_compute[260089]: 2025-10-11 04:05:31.485 2 INFO nova.virt.libvirt.driver [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 00:05:31 np0005480824 nova_compute[260089]: 2025-10-11 04:05:31.508 2 DEBUG nova.compute.manager [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 00:05:31 np0005480824 nova_compute[260089]: 2025-10-11 04:05:31.656 2 DEBUG nova.compute.manager [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 00:05:31 np0005480824 nova_compute[260089]: 2025-10-11 04:05:31.658 2 DEBUG nova.virt.libvirt.driver [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 00:05:31 np0005480824 nova_compute[260089]: 2025-10-11 04:05:31.658 2 INFO nova.virt.libvirt.driver [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Creating image(s)#033[00m
Oct 11 00:05:31 np0005480824 nova_compute[260089]: 2025-10-11 04:05:31.679 2 DEBUG nova.storage.rbd_utils [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] rbd image 7377f851-2bfd-4f43-9a9f-e08c288708bb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 00:05:31 np0005480824 nova_compute[260089]: 2025-10-11 04:05:31.701 2 DEBUG nova.storage.rbd_utils [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] rbd image 7377f851-2bfd-4f43-9a9f-e08c288708bb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 00:05:31 np0005480824 nova_compute[260089]: 2025-10-11 04:05:31.722 2 DEBUG nova.storage.rbd_utils [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] rbd image 7377f851-2bfd-4f43-9a9f-e08c288708bb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 00:05:31 np0005480824 nova_compute[260089]: 2025-10-11 04:05:31.725 2 DEBUG oslo_concurrency.processutils [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cfffd1283a157d100c77a9cb8e3d536b83503a4e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:05:31 np0005480824 nova_compute[260089]: 2025-10-11 04:05:31.757 2 DEBUG nova.policy [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f9202e7d8882475ba6a769d9c59c35fd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6f367c6c5e8f479399a2004c82cfaff0', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 00:05:31 np0005480824 nova_compute[260089]: 2025-10-11 04:05:31.816 2 DEBUG oslo_concurrency.processutils [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cfffd1283a157d100c77a9cb8e3d536b83503a4e --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:05:31 np0005480824 nova_compute[260089]: 2025-10-11 04:05:31.817 2 DEBUG oslo_concurrency.lockutils [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Acquiring lock "cfffd1283a157d100c77a9cb8e3d536b83503a4e" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:05:31 np0005480824 nova_compute[260089]: 2025-10-11 04:05:31.818 2 DEBUG oslo_concurrency.lockutils [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Lock "cfffd1283a157d100c77a9cb8e3d536b83503a4e" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:05:31 np0005480824 nova_compute[260089]: 2025-10-11 04:05:31.818 2 DEBUG oslo_concurrency.lockutils [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Lock "cfffd1283a157d100c77a9cb8e3d536b83503a4e" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:05:31 np0005480824 nova_compute[260089]: 2025-10-11 04:05:31.844 2 DEBUG nova.storage.rbd_utils [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] rbd image 7377f851-2bfd-4f43-9a9f-e08c288708bb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 00:05:31 np0005480824 nova_compute[260089]: 2025-10-11 04:05:31.848 2 DEBUG oslo_concurrency.processutils [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/cfffd1283a157d100c77a9cb8e3d536b83503a4e 7377f851-2bfd-4f43-9a9f-e08c288708bb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:05:32 np0005480824 nova_compute[260089]: 2025-10-11 04:05:32.199 2 DEBUG oslo_concurrency.processutils [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/cfffd1283a157d100c77a9cb8e3d536b83503a4e 7377f851-2bfd-4f43-9a9f-e08c288708bb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.351s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:05:32 np0005480824 nova_compute[260089]: 2025-10-11 04:05:32.267 2 DEBUG nova.storage.rbd_utils [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] resizing rbd image 7377f851-2bfd-4f43-9a9f-e08c288708bb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 11 00:05:32 np0005480824 nova_compute[260089]: 2025-10-11 04:05:32.377 2 DEBUG nova.objects.instance [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Lazy-loading 'migration_context' on Instance uuid 7377f851-2bfd-4f43-9a9f-e08c288708bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 00:05:32 np0005480824 nova_compute[260089]: 2025-10-11 04:05:32.397 2 DEBUG nova.virt.libvirt.driver [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 11 00:05:32 np0005480824 nova_compute[260089]: 2025-10-11 04:05:32.398 2 DEBUG nova.virt.libvirt.driver [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Ensure instance console log exists: /var/lib/nova/instances/7377f851-2bfd-4f43-9a9f-e08c288708bb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 00:05:32 np0005480824 nova_compute[260089]: 2025-10-11 04:05:32.398 2 DEBUG oslo_concurrency.lockutils [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:05:32 np0005480824 nova_compute[260089]: 2025-10-11 04:05:32.400 2 DEBUG oslo_concurrency.lockutils [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:05:32 np0005480824 nova_compute[260089]: 2025-10-11 04:05:32.400 2 DEBUG oslo_concurrency.lockutils [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:05:32 np0005480824 nova_compute[260089]: 2025-10-11 04:05:32.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:05:33 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1868: 321 pgs: 321 active+clean; 112 MiB data, 458 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 626 KiB/s wr, 33 op/s
Oct 11 00:05:33 np0005480824 nova_compute[260089]: 2025-10-11 04:05:33.955 2 DEBUG nova.network.neutron [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Successfully created port: 2b7dcbb9-eeec-47e7-aeb5-7e8752f195c3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 00:05:34 np0005480824 nova_compute[260089]: 2025-10-11 04:05:34.837 2 DEBUG nova.network.neutron [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Successfully updated port: 2b7dcbb9-eeec-47e7-aeb5-7e8752f195c3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 00:05:34 np0005480824 nova_compute[260089]: 2025-10-11 04:05:34.859 2 DEBUG oslo_concurrency.lockutils [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Acquiring lock "refresh_cache-7377f851-2bfd-4f43-9a9f-e08c288708bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 00:05:34 np0005480824 nova_compute[260089]: 2025-10-11 04:05:34.860 2 DEBUG oslo_concurrency.lockutils [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Acquired lock "refresh_cache-7377f851-2bfd-4f43-9a9f-e08c288708bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 00:05:34 np0005480824 nova_compute[260089]: 2025-10-11 04:05:34.860 2 DEBUG nova.network.neutron [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 00:05:34 np0005480824 nova_compute[260089]: 2025-10-11 04:05:34.931 2 DEBUG nova.compute.manager [req-b6613d80-a516-4945-be70-48e526aa7ea5 req-b7a04b78-254e-4183-b384-1121e3c3ecfc 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Received event network-changed-2b7dcbb9-eeec-47e7-aeb5-7e8752f195c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 00:05:34 np0005480824 nova_compute[260089]: 2025-10-11 04:05:34.932 2 DEBUG nova.compute.manager [req-b6613d80-a516-4945-be70-48e526aa7ea5 req-b7a04b78-254e-4183-b384-1121e3c3ecfc 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Refreshing instance network info cache due to event network-changed-2b7dcbb9-eeec-47e7-aeb5-7e8752f195c3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 00:05:34 np0005480824 nova_compute[260089]: 2025-10-11 04:05:34.932 2 DEBUG oslo_concurrency.lockutils [req-b6613d80-a516-4945-be70-48e526aa7ea5 req-b7a04b78-254e-4183-b384-1121e3c3ecfc 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "refresh_cache-7377f851-2bfd-4f43-9a9f-e08c288708bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 00:05:35 np0005480824 nova_compute[260089]: 2025-10-11 04:05:35.006 2 DEBUG nova.network.neutron [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 00:05:35 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1869: 321 pgs: 321 active+clean; 112 MiB data, 458 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 527 KiB/s wr, 28 op/s
Oct 11 00:05:35 np0005480824 nova_compute[260089]: 2025-10-11 04:05:35.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:05:35 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e495 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:05:35 np0005480824 nova_compute[260089]: 2025-10-11 04:05:35.841 2 DEBUG nova.network.neutron [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Updating instance_info_cache with network_info: [{"id": "2b7dcbb9-eeec-47e7-aeb5-7e8752f195c3", "address": "fa:16:3e:50:e1:bc", "network": {"id": "abadcf46-9a41-4911-85e0-fbcde2d48b79", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-654501219-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f367c6c5e8f479399a2004c82cfaff0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b7dcbb9-ee", "ovs_interfaceid": "2b7dcbb9-eeec-47e7-aeb5-7e8752f195c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 00:05:35 np0005480824 nova_compute[260089]: 2025-10-11 04:05:35.865 2 DEBUG oslo_concurrency.lockutils [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Releasing lock "refresh_cache-7377f851-2bfd-4f43-9a9f-e08c288708bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 00:05:35 np0005480824 nova_compute[260089]: 2025-10-11 04:05:35.866 2 DEBUG nova.compute.manager [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Instance network_info: |[{"id": "2b7dcbb9-eeec-47e7-aeb5-7e8752f195c3", "address": "fa:16:3e:50:e1:bc", "network": {"id": "abadcf46-9a41-4911-85e0-fbcde2d48b79", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-654501219-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f367c6c5e8f479399a2004c82cfaff0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b7dcbb9-ee", "ovs_interfaceid": "2b7dcbb9-eeec-47e7-aeb5-7e8752f195c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 00:05:35 np0005480824 nova_compute[260089]: 2025-10-11 04:05:35.866 2 DEBUG oslo_concurrency.lockutils [req-b6613d80-a516-4945-be70-48e526aa7ea5 req-b7a04b78-254e-4183-b384-1121e3c3ecfc 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquired lock "refresh_cache-7377f851-2bfd-4f43-9a9f-e08c288708bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 00:05:35 np0005480824 nova_compute[260089]: 2025-10-11 04:05:35.867 2 DEBUG nova.network.neutron [req-b6613d80-a516-4945-be70-48e526aa7ea5 req-b7a04b78-254e-4183-b384-1121e3c3ecfc 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Refreshing network info cache for port 2b7dcbb9-eeec-47e7-aeb5-7e8752f195c3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 00:05:35 np0005480824 nova_compute[260089]: 2025-10-11 04:05:35.873 2 DEBUG nova.virt.libvirt.driver [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Start _get_guest_xml network_info=[{"id": "2b7dcbb9-eeec-47e7-aeb5-7e8752f195c3", "address": "fa:16:3e:50:e1:bc", "network": {"id": "abadcf46-9a41-4911-85e0-fbcde2d48b79", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-654501219-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f367c6c5e8f479399a2004c82cfaff0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b7dcbb9-ee", "ovs_interfaceid": "2b7dcbb9-eeec-47e7-aeb5-7e8752f195c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T03:44:59Z,direct_url=<?>,disk_format='qcow2',id=7caca022-7dcc-40a9-8bd8-eb7d91b29390,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a9b71164a3274fcfb966194e51cb4849',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T03:45:02Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'encrypted': False, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'device_type': 'disk', 'image_id': '7caca022-7dcc-40a9-8bd8-eb7d91b29390'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 00:05:35 np0005480824 nova_compute[260089]: 2025-10-11 04:05:35.878 2 WARNING nova.virt.libvirt.driver [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 00:05:35 np0005480824 nova_compute[260089]: 2025-10-11 04:05:35.886 2 DEBUG nova.virt.libvirt.host [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 00:05:35 np0005480824 nova_compute[260089]: 2025-10-11 04:05:35.887 2 DEBUG nova.virt.libvirt.host [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 00:05:35 np0005480824 nova_compute[260089]: 2025-10-11 04:05:35.893 2 DEBUG nova.virt.libvirt.host [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 00:05:35 np0005480824 nova_compute[260089]: 2025-10-11 04:05:35.893 2 DEBUG nova.virt.libvirt.host [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 00:05:35 np0005480824 nova_compute[260089]: 2025-10-11 04:05:35.894 2 DEBUG nova.virt.libvirt.driver [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 00:05:35 np0005480824 nova_compute[260089]: 2025-10-11 04:05:35.894 2 DEBUG nova.virt.hardware [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T03:44:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6707ecae-2ae2-4c2d-86dc-409bac38f6a5',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T03:44:59Z,direct_url=<?>,disk_format='qcow2',id=7caca022-7dcc-40a9-8bd8-eb7d91b29390,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a9b71164a3274fcfb966194e51cb4849',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T03:45:02Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 00:05:35 np0005480824 nova_compute[260089]: 2025-10-11 04:05:35.894 2 DEBUG nova.virt.hardware [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 00:05:35 np0005480824 nova_compute[260089]: 2025-10-11 04:05:35.895 2 DEBUG nova.virt.hardware [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 00:05:35 np0005480824 nova_compute[260089]: 2025-10-11 04:05:35.895 2 DEBUG nova.virt.hardware [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 00:05:35 np0005480824 nova_compute[260089]: 2025-10-11 04:05:35.895 2 DEBUG nova.virt.hardware [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 00:05:35 np0005480824 nova_compute[260089]: 2025-10-11 04:05:35.895 2 DEBUG nova.virt.hardware [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 00:05:35 np0005480824 nova_compute[260089]: 2025-10-11 04:05:35.895 2 DEBUG nova.virt.hardware [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 00:05:35 np0005480824 nova_compute[260089]: 2025-10-11 04:05:35.896 2 DEBUG nova.virt.hardware [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 00:05:35 np0005480824 nova_compute[260089]: 2025-10-11 04:05:35.896 2 DEBUG nova.virt.hardware [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 00:05:35 np0005480824 nova_compute[260089]: 2025-10-11 04:05:35.896 2 DEBUG nova.virt.hardware [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 00:05:35 np0005480824 nova_compute[260089]: 2025-10-11 04:05:35.896 2 DEBUG nova.virt.hardware [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 00:05:35 np0005480824 nova_compute[260089]: 2025-10-11 04:05:35.899 2 DEBUG oslo_concurrency.processutils [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:05:36 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 00:05:36 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2905286566' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 00:05:36 np0005480824 nova_compute[260089]: 2025-10-11 04:05:36.392 2 DEBUG oslo_concurrency.processutils [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:05:36 np0005480824 nova_compute[260089]: 2025-10-11 04:05:36.414 2 DEBUG nova.storage.rbd_utils [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] rbd image 7377f851-2bfd-4f43-9a9f-e08c288708bb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 00:05:36 np0005480824 nova_compute[260089]: 2025-10-11 04:05:36.417 2 DEBUG oslo_concurrency.processutils [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:05:36 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 00:05:36 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 00:05:36 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 00:05:36 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 00:05:36 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 00:05:36 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 11 00:05:36 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 7644d64b-3b5d-45cf-b546-be393e89cdcb does not exist
Oct 11 00:05:36 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev ee6349e2-ea63-4b63-b02e-d1b98fd730f2 does not exist
Oct 11 00:05:36 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 31c2066e-6ea4-4f35-a3c2-a323758b4713 does not exist
Oct 11 00:05:36 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 00:05:36 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 00:05:36 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 00:05:36 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 00:05:36 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 00:05:36 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 00:05:36 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 00:05:36 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3520695024' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 00:05:36 np0005480824 nova_compute[260089]: 2025-10-11 04:05:36.826 2 DEBUG oslo_concurrency.processutils [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:05:36 np0005480824 nova_compute[260089]: 2025-10-11 04:05:36.828 2 DEBUG nova.virt.libvirt.vif [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T04:05:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1013089460',display_name='tempest-TestEncryptedCinderVolumes-server-1013089460',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1013089460',id=28,image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNJsDIDfSNAwePEgMuwA7xBN/CNWf2WTBxrS0eNSby1BYvryUdu81T4JHgw4ZgwBLm6Up7P/KY+UdANGn0GVi7gS1LoRSepP4VjwhsAtHrsZWXIIkKv1Uc4r08KMmzbHNA==',key_name='tempest-keypair-1355496260',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f367c6c5e8f479399a2004c82cfaff0',ramdisk_id='',reservation_id='r-h6oyfuj0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-781713731',owner_user_name='tempest-TestEncryptedCinderVolumes-781713731-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T04:05:31Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f9202e7d8882475ba6a769d9c59c35fd',uuid=7377f851-2bfd-4f43-9a9f-e08c288708bb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2b7dcbb9-eeec-47e7-aeb5-7e8752f195c3", "address": "fa:16:3e:50:e1:bc", "network": {"id": "abadcf46-9a41-4911-85e0-fbcde2d48b79", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-654501219-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f367c6c5e8f479399a2004c82cfaff0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b7dcbb9-ee", "ovs_interfaceid": "2b7dcbb9-eeec-47e7-aeb5-7e8752f195c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 00:05:36 np0005480824 nova_compute[260089]: 2025-10-11 04:05:36.828 2 DEBUG nova.network.os_vif_util [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Converting VIF {"id": "2b7dcbb9-eeec-47e7-aeb5-7e8752f195c3", "address": "fa:16:3e:50:e1:bc", "network": {"id": "abadcf46-9a41-4911-85e0-fbcde2d48b79", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-654501219-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f367c6c5e8f479399a2004c82cfaff0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b7dcbb9-ee", "ovs_interfaceid": "2b7dcbb9-eeec-47e7-aeb5-7e8752f195c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 00:05:36 np0005480824 nova_compute[260089]: 2025-10-11 04:05:36.829 2 DEBUG nova.network.os_vif_util [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:50:e1:bc,bridge_name='br-int',has_traffic_filtering=True,id=2b7dcbb9-eeec-47e7-aeb5-7e8752f195c3,network=Network(abadcf46-9a41-4911-85e0-fbcde2d48b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b7dcbb9-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 00:05:36 np0005480824 nova_compute[260089]: 2025-10-11 04:05:36.830 2 DEBUG nova.objects.instance [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7377f851-2bfd-4f43-9a9f-e08c288708bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 00:05:36 np0005480824 nova_compute[260089]: 2025-10-11 04:05:36.852 2 DEBUG nova.virt.libvirt.driver [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] End _get_guest_xml xml=<domain type="kvm">
Oct 11 00:05:36 np0005480824 nova_compute[260089]:  <uuid>7377f851-2bfd-4f43-9a9f-e08c288708bb</uuid>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:  <name>instance-0000001c</name>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:  <memory>131072</memory>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:  <vcpu>1</vcpu>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:  <metadata>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 00:05:36 np0005480824 nova_compute[260089]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:      <nova:name>tempest-TestEncryptedCinderVolumes-server-1013089460</nova:name>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:      <nova:creationTime>2025-10-11 04:05:35</nova:creationTime>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:      <nova:flavor name="m1.nano">
Oct 11 00:05:36 np0005480824 nova_compute[260089]:        <nova:memory>128</nova:memory>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:        <nova:disk>1</nova:disk>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:        <nova:swap>0</nova:swap>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:        <nova:vcpus>1</nova:vcpus>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:      </nova:flavor>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:      <nova:owner>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:        <nova:user uuid="f9202e7d8882475ba6a769d9c59c35fd">tempest-TestEncryptedCinderVolumes-781713731-project-member</nova:user>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:        <nova:project uuid="6f367c6c5e8f479399a2004c82cfaff0">tempest-TestEncryptedCinderVolumes-781713731</nova:project>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:      </nova:owner>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:      <nova:root type="image" uuid="7caca022-7dcc-40a9-8bd8-eb7d91b29390"/>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:      <nova:ports>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:        <nova:port uuid="2b7dcbb9-eeec-47e7-aeb5-7e8752f195c3">
Oct 11 00:05:36 np0005480824 nova_compute[260089]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:        </nova:port>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:      </nova:ports>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:    </nova:instance>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:  </metadata>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:  <sysinfo type="smbios">
Oct 11 00:05:36 np0005480824 nova_compute[260089]:    <system>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:      <entry name="manufacturer">RDO</entry>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:      <entry name="product">OpenStack Compute</entry>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:      <entry name="serial">7377f851-2bfd-4f43-9a9f-e08c288708bb</entry>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:      <entry name="uuid">7377f851-2bfd-4f43-9a9f-e08c288708bb</entry>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:      <entry name="family">Virtual Machine</entry>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:    </system>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:  </sysinfo>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:  <os>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:    <boot dev="hd"/>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:    <smbios mode="sysinfo"/>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:  </os>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:  <features>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:    <acpi/>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:    <apic/>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:    <vmcoreinfo/>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:  </features>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:  <clock offset="utc">
Oct 11 00:05:36 np0005480824 nova_compute[260089]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:    <timer name="hpet" present="no"/>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:  </clock>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:  <cpu mode="host-model" match="exact">
Oct 11 00:05:36 np0005480824 nova_compute[260089]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:  </cpu>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:  <devices>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:    <disk type="network" device="disk">
Oct 11 00:05:36 np0005480824 nova_compute[260089]:      <driver type="raw" cache="none"/>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:      <source protocol="rbd" name="vms/7377f851-2bfd-4f43-9a9f-e08c288708bb_disk">
Oct 11 00:05:36 np0005480824 nova_compute[260089]:        <host name="192.168.122.100" port="6789"/>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:      </source>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:      <auth username="openstack">
Oct 11 00:05:36 np0005480824 nova_compute[260089]:        <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:      </auth>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:      <target dev="vda" bus="virtio"/>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:    </disk>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:    <disk type="network" device="cdrom">
Oct 11 00:05:36 np0005480824 nova_compute[260089]:      <driver type="raw" cache="none"/>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:      <source protocol="rbd" name="vms/7377f851-2bfd-4f43-9a9f-e08c288708bb_disk.config">
Oct 11 00:05:36 np0005480824 nova_compute[260089]:        <host name="192.168.122.100" port="6789"/>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:      </source>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:      <auth username="openstack">
Oct 11 00:05:36 np0005480824 nova_compute[260089]:        <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:      </auth>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:      <target dev="sda" bus="sata"/>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:    </disk>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:    <interface type="ethernet">
Oct 11 00:05:36 np0005480824 nova_compute[260089]:      <mac address="fa:16:3e:50:e1:bc"/>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:      <model type="virtio"/>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:      <mtu size="1442"/>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:      <target dev="tap2b7dcbb9-ee"/>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:    </interface>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:    <serial type="pty">
Oct 11 00:05:36 np0005480824 nova_compute[260089]:      <log file="/var/lib/nova/instances/7377f851-2bfd-4f43-9a9f-e08c288708bb/console.log" append="off"/>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:    </serial>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:    <video>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:      <model type="virtio"/>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:    </video>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:    <input type="tablet" bus="usb"/>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:    <rng model="virtio">
Oct 11 00:05:36 np0005480824 nova_compute[260089]:      <backend model="random">/dev/urandom</backend>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:    </rng>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root"/>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:    <controller type="usb" index="0"/>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:    <memballoon model="virtio">
Oct 11 00:05:36 np0005480824 nova_compute[260089]:      <stats period="10"/>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:    </memballoon>
Oct 11 00:05:36 np0005480824 nova_compute[260089]:  </devices>
Oct 11 00:05:36 np0005480824 nova_compute[260089]: </domain>
Oct 11 00:05:36 np0005480824 nova_compute[260089]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 00:05:36 np0005480824 nova_compute[260089]: 2025-10-11 04:05:36.853 2 DEBUG nova.compute.manager [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Preparing to wait for external event network-vif-plugged-2b7dcbb9-eeec-47e7-aeb5-7e8752f195c3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 00:05:36 np0005480824 nova_compute[260089]: 2025-10-11 04:05:36.853 2 DEBUG oslo_concurrency.lockutils [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Acquiring lock "7377f851-2bfd-4f43-9a9f-e08c288708bb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:05:36 np0005480824 nova_compute[260089]: 2025-10-11 04:05:36.853 2 DEBUG oslo_concurrency.lockutils [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Lock "7377f851-2bfd-4f43-9a9f-e08c288708bb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:05:36 np0005480824 nova_compute[260089]: 2025-10-11 04:05:36.854 2 DEBUG oslo_concurrency.lockutils [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Lock "7377f851-2bfd-4f43-9a9f-e08c288708bb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:05:36 np0005480824 nova_compute[260089]: 2025-10-11 04:05:36.854 2 DEBUG nova.virt.libvirt.vif [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T04:05:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1013089460',display_name='tempest-TestEncryptedCinderVolumes-server-1013089460',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1013089460',id=28,image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNJsDIDfSNAwePEgMuwA7xBN/CNWf2WTBxrS0eNSby1BYvryUdu81T4JHgw4ZgwBLm6Up7P/KY+UdANGn0GVi7gS1LoRSepP4VjwhsAtHrsZWXIIkKv1Uc4r08KMmzbHNA==',key_name='tempest-keypair-1355496260',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f367c6c5e8f479399a2004c82cfaff0',ramdisk_id='',reservation_id='r-h6oyfuj0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-781713731',owner_user_name='tempest-TestEncryptedCinderVolumes-781713731-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T04:05:31Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f9202e7d8882475ba6a769d9c59c35fd',uuid=7377f851-2bfd-4f43-9a9f-e08c288708bb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2b7dcbb9-eeec-47e7-aeb5-7e8752f195c3", "address": "fa:16:3e:50:e1:bc", "network": {"id": "abadcf46-9a41-4911-85e0-fbcde2d48b79", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-654501219-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f367c6c5e8f479399a2004c82cfaff0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b7dcbb9-ee", "ovs_interfaceid": "2b7dcbb9-eeec-47e7-aeb5-7e8752f195c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 00:05:36 np0005480824 nova_compute[260089]: 2025-10-11 04:05:36.854 2 DEBUG nova.network.os_vif_util [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Converting VIF {"id": "2b7dcbb9-eeec-47e7-aeb5-7e8752f195c3", "address": "fa:16:3e:50:e1:bc", "network": {"id": "abadcf46-9a41-4911-85e0-fbcde2d48b79", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-654501219-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f367c6c5e8f479399a2004c82cfaff0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b7dcbb9-ee", "ovs_interfaceid": "2b7dcbb9-eeec-47e7-aeb5-7e8752f195c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 00:05:36 np0005480824 nova_compute[260089]: 2025-10-11 04:05:36.855 2 DEBUG nova.network.os_vif_util [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:50:e1:bc,bridge_name='br-int',has_traffic_filtering=True,id=2b7dcbb9-eeec-47e7-aeb5-7e8752f195c3,network=Network(abadcf46-9a41-4911-85e0-fbcde2d48b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b7dcbb9-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 00:05:36 np0005480824 nova_compute[260089]: 2025-10-11 04:05:36.855 2 DEBUG os_vif [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:50:e1:bc,bridge_name='br-int',has_traffic_filtering=True,id=2b7dcbb9-eeec-47e7-aeb5-7e8752f195c3,network=Network(abadcf46-9a41-4911-85e0-fbcde2d48b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b7dcbb9-ee') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 00:05:36 np0005480824 nova_compute[260089]: 2025-10-11 04:05:36.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:05:36 np0005480824 nova_compute[260089]: 2025-10-11 04:05:36.856 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 00:05:36 np0005480824 nova_compute[260089]: 2025-10-11 04:05:36.856 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 00:05:36 np0005480824 nova_compute[260089]: 2025-10-11 04:05:36.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:05:36 np0005480824 nova_compute[260089]: 2025-10-11 04:05:36.861 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2b7dcbb9-ee, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 00:05:36 np0005480824 nova_compute[260089]: 2025-10-11 04:05:36.861 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2b7dcbb9-ee, col_values=(('external_ids', {'iface-id': '2b7dcbb9-eeec-47e7-aeb5-7e8752f195c3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:50:e1:bc', 'vm-uuid': '7377f851-2bfd-4f43-9a9f-e08c288708bb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 00:05:36 np0005480824 nova_compute[260089]: 2025-10-11 04:05:36.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:05:36 np0005480824 NetworkManager[44969]: <info>  [1760155536.8636] manager: (tap2b7dcbb9-ee): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/136)
Oct 11 00:05:36 np0005480824 nova_compute[260089]: 2025-10-11 04:05:36.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 00:05:36 np0005480824 nova_compute[260089]: 2025-10-11 04:05:36.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:05:36 np0005480824 nova_compute[260089]: 2025-10-11 04:05:36.869 2 INFO os_vif [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:50:e1:bc,bridge_name='br-int',has_traffic_filtering=True,id=2b7dcbb9-eeec-47e7-aeb5-7e8752f195c3,network=Network(abadcf46-9a41-4911-85e0-fbcde2d48b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b7dcbb9-ee')#033[00m
Oct 11 00:05:36 np0005480824 nova_compute[260089]: 2025-10-11 04:05:36.925 2 DEBUG nova.virt.libvirt.driver [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 00:05:36 np0005480824 nova_compute[260089]: 2025-10-11 04:05:36.925 2 DEBUG nova.virt.libvirt.driver [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 00:05:36 np0005480824 nova_compute[260089]: 2025-10-11 04:05:36.925 2 DEBUG nova.virt.libvirt.driver [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] No VIF found with MAC fa:16:3e:50:e1:bc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 00:05:36 np0005480824 nova_compute[260089]: 2025-10-11 04:05:36.926 2 INFO nova.virt.libvirt.driver [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Using config drive#033[00m
Oct 11 00:05:36 np0005480824 nova_compute[260089]: 2025-10-11 04:05:36.947 2 DEBUG nova.storage.rbd_utils [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] rbd image 7377f851-2bfd-4f43-9a9f-e08c288708bb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 00:05:37 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1870: 321 pgs: 321 active+clean; 134 MiB data, 473 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 51 op/s
Oct 11 00:05:37 np0005480824 nova_compute[260089]: 2025-10-11 04:05:37.311 2 INFO nova.virt.libvirt.driver [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Creating config drive at /var/lib/nova/instances/7377f851-2bfd-4f43-9a9f-e08c288708bb/disk.config#033[00m
Oct 11 00:05:37 np0005480824 nova_compute[260089]: 2025-10-11 04:05:37.317 2 DEBUG oslo_concurrency.processutils [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7377f851-2bfd-4f43-9a9f-e08c288708bb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_8gehxxf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:05:37 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 00:05:37 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 11 00:05:37 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 00:05:37 np0005480824 podman[303645]: 2025-10-11 04:05:37.381398728 +0000 UTC m=+0.053310388 container create 04a13dbf52d0302f003883e7a3f5d0b54839309cbe7c371714ce0b51fd585fcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 11 00:05:37 np0005480824 nova_compute[260089]: 2025-10-11 04:05:37.408 2 DEBUG nova.network.neutron [req-b6613d80-a516-4945-be70-48e526aa7ea5 req-b7a04b78-254e-4183-b384-1121e3c3ecfc 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Updated VIF entry in instance network info cache for port 2b7dcbb9-eeec-47e7-aeb5-7e8752f195c3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 00:05:37 np0005480824 nova_compute[260089]: 2025-10-11 04:05:37.409 2 DEBUG nova.network.neutron [req-b6613d80-a516-4945-be70-48e526aa7ea5 req-b7a04b78-254e-4183-b384-1121e3c3ecfc 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Updating instance_info_cache with network_info: [{"id": "2b7dcbb9-eeec-47e7-aeb5-7e8752f195c3", "address": "fa:16:3e:50:e1:bc", "network": {"id": "abadcf46-9a41-4911-85e0-fbcde2d48b79", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-654501219-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f367c6c5e8f479399a2004c82cfaff0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b7dcbb9-ee", "ovs_interfaceid": "2b7dcbb9-eeec-47e7-aeb5-7e8752f195c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 00:05:37 np0005480824 systemd[1]: Started libpod-conmon-04a13dbf52d0302f003883e7a3f5d0b54839309cbe7c371714ce0b51fd585fcc.scope.
Oct 11 00:05:37 np0005480824 nova_compute[260089]: 2025-10-11 04:05:37.426 2 DEBUG oslo_concurrency.lockutils [req-b6613d80-a516-4945-be70-48e526aa7ea5 req-b7a04b78-254e-4183-b384-1121e3c3ecfc 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Releasing lock "refresh_cache-7377f851-2bfd-4f43-9a9f-e08c288708bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 00:05:37 np0005480824 podman[303645]: 2025-10-11 04:05:37.352846665 +0000 UTC m=+0.024758425 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 00:05:37 np0005480824 systemd[1]: Started libcrun container.
Oct 11 00:05:37 np0005480824 nova_compute[260089]: 2025-10-11 04:05:37.464 2 DEBUG oslo_concurrency.processutils [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7377f851-2bfd-4f43-9a9f-e08c288708bb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_8gehxxf" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:05:37 np0005480824 podman[303645]: 2025-10-11 04:05:37.467717195 +0000 UTC m=+0.139628875 container init 04a13dbf52d0302f003883e7a3f5d0b54839309cbe7c371714ce0b51fd585fcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kirch, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 11 00:05:37 np0005480824 podman[303645]: 2025-10-11 04:05:37.477948556 +0000 UTC m=+0.149860216 container start 04a13dbf52d0302f003883e7a3f5d0b54839309cbe7c371714ce0b51fd585fcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kirch, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 11 00:05:37 np0005480824 podman[303645]: 2025-10-11 04:05:37.48152659 +0000 UTC m=+0.153438250 container attach 04a13dbf52d0302f003883e7a3f5d0b54839309cbe7c371714ce0b51fd585fcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kirch, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 00:05:37 np0005480824 eloquent_kirch[303665]: 167 167
Oct 11 00:05:37 np0005480824 systemd[1]: libpod-04a13dbf52d0302f003883e7a3f5d0b54839309cbe7c371714ce0b51fd585fcc.scope: Deactivated successfully.
Oct 11 00:05:37 np0005480824 podman[303645]: 2025-10-11 04:05:37.483865495 +0000 UTC m=+0.155777155 container died 04a13dbf52d0302f003883e7a3f5d0b54839309cbe7c371714ce0b51fd585fcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kirch, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 00:05:37 np0005480824 nova_compute[260089]: 2025-10-11 04:05:37.486 2 DEBUG nova.storage.rbd_utils [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] rbd image 7377f851-2bfd-4f43-9a9f-e08c288708bb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 00:05:37 np0005480824 nova_compute[260089]: 2025-10-11 04:05:37.489 2 DEBUG oslo_concurrency.processutils [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7377f851-2bfd-4f43-9a9f-e08c288708bb/disk.config 7377f851-2bfd-4f43-9a9f-e08c288708bb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:05:37 np0005480824 systemd[1]: var-lib-containers-storage-overlay-36be3e2ab283e02a3fdff8ea5851f74742b0406bfa6bab3f724bbfea943be1ce-merged.mount: Deactivated successfully.
Oct 11 00:05:37 np0005480824 podman[303645]: 2025-10-11 04:05:37.528777594 +0000 UTC m=+0.200689264 container remove 04a13dbf52d0302f003883e7a3f5d0b54839309cbe7c371714ce0b51fd585fcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 11 00:05:37 np0005480824 systemd[1]: libpod-conmon-04a13dbf52d0302f003883e7a3f5d0b54839309cbe7c371714ce0b51fd585fcc.scope: Deactivated successfully.
Oct 11 00:05:37 np0005480824 nova_compute[260089]: 2025-10-11 04:05:37.636 2 DEBUG oslo_concurrency.processutils [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7377f851-2bfd-4f43-9a9f-e08c288708bb/disk.config 7377f851-2bfd-4f43-9a9f-e08c288708bb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:05:37 np0005480824 nova_compute[260089]: 2025-10-11 04:05:37.636 2 INFO nova.virt.libvirt.driver [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Deleting local config drive /var/lib/nova/instances/7377f851-2bfd-4f43-9a9f-e08c288708bb/disk.config because it was imported into RBD.#033[00m
Oct 11 00:05:37 np0005480824 kernel: tap2b7dcbb9-ee: entered promiscuous mode
Oct 11 00:05:37 np0005480824 NetworkManager[44969]: <info>  [1760155537.6954] manager: (tap2b7dcbb9-ee): new Tun device (/org/freedesktop/NetworkManager/Devices/137)
Oct 11 00:05:37 np0005480824 nova_compute[260089]: 2025-10-11 04:05:37.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:05:37 np0005480824 ovn_controller[152667]: 2025-10-11T04:05:37Z|00257|binding|INFO|Claiming lport 2b7dcbb9-eeec-47e7-aeb5-7e8752f195c3 for this chassis.
Oct 11 00:05:37 np0005480824 ovn_controller[152667]: 2025-10-11T04:05:37Z|00258|binding|INFO|2b7dcbb9-eeec-47e7-aeb5-7e8752f195c3: Claiming fa:16:3e:50:e1:bc 10.100.0.13
Oct 11 00:05:37 np0005480824 nova_compute[260089]: 2025-10-11 04:05:37.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:05:37 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:05:37.712 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:50:e1:bc 10.100.0.13'], port_security=['fa:16:3e:50:e1:bc 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '7377f851-2bfd-4f43-9a9f-e08c288708bb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-abadcf46-9a41-4911-85e0-fbcde2d48b79', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f367c6c5e8f479399a2004c82cfaff0', 'neutron:revision_number': '2', 'neutron:security_group_ids': '17697d08-7b58-4e87-b49c-4e2b77e98db6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b37e59a3-7c4f-47c2-acd9-d3f9dd8c5f52, chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], logical_port=2b7dcbb9-eeec-47e7-aeb5-7e8752f195c3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 00:05:37 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:05:37.713 162245 INFO neutron.agent.ovn.metadata.agent [-] Port 2b7dcbb9-eeec-47e7-aeb5-7e8752f195c3 in datapath abadcf46-9a41-4911-85e0-fbcde2d48b79 bound to our chassis#033[00m
Oct 11 00:05:37 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:05:37.714 162245 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network abadcf46-9a41-4911-85e0-fbcde2d48b79#033[00m
Oct 11 00:05:37 np0005480824 systemd-udevd[303751]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 00:05:37 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:05:37.726 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[9031b555-a7eb-4115-bc71-d0575b39e493]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:05:37 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:05:37.727 162245 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapabadcf46-91 in ovnmeta-abadcf46-9a41-4911-85e0-fbcde2d48b79 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 00:05:37 np0005480824 systemd-machined[215071]: New machine qemu-28-instance-0000001c.
Oct 11 00:05:37 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:05:37.729 267859 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapabadcf46-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 00:05:37 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:05:37.730 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[5f222afc-da5d-4387-9d00-e325f5e7083b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:05:37 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:05:37.731 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[9dc350ff-190e-4472-8ecf-06486baabd44]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:05:37 np0005480824 NetworkManager[44969]: <info>  [1760155537.7348] device (tap2b7dcbb9-ee): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 00:05:37 np0005480824 NetworkManager[44969]: <info>  [1760155537.7362] device (tap2b7dcbb9-ee): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 00:05:37 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:05:37.743 162666 DEBUG oslo.privsep.daemon [-] privsep: reply[6f2598ee-bd29-4609-9c99-6495f262ccb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:05:37 np0005480824 systemd[1]: Started Virtual Machine qemu-28-instance-0000001c.
Oct 11 00:05:37 np0005480824 podman[303724]: 2025-10-11 04:05:37.67314503 +0000 UTC m=+0.028831761 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 00:05:37 np0005480824 nova_compute[260089]: 2025-10-11 04:05:37.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:05:37 np0005480824 ovn_controller[152667]: 2025-10-11T04:05:37Z|00259|binding|INFO|Setting lport 2b7dcbb9-eeec-47e7-aeb5-7e8752f195c3 ovn-installed in OVS
Oct 11 00:05:37 np0005480824 ovn_controller[152667]: 2025-10-11T04:05:37Z|00260|binding|INFO|Setting lport 2b7dcbb9-eeec-47e7-aeb5-7e8752f195c3 up in Southbound
Oct 11 00:05:37 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:05:37.770 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[d9746cf3-0e0b-42a3-8415-397c11ff8480]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:05:37 np0005480824 nova_compute[260089]: 2025-10-11 04:05:37.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:05:37 np0005480824 podman[303724]: 2025-10-11 04:05:37.773976438 +0000 UTC m=+0.129663069 container create 0d1ede0b53bd11270a86742c863e2927c4abe440fe6ed58aa093203a0f3cd7b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 11 00:05:37 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:05:37.798 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[66d86aa5-2e3d-455a-8169-86e8f7f9a54a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:05:37 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:05:37.802 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[be70f4bb-7494-4f61-93f6-58596a8afe20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:05:37 np0005480824 NetworkManager[44969]: <info>  [1760155537.8043] manager: (tapabadcf46-90): new Veth device (/org/freedesktop/NetworkManager/Devices/138)
Oct 11 00:05:37 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:05:37.837 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[8a50498e-9d61-404c-ad6c-352c87bc2c1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:05:37 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:05:37.840 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[2ead3527-5716-465e-9028-73a3d23aa660]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:05:37 np0005480824 NetworkManager[44969]: <info>  [1760155537.8634] device (tapabadcf46-90): carrier: link connected
Oct 11 00:05:37 np0005480824 systemd[1]: Started libpod-conmon-0d1ede0b53bd11270a86742c863e2927c4abe440fe6ed58aa093203a0f3cd7b1.scope.
Oct 11 00:05:37 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:05:37.868 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[d267d155-bfd4-42c7-b224-0235fba6b66f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:05:37 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:05:37.887 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[cf0037f8-9aab-4570-8d37-443d84860796]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapabadcf46-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:72:c9:bc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 90], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 500552, 'reachable_time': 35521, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303787, 'error': None, 'target': 'ovnmeta-abadcf46-9a41-4911-85e0-fbcde2d48b79', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:05:37 np0005480824 systemd[1]: Started libcrun container.
Oct 11 00:05:37 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af06c652d06aa5fffe8a56a5ffac925c13721a83449191555ed5cc1a1d80cf4e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 00:05:37 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:05:37.900 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[81c7dee9-f40c-412e-8f42-f551e4323d18]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe72:c9bc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 500552, 'tstamp': 500552}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303789, 'error': None, 'target': 'ovnmeta-abadcf46-9a41-4911-85e0-fbcde2d48b79', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:05:37 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af06c652d06aa5fffe8a56a5ffac925c13721a83449191555ed5cc1a1d80cf4e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 00:05:37 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af06c652d06aa5fffe8a56a5ffac925c13721a83449191555ed5cc1a1d80cf4e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 00:05:37 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af06c652d06aa5fffe8a56a5ffac925c13721a83449191555ed5cc1a1d80cf4e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 00:05:37 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af06c652d06aa5fffe8a56a5ffac925c13721a83449191555ed5cc1a1d80cf4e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 00:05:37 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:05:37.920 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[70677a50-fb82-4ae5-8937-e916584fc278]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapabadcf46-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:72:c9:bc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 90], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 500552, 'reachable_time': 35521, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 303790, 'error': None, 'target': 'ovnmeta-abadcf46-9a41-4911-85e0-fbcde2d48b79', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:05:37 np0005480824 podman[303724]: 2025-10-11 04:05:37.927812907 +0000 UTC m=+0.283499558 container init 0d1ede0b53bd11270a86742c863e2927c4abe440fe6ed58aa093203a0f3cd7b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_tharp, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 11 00:05:37 np0005480824 podman[303724]: 2025-10-11 04:05:37.938645903 +0000 UTC m=+0.294332534 container start 0d1ede0b53bd11270a86742c863e2927c4abe440fe6ed58aa093203a0f3cd7b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_tharp, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 00:05:37 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:05:37.953 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[60110a1c-a855-40d0-a6d6-631a0a2cb4d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:05:37 np0005480824 podman[303724]: 2025-10-11 04:05:37.964550154 +0000 UTC m=+0.320236795 container attach 0d1ede0b53bd11270a86742c863e2927c4abe440fe6ed58aa093203a0f3cd7b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_tharp, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Oct 11 00:05:38 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:05:38.009 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[2ffa1447-8a74-4aeb-aa0b-c4a5b78d8e38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:05:38 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:05:38.010 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapabadcf46-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 00:05:38 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:05:38.011 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 00:05:38 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:05:38.012 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapabadcf46-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 00:05:38 np0005480824 NetworkManager[44969]: <info>  [1760155538.0141] manager: (tapabadcf46-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/139)
Oct 11 00:05:38 np0005480824 kernel: tapabadcf46-90: entered promiscuous mode
Oct 11 00:05:38 np0005480824 nova_compute[260089]: 2025-10-11 04:05:38.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:05:38 np0005480824 nova_compute[260089]: 2025-10-11 04:05:38.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:05:38 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:05:38.017 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapabadcf46-90, col_values=(('external_ids', {'iface-id': '7b1d2367-bac7-4671-94ac-6b3206b5485c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 00:05:38 np0005480824 nova_compute[260089]: 2025-10-11 04:05:38.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:05:38 np0005480824 ovn_controller[152667]: 2025-10-11T04:05:38Z|00261|binding|INFO|Releasing lport 7b1d2367-bac7-4671-94ac-6b3206b5485c from this chassis (sb_readonly=0)
Oct 11 00:05:38 np0005480824 nova_compute[260089]: 2025-10-11 04:05:38.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:05:38 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:05:38.032 162245 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/abadcf46-9a41-4911-85e0-fbcde2d48b79.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/abadcf46-9a41-4911-85e0-fbcde2d48b79.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 00:05:38 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:05:38.033 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[b1a298e8-457b-41d4-a08a-e49d94bba88f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:05:38 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:05:38.033 162245 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 00:05:38 np0005480824 ovn_metadata_agent[162240]: global
Oct 11 00:05:38 np0005480824 ovn_metadata_agent[162240]:    log         /dev/log local0 debug
Oct 11 00:05:38 np0005480824 ovn_metadata_agent[162240]:    log-tag     haproxy-metadata-proxy-abadcf46-9a41-4911-85e0-fbcde2d48b79
Oct 11 00:05:38 np0005480824 ovn_metadata_agent[162240]:    user        root
Oct 11 00:05:38 np0005480824 ovn_metadata_agent[162240]:    group       root
Oct 11 00:05:38 np0005480824 ovn_metadata_agent[162240]:    maxconn     1024
Oct 11 00:05:38 np0005480824 ovn_metadata_agent[162240]:    pidfile     /var/lib/neutron/external/pids/abadcf46-9a41-4911-85e0-fbcde2d48b79.pid.haproxy
Oct 11 00:05:38 np0005480824 ovn_metadata_agent[162240]:    daemon
Oct 11 00:05:38 np0005480824 ovn_metadata_agent[162240]: 
Oct 11 00:05:38 np0005480824 ovn_metadata_agent[162240]: defaults
Oct 11 00:05:38 np0005480824 ovn_metadata_agent[162240]:    log global
Oct 11 00:05:38 np0005480824 ovn_metadata_agent[162240]:    mode http
Oct 11 00:05:38 np0005480824 ovn_metadata_agent[162240]:    option httplog
Oct 11 00:05:38 np0005480824 ovn_metadata_agent[162240]:    option dontlognull
Oct 11 00:05:38 np0005480824 ovn_metadata_agent[162240]:    option http-server-close
Oct 11 00:05:38 np0005480824 ovn_metadata_agent[162240]:    option forwardfor
Oct 11 00:05:38 np0005480824 ovn_metadata_agent[162240]:    retries                 3
Oct 11 00:05:38 np0005480824 ovn_metadata_agent[162240]:    timeout http-request    30s
Oct 11 00:05:38 np0005480824 ovn_metadata_agent[162240]:    timeout connect         30s
Oct 11 00:05:38 np0005480824 ovn_metadata_agent[162240]:    timeout client          32s
Oct 11 00:05:38 np0005480824 ovn_metadata_agent[162240]:    timeout server          32s
Oct 11 00:05:38 np0005480824 ovn_metadata_agent[162240]:    timeout http-keep-alive 30s
Oct 11 00:05:38 np0005480824 ovn_metadata_agent[162240]: 
Oct 11 00:05:38 np0005480824 ovn_metadata_agent[162240]: 
Oct 11 00:05:38 np0005480824 ovn_metadata_agent[162240]: listen listener
Oct 11 00:05:38 np0005480824 ovn_metadata_agent[162240]:    bind 169.254.169.254:80
Oct 11 00:05:38 np0005480824 ovn_metadata_agent[162240]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 00:05:38 np0005480824 ovn_metadata_agent[162240]:    http-request add-header X-OVN-Network-ID abadcf46-9a41-4911-85e0-fbcde2d48b79
Oct 11 00:05:38 np0005480824 ovn_metadata_agent[162240]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 00:05:38 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:05:38.035 162245 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-abadcf46-9a41-4911-85e0-fbcde2d48b79', 'env', 'PROCESS_TAG=haproxy-abadcf46-9a41-4911-85e0-fbcde2d48b79', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/abadcf46-9a41-4911-85e0-fbcde2d48b79.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 00:05:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 00:05:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:05:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 00:05:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:05:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003459970412515465 of space, bias 1.0, pg target 0.10379911237546395 quantized to 32 (current 32)
Oct 11 00:05:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:05:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00035159302353975384 of space, bias 1.0, pg target 0.10547790706192615 quantized to 32 (current 32)
Oct 11 00:05:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:05:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 11 00:05:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:05:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Oct 11 00:05:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:05:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 00:05:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:05:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 00:05:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:05:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 00:05:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:05:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 00:05:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:05:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 00:05:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:05:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 00:05:38 np0005480824 nova_compute[260089]: 2025-10-11 04:05:38.386 2 DEBUG nova.compute.manager [req-a099181f-e344-4934-8720-42685ae08adc req-4dfb5f4e-a539-4a63-8094-67abdebda5c2 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Received event network-vif-plugged-2b7dcbb9-eeec-47e7-aeb5-7e8752f195c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 00:05:38 np0005480824 nova_compute[260089]: 2025-10-11 04:05:38.386 2 DEBUG oslo_concurrency.lockutils [req-a099181f-e344-4934-8720-42685ae08adc req-4dfb5f4e-a539-4a63-8094-67abdebda5c2 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "7377f851-2bfd-4f43-9a9f-e08c288708bb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 00:05:38 np0005480824 nova_compute[260089]: 2025-10-11 04:05:38.387 2 DEBUG oslo_concurrency.lockutils [req-a099181f-e344-4934-8720-42685ae08adc req-4dfb5f4e-a539-4a63-8094-67abdebda5c2 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "7377f851-2bfd-4f43-9a9f-e08c288708bb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 00:05:38 np0005480824 nova_compute[260089]: 2025-10-11 04:05:38.388 2 DEBUG oslo_concurrency.lockutils [req-a099181f-e344-4934-8720-42685ae08adc req-4dfb5f4e-a539-4a63-8094-67abdebda5c2 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "7377f851-2bfd-4f43-9a9f-e08c288708bb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 00:05:38 np0005480824 nova_compute[260089]: 2025-10-11 04:05:38.388 2 DEBUG nova.compute.manager [req-a099181f-e344-4934-8720-42685ae08adc req-4dfb5f4e-a539-4a63-8094-67abdebda5c2 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Processing event network-vif-plugged-2b7dcbb9-eeec-47e7-aeb5-7e8752f195c3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 00:05:38 np0005480824 podman[303866]: 2025-10-11 04:05:38.443896062 +0000 UTC m=+0.074499678 container create bf2a8641bfee013c7d88abd363906ceb1aec49c85723d404c6639b30f1b9e626 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abadcf46-9a41-4911-85e0-fbcde2d48b79, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.vendor=CentOS)
Oct 11 00:05:38 np0005480824 systemd[1]: Started libpod-conmon-bf2a8641bfee013c7d88abd363906ceb1aec49c85723d404c6639b30f1b9e626.scope.
Oct 11 00:05:38 np0005480824 podman[303866]: 2025-10-11 04:05:38.410215297 +0000 UTC m=+0.040818953 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 11 00:05:38 np0005480824 systemd[1]: Started libcrun container.
Oct 11 00:05:38 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbb8280cb1f25cc337e8d6a53a8cb4082cf31c8913c9cef35e1c80d82ea93e7e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 00:05:38 np0005480824 podman[303866]: 2025-10-11 04:05:38.523993191 +0000 UTC m=+0.154596817 container init bf2a8641bfee013c7d88abd363906ceb1aec49c85723d404c6639b30f1b9e626 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abadcf46-9a41-4911-85e0-fbcde2d48b79, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 11 00:05:38 np0005480824 podman[303866]: 2025-10-11 04:05:38.531395415 +0000 UTC m=+0.161999041 container start bf2a8641bfee013c7d88abd363906ceb1aec49c85723d404c6639b30f1b9e626 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abadcf46-9a41-4911-85e0-fbcde2d48b79, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 11 00:05:38 np0005480824 neutron-haproxy-ovnmeta-abadcf46-9a41-4911-85e0-fbcde2d48b79[303882]: [NOTICE]   (303886) : New worker (303888) forked
Oct 11 00:05:38 np0005480824 neutron-haproxy-ovnmeta-abadcf46-9a41-4911-85e0-fbcde2d48b79[303882]: [NOTICE]   (303886) : Loading success.
Oct 11 00:05:38 np0005480824 nova_compute[260089]: 2025-10-11 04:05:38.571 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760155538.5712264, 7377f851-2bfd-4f43-9a9f-e08c288708bb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 00:05:38 np0005480824 nova_compute[260089]: 2025-10-11 04:05:38.572 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] VM Started (Lifecycle Event)
Oct 11 00:05:38 np0005480824 nova_compute[260089]: 2025-10-11 04:05:38.573 2 DEBUG nova.compute.manager [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 00:05:38 np0005480824 nova_compute[260089]: 2025-10-11 04:05:38.576 2 DEBUG nova.virt.libvirt.driver [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 00:05:38 np0005480824 nova_compute[260089]: 2025-10-11 04:05:38.579 2 INFO nova.virt.libvirt.driver [-] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Instance spawned successfully.
Oct 11 00:05:38 np0005480824 nova_compute[260089]: 2025-10-11 04:05:38.579 2 DEBUG nova.virt.libvirt.driver [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 00:05:38 np0005480824 nova_compute[260089]: 2025-10-11 04:05:38.593 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 00:05:38 np0005480824 nova_compute[260089]: 2025-10-11 04:05:38.598 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 00:05:38 np0005480824 nova_compute[260089]: 2025-10-11 04:05:38.601 2 DEBUG nova.virt.libvirt.driver [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 00:05:38 np0005480824 nova_compute[260089]: 2025-10-11 04:05:38.602 2 DEBUG nova.virt.libvirt.driver [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 00:05:38 np0005480824 nova_compute[260089]: 2025-10-11 04:05:38.602 2 DEBUG nova.virt.libvirt.driver [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 00:05:38 np0005480824 nova_compute[260089]: 2025-10-11 04:05:38.603 2 DEBUG nova.virt.libvirt.driver [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 00:05:38 np0005480824 nova_compute[260089]: 2025-10-11 04:05:38.603 2 DEBUG nova.virt.libvirt.driver [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 00:05:38 np0005480824 nova_compute[260089]: 2025-10-11 04:05:38.604 2 DEBUG nova.virt.libvirt.driver [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 00:05:38 np0005480824 nova_compute[260089]: 2025-10-11 04:05:38.623 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 00:05:38 np0005480824 nova_compute[260089]: 2025-10-11 04:05:38.623 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760155538.571496, 7377f851-2bfd-4f43-9a9f-e08c288708bb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 00:05:38 np0005480824 nova_compute[260089]: 2025-10-11 04:05:38.623 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] VM Paused (Lifecycle Event)
Oct 11 00:05:38 np0005480824 nova_compute[260089]: 2025-10-11 04:05:38.644 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 00:05:38 np0005480824 nova_compute[260089]: 2025-10-11 04:05:38.647 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760155538.5756538, 7377f851-2bfd-4f43-9a9f-e08c288708bb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 00:05:38 np0005480824 nova_compute[260089]: 2025-10-11 04:05:38.647 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] VM Resumed (Lifecycle Event)
Oct 11 00:05:38 np0005480824 nova_compute[260089]: 2025-10-11 04:05:38.652 2 INFO nova.compute.manager [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Took 7.00 seconds to spawn the instance on the hypervisor.
Oct 11 00:05:38 np0005480824 nova_compute[260089]: 2025-10-11 04:05:38.653 2 DEBUG nova.compute.manager [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 00:05:38 np0005480824 nova_compute[260089]: 2025-10-11 04:05:38.672 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 00:05:38 np0005480824 nova_compute[260089]: 2025-10-11 04:05:38.675 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 00:05:38 np0005480824 nova_compute[260089]: 2025-10-11 04:05:38.698 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 00:05:38 np0005480824 nova_compute[260089]: 2025-10-11 04:05:38.710 2 INFO nova.compute.manager [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Took 8.04 seconds to build instance.
Oct 11 00:05:38 np0005480824 nova_compute[260089]: 2025-10-11 04:05:38.730 2 DEBUG oslo_concurrency.lockutils [None req-75339307-a422-42ca-9a28-2078a9c94add f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Lock "7377f851-2bfd-4f43-9a9f-e08c288708bb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.125s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 00:05:38 np0005480824 keen_tharp[303785]: --> passed data devices: 0 physical, 3 LVM
Oct 11 00:05:38 np0005480824 keen_tharp[303785]: --> relative data size: 1.0
Oct 11 00:05:38 np0005480824 keen_tharp[303785]: --> All data devices are unavailable
Oct 11 00:05:38 np0005480824 systemd[1]: libpod-0d1ede0b53bd11270a86742c863e2927c4abe440fe6ed58aa093203a0f3cd7b1.scope: Deactivated successfully.
Oct 11 00:05:38 np0005480824 podman[303724]: 2025-10-11 04:05:38.959588306 +0000 UTC m=+1.315274987 container died 0d1ede0b53bd11270a86742c863e2927c4abe440fe6ed58aa093203a0f3cd7b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_tharp, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 00:05:38 np0005480824 systemd[1]: var-lib-containers-storage-overlay-af06c652d06aa5fffe8a56a5ffac925c13721a83449191555ed5cc1a1d80cf4e-merged.mount: Deactivated successfully.
Oct 11 00:05:39 np0005480824 podman[303724]: 2025-10-11 04:05:39.021325372 +0000 UTC m=+1.377012003 container remove 0d1ede0b53bd11270a86742c863e2927c4abe440fe6ed58aa093203a0f3cd7b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3)
Oct 11 00:05:39 np0005480824 systemd[1]: libpod-conmon-0d1ede0b53bd11270a86742c863e2927c4abe440fe6ed58aa093203a0f3cd7b1.scope: Deactivated successfully.
Oct 11 00:05:39 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1871: 321 pgs: 321 active+clean; 134 MiB data, 479 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 51 op/s
Oct 11 00:05:39 np0005480824 podman[304078]: 2025-10-11 04:05:39.675960765 +0000 UTC m=+0.037705170 container create d9e03905a1d0cf5ea38a60500076abca34cd7c68dae71b096daa184239868ff5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_keldysh, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 00:05:39 np0005480824 systemd[1]: Started libpod-conmon-d9e03905a1d0cf5ea38a60500076abca34cd7c68dae71b096daa184239868ff5.scope.
Oct 11 00:05:39 np0005480824 systemd[1]: Started libcrun container.
Oct 11 00:05:39 np0005480824 podman[304078]: 2025-10-11 04:05:39.74865922 +0000 UTC m=+0.110403645 container init d9e03905a1d0cf5ea38a60500076abca34cd7c68dae71b096daa184239868ff5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_keldysh, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 00:05:39 np0005480824 podman[304078]: 2025-10-11 04:05:39.755816449 +0000 UTC m=+0.117560854 container start d9e03905a1d0cf5ea38a60500076abca34cd7c68dae71b096daa184239868ff5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 11 00:05:39 np0005480824 podman[304078]: 2025-10-11 04:05:39.659911516 +0000 UTC m=+0.021655941 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 00:05:39 np0005480824 podman[304078]: 2025-10-11 04:05:39.759203779 +0000 UTC m=+0.120948184 container attach d9e03905a1d0cf5ea38a60500076abca34cd7c68dae71b096daa184239868ff5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_keldysh, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 00:05:39 np0005480824 suspicious_keldysh[304094]: 167 167
Oct 11 00:05:39 np0005480824 systemd[1]: libpod-d9e03905a1d0cf5ea38a60500076abca34cd7c68dae71b096daa184239868ff5.scope: Deactivated successfully.
Oct 11 00:05:39 np0005480824 podman[304078]: 2025-10-11 04:05:39.761038892 +0000 UTC m=+0.122783297 container died d9e03905a1d0cf5ea38a60500076abca34cd7c68dae71b096daa184239868ff5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_keldysh, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 11 00:05:39 np0005480824 systemd[1]: var-lib-containers-storage-overlay-ad7688d143b977aa5682d79f0a43d18a3cdff396be86388210c2c52ff36a5dfe-merged.mount: Deactivated successfully.
Oct 11 00:05:39 np0005480824 podman[304078]: 2025-10-11 04:05:39.804021406 +0000 UTC m=+0.165765811 container remove d9e03905a1d0cf5ea38a60500076abca34cd7c68dae71b096daa184239868ff5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 11 00:05:39 np0005480824 systemd[1]: libpod-conmon-d9e03905a1d0cf5ea38a60500076abca34cd7c68dae71b096daa184239868ff5.scope: Deactivated successfully.
Oct 11 00:05:39 np0005480824 podman[304119]: 2025-10-11 04:05:39.964739737 +0000 UTC m=+0.038395106 container create 1e176fdeebac91d4a4a817fc426ee390275b62eba87145aff543c8649f363fc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_wright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 00:05:40 np0005480824 systemd[1]: Started libpod-conmon-1e176fdeebac91d4a4a817fc426ee390275b62eba87145aff543c8649f363fc0.scope.
Oct 11 00:05:40 np0005480824 systemd[1]: Started libcrun container.
Oct 11 00:05:40 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5030bedac9e141698ccc4b81df9d2f73656087d326c521b04ec12743a7bfca5f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 00:05:40 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5030bedac9e141698ccc4b81df9d2f73656087d326c521b04ec12743a7bfca5f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 00:05:40 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5030bedac9e141698ccc4b81df9d2f73656087d326c521b04ec12743a7bfca5f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 00:05:40 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5030bedac9e141698ccc4b81df9d2f73656087d326c521b04ec12743a7bfca5f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 00:05:40 np0005480824 podman[304119]: 2025-10-11 04:05:39.948786341 +0000 UTC m=+0.022441730 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 00:05:40 np0005480824 podman[304119]: 2025-10-11 04:05:40.059424901 +0000 UTC m=+0.133080280 container init 1e176fdeebac91d4a4a817fc426ee390275b62eba87145aff543c8649f363fc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_wright, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 11 00:05:40 np0005480824 podman[304119]: 2025-10-11 04:05:40.068056505 +0000 UTC m=+0.141711874 container start 1e176fdeebac91d4a4a817fc426ee390275b62eba87145aff543c8649f363fc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_wright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 00:05:40 np0005480824 podman[304119]: 2025-10-11 04:05:40.07127279 +0000 UTC m=+0.144928189 container attach 1e176fdeebac91d4a4a817fc426ee390275b62eba87145aff543c8649f363fc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_wright, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 11 00:05:40 np0005480824 nova_compute[260089]: 2025-10-11 04:05:40.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 00:05:40 np0005480824 nova_compute[260089]: 2025-10-11 04:05:40.317 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 00:05:40 np0005480824 nova_compute[260089]: 2025-10-11 04:05:40.491 2 DEBUG nova.compute.manager [req-f7859ea6-7958-4214-bf91-355c67e8e9e3 req-09191982-0435-4c9b-be2a-642a37dcc8f7 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Received event network-vif-plugged-2b7dcbb9-eeec-47e7-aeb5-7e8752f195c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 00:05:40 np0005480824 nova_compute[260089]: 2025-10-11 04:05:40.492 2 DEBUG oslo_concurrency.lockutils [req-f7859ea6-7958-4214-bf91-355c67e8e9e3 req-09191982-0435-4c9b-be2a-642a37dcc8f7 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "7377f851-2bfd-4f43-9a9f-e08c288708bb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 00:05:40 np0005480824 nova_compute[260089]: 2025-10-11 04:05:40.492 2 DEBUG oslo_concurrency.lockutils [req-f7859ea6-7958-4214-bf91-355c67e8e9e3 req-09191982-0435-4c9b-be2a-642a37dcc8f7 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "7377f851-2bfd-4f43-9a9f-e08c288708bb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 00:05:40 np0005480824 nova_compute[260089]: 2025-10-11 04:05:40.493 2 DEBUG oslo_concurrency.lockutils [req-f7859ea6-7958-4214-bf91-355c67e8e9e3 req-09191982-0435-4c9b-be2a-642a37dcc8f7 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "7377f851-2bfd-4f43-9a9f-e08c288708bb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 00:05:40 np0005480824 nova_compute[260089]: 2025-10-11 04:05:40.493 2 DEBUG nova.compute.manager [req-f7859ea6-7958-4214-bf91-355c67e8e9e3 req-09191982-0435-4c9b-be2a-642a37dcc8f7 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] No waiting events found dispatching network-vif-plugged-2b7dcbb9-eeec-47e7-aeb5-7e8752f195c3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 00:05:40 np0005480824 nova_compute[260089]: 2025-10-11 04:05:40.493 2 WARNING nova.compute.manager [req-f7859ea6-7958-4214-bf91-355c67e8e9e3 req-09191982-0435-4c9b-be2a-642a37dcc8f7 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Received unexpected event network-vif-plugged-2b7dcbb9-eeec-47e7-aeb5-7e8752f195c3 for instance with vm_state active and task_state None.
Oct 11 00:05:40 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e495 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]: {
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:    "0": [
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:        {
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:            "devices": [
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:                "/dev/loop3"
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:            ],
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:            "lv_name": "ceph_lv0",
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:            "lv_size": "21470642176",
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0d82ce-20ea-470d-959e-f67202028a60,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:            "lv_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:            "name": "ceph_lv0",
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:            "tags": {
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:                "ceph.block_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:                "ceph.cephx_lockbox_secret": "",
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:                "ceph.cluster_name": "ceph",
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:                "ceph.crush_device_class": "",
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:                "ceph.encrypted": "0",
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:                "ceph.osd_fsid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:                "ceph.osd_id": "0",
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:                "ceph.type": "block",
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:                "ceph.vdo": "0"
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:            },
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:            "type": "block",
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:            "vg_name": "ceph_vg0"
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:        }
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:    ],
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:    "1": [
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:        {
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:            "devices": [
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:                "/dev/loop4"
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:            ],
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:            "lv_name": "ceph_lv1",
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:            "lv_size": "21470642176",
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6875119e-c210-4ad1-aca9-6a8084a5ecc8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:            "lv_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:            "name": "ceph_lv1",
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:            "tags": {
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:                "ceph.block_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:                "ceph.cephx_lockbox_secret": "",
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:                "ceph.cluster_name": "ceph",
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:                "ceph.crush_device_class": "",
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:                "ceph.encrypted": "0",
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:                "ceph.osd_fsid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:                "ceph.osd_id": "1",
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:                "ceph.type": "block",
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:                "ceph.vdo": "0"
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:            },
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:            "type": "block",
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:            "vg_name": "ceph_vg1"
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:        }
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:    ],
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:    "2": [
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:        {
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:            "devices": [
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:                "/dev/loop5"
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:            ],
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:            "lv_name": "ceph_lv2",
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:            "lv_size": "21470642176",
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e86945e8-6909-4584-9098-cee0dfe9add4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:            "lv_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:            "name": "ceph_lv2",
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:            "tags": {
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:                "ceph.block_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:                "ceph.cephx_lockbox_secret": "",
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:                "ceph.cluster_name": "ceph",
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:                "ceph.crush_device_class": "",
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:                "ceph.encrypted": "0",
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:                "ceph.osd_fsid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:                "ceph.osd_id": "2",
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:                "ceph.type": "block",
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:                "ceph.vdo": "0"
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:            },
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:            "type": "block",
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:            "vg_name": "ceph_vg2"
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:        }
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]:    ]
Oct 11 00:05:40 np0005480824 optimistic_wright[304135]: }
Oct 11 00:05:40 np0005480824 systemd[1]: libpod-1e176fdeebac91d4a4a817fc426ee390275b62eba87145aff543c8649f363fc0.scope: Deactivated successfully.
Oct 11 00:05:40 np0005480824 podman[304119]: 2025-10-11 04:05:40.826804053 +0000 UTC m=+0.900459432 container died 1e176fdeebac91d4a4a817fc426ee390275b62eba87145aff543c8649f363fc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_wright, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 11 00:05:41 np0005480824 systemd[1]: var-lib-containers-storage-overlay-5030bedac9e141698ccc4b81df9d2f73656087d326c521b04ec12743a7bfca5f-merged.mount: Deactivated successfully.
Oct 11 00:05:41 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1872: 321 pgs: 321 active+clean; 134 MiB data, 479 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 11 00:05:41 np0005480824 nova_compute[260089]: 2025-10-11 04:05:41.297 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 00:05:41 np0005480824 podman[304119]: 2025-10-11 04:05:41.498357464 +0000 UTC m=+1.572012873 container remove 1e176fdeebac91d4a4a817fc426ee390275b62eba87145aff543c8649f363fc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 11 00:05:41 np0005480824 systemd[1]: libpod-conmon-1e176fdeebac91d4a4a817fc426ee390275b62eba87145aff543c8649f363fc0.scope: Deactivated successfully.
Oct 11 00:05:41 np0005480824 nova_compute[260089]: 2025-10-11 04:05:41.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 00:05:41 np0005480824 nova_compute[260089]: 2025-10-11 04:05:41.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 00:05:41 np0005480824 NetworkManager[44969]: <info>  [1760155541.9103] manager: (patch-br-int-to-provnet-e62e0ad0-b027-41f2-91f0-70373ec97251): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/140)
Oct 11 00:05:41 np0005480824 NetworkManager[44969]: <info>  [1760155541.9111] manager: (patch-provnet-e62e0ad0-b027-41f2-91f0-70373ec97251-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/141)
Oct 11 00:05:42 np0005480824 nova_compute[260089]: 2025-10-11 04:05:41.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 00:05:42 np0005480824 ovn_controller[152667]: 2025-10-11T04:05:42Z|00262|binding|INFO|Releasing lport 7b1d2367-bac7-4671-94ac-6b3206b5485c from this chassis (sb_readonly=0)
Oct 11 00:05:42 np0005480824 nova_compute[260089]: 2025-10-11 04:05:42.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 00:05:42 np0005480824 podman[304298]: 2025-10-11 04:05:42.198773797 +0000 UTC m=+0.104220390 container create 0598f0844e05b035e68aacd76a329c716149c1de82113e966bab9592c1eca2ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_cori, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 00:05:42 np0005480824 podman[304298]: 2025-10-11 04:05:42.116283211 +0000 UTC m=+0.021729794 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 00:05:42 np0005480824 nova_compute[260089]: 2025-10-11 04:05:42.297 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 00:05:42 np0005480824 systemd[1]: Started libpod-conmon-0598f0844e05b035e68aacd76a329c716149c1de82113e966bab9592c1eca2ca.scope.
Oct 11 00:05:42 np0005480824 systemd[1]: Started libcrun container.
Oct 11 00:05:42 np0005480824 podman[304298]: 2025-10-11 04:05:42.398340974 +0000 UTC m=+0.303787597 container init 0598f0844e05b035e68aacd76a329c716149c1de82113e966bab9592c1eca2ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_cori, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 00:05:42 np0005480824 podman[304298]: 2025-10-11 04:05:42.405149105 +0000 UTC m=+0.310595688 container start 0598f0844e05b035e68aacd76a329c716149c1de82113e966bab9592c1eca2ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_cori, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 00:05:42 np0005480824 angry_cori[304313]: 167 167
Oct 11 00:05:42 np0005480824 systemd[1]: libpod-0598f0844e05b035e68aacd76a329c716149c1de82113e966bab9592c1eca2ca.scope: Deactivated successfully.
Oct 11 00:05:42 np0005480824 podman[304298]: 2025-10-11 04:05:42.466519003 +0000 UTC m=+0.371965606 container attach 0598f0844e05b035e68aacd76a329c716149c1de82113e966bab9592c1eca2ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_cori, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 00:05:42 np0005480824 podman[304298]: 2025-10-11 04:05:42.466941202 +0000 UTC m=+0.372387785 container died 0598f0844e05b035e68aacd76a329c716149c1de82113e966bab9592c1eca2ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_cori, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 00:05:42 np0005480824 nova_compute[260089]: 2025-10-11 04:05:42.559 2 DEBUG nova.compute.manager [req-a279c992-f6e5-4c3f-8297-45a3d2ca5ea9 req-9dbf9782-2c32-4663-9bfd-05a660bc4738 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Received event network-changed-2b7dcbb9-eeec-47e7-aeb5-7e8752f195c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 00:05:42 np0005480824 nova_compute[260089]: 2025-10-11 04:05:42.560 2 DEBUG nova.compute.manager [req-a279c992-f6e5-4c3f-8297-45a3d2ca5ea9 req-9dbf9782-2c32-4663-9bfd-05a660bc4738 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Refreshing instance network info cache due to event network-changed-2b7dcbb9-eeec-47e7-aeb5-7e8752f195c3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 00:05:42 np0005480824 nova_compute[260089]: 2025-10-11 04:05:42.561 2 DEBUG oslo_concurrency.lockutils [req-a279c992-f6e5-4c3f-8297-45a3d2ca5ea9 req-9dbf9782-2c32-4663-9bfd-05a660bc4738 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "refresh_cache-7377f851-2bfd-4f43-9a9f-e08c288708bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 00:05:42 np0005480824 nova_compute[260089]: 2025-10-11 04:05:42.561 2 DEBUG oslo_concurrency.lockutils [req-a279c992-f6e5-4c3f-8297-45a3d2ca5ea9 req-9dbf9782-2c32-4663-9bfd-05a660bc4738 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquired lock "refresh_cache-7377f851-2bfd-4f43-9a9f-e08c288708bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 00:05:42 np0005480824 nova_compute[260089]: 2025-10-11 04:05:42.561 2 DEBUG nova.network.neutron [req-a279c992-f6e5-4c3f-8297-45a3d2ca5ea9 req-9dbf9782-2c32-4663-9bfd-05a660bc4738 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Refreshing network info cache for port 2b7dcbb9-eeec-47e7-aeb5-7e8752f195c3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 00:05:42 np0005480824 systemd[1]: var-lib-containers-storage-overlay-0402e8972a8bcaac502f75c3b010c0e69b3c950cf486ed9804783a030a47c11a-merged.mount: Deactivated successfully.
Oct 11 00:05:42 np0005480824 podman[304298]: 2025-10-11 04:05:42.858574981 +0000 UTC m=+0.764021604 container remove 0598f0844e05b035e68aacd76a329c716149c1de82113e966bab9592c1eca2ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 11 00:05:42 np0005480824 systemd[1]: libpod-conmon-0598f0844e05b035e68aacd76a329c716149c1de82113e966bab9592c1eca2ca.scope: Deactivated successfully.
Oct 11 00:05:43 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1873: 321 pgs: 321 active+clean; 134 MiB data, 479 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 11 00:05:43 np0005480824 podman[304337]: 2025-10-11 04:05:43.087535221 +0000 UTC m=+0.082000684 container create cec256175f4db260cfd33b2ab8e51d906e1bd36911c249f06e71d5ee05d355f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_margulis, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 00:05:43 np0005480824 systemd[1]: Started libpod-conmon-cec256175f4db260cfd33b2ab8e51d906e1bd36911c249f06e71d5ee05d355f9.scope.
Oct 11 00:05:43 np0005480824 podman[304337]: 2025-10-11 04:05:43.071203646 +0000 UTC m=+0.065669129 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 00:05:43 np0005480824 systemd[1]: Started libcrun container.
Oct 11 00:05:43 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96301d3a5e1517a8be0febe700bb80a724acba554c4888712b83f7a21d96ad7d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 00:05:43 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96301d3a5e1517a8be0febe700bb80a724acba554c4888712b83f7a21d96ad7d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 00:05:43 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96301d3a5e1517a8be0febe700bb80a724acba554c4888712b83f7a21d96ad7d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 00:05:43 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96301d3a5e1517a8be0febe700bb80a724acba554c4888712b83f7a21d96ad7d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 00:05:43 np0005480824 podman[304337]: 2025-10-11 04:05:43.210712077 +0000 UTC m=+0.205177580 container init cec256175f4db260cfd33b2ab8e51d906e1bd36911c249f06e71d5ee05d355f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 11 00:05:43 np0005480824 podman[304337]: 2025-10-11 04:05:43.219471614 +0000 UTC m=+0.213937117 container start cec256175f4db260cfd33b2ab8e51d906e1bd36911c249f06e71d5ee05d355f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_margulis, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 00:05:43 np0005480824 podman[304337]: 2025-10-11 04:05:43.223919349 +0000 UTC m=+0.218384842 container attach cec256175f4db260cfd33b2ab8e51d906e1bd36911c249f06e71d5ee05d355f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_margulis, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Oct 11 00:05:44 np0005480824 nova_compute[260089]: 2025-10-11 04:05:44.008 2 DEBUG nova.network.neutron [req-a279c992-f6e5-4c3f-8297-45a3d2ca5ea9 req-9dbf9782-2c32-4663-9bfd-05a660bc4738 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Updated VIF entry in instance network info cache for port 2b7dcbb9-eeec-47e7-aeb5-7e8752f195c3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 00:05:44 np0005480824 nova_compute[260089]: 2025-10-11 04:05:44.011 2 DEBUG nova.network.neutron [req-a279c992-f6e5-4c3f-8297-45a3d2ca5ea9 req-9dbf9782-2c32-4663-9bfd-05a660bc4738 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Updating instance_info_cache with network_info: [{"id": "2b7dcbb9-eeec-47e7-aeb5-7e8752f195c3", "address": "fa:16:3e:50:e1:bc", "network": {"id": "abadcf46-9a41-4911-85e0-fbcde2d48b79", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-654501219-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f367c6c5e8f479399a2004c82cfaff0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b7dcbb9-ee", "ovs_interfaceid": "2b7dcbb9-eeec-47e7-aeb5-7e8752f195c3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 00:05:44 np0005480824 nova_compute[260089]: 2025-10-11 04:05:44.030 2 DEBUG oslo_concurrency.lockutils [req-a279c992-f6e5-4c3f-8297-45a3d2ca5ea9 req-9dbf9782-2c32-4663-9bfd-05a660bc4738 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Releasing lock "refresh_cache-7377f851-2bfd-4f43-9a9f-e08c288708bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 00:05:44 np0005480824 nova_compute[260089]: 2025-10-11 04:05:44.296 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:05:44 np0005480824 nova_compute[260089]: 2025-10-11 04:05:44.297 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:05:44 np0005480824 thirsty_margulis[304353]: {
Oct 11 00:05:44 np0005480824 thirsty_margulis[304353]:    "1d0d82ce-20ea-470d-959e-f67202028a60": {
Oct 11 00:05:44 np0005480824 thirsty_margulis[304353]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 11 00:05:44 np0005480824 thirsty_margulis[304353]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 00:05:44 np0005480824 thirsty_margulis[304353]:        "osd_id": 0,
Oct 11 00:05:44 np0005480824 thirsty_margulis[304353]:        "osd_uuid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 11 00:05:44 np0005480824 thirsty_margulis[304353]:        "type": "bluestore"
Oct 11 00:05:44 np0005480824 thirsty_margulis[304353]:    },
Oct 11 00:05:44 np0005480824 thirsty_margulis[304353]:    "6875119e-c210-4ad1-aca9-6a8084a5ecc8": {
Oct 11 00:05:44 np0005480824 thirsty_margulis[304353]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 11 00:05:44 np0005480824 thirsty_margulis[304353]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 00:05:44 np0005480824 thirsty_margulis[304353]:        "osd_id": 1,
Oct 11 00:05:44 np0005480824 thirsty_margulis[304353]:        "osd_uuid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 11 00:05:44 np0005480824 thirsty_margulis[304353]:        "type": "bluestore"
Oct 11 00:05:44 np0005480824 thirsty_margulis[304353]:    },
Oct 11 00:05:44 np0005480824 thirsty_margulis[304353]:    "e86945e8-6909-4584-9098-cee0dfe9add4": {
Oct 11 00:05:44 np0005480824 thirsty_margulis[304353]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 11 00:05:44 np0005480824 thirsty_margulis[304353]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 00:05:44 np0005480824 thirsty_margulis[304353]:        "osd_id": 2,
Oct 11 00:05:44 np0005480824 thirsty_margulis[304353]:        "osd_uuid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 11 00:05:44 np0005480824 thirsty_margulis[304353]:        "type": "bluestore"
Oct 11 00:05:44 np0005480824 thirsty_margulis[304353]:    }
Oct 11 00:05:44 np0005480824 thirsty_margulis[304353]: }
Oct 11 00:05:44 np0005480824 nova_compute[260089]: 2025-10-11 04:05:44.320 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:05:44 np0005480824 nova_compute[260089]: 2025-10-11 04:05:44.320 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:05:44 np0005480824 nova_compute[260089]: 2025-10-11 04:05:44.321 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:05:44 np0005480824 nova_compute[260089]: 2025-10-11 04:05:44.321 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 00:05:44 np0005480824 nova_compute[260089]: 2025-10-11 04:05:44.321 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:05:44 np0005480824 systemd[1]: libpod-cec256175f4db260cfd33b2ab8e51d906e1bd36911c249f06e71d5ee05d355f9.scope: Deactivated successfully.
Oct 11 00:05:44 np0005480824 systemd[1]: libpod-cec256175f4db260cfd33b2ab8e51d906e1bd36911c249f06e71d5ee05d355f9.scope: Consumed 1.095s CPU time.
Oct 11 00:05:44 np0005480824 podman[304337]: 2025-10-11 04:05:44.331938506 +0000 UTC m=+1.326403989 container died cec256175f4db260cfd33b2ab8e51d906e1bd36911c249f06e71d5ee05d355f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 11 00:05:44 np0005480824 systemd[1]: var-lib-containers-storage-overlay-96301d3a5e1517a8be0febe700bb80a724acba554c4888712b83f7a21d96ad7d-merged.mount: Deactivated successfully.
Oct 11 00:05:44 np0005480824 podman[304337]: 2025-10-11 04:05:44.498514785 +0000 UTC m=+1.492980258 container remove cec256175f4db260cfd33b2ab8e51d906e1bd36911c249f06e71d5ee05d355f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_margulis, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 11 00:05:44 np0005480824 systemd[1]: libpod-conmon-cec256175f4db260cfd33b2ab8e51d906e1bd36911c249f06e71d5ee05d355f9.scope: Deactivated successfully.
Oct 11 00:05:44 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 00:05:44 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 11 00:05:44 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 00:05:44 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 11 00:05:44 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 969a277f-d918-4849-a21d-1cb806b5cf84 does not exist
Oct 11 00:05:44 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev d569c3ee-2882-478f-90a1-61906f010906 does not exist
Oct 11 00:05:44 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 00:05:44 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1582484020' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 00:05:44 np0005480824 nova_compute[260089]: 2025-10-11 04:05:44.753 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:05:44 np0005480824 nova_compute[260089]: 2025-10-11 04:05:44.829 2 DEBUG nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] skipping disk for instance-0000001c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 00:05:44 np0005480824 nova_compute[260089]: 2025-10-11 04:05:44.830 2 DEBUG nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] skipping disk for instance-0000001c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 00:05:44 np0005480824 nova_compute[260089]: 2025-10-11 04:05:44.973 2 WARNING nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 00:05:44 np0005480824 nova_compute[260089]: 2025-10-11 04:05:44.974 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4137MB free_disk=59.96735763549805GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 00:05:44 np0005480824 nova_compute[260089]: 2025-10-11 04:05:44.975 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:05:44 np0005480824 nova_compute[260089]: 2025-10-11 04:05:44.975 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:05:45 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1874: 321 pgs: 321 active+clean; 134 MiB data, 479 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 96 op/s
Oct 11 00:05:45 np0005480824 nova_compute[260089]: 2025-10-11 04:05:45.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:05:45 np0005480824 nova_compute[260089]: 2025-10-11 04:05:45.088 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Instance 7377f851-2bfd-4f43-9a9f-e08c288708bb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 00:05:45 np0005480824 nova_compute[260089]: 2025-10-11 04:05:45.089 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 00:05:45 np0005480824 nova_compute[260089]: 2025-10-11 04:05:45.090 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 00:05:45 np0005480824 nova_compute[260089]: 2025-10-11 04:05:45.141 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:05:45 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 11 00:05:45 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 11 00:05:45 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 00:05:45 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/729965829' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 00:05:45 np0005480824 nova_compute[260089]: 2025-10-11 04:05:45.584 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:05:45 np0005480824 nova_compute[260089]: 2025-10-11 04:05:45.595 2 DEBUG nova.compute.provider_tree [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 00:05:45 np0005480824 nova_compute[260089]: 2025-10-11 04:05:45.624 2 DEBUG nova.scheduler.client.report [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 00:05:45 np0005480824 nova_compute[260089]: 2025-10-11 04:05:45.650 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 00:05:45 np0005480824 nova_compute[260089]: 2025-10-11 04:05:45.650 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:05:45 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e495 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:05:46 np0005480824 podman[304493]: 2025-10-11 04:05:46.030476903 +0000 UTC m=+0.080109870 container health_status f2b19cad22d0fbdd185b264c3c8b6443d09a02319dc9fb0585dc69a0f24a758d (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Oct 11 00:05:46 np0005480824 podman[304492]: 2025-10-11 04:05:46.036158307 +0000 UTC m=+0.085633451 container health_status 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 11 00:05:46 np0005480824 nova_compute[260089]: 2025-10-11 04:05:46.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:05:47 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1875: 321 pgs: 321 active+clean; 134 MiB data, 479 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 96 op/s
Oct 11 00:05:47 np0005480824 nova_compute[260089]: 2025-10-11 04:05:47.649 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:05:47 np0005480824 nova_compute[260089]: 2025-10-11 04:05:47.650 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 00:05:47 np0005480824 nova_compute[260089]: 2025-10-11 04:05:47.650 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 11 00:05:48 np0005480824 nova_compute[260089]: 2025-10-11 04:05:48.163 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "refresh_cache-7377f851-2bfd-4f43-9a9f-e08c288708bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 00:05:48 np0005480824 nova_compute[260089]: 2025-10-11 04:05:48.164 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquired lock "refresh_cache-7377f851-2bfd-4f43-9a9f-e08c288708bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 00:05:48 np0005480824 nova_compute[260089]: 2025-10-11 04:05:48.164 2 DEBUG nova.network.neutron [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 11 00:05:48 np0005480824 nova_compute[260089]: 2025-10-11 04:05:48.164 2 DEBUG nova.objects.instance [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7377f851-2bfd-4f43-9a9f-e08c288708bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 00:05:49 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1876: 321 pgs: 321 active+clean; 134 MiB data, 479 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Oct 11 00:05:50 np0005480824 nova_compute[260089]: 2025-10-11 04:05:50.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:05:50 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e495 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:05:51 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1877: 321 pgs: 321 active+clean; 134 MiB data, 479 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 72 op/s
Oct 11 00:05:51 np0005480824 nova_compute[260089]: 2025-10-11 04:05:51.619 2 DEBUG nova.network.neutron [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Updating instance_info_cache with network_info: [{"id": "2b7dcbb9-eeec-47e7-aeb5-7e8752f195c3", "address": "fa:16:3e:50:e1:bc", "network": {"id": "abadcf46-9a41-4911-85e0-fbcde2d48b79", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-654501219-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f367c6c5e8f479399a2004c82cfaff0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b7dcbb9-ee", "ovs_interfaceid": "2b7dcbb9-eeec-47e7-aeb5-7e8752f195c3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 00:05:51 np0005480824 nova_compute[260089]: 2025-10-11 04:05:51.647 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Releasing lock "refresh_cache-7377f851-2bfd-4f43-9a9f-e08c288708bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 00:05:51 np0005480824 nova_compute[260089]: 2025-10-11 04:05:51.648 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 11 00:05:51 np0005480824 nova_compute[260089]: 2025-10-11 04:05:51.649 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:05:51 np0005480824 nova_compute[260089]: 2025-10-11 04:05:51.650 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:05:51 np0005480824 nova_compute[260089]: 2025-10-11 04:05:51.650 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:05:51 np0005480824 nova_compute[260089]: 2025-10-11 04:05:51.651 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 00:05:51 np0005480824 nova_compute[260089]: 2025-10-11 04:05:51.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:05:52 np0005480824 ovn_controller[152667]: 2025-10-11T04:05:52Z|00070|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:50:e1:bc 10.100.0.13
Oct 11 00:05:52 np0005480824 ovn_controller[152667]: 2025-10-11T04:05:52Z|00071|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:50:e1:bc 10.100.0.13
Oct 11 00:05:53 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1878: 321 pgs: 321 active+clean; 167 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 136 op/s
Oct 11 00:05:54 np0005480824 podman[304537]: 2025-10-11 04:05:54.080001335 +0000 UTC m=+0.128251996 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 11 00:05:55 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1879: 321 pgs: 321 active+clean; 167 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 11 00:05:55 np0005480824 nova_compute[260089]: 2025-10-11 04:05:55.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:05:55 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e495 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:05:56 np0005480824 nova_compute[260089]: 2025-10-11 04:05:56.294 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:05:56 np0005480824 nova_compute[260089]: 2025-10-11 04:05:56.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:05:57 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1880: 321 pgs: 321 active+clean; 167 MiB data, 526 MiB used, 59 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 11 00:05:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 00:05:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 00:05:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 00:05:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 00:05:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 00:05:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 00:05:58 np0005480824 nova_compute[260089]: 2025-10-11 04:05:58.495 2 DEBUG oslo_concurrency.lockutils [None req-c1fb998b-8593-4659-a077-200c17e343fb f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Acquiring lock "7377f851-2bfd-4f43-9a9f-e08c288708bb" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:05:58 np0005480824 nova_compute[260089]: 2025-10-11 04:05:58.496 2 DEBUG oslo_concurrency.lockutils [None req-c1fb998b-8593-4659-a077-200c17e343fb f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Lock "7377f851-2bfd-4f43-9a9f-e08c288708bb" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:05:58 np0005480824 nova_compute[260089]: 2025-10-11 04:05:58.518 2 DEBUG nova.objects.instance [None req-c1fb998b-8593-4659-a077-200c17e343fb f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Lazy-loading 'flavor' on Instance uuid 7377f851-2bfd-4f43-9a9f-e08c288708bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 00:05:58 np0005480824 nova_compute[260089]: 2025-10-11 04:05:58.602 2 DEBUG oslo_concurrency.lockutils [None req-c1fb998b-8593-4659-a077-200c17e343fb f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Lock "7377f851-2bfd-4f43-9a9f-e08c288708bb" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.106s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:05:58 np0005480824 nova_compute[260089]: 2025-10-11 04:05:58.775 2 DEBUG oslo_concurrency.lockutils [None req-c1fb998b-8593-4659-a077-200c17e343fb f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Acquiring lock "7377f851-2bfd-4f43-9a9f-e08c288708bb" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:05:58 np0005480824 nova_compute[260089]: 2025-10-11 04:05:58.776 2 DEBUG oslo_concurrency.lockutils [None req-c1fb998b-8593-4659-a077-200c17e343fb f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Lock "7377f851-2bfd-4f43-9a9f-e08c288708bb" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:05:58 np0005480824 nova_compute[260089]: 2025-10-11 04:05:58.776 2 INFO nova.compute.manager [None req-c1fb998b-8593-4659-a077-200c17e343fb f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Attaching volume d1c43d0c-6043-453c-89ba-4d660c1c0467 to /dev/vdb#033[00m
Oct 11 00:05:58 np0005480824 nova_compute[260089]: 2025-10-11 04:05:58.902 2 DEBUG os_brick.utils [None req-c1fb998b-8593-4659-a077-200c17e343fb f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct 11 00:05:58 np0005480824 nova_compute[260089]: 2025-10-11 04:05:58.903 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:05:58 np0005480824 nova_compute[260089]: 2025-10-11 04:05:58.915 676 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:05:58 np0005480824 nova_compute[260089]: 2025-10-11 04:05:58.916 676 DEBUG oslo.privsep.daemon [-] privsep: reply[3e3eab69-8017-4ef6-bf70-ade8d747b945]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:05:58 np0005480824 nova_compute[260089]: 2025-10-11 04:05:58.917 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:05:58 np0005480824 nova_compute[260089]: 2025-10-11 04:05:58.928 676 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:05:58 np0005480824 nova_compute[260089]: 2025-10-11 04:05:58.928 676 DEBUG oslo.privsep.daemon [-] privsep: reply[b768ca9d-cde2-40cb-8fe4-ee6b3872644f]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d5d671ddab5a', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:05:58 np0005480824 nova_compute[260089]: 2025-10-11 04:05:58.930 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:05:58 np0005480824 nova_compute[260089]: 2025-10-11 04:05:58.940 676 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:05:58 np0005480824 nova_compute[260089]: 2025-10-11 04:05:58.940 676 DEBUG oslo.privsep.daemon [-] privsep: reply[37fb4fe9-28ba-432e-8e9d-e3f894ad5eff]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:05:58 np0005480824 nova_compute[260089]: 2025-10-11 04:05:58.941 676 DEBUG oslo.privsep.daemon [-] privsep: reply[17b629ba-414c-45d3-a7ba-0e70f08f938b]: (4, 'fb3a2fb1-9efa-43f0-a057-bf422ac6b8d7') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:05:58 np0005480824 nova_compute[260089]: 2025-10-11 04:05:58.942 2 DEBUG oslo_concurrency.processutils [None req-c1fb998b-8593-4659-a077-200c17e343fb f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:05:58 np0005480824 nova_compute[260089]: 2025-10-11 04:05:58.964 2 DEBUG oslo_concurrency.processutils [None req-c1fb998b-8593-4659-a077-200c17e343fb f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] CMD "nvme version" returned: 0 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:05:58 np0005480824 nova_compute[260089]: 2025-10-11 04:05:58.966 2 DEBUG os_brick.initiator.connectors.lightos [None req-c1fb998b-8593-4659-a077-200c17e343fb f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct 11 00:05:58 np0005480824 nova_compute[260089]: 2025-10-11 04:05:58.967 2 DEBUG os_brick.initiator.connectors.lightos [None req-c1fb998b-8593-4659-a077-200c17e343fb f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct 11 00:05:58 np0005480824 nova_compute[260089]: 2025-10-11 04:05:58.967 2 DEBUG os_brick.initiator.connectors.lightos [None req-c1fb998b-8593-4659-a077-200c17e343fb f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct 11 00:05:58 np0005480824 nova_compute[260089]: 2025-10-11 04:05:58.967 2 DEBUG os_brick.utils [None req-c1fb998b-8593-4659-a077-200c17e343fb f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] <== get_connector_properties: return (65ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d5d671ddab5a', 'do_local_attach': False, 'nvme_hostid': '83042a20-0f72-4c47-8453-e72ead378624', 'system uuid': 'fb3a2fb1-9efa-43f0-a057-bf422ac6b8d7', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct 11 00:05:58 np0005480824 nova_compute[260089]: 2025-10-11 04:05:58.968 2 DEBUG nova.virt.block_device [None req-c1fb998b-8593-4659-a077-200c17e343fb f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Updating existing volume attachment record: e8e89739-b487-48d8-83ba-b1461ba94337 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct 11 00:05:59 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1881: 321 pgs: 321 active+clean; 167 MiB data, 526 MiB used, 59 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 11 00:05:59 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 00:05:59 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3457625138' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 00:05:59 np0005480824 nova_compute[260089]: 2025-10-11 04:05:59.799 2 DEBUG os_brick.encryptors [None req-c1fb998b-8593-4659-a077-200c17e343fb f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Using volume encryption metadata '{'encryption_key_id': 'c39ef5ee-1c34-457f-9dbd-475d14d08587', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-d1c43d0c-6043-453c-89ba-4d660c1c0467', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'd1c43d0c-6043-453c-89ba-4d660c1c0467', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '7377f851-2bfd-4f43-9a9f-e08c288708bb', 'attached_at': '', 'detached_at': '', 'volume_id': 'd1c43d0c-6043-453c-89ba-4d660c1c0467', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Oct 11 00:05:59 np0005480824 nova_compute[260089]: 2025-10-11 04:05:59.805 2 DEBUG barbicanclient.client [None req-c1fb998b-8593-4659-a077-200c17e343fb f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163#033[00m
Oct 11 00:05:59 np0005480824 nova_compute[260089]: 2025-10-11 04:05:59.820 2 DEBUG barbicanclient.v1.secrets [None req-c1fb998b-8593-4659-a077-200c17e343fb f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/c39ef5ee-1c34-457f-9dbd-475d14d08587 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514#033[00m
Oct 11 00:05:59 np0005480824 nova_compute[260089]: 2025-10-11 04:05:59.820 2 INFO barbicanclient.base [None req-c1fb998b-8593-4659-a077-200c17e343fb f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Calculated Secrets uuid ref: secrets/c39ef5ee-1c34-457f-9dbd-475d14d08587#033[00m
Oct 11 00:05:59 np0005480824 nova_compute[260089]: 2025-10-11 04:05:59.847 2 DEBUG barbicanclient.client [None req-c1fb998b-8593-4659-a077-200c17e343fb f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:05:59 np0005480824 nova_compute[260089]: 2025-10-11 04:05:59.847 2 INFO barbicanclient.base [None req-c1fb998b-8593-4659-a077-200c17e343fb f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Calculated Secrets uuid ref: secrets/c39ef5ee-1c34-457f-9dbd-475d14d08587#033[00m
Oct 11 00:05:59 np0005480824 nova_compute[260089]: 2025-10-11 04:05:59.871 2 DEBUG barbicanclient.client [None req-c1fb998b-8593-4659-a077-200c17e343fb f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:05:59 np0005480824 nova_compute[260089]: 2025-10-11 04:05:59.871 2 INFO barbicanclient.base [None req-c1fb998b-8593-4659-a077-200c17e343fb f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Calculated Secrets uuid ref: secrets/c39ef5ee-1c34-457f-9dbd-475d14d08587#033[00m
Oct 11 00:05:59 np0005480824 nova_compute[260089]: 2025-10-11 04:05:59.896 2 DEBUG barbicanclient.client [None req-c1fb998b-8593-4659-a077-200c17e343fb f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:05:59 np0005480824 nova_compute[260089]: 2025-10-11 04:05:59.896 2 INFO barbicanclient.base [None req-c1fb998b-8593-4659-a077-200c17e343fb f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Calculated Secrets uuid ref: secrets/c39ef5ee-1c34-457f-9dbd-475d14d08587#033[00m
Oct 11 00:05:59 np0005480824 nova_compute[260089]: 2025-10-11 04:05:59.927 2 DEBUG barbicanclient.client [None req-c1fb998b-8593-4659-a077-200c17e343fb f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:05:59 np0005480824 nova_compute[260089]: 2025-10-11 04:05:59.927 2 INFO barbicanclient.base [None req-c1fb998b-8593-4659-a077-200c17e343fb f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Calculated Secrets uuid ref: secrets/c39ef5ee-1c34-457f-9dbd-475d14d08587#033[00m
Oct 11 00:05:59 np0005480824 nova_compute[260089]: 2025-10-11 04:05:59.949 2 DEBUG barbicanclient.client [None req-c1fb998b-8593-4659-a077-200c17e343fb f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:05:59 np0005480824 nova_compute[260089]: 2025-10-11 04:05:59.949 2 INFO barbicanclient.base [None req-c1fb998b-8593-4659-a077-200c17e343fb f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Calculated Secrets uuid ref: secrets/c39ef5ee-1c34-457f-9dbd-475d14d08587#033[00m
Oct 11 00:05:59 np0005480824 nova_compute[260089]: 2025-10-11 04:05:59.977 2 DEBUG barbicanclient.client [None req-c1fb998b-8593-4659-a077-200c17e343fb f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:05:59 np0005480824 nova_compute[260089]: 2025-10-11 04:05:59.977 2 INFO barbicanclient.base [None req-c1fb998b-8593-4659-a077-200c17e343fb f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Calculated Secrets uuid ref: secrets/c39ef5ee-1c34-457f-9dbd-475d14d08587#033[00m
Oct 11 00:05:59 np0005480824 nova_compute[260089]: 2025-10-11 04:05:59.994 2 DEBUG barbicanclient.client [None req-c1fb998b-8593-4659-a077-200c17e343fb f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:05:59 np0005480824 nova_compute[260089]: 2025-10-11 04:05:59.995 2 INFO barbicanclient.base [None req-c1fb998b-8593-4659-a077-200c17e343fb f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Calculated Secrets uuid ref: secrets/c39ef5ee-1c34-457f-9dbd-475d14d08587#033[00m
Oct 11 00:06:00 np0005480824 nova_compute[260089]: 2025-10-11 04:06:00.091 2 DEBUG barbicanclient.client [None req-c1fb998b-8593-4659-a077-200c17e343fb f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:06:00 np0005480824 nova_compute[260089]: 2025-10-11 04:06:00.091 2 INFO barbicanclient.base [None req-c1fb998b-8593-4659-a077-200c17e343fb f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Calculated Secrets uuid ref: secrets/c39ef5ee-1c34-457f-9dbd-475d14d08587#033[00m
Oct 11 00:06:00 np0005480824 nova_compute[260089]: 2025-10-11 04:06:00.115 2 DEBUG barbicanclient.client [None req-c1fb998b-8593-4659-a077-200c17e343fb f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:06:00 np0005480824 nova_compute[260089]: 2025-10-11 04:06:00.115 2 INFO barbicanclient.base [None req-c1fb998b-8593-4659-a077-200c17e343fb f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Calculated Secrets uuid ref: secrets/c39ef5ee-1c34-457f-9dbd-475d14d08587#033[00m
Oct 11 00:06:00 np0005480824 nova_compute[260089]: 2025-10-11 04:06:00.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:06:00 np0005480824 nova_compute[260089]: 2025-10-11 04:06:00.132 2 DEBUG barbicanclient.client [None req-c1fb998b-8593-4659-a077-200c17e343fb f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:06:00 np0005480824 nova_compute[260089]: 2025-10-11 04:06:00.133 2 INFO barbicanclient.base [None req-c1fb998b-8593-4659-a077-200c17e343fb f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Calculated Secrets uuid ref: secrets/c39ef5ee-1c34-457f-9dbd-475d14d08587#033[00m
Oct 11 00:06:00 np0005480824 nova_compute[260089]: 2025-10-11 04:06:00.154 2 DEBUG barbicanclient.client [None req-c1fb998b-8593-4659-a077-200c17e343fb f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:06:00 np0005480824 nova_compute[260089]: 2025-10-11 04:06:00.154 2 INFO barbicanclient.base [None req-c1fb998b-8593-4659-a077-200c17e343fb f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Calculated Secrets uuid ref: secrets/c39ef5ee-1c34-457f-9dbd-475d14d08587#033[00m
Oct 11 00:06:00 np0005480824 nova_compute[260089]: 2025-10-11 04:06:00.172 2 DEBUG barbicanclient.client [None req-c1fb998b-8593-4659-a077-200c17e343fb f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:06:00 np0005480824 nova_compute[260089]: 2025-10-11 04:06:00.173 2 INFO barbicanclient.base [None req-c1fb998b-8593-4659-a077-200c17e343fb f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Calculated Secrets uuid ref: secrets/c39ef5ee-1c34-457f-9dbd-475d14d08587#033[00m
Oct 11 00:06:00 np0005480824 nova_compute[260089]: 2025-10-11 04:06:00.191 2 DEBUG barbicanclient.client [None req-c1fb998b-8593-4659-a077-200c17e343fb f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:06:00 np0005480824 nova_compute[260089]: 2025-10-11 04:06:00.192 2 INFO barbicanclient.base [None req-c1fb998b-8593-4659-a077-200c17e343fb f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Calculated Secrets uuid ref: secrets/c39ef5ee-1c34-457f-9dbd-475d14d08587#033[00m
Oct 11 00:06:00 np0005480824 nova_compute[260089]: 2025-10-11 04:06:00.213 2 DEBUG barbicanclient.client [None req-c1fb998b-8593-4659-a077-200c17e343fb f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:06:00 np0005480824 nova_compute[260089]: 2025-10-11 04:06:00.214 2 INFO barbicanclient.base [None req-c1fb998b-8593-4659-a077-200c17e343fb f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Calculated Secrets uuid ref: secrets/c39ef5ee-1c34-457f-9dbd-475d14d08587#033[00m
Oct 11 00:06:00 np0005480824 nova_compute[260089]: 2025-10-11 04:06:00.231 2 DEBUG barbicanclient.client [None req-c1fb998b-8593-4659-a077-200c17e343fb f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:06:00 np0005480824 nova_compute[260089]: 2025-10-11 04:06:00.231 2 DEBUG nova.virt.libvirt.host [None req-c1fb998b-8593-4659-a077-200c17e343fb f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Secret XML: <secret ephemeral="no" private="no">
Oct 11 00:06:00 np0005480824 nova_compute[260089]:  <usage type="volume">
Oct 11 00:06:00 np0005480824 nova_compute[260089]:    <volume>d1c43d0c-6043-453c-89ba-4d660c1c0467</volume>
Oct 11 00:06:00 np0005480824 nova_compute[260089]:  </usage>
Oct 11 00:06:00 np0005480824 nova_compute[260089]: </secret>
Oct 11 00:06:00 np0005480824 nova_compute[260089]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131#033[00m
Oct 11 00:06:00 np0005480824 nova_compute[260089]: 2025-10-11 04:06:00.243 2 DEBUG nova.objects.instance [None req-c1fb998b-8593-4659-a077-200c17e343fb f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Lazy-loading 'flavor' on Instance uuid 7377f851-2bfd-4f43-9a9f-e08c288708bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 00:06:00 np0005480824 nova_compute[260089]: 2025-10-11 04:06:00.262 2 DEBUG nova.virt.libvirt.driver [None req-c1fb998b-8593-4659-a077-200c17e343fb f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Attempting to attach volume d1c43d0c-6043-453c-89ba-4d660c1c0467 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Oct 11 00:06:00 np0005480824 nova_compute[260089]: 2025-10-11 04:06:00.265 2 DEBUG nova.virt.libvirt.guest [None req-c1fb998b-8593-4659-a077-200c17e343fb f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] attach device xml: <disk type="network" device="disk">
Oct 11 00:06:00 np0005480824 nova_compute[260089]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 00:06:00 np0005480824 nova_compute[260089]:  <source protocol="rbd" name="volumes/volume-d1c43d0c-6043-453c-89ba-4d660c1c0467">
Oct 11 00:06:00 np0005480824 nova_compute[260089]:    <host name="192.168.122.100" port="6789"/>
Oct 11 00:06:00 np0005480824 nova_compute[260089]:  </source>
Oct 11 00:06:00 np0005480824 nova_compute[260089]:  <auth username="openstack">
Oct 11 00:06:00 np0005480824 nova_compute[260089]:    <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 11 00:06:00 np0005480824 nova_compute[260089]:  </auth>
Oct 11 00:06:00 np0005480824 nova_compute[260089]:  <target dev="vdb" bus="virtio"/>
Oct 11 00:06:00 np0005480824 nova_compute[260089]:  <serial>d1c43d0c-6043-453c-89ba-4d660c1c0467</serial>
Oct 11 00:06:00 np0005480824 nova_compute[260089]:  <encryption format="luks">
Oct 11 00:06:00 np0005480824 nova_compute[260089]:    <secret type="passphrase" uuid="388bd12c-2986-44ea-9844-2e77ee80ab8b"/>
Oct 11 00:06:00 np0005480824 nova_compute[260089]:  </encryption>
Oct 11 00:06:00 np0005480824 nova_compute[260089]: </disk>
Oct 11 00:06:00 np0005480824 nova_compute[260089]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct 11 00:06:00 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e495 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:06:01 np0005480824 podman[304593]: 2025-10-11 04:06:01.028082875 +0000 UTC m=+0.085217161 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 11 00:06:01 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1882: 321 pgs: 321 active+clean; 167 MiB data, 526 MiB used, 59 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 11 00:06:01 np0005480824 nova_compute[260089]: 2025-10-11 04:06:01.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:06:02 np0005480824 nova_compute[260089]: 2025-10-11 04:06:02.785 2 DEBUG nova.virt.libvirt.driver [None req-c1fb998b-8593-4659-a077-200c17e343fb f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 00:06:02 np0005480824 nova_compute[260089]: 2025-10-11 04:06:02.786 2 DEBUG nova.virt.libvirt.driver [None req-c1fb998b-8593-4659-a077-200c17e343fb f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 00:06:02 np0005480824 nova_compute[260089]: 2025-10-11 04:06:02.786 2 DEBUG nova.virt.libvirt.driver [None req-c1fb998b-8593-4659-a077-200c17e343fb f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 00:06:02 np0005480824 nova_compute[260089]: 2025-10-11 04:06:02.786 2 DEBUG nova.virt.libvirt.driver [None req-c1fb998b-8593-4659-a077-200c17e343fb f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] No VIF found with MAC fa:16:3e:50:e1:bc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 00:06:02 np0005480824 nova_compute[260089]: 2025-10-11 04:06:02.989 2 DEBUG oslo_concurrency.lockutils [None req-c1fb998b-8593-4659-a077-200c17e343fb f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Lock "7377f851-2bfd-4f43-9a9f-e08c288708bb" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 4.213s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:06:03 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1883: 321 pgs: 321 active+clean; 167 MiB data, 526 MiB used, 59 GiB / 60 GiB avail; 335 KiB/s rd, 2.1 MiB/s wr, 70 op/s
Oct 11 00:06:03 np0005480824 nova_compute[260089]: 2025-10-11 04:06:03.752 2 DEBUG oslo_concurrency.lockutils [None req-e348431d-2fa2-43a7-abf6-2ad204b139c8 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Acquiring lock "7377f851-2bfd-4f43-9a9f-e08c288708bb" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:06:03 np0005480824 nova_compute[260089]: 2025-10-11 04:06:03.753 2 DEBUG oslo_concurrency.lockutils [None req-e348431d-2fa2-43a7-abf6-2ad204b139c8 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Lock "7377f851-2bfd-4f43-9a9f-e08c288708bb" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:06:03 np0005480824 nova_compute[260089]: 2025-10-11 04:06:03.773 2 INFO nova.compute.manager [None req-e348431d-2fa2-43a7-abf6-2ad204b139c8 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Detaching volume d1c43d0c-6043-453c-89ba-4d660c1c0467#033[00m
Oct 11 00:06:03 np0005480824 nova_compute[260089]: 2025-10-11 04:06:03.903 2 INFO nova.virt.block_device [None req-e348431d-2fa2-43a7-abf6-2ad204b139c8 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Attempting to driver detach volume d1c43d0c-6043-453c-89ba-4d660c1c0467 from mountpoint /dev/vdb#033[00m
Oct 11 00:06:04 np0005480824 nova_compute[260089]: 2025-10-11 04:06:04.033 2 DEBUG os_brick.encryptors [None req-e348431d-2fa2-43a7-abf6-2ad204b139c8 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Using volume encryption metadata '{'encryption_key_id': 'c39ef5ee-1c34-457f-9dbd-475d14d08587', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-d1c43d0c-6043-453c-89ba-4d660c1c0467', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'd1c43d0c-6043-453c-89ba-4d660c1c0467', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '7377f851-2bfd-4f43-9a9f-e08c288708bb', 'attached_at': '', 'detached_at': '', 'volume_id': 'd1c43d0c-6043-453c-89ba-4d660c1c0467', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Oct 11 00:06:04 np0005480824 nova_compute[260089]: 2025-10-11 04:06:04.039 2 DEBUG nova.virt.libvirt.driver [None req-e348431d-2fa2-43a7-abf6-2ad204b139c8 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Attempting to detach device vdb from instance 7377f851-2bfd-4f43-9a9f-e08c288708bb from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct 11 00:06:04 np0005480824 nova_compute[260089]: 2025-10-11 04:06:04.039 2 DEBUG nova.virt.libvirt.guest [None req-e348431d-2fa2-43a7-abf6-2ad204b139c8 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] detach device xml: <disk type="network" device="disk">
Oct 11 00:06:04 np0005480824 nova_compute[260089]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 00:06:04 np0005480824 nova_compute[260089]:  <source protocol="rbd" name="volumes/volume-d1c43d0c-6043-453c-89ba-4d660c1c0467">
Oct 11 00:06:04 np0005480824 nova_compute[260089]:    <host name="192.168.122.100" port="6789"/>
Oct 11 00:06:04 np0005480824 nova_compute[260089]:  </source>
Oct 11 00:06:04 np0005480824 nova_compute[260089]:  <target dev="vdb" bus="virtio"/>
Oct 11 00:06:04 np0005480824 nova_compute[260089]:  <serial>d1c43d0c-6043-453c-89ba-4d660c1c0467</serial>
Oct 11 00:06:04 np0005480824 nova_compute[260089]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 11 00:06:04 np0005480824 nova_compute[260089]:  <encryption format="luks">
Oct 11 00:06:04 np0005480824 nova_compute[260089]:    <secret type="passphrase" uuid="388bd12c-2986-44ea-9844-2e77ee80ab8b"/>
Oct 11 00:06:04 np0005480824 nova_compute[260089]:  </encryption>
Oct 11 00:06:04 np0005480824 nova_compute[260089]: </disk>
Oct 11 00:06:04 np0005480824 nova_compute[260089]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct 11 00:06:04 np0005480824 nova_compute[260089]: 2025-10-11 04:06:04.047 2 INFO nova.virt.libvirt.driver [None req-e348431d-2fa2-43a7-abf6-2ad204b139c8 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Successfully detached device vdb from instance 7377f851-2bfd-4f43-9a9f-e08c288708bb from the persistent domain config.#033[00m
Oct 11 00:06:04 np0005480824 nova_compute[260089]: 2025-10-11 04:06:04.047 2 DEBUG nova.virt.libvirt.driver [None req-e348431d-2fa2-43a7-abf6-2ad204b139c8 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 7377f851-2bfd-4f43-9a9f-e08c288708bb from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Oct 11 00:06:04 np0005480824 nova_compute[260089]: 2025-10-11 04:06:04.048 2 DEBUG nova.virt.libvirt.guest [None req-e348431d-2fa2-43a7-abf6-2ad204b139c8 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] detach device xml: <disk type="network" device="disk">
Oct 11 00:06:04 np0005480824 nova_compute[260089]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 00:06:04 np0005480824 nova_compute[260089]:  <source protocol="rbd" name="volumes/volume-d1c43d0c-6043-453c-89ba-4d660c1c0467">
Oct 11 00:06:04 np0005480824 nova_compute[260089]:    <host name="192.168.122.100" port="6789"/>
Oct 11 00:06:04 np0005480824 nova_compute[260089]:  </source>
Oct 11 00:06:04 np0005480824 nova_compute[260089]:  <target dev="vdb" bus="virtio"/>
Oct 11 00:06:04 np0005480824 nova_compute[260089]:  <serial>d1c43d0c-6043-453c-89ba-4d660c1c0467</serial>
Oct 11 00:06:04 np0005480824 nova_compute[260089]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 11 00:06:04 np0005480824 nova_compute[260089]:  <encryption format="luks">
Oct 11 00:06:04 np0005480824 nova_compute[260089]:    <secret type="passphrase" uuid="388bd12c-2986-44ea-9844-2e77ee80ab8b"/>
Oct 11 00:06:04 np0005480824 nova_compute[260089]:  </encryption>
Oct 11 00:06:04 np0005480824 nova_compute[260089]: </disk>
Oct 11 00:06:04 np0005480824 nova_compute[260089]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct 11 00:06:04 np0005480824 nova_compute[260089]: 2025-10-11 04:06:04.149 2 DEBUG nova.virt.libvirt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Received event <DeviceRemovedEvent: 1760155564.149176, 7377f851-2bfd-4f43-9a9f-e08c288708bb => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Oct 11 00:06:04 np0005480824 nova_compute[260089]: 2025-10-11 04:06:04.150 2 DEBUG nova.virt.libvirt.driver [None req-e348431d-2fa2-43a7-abf6-2ad204b139c8 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 7377f851-2bfd-4f43-9a9f-e08c288708bb _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Oct 11 00:06:04 np0005480824 nova_compute[260089]: 2025-10-11 04:06:04.153 2 INFO nova.virt.libvirt.driver [None req-e348431d-2fa2-43a7-abf6-2ad204b139c8 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Successfully detached device vdb from instance 7377f851-2bfd-4f43-9a9f-e08c288708bb from the live domain config.#033[00m
Oct 11 00:06:04 np0005480824 nova_compute[260089]: 2025-10-11 04:06:04.310 2 DEBUG nova.objects.instance [None req-e348431d-2fa2-43a7-abf6-2ad204b139c8 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Lazy-loading 'flavor' on Instance uuid 7377f851-2bfd-4f43-9a9f-e08c288708bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 00:06:04 np0005480824 nova_compute[260089]: 2025-10-11 04:06:04.360 2 DEBUG oslo_concurrency.lockutils [None req-e348431d-2fa2-43a7-abf6-2ad204b139c8 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Lock "7377f851-2bfd-4f43-9a9f-e08c288708bb" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.606s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:06:05 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1884: 321 pgs: 321 active+clean; 167 MiB data, 526 MiB used, 59 GiB / 60 GiB avail; 5.7 KiB/s rd, 12 KiB/s wr, 6 op/s
Oct 11 00:06:05 np0005480824 nova_compute[260089]: 2025-10-11 04:06:05.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:06:05 np0005480824 nova_compute[260089]: 2025-10-11 04:06:05.420 2 DEBUG oslo_concurrency.lockutils [None req-b7833dbd-3650-494f-8f2e-d0dba1b16eca f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Acquiring lock "7377f851-2bfd-4f43-9a9f-e08c288708bb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:06:05 np0005480824 nova_compute[260089]: 2025-10-11 04:06:05.421 2 DEBUG oslo_concurrency.lockutils [None req-b7833dbd-3650-494f-8f2e-d0dba1b16eca f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Lock "7377f851-2bfd-4f43-9a9f-e08c288708bb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:06:05 np0005480824 nova_compute[260089]: 2025-10-11 04:06:05.421 2 DEBUG oslo_concurrency.lockutils [None req-b7833dbd-3650-494f-8f2e-d0dba1b16eca f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Acquiring lock "7377f851-2bfd-4f43-9a9f-e08c288708bb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:06:05 np0005480824 nova_compute[260089]: 2025-10-11 04:06:05.422 2 DEBUG oslo_concurrency.lockutils [None req-b7833dbd-3650-494f-8f2e-d0dba1b16eca f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Lock "7377f851-2bfd-4f43-9a9f-e08c288708bb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:06:05 np0005480824 nova_compute[260089]: 2025-10-11 04:06:05.422 2 DEBUG oslo_concurrency.lockutils [None req-b7833dbd-3650-494f-8f2e-d0dba1b16eca f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Lock "7377f851-2bfd-4f43-9a9f-e08c288708bb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:06:05 np0005480824 nova_compute[260089]: 2025-10-11 04:06:05.424 2 INFO nova.compute.manager [None req-b7833dbd-3650-494f-8f2e-d0dba1b16eca f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Terminating instance#033[00m
Oct 11 00:06:05 np0005480824 nova_compute[260089]: 2025-10-11 04:06:05.426 2 DEBUG nova.compute.manager [None req-b7833dbd-3650-494f-8f2e-d0dba1b16eca f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 00:06:05 np0005480824 kernel: tap2b7dcbb9-ee (unregistering): left promiscuous mode
Oct 11 00:06:05 np0005480824 NetworkManager[44969]: <info>  [1760155565.4820] device (tap2b7dcbb9-ee): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 00:06:05 np0005480824 nova_compute[260089]: 2025-10-11 04:06:05.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:06:05 np0005480824 ovn_controller[152667]: 2025-10-11T04:06:05Z|00263|binding|INFO|Releasing lport 2b7dcbb9-eeec-47e7-aeb5-7e8752f195c3 from this chassis (sb_readonly=0)
Oct 11 00:06:05 np0005480824 ovn_controller[152667]: 2025-10-11T04:06:05Z|00264|binding|INFO|Setting lport 2b7dcbb9-eeec-47e7-aeb5-7e8752f195c3 down in Southbound
Oct 11 00:06:05 np0005480824 ovn_controller[152667]: 2025-10-11T04:06:05Z|00265|binding|INFO|Removing iface tap2b7dcbb9-ee ovn-installed in OVS
Oct 11 00:06:05 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:06:05.500 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:50:e1:bc 10.100.0.13'], port_security=['fa:16:3e:50:e1:bc 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '7377f851-2bfd-4f43-9a9f-e08c288708bb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-abadcf46-9a41-4911-85e0-fbcde2d48b79', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f367c6c5e8f479399a2004c82cfaff0', 'neutron:revision_number': '4', 'neutron:security_group_ids': '17697d08-7b58-4e87-b49c-4e2b77e98db6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.248'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b37e59a3-7c4f-47c2-acd9-d3f9dd8c5f52, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], logical_port=2b7dcbb9-eeec-47e7-aeb5-7e8752f195c3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 00:06:05 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:06:05.502 162245 INFO neutron.agent.ovn.metadata.agent [-] Port 2b7dcbb9-eeec-47e7-aeb5-7e8752f195c3 in datapath abadcf46-9a41-4911-85e0-fbcde2d48b79 unbound from our chassis#033[00m
Oct 11 00:06:05 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:06:05.505 162245 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network abadcf46-9a41-4911-85e0-fbcde2d48b79, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 00:06:05 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:06:05.506 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[8c6591ed-5492-42fc-8a54-f26196b946a6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:06:05 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:06:05.507 162245 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-abadcf46-9a41-4911-85e0-fbcde2d48b79 namespace which is not needed anymore#033[00m
Oct 11 00:06:05 np0005480824 nova_compute[260089]: 2025-10-11 04:06:05.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:06:05 np0005480824 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d0000001c.scope: Deactivated successfully.
Oct 11 00:06:05 np0005480824 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d0000001c.scope: Consumed 16.153s CPU time.
Oct 11 00:06:05 np0005480824 systemd-machined[215071]: Machine qemu-28-instance-0000001c terminated.
Oct 11 00:06:05 np0005480824 neutron-haproxy-ovnmeta-abadcf46-9a41-4911-85e0-fbcde2d48b79[303882]: [NOTICE]   (303886) : haproxy version is 2.8.14-c23fe91
Oct 11 00:06:05 np0005480824 neutron-haproxy-ovnmeta-abadcf46-9a41-4911-85e0-fbcde2d48b79[303882]: [NOTICE]   (303886) : path to executable is /usr/sbin/haproxy
Oct 11 00:06:05 np0005480824 neutron-haproxy-ovnmeta-abadcf46-9a41-4911-85e0-fbcde2d48b79[303882]: [WARNING]  (303886) : Exiting Master process...
Oct 11 00:06:05 np0005480824 neutron-haproxy-ovnmeta-abadcf46-9a41-4911-85e0-fbcde2d48b79[303882]: [ALERT]    (303886) : Current worker (303888) exited with code 143 (Terminated)
Oct 11 00:06:05 np0005480824 neutron-haproxy-ovnmeta-abadcf46-9a41-4911-85e0-fbcde2d48b79[303882]: [WARNING]  (303886) : All workers exited. Exiting... (0)
Oct 11 00:06:05 np0005480824 systemd[1]: libpod-bf2a8641bfee013c7d88abd363906ceb1aec49c85723d404c6639b30f1b9e626.scope: Deactivated successfully.
Oct 11 00:06:05 np0005480824 podman[304639]: 2025-10-11 04:06:05.640812146 +0000 UTC m=+0.041497489 container died bf2a8641bfee013c7d88abd363906ceb1aec49c85723d404c6639b30f1b9e626 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abadcf46-9a41-4911-85e0-fbcde2d48b79, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 11 00:06:05 np0005480824 nova_compute[260089]: 2025-10-11 04:06:05.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:06:05 np0005480824 nova_compute[260089]: 2025-10-11 04:06:05.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:06:05 np0005480824 nova_compute[260089]: 2025-10-11 04:06:05.662 2 INFO nova.virt.libvirt.driver [-] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Instance destroyed successfully.#033[00m
Oct 11 00:06:05 np0005480824 nova_compute[260089]: 2025-10-11 04:06:05.663 2 DEBUG nova.objects.instance [None req-b7833dbd-3650-494f-8f2e-d0dba1b16eca f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Lazy-loading 'resources' on Instance uuid 7377f851-2bfd-4f43-9a9f-e08c288708bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 00:06:05 np0005480824 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bf2a8641bfee013c7d88abd363906ceb1aec49c85723d404c6639b30f1b9e626-userdata-shm.mount: Deactivated successfully.
Oct 11 00:06:05 np0005480824 systemd[1]: var-lib-containers-storage-overlay-dbb8280cb1f25cc337e8d6a53a8cb4082cf31c8913c9cef35e1c80d82ea93e7e-merged.mount: Deactivated successfully.
Oct 11 00:06:05 np0005480824 nova_compute[260089]: 2025-10-11 04:06:05.678 2 DEBUG nova.virt.libvirt.vif [None req-b7833dbd-3650-494f-8f2e-d0dba1b16eca f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T04:05:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1013089460',display_name='tempest-TestEncryptedCinderVolumes-server-1013089460',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1013089460',id=28,image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNJsDIDfSNAwePEgMuwA7xBN/CNWf2WTBxrS0eNSby1BYvryUdu81T4JHgw4ZgwBLm6Up7P/KY+UdANGn0GVi7gS1LoRSepP4VjwhsAtHrsZWXIIkKv1Uc4r08KMmzbHNA==',key_name='tempest-keypair-1355496260',keypairs=<?>,launch_index=0,launched_at=2025-10-11T04:05:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6f367c6c5e8f479399a2004c82cfaff0',ramdisk_id='',reservation_id='r-h6oyfuj0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7caca022-7dcc-40a9-8bd8-eb7d91b29390',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestEncryptedCinderVolumes-781713731',owner_user_name='tempest-TestEncryptedCinderVolumes-781713731-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T04:05:38Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f9202e7d8882475ba6a769d9c59c35fd',uuid=7377f851-2bfd-4f43-9a9f-e08c288708bb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2b7dcbb9-eeec-47e7-aeb5-7e8752f195c3", "address": "fa:16:3e:50:e1:bc", "network": {"id": "abadcf46-9a41-4911-85e0-fbcde2d48b79", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-654501219-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f367c6c5e8f479399a2004c82cfaff0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b7dcbb9-ee", "ovs_interfaceid": "2b7dcbb9-eeec-47e7-aeb5-7e8752f195c3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 00:06:05 np0005480824 nova_compute[260089]: 2025-10-11 04:06:05.679 2 DEBUG nova.network.os_vif_util [None req-b7833dbd-3650-494f-8f2e-d0dba1b16eca f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Converting VIF {"id": "2b7dcbb9-eeec-47e7-aeb5-7e8752f195c3", "address": "fa:16:3e:50:e1:bc", "network": {"id": "abadcf46-9a41-4911-85e0-fbcde2d48b79", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-654501219-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f367c6c5e8f479399a2004c82cfaff0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b7dcbb9-ee", "ovs_interfaceid": "2b7dcbb9-eeec-47e7-aeb5-7e8752f195c3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 00:06:05 np0005480824 nova_compute[260089]: 2025-10-11 04:06:05.680 2 DEBUG nova.network.os_vif_util [None req-b7833dbd-3650-494f-8f2e-d0dba1b16eca f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:50:e1:bc,bridge_name='br-int',has_traffic_filtering=True,id=2b7dcbb9-eeec-47e7-aeb5-7e8752f195c3,network=Network(abadcf46-9a41-4911-85e0-fbcde2d48b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b7dcbb9-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 00:06:05 np0005480824 nova_compute[260089]: 2025-10-11 04:06:05.681 2 DEBUG os_vif [None req-b7833dbd-3650-494f-8f2e-d0dba1b16eca f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:50:e1:bc,bridge_name='br-int',has_traffic_filtering=True,id=2b7dcbb9-eeec-47e7-aeb5-7e8752f195c3,network=Network(abadcf46-9a41-4911-85e0-fbcde2d48b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b7dcbb9-ee') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 00:06:05 np0005480824 nova_compute[260089]: 2025-10-11 04:06:05.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:06:05 np0005480824 nova_compute[260089]: 2025-10-11 04:06:05.683 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b7dcbb9-ee, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 00:06:05 np0005480824 podman[304639]: 2025-10-11 04:06:05.684756063 +0000 UTC m=+0.085441396 container cleanup bf2a8641bfee013c7d88abd363906ceb1aec49c85723d404c6639b30f1b9e626 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abadcf46-9a41-4911-85e0-fbcde2d48b79, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 11 00:06:05 np0005480824 nova_compute[260089]: 2025-10-11 04:06:05.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:06:05 np0005480824 nova_compute[260089]: 2025-10-11 04:06:05.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:06:05 np0005480824 nova_compute[260089]: 2025-10-11 04:06:05.688 2 INFO os_vif [None req-b7833dbd-3650-494f-8f2e-d0dba1b16eca f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:50:e1:bc,bridge_name='br-int',has_traffic_filtering=True,id=2b7dcbb9-eeec-47e7-aeb5-7e8752f195c3,network=Network(abadcf46-9a41-4911-85e0-fbcde2d48b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b7dcbb9-ee')#033[00m
Oct 11 00:06:05 np0005480824 systemd[1]: libpod-conmon-bf2a8641bfee013c7d88abd363906ceb1aec49c85723d404c6639b30f1b9e626.scope: Deactivated successfully.
Oct 11 00:06:05 np0005480824 podman[304691]: 2025-10-11 04:06:05.765113038 +0000 UTC m=+0.051563957 container remove bf2a8641bfee013c7d88abd363906ceb1aec49c85723d404c6639b30f1b9e626 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abadcf46-9a41-4911-85e0-fbcde2d48b79, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3)
Oct 11 00:06:05 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e495 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:06:05 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:06:05.772 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[14acdbe7-468d-4f01-b9a0-9c1b7ea5ae7e]: (4, ('Sat Oct 11 04:06:05 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-abadcf46-9a41-4911-85e0-fbcde2d48b79 (bf2a8641bfee013c7d88abd363906ceb1aec49c85723d404c6639b30f1b9e626)\nbf2a8641bfee013c7d88abd363906ceb1aec49c85723d404c6639b30f1b9e626\nSat Oct 11 04:06:05 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-abadcf46-9a41-4911-85e0-fbcde2d48b79 (bf2a8641bfee013c7d88abd363906ceb1aec49c85723d404c6639b30f1b9e626)\nbf2a8641bfee013c7d88abd363906ceb1aec49c85723d404c6639b30f1b9e626\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:06:05 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:06:05.774 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[e7464fec-f285-47d4-a188-9ae06db92020]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:06:05 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:06:05.775 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapabadcf46-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 00:06:05 np0005480824 kernel: tapabadcf46-90: left promiscuous mode
Oct 11 00:06:05 np0005480824 nova_compute[260089]: 2025-10-11 04:06:05.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:06:05 np0005480824 nova_compute[260089]: 2025-10-11 04:06:05.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:06:05 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:06:05.794 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[cdc8cbf2-fdb4-4758-a795-ee5e0be00df0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:06:05 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:06:05.817 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[cb56f2fa-0541-497b-ab81-08ff3035f4ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:06:05 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:06:05.819 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[1c12d2f2-b461-4714-8f43-c0a32ea45f38]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:06:05 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:06:05.840 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[1a374102-72bb-4f36-96fd-21a58d4d4ad1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 500545, 'reachable_time': 44231, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304712, 'error': None, 'target': 'ovnmeta-abadcf46-9a41-4911-85e0-fbcde2d48b79', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:06:05 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:06:05.843 162666 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-abadcf46-9a41-4911-85e0-fbcde2d48b79 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 00:06:05 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:06:05.843 162666 DEBUG oslo.privsep.daemon [-] privsep: reply[9f0cfe24-b31b-4bca-818c-568bf9838afd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:06:05 np0005480824 systemd[1]: run-netns-ovnmeta\x2dabadcf46\x2d9a41\x2d4911\x2d85e0\x2dfbcde2d48b79.mount: Deactivated successfully.
Oct 11 00:06:05 np0005480824 nova_compute[260089]: 2025-10-11 04:06:05.930 2 DEBUG nova.compute.manager [req-fa32f0df-635f-4045-aac6-f9e6229a8c2f req-5077ebc2-8beb-491f-a2c5-4396773190cf 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Received event network-vif-unplugged-2b7dcbb9-eeec-47e7-aeb5-7e8752f195c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 00:06:05 np0005480824 nova_compute[260089]: 2025-10-11 04:06:05.932 2 DEBUG oslo_concurrency.lockutils [req-fa32f0df-635f-4045-aac6-f9e6229a8c2f req-5077ebc2-8beb-491f-a2c5-4396773190cf 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "7377f851-2bfd-4f43-9a9f-e08c288708bb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:06:05 np0005480824 nova_compute[260089]: 2025-10-11 04:06:05.932 2 DEBUG oslo_concurrency.lockutils [req-fa32f0df-635f-4045-aac6-f9e6229a8c2f req-5077ebc2-8beb-491f-a2c5-4396773190cf 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "7377f851-2bfd-4f43-9a9f-e08c288708bb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:06:05 np0005480824 nova_compute[260089]: 2025-10-11 04:06:05.933 2 DEBUG oslo_concurrency.lockutils [req-fa32f0df-635f-4045-aac6-f9e6229a8c2f req-5077ebc2-8beb-491f-a2c5-4396773190cf 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "7377f851-2bfd-4f43-9a9f-e08c288708bb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:06:05 np0005480824 nova_compute[260089]: 2025-10-11 04:06:05.933 2 DEBUG nova.compute.manager [req-fa32f0df-635f-4045-aac6-f9e6229a8c2f req-5077ebc2-8beb-491f-a2c5-4396773190cf 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] No waiting events found dispatching network-vif-unplugged-2b7dcbb9-eeec-47e7-aeb5-7e8752f195c3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 00:06:05 np0005480824 nova_compute[260089]: 2025-10-11 04:06:05.934 2 DEBUG nova.compute.manager [req-fa32f0df-635f-4045-aac6-f9e6229a8c2f req-5077ebc2-8beb-491f-a2c5-4396773190cf 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Received event network-vif-unplugged-2b7dcbb9-eeec-47e7-aeb5-7e8752f195c3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 00:06:06 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:06:06.010 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:30:f4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'fe:89:7c:57:3f:71'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 00:06:06 np0005480824 nova_compute[260089]: 2025-10-11 04:06:06.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:06:06 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:06:06.013 162245 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 11 00:06:06 np0005480824 nova_compute[260089]: 2025-10-11 04:06:06.151 2 INFO nova.virt.libvirt.driver [None req-b7833dbd-3650-494f-8f2e-d0dba1b16eca f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Deleting instance files /var/lib/nova/instances/7377f851-2bfd-4f43-9a9f-e08c288708bb_del#033[00m
Oct 11 00:06:06 np0005480824 nova_compute[260089]: 2025-10-11 04:06:06.153 2 INFO nova.virt.libvirt.driver [None req-b7833dbd-3650-494f-8f2e-d0dba1b16eca f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Deletion of /var/lib/nova/instances/7377f851-2bfd-4f43-9a9f-e08c288708bb_del complete#033[00m
Oct 11 00:06:06 np0005480824 nova_compute[260089]: 2025-10-11 04:06:06.201 2 INFO nova.compute.manager [None req-b7833dbd-3650-494f-8f2e-d0dba1b16eca f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Took 0.77 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 00:06:06 np0005480824 nova_compute[260089]: 2025-10-11 04:06:06.202 2 DEBUG oslo.service.loopingcall [None req-b7833dbd-3650-494f-8f2e-d0dba1b16eca f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 00:06:06 np0005480824 nova_compute[260089]: 2025-10-11 04:06:06.202 2 DEBUG nova.compute.manager [-] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 00:06:06 np0005480824 nova_compute[260089]: 2025-10-11 04:06:06.203 2 DEBUG nova.network.neutron [-] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 00:06:07 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1885: 321 pgs: 321 active+clean; 129 MiB data, 501 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 16 KiB/s wr, 19 op/s
Oct 11 00:06:07 np0005480824 nova_compute[260089]: 2025-10-11 04:06:07.330 2 DEBUG nova.network.neutron [-] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 00:06:07 np0005480824 nova_compute[260089]: 2025-10-11 04:06:07.350 2 INFO nova.compute.manager [-] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Took 1.15 seconds to deallocate network for instance.#033[00m
Oct 11 00:06:07 np0005480824 nova_compute[260089]: 2025-10-11 04:06:07.395 2 DEBUG oslo_concurrency.lockutils [None req-b7833dbd-3650-494f-8f2e-d0dba1b16eca f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:06:07 np0005480824 nova_compute[260089]: 2025-10-11 04:06:07.395 2 DEBUG oslo_concurrency.lockutils [None req-b7833dbd-3650-494f-8f2e-d0dba1b16eca f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:06:07 np0005480824 nova_compute[260089]: 2025-10-11 04:06:07.473 2 DEBUG oslo_concurrency.processutils [None req-b7833dbd-3650-494f-8f2e-d0dba1b16eca f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:06:07 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 00:06:07 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2796971514' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 00:06:07 np0005480824 nova_compute[260089]: 2025-10-11 04:06:07.958 2 DEBUG oslo_concurrency.processutils [None req-b7833dbd-3650-494f-8f2e-d0dba1b16eca f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:06:07 np0005480824 nova_compute[260089]: 2025-10-11 04:06:07.968 2 DEBUG nova.compute.provider_tree [None req-b7833dbd-3650-494f-8f2e-d0dba1b16eca f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 00:06:07 np0005480824 nova_compute[260089]: 2025-10-11 04:06:07.987 2 DEBUG nova.scheduler.client.report [None req-b7833dbd-3650-494f-8f2e-d0dba1b16eca f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 00:06:08 np0005480824 nova_compute[260089]: 2025-10-11 04:06:08.029 2 DEBUG oslo_concurrency.lockutils [None req-b7833dbd-3650-494f-8f2e-d0dba1b16eca f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.634s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:06:08 np0005480824 nova_compute[260089]: 2025-10-11 04:06:08.068 2 DEBUG nova.compute.manager [req-3a9c2e44-8741-47f3-b602-7bf6087ce7a1 req-e4d53536-3a41-4f8e-bfd1-7d99c9d030bc 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Received event network-vif-plugged-2b7dcbb9-eeec-47e7-aeb5-7e8752f195c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 00:06:08 np0005480824 nova_compute[260089]: 2025-10-11 04:06:08.069 2 DEBUG oslo_concurrency.lockutils [req-3a9c2e44-8741-47f3-b602-7bf6087ce7a1 req-e4d53536-3a41-4f8e-bfd1-7d99c9d030bc 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "7377f851-2bfd-4f43-9a9f-e08c288708bb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:06:08 np0005480824 nova_compute[260089]: 2025-10-11 04:06:08.069 2 DEBUG oslo_concurrency.lockutils [req-3a9c2e44-8741-47f3-b602-7bf6087ce7a1 req-e4d53536-3a41-4f8e-bfd1-7d99c9d030bc 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "7377f851-2bfd-4f43-9a9f-e08c288708bb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:06:08 np0005480824 nova_compute[260089]: 2025-10-11 04:06:08.070 2 DEBUG oslo_concurrency.lockutils [req-3a9c2e44-8741-47f3-b602-7bf6087ce7a1 req-e4d53536-3a41-4f8e-bfd1-7d99c9d030bc 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "7377f851-2bfd-4f43-9a9f-e08c288708bb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:06:08 np0005480824 nova_compute[260089]: 2025-10-11 04:06:08.070 2 DEBUG nova.compute.manager [req-3a9c2e44-8741-47f3-b602-7bf6087ce7a1 req-e4d53536-3a41-4f8e-bfd1-7d99c9d030bc 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] No waiting events found dispatching network-vif-plugged-2b7dcbb9-eeec-47e7-aeb5-7e8752f195c3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 00:06:08 np0005480824 nova_compute[260089]: 2025-10-11 04:06:08.071 2 WARNING nova.compute.manager [req-3a9c2e44-8741-47f3-b602-7bf6087ce7a1 req-e4d53536-3a41-4f8e-bfd1-7d99c9d030bc 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Received unexpected event network-vif-plugged-2b7dcbb9-eeec-47e7-aeb5-7e8752f195c3 for instance with vm_state deleted and task_state None.#033[00m
Oct 11 00:06:08 np0005480824 nova_compute[260089]: 2025-10-11 04:06:08.072 2 DEBUG nova.compute.manager [req-3a9c2e44-8741-47f3-b602-7bf6087ce7a1 req-e4d53536-3a41-4f8e-bfd1-7d99c9d030bc 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Received event network-vif-deleted-2b7dcbb9-eeec-47e7-aeb5-7e8752f195c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 00:06:08 np0005480824 nova_compute[260089]: 2025-10-11 04:06:08.074 2 INFO nova.scheduler.client.report [None req-b7833dbd-3650-494f-8f2e-d0dba1b16eca f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Deleted allocations for instance 7377f851-2bfd-4f43-9a9f-e08c288708bb#033[00m
Oct 11 00:06:08 np0005480824 nova_compute[260089]: 2025-10-11 04:06:08.164 2 DEBUG oslo_concurrency.lockutils [None req-b7833dbd-3650-494f-8f2e-d0dba1b16eca f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Lock "7377f851-2bfd-4f43-9a9f-e08c288708bb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.744s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:06:09 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1886: 321 pgs: 321 active+clean; 88 MiB data, 479 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 17 KiB/s wr, 34 op/s
Oct 11 00:06:10 np0005480824 nova_compute[260089]: 2025-10-11 04:06:10.131 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:06:10 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 00:06:10 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1544612395' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 00:06:10 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 00:06:10 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1544612395' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 00:06:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:06:10.511 162245 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:06:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:06:10.512 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:06:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:06:10.512 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:06:10 np0005480824 nova_compute[260089]: 2025-10-11 04:06:10.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:06:10 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e495 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:06:11 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1887: 321 pgs: 321 active+clean; 88 MiB data, 479 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 16 KiB/s wr, 34 op/s
Oct 11 00:06:11 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 00:06:11 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1134348316' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 00:06:11 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 00:06:11 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1134348316' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 00:06:12 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:06:12.015 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=14b06507-d00b-4e27-a47d-46a5c2644635, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 00:06:13 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1888: 321 pgs: 321 active+clean; 88 MiB data, 479 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 17 KiB/s wr, 59 op/s
Oct 11 00:06:15 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1889: 321 pgs: 321 active+clean; 88 MiB data, 479 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 6.1 KiB/s wr, 53 op/s
Oct 11 00:06:15 np0005480824 nova_compute[260089]: 2025-10-11 04:06:15.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:06:15 np0005480824 nova_compute[260089]: 2025-10-11 04:06:15.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:06:15 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e495 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:06:17 np0005480824 podman[304738]: 2025-10-11 04:06:17.017621916 +0000 UTC m=+0.069265195 container health_status f2b19cad22d0fbdd185b264c3c8b6443d09a02319dc9fb0585dc69a0f24a758d (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 11 00:06:17 np0005480824 podman[304737]: 2025-10-11 04:06:17.046500557 +0000 UTC m=+0.091565210 container health_status 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 11 00:06:17 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1890: 321 pgs: 321 active+clean; 88 MiB data, 479 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 6.1 KiB/s wr, 53 op/s
Oct 11 00:06:19 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1891: 321 pgs: 321 active+clean; 88 MiB data, 479 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 1.7 KiB/s wr, 39 op/s
Oct 11 00:06:20 np0005480824 nova_compute[260089]: 2025-10-11 04:06:20.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:06:20 np0005480824 nova_compute[260089]: 2025-10-11 04:06:20.662 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760155565.6604345, 7377f851-2bfd-4f43-9a9f-e08c288708bb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 00:06:20 np0005480824 nova_compute[260089]: 2025-10-11 04:06:20.662 2 INFO nova.compute.manager [-] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] VM Stopped (Lifecycle Event)#033[00m
Oct 11 00:06:20 np0005480824 nova_compute[260089]: 2025-10-11 04:06:20.689 2 DEBUG nova.compute.manager [None req-b6162504-4680-4839-bb05-a3f30bf3d689 - - - - - -] [instance: 7377f851-2bfd-4f43-9a9f-e08c288708bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 00:06:20 np0005480824 nova_compute[260089]: 2025-10-11 04:06:20.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:06:20 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e495 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:06:21 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1892: 321 pgs: 321 active+clean; 88 MiB data, 479 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 597 B/s wr, 25 op/s
Oct 11 00:06:23 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1893: 321 pgs: 321 active+clean; 88 MiB data, 479 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 682 B/s wr, 31 op/s
Oct 11 00:06:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 00:06:24 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2652605505' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 00:06:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 00:06:24 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2652605505' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 00:06:25 np0005480824 podman[304775]: 2025-10-11 04:06:25.045770093 +0000 UTC m=+0.111939061 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Oct 11 00:06:25 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1894: 321 pgs: 321 active+clean; 88 MiB data, 479 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Oct 11 00:06:25 np0005480824 nova_compute[260089]: 2025-10-11 04:06:25.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:06:25 np0005480824 nova_compute[260089]: 2025-10-11 04:06:25.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:06:25 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e495 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:06:27 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1895: 321 pgs: 321 active+clean; 88 MiB data, 479 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 21 KiB/s wr, 6 op/s
Oct 11 00:06:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 00:06:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 00:06:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 00:06:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 00:06:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 00:06:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 00:06:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Optimize plan auto_2025-10-11_04:06:27
Oct 11 00:06:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 00:06:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] do_upmap
Oct 11 00:06:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.control', 'images', 'volumes', 'cephfs.cephfs.data', 'default.rgw.log', 'backups', 'vms', '.rgw.root', 'default.rgw.meta', '.mgr']
Oct 11 00:06:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] prepared 0/10 changes
Oct 11 00:06:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 00:06:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 00:06:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 00:06:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 00:06:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 00:06:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 00:06:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 00:06:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 00:06:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 00:06:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 00:06:29 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1896: 321 pgs: 321 active+clean; 88 MiB data, 479 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 22 KiB/s wr, 11 op/s
Oct 11 00:06:30 np0005480824 nova_compute[260089]: 2025-10-11 04:06:30.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:06:30 np0005480824 nova_compute[260089]: 2025-10-11 04:06:30.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:06:30 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e495 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:06:31 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1897: 321 pgs: 321 active+clean; 88 MiB data, 479 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 22 KiB/s wr, 11 op/s
Oct 11 00:06:31 np0005480824 podman[304801]: 2025-10-11 04:06:31.986490939 +0000 UTC m=+0.047740727 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 11 00:06:33 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1898: 321 pgs: 321 active+clean; 88 MiB data, 479 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 22 KiB/s wr, 11 op/s
Oct 11 00:06:35 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1899: 321 pgs: 321 active+clean; 88 MiB data, 479 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Oct 11 00:06:35 np0005480824 nova_compute[260089]: 2025-10-11 04:06:35.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:06:35 np0005480824 nova_compute[260089]: 2025-10-11 04:06:35.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:06:35 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e495 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:06:37 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1900: 321 pgs: 321 active+clean; 88 MiB data, 479 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Oct 11 00:06:37 np0005480824 ovn_controller[152667]: 2025-10-11T04:06:37Z|00266|memory_trim|INFO|Detected inactivity (last active 30013 ms ago): trimming memory
Oct 11 00:06:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 00:06:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:06:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 00:06:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:06:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 11 00:06:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:06:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0003556628288402683 of space, bias 1.0, pg target 0.10669884865208049 quantized to 32 (current 32)
Oct 11 00:06:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:06:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 11 00:06:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:06:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Oct 11 00:06:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:06:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 00:06:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:06:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 00:06:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:06:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 00:06:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:06:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 00:06:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:06:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 00:06:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:06:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 00:06:39 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1901: 321 pgs: 321 active+clean; 88 MiB data, 479 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 511 B/s wr, 4 op/s
Oct 11 00:06:40 np0005480824 nova_compute[260089]: 2025-10-11 04:06:40.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:06:40 np0005480824 nova_compute[260089]: 2025-10-11 04:06:40.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:06:40 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e495 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:06:41 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1902: 321 pgs: 321 active+clean; 88 MiB data, 479 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Oct 11 00:06:41 np0005480824 nova_compute[260089]: 2025-10-11 04:06:41.314 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:06:42 np0005480824 nova_compute[260089]: 2025-10-11 04:06:42.297 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:06:42 np0005480824 nova_compute[260089]: 2025-10-11 04:06:42.297 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:06:43 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1903: 321 pgs: 321 active+clean; 202 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Oct 11 00:06:44 np0005480824 nova_compute[260089]: 2025-10-11 04:06:44.296 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:06:44 np0005480824 nova_compute[260089]: 2025-10-11 04:06:44.336 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:06:44 np0005480824 nova_compute[260089]: 2025-10-11 04:06:44.336 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:06:44 np0005480824 nova_compute[260089]: 2025-10-11 04:06:44.336 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:06:44 np0005480824 nova_compute[260089]: 2025-10-11 04:06:44.336 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 00:06:44 np0005480824 nova_compute[260089]: 2025-10-11 04:06:44.337 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:06:44 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 00:06:44 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2782440772' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 00:06:44 np0005480824 nova_compute[260089]: 2025-10-11 04:06:44.802 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:06:44 np0005480824 nova_compute[260089]: 2025-10-11 04:06:44.965 2 WARNING nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 00:06:44 np0005480824 nova_compute[260089]: 2025-10-11 04:06:44.967 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4342MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 00:06:44 np0005480824 nova_compute[260089]: 2025-10-11 04:06:44.967 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:06:44 np0005480824 nova_compute[260089]: 2025-10-11 04:06:44.967 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:06:45 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1904: 321 pgs: 321 active+clean; 202 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Oct 11 00:06:45 np0005480824 nova_compute[260089]: 2025-10-11 04:06:45.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:06:45 np0005480824 nova_compute[260089]: 2025-10-11 04:06:45.267 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 00:06:45 np0005480824 nova_compute[260089]: 2025-10-11 04:06:45.268 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 00:06:45 np0005480824 nova_compute[260089]: 2025-10-11 04:06:45.284 2 DEBUG nova.scheduler.client.report [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Refreshing inventories for resource provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct 11 00:06:45 np0005480824 nova_compute[260089]: 2025-10-11 04:06:45.316 2 DEBUG nova.scheduler.client.report [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Updating ProviderTree inventory for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct 11 00:06:45 np0005480824 nova_compute[260089]: 2025-10-11 04:06:45.317 2 DEBUG nova.compute.provider_tree [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Updating inventory in ProviderTree for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 11 00:06:45 np0005480824 nova_compute[260089]: 2025-10-11 04:06:45.331 2 DEBUG nova.scheduler.client.report [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Refreshing aggregate associations for resource provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct 11 00:06:45 np0005480824 nova_compute[260089]: 2025-10-11 04:06:45.348 2 DEBUG nova.scheduler.client.report [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Refreshing trait associations for resource provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72, traits: COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE,HW_CPU_X86_SSE41,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_MMX,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_DEVICE_TAGGING,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_AVX,HW_CPU_X86_SHA,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSSE3,HW_CPU_X86_BMI2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_FDC,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_CLMUL,HW_CPU_X86_AVX2,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_PCNET _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct 11 00:06:45 np0005480824 nova_compute[260089]: 2025-10-11 04:06:45.370 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:06:45 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 00:06:45 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 00:06:45 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 00:06:45 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 00:06:45 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 00:06:45 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 11 00:06:45 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 814f6e78-32db-4253-b658-6362ab9a0ab2 does not exist
Oct 11 00:06:45 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev e32795a2-bf9c-4bf9-ba44-28e290a31c1d does not exist
Oct 11 00:06:45 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev bf69f51f-1c0e-4ff3-b513-9fe1b287f563 does not exist
Oct 11 00:06:45 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 00:06:45 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 00:06:45 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 00:06:45 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 00:06:45 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 00:06:45 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 00:06:45 np0005480824 nova_compute[260089]: 2025-10-11 04:06:45.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:06:45 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e495 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:06:45 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 00:06:45 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2718105453' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 00:06:45 np0005480824 nova_compute[260089]: 2025-10-11 04:06:45.922 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:06:45 np0005480824 nova_compute[260089]: 2025-10-11 04:06:45.930 2 DEBUG nova.compute.provider_tree [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 00:06:45 np0005480824 nova_compute[260089]: 2025-10-11 04:06:45.955 2 DEBUG nova.scheduler.client.report [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 00:06:46 np0005480824 nova_compute[260089]: 2025-10-11 04:06:46.000 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 00:06:46 np0005480824 nova_compute[260089]: 2025-10-11 04:06:46.000 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.033s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:06:46 np0005480824 podman[305138]: 2025-10-11 04:06:46.283431593 +0000 UTC m=+0.105962511 container create ff4c1ac1d16516cf120d8f3e39a7b0cb8f2484f3bf38b123c72056f7cfdb6aec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 00:06:46 np0005480824 podman[305138]: 2025-10-11 04:06:46.218626743 +0000 UTC m=+0.041157711 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 00:06:46 np0005480824 systemd[1]: Started libpod-conmon-ff4c1ac1d16516cf120d8f3e39a7b0cb8f2484f3bf38b123c72056f7cfdb6aec.scope.
Oct 11 00:06:46 np0005480824 systemd[1]: Started libcrun container.
Oct 11 00:06:46 np0005480824 podman[305138]: 2025-10-11 04:06:46.417362801 +0000 UTC m=+0.239893769 container init ff4c1ac1d16516cf120d8f3e39a7b0cb8f2484f3bf38b123c72056f7cfdb6aec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_knuth, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 00:06:46 np0005480824 podman[305138]: 2025-10-11 04:06:46.424172682 +0000 UTC m=+0.246703590 container start ff4c1ac1d16516cf120d8f3e39a7b0cb8f2484f3bf38b123c72056f7cfdb6aec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_knuth, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 00:06:46 np0005480824 boring_knuth[305154]: 167 167
Oct 11 00:06:46 np0005480824 systemd[1]: libpod-ff4c1ac1d16516cf120d8f3e39a7b0cb8f2484f3bf38b123c72056f7cfdb6aec.scope: Deactivated successfully.
Oct 11 00:06:46 np0005480824 podman[305138]: 2025-10-11 04:06:46.501784754 +0000 UTC m=+0.324315702 container attach ff4c1ac1d16516cf120d8f3e39a7b0cb8f2484f3bf38b123c72056f7cfdb6aec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_knuth, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 00:06:46 np0005480824 podman[305138]: 2025-10-11 04:06:46.502450169 +0000 UTC m=+0.324981087 container died ff4c1ac1d16516cf120d8f3e39a7b0cb8f2484f3bf38b123c72056f7cfdb6aec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_knuth, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 00:06:46 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 00:06:46 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 11 00:06:46 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 00:06:46 np0005480824 systemd[1]: var-lib-containers-storage-overlay-7c78682dfc1b3bedcbf7db5615511783779ae8c64df334607d9e865f5f5eb129-merged.mount: Deactivated successfully.
Oct 11 00:06:46 np0005480824 podman[305138]: 2025-10-11 04:06:46.717132843 +0000 UTC m=+0.539663761 container remove ff4c1ac1d16516cf120d8f3e39a7b0cb8f2484f3bf38b123c72056f7cfdb6aec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_knuth, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 00:06:46 np0005480824 systemd[1]: libpod-conmon-ff4c1ac1d16516cf120d8f3e39a7b0cb8f2484f3bf38b123c72056f7cfdb6aec.scope: Deactivated successfully.
Oct 11 00:06:46 np0005480824 podman[305178]: 2025-10-11 04:06:46.891150758 +0000 UTC m=+0.038600252 container create d7e309a88b55616dcf9f772cdb66fa1b94f971f5ef354551377fd2b436859b59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 00:06:46 np0005480824 systemd[1]: Started libpod-conmon-d7e309a88b55616dcf9f772cdb66fa1b94f971f5ef354551377fd2b436859b59.scope.
Oct 11 00:06:46 np0005480824 systemd[1]: Started libcrun container.
Oct 11 00:06:46 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/549d9e332c033e4406e57771df470cf75294d7ba72099e6ced741fe5ea573e86/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 00:06:46 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/549d9e332c033e4406e57771df470cf75294d7ba72099e6ced741fe5ea573e86/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 00:06:46 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/549d9e332c033e4406e57771df470cf75294d7ba72099e6ced741fe5ea573e86/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 00:06:46 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/549d9e332c033e4406e57771df470cf75294d7ba72099e6ced741fe5ea573e86/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 00:06:46 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/549d9e332c033e4406e57771df470cf75294d7ba72099e6ced741fe5ea573e86/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 00:06:46 np0005480824 podman[305178]: 2025-10-11 04:06:46.875725214 +0000 UTC m=+0.023174728 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 00:06:46 np0005480824 podman[305178]: 2025-10-11 04:06:46.982096464 +0000 UTC m=+0.129545968 container init d7e309a88b55616dcf9f772cdb66fa1b94f971f5ef354551377fd2b436859b59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 00:06:46 np0005480824 podman[305178]: 2025-10-11 04:06:46.989729703 +0000 UTC m=+0.137179197 container start d7e309a88b55616dcf9f772cdb66fa1b94f971f5ef354551377fd2b436859b59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_newton, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 00:06:46 np0005480824 podman[305178]: 2025-10-11 04:06:46.992524359 +0000 UTC m=+0.139973853 container attach d7e309a88b55616dcf9f772cdb66fa1b94f971f5ef354551377fd2b436859b59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_newton, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 00:06:47 np0005480824 nova_compute[260089]: 2025-10-11 04:06:47.002 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:06:47 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1905: 321 pgs: 321 active+clean; 202 MiB data, 591 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Oct 11 00:06:47 np0005480824 nova_compute[260089]: 2025-10-11 04:06:47.299 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:06:47 np0005480824 nova_compute[260089]: 2025-10-11 04:06:47.300 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 00:06:47 np0005480824 nova_compute[260089]: 2025-10-11 04:06:47.301 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 11 00:06:47 np0005480824 nova_compute[260089]: 2025-10-11 04:06:47.318 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 11 00:06:48 np0005480824 laughing_newton[305194]: --> passed data devices: 0 physical, 3 LVM
Oct 11 00:06:48 np0005480824 laughing_newton[305194]: --> relative data size: 1.0
Oct 11 00:06:48 np0005480824 laughing_newton[305194]: --> All data devices are unavailable
Oct 11 00:06:48 np0005480824 podman[305220]: 2025-10-11 04:06:48.011490696 +0000 UTC m=+0.072502431 container health_status 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 00:06:48 np0005480824 podman[305221]: 2025-10-11 04:06:48.031288543 +0000 UTC m=+0.087918925 container health_status f2b19cad22d0fbdd185b264c3c8b6443d09a02319dc9fb0585dc69a0f24a758d (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid)
Oct 11 00:06:48 np0005480824 systemd[1]: libpod-d7e309a88b55616dcf9f772cdb66fa1b94f971f5ef354551377fd2b436859b59.scope: Deactivated successfully.
Oct 11 00:06:48 np0005480824 podman[305178]: 2025-10-11 04:06:48.034850638 +0000 UTC m=+1.182300122 container died d7e309a88b55616dcf9f772cdb66fa1b94f971f5ef354551377fd2b436859b59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_newton, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 00:06:48 np0005480824 systemd[1]: var-lib-containers-storage-overlay-549d9e332c033e4406e57771df470cf75294d7ba72099e6ced741fe5ea573e86-merged.mount: Deactivated successfully.
Oct 11 00:06:48 np0005480824 podman[305178]: 2025-10-11 04:06:48.112245933 +0000 UTC m=+1.259695427 container remove d7e309a88b55616dcf9f772cdb66fa1b94f971f5ef354551377fd2b436859b59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_newton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 00:06:48 np0005480824 systemd[1]: libpod-conmon-d7e309a88b55616dcf9f772cdb66fa1b94f971f5ef354551377fd2b436859b59.scope: Deactivated successfully.
Oct 11 00:06:48 np0005480824 nova_compute[260089]: 2025-10-11 04:06:48.296 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:06:48 np0005480824 nova_compute[260089]: 2025-10-11 04:06:48.297 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:06:48 np0005480824 nova_compute[260089]: 2025-10-11 04:06:48.297 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 00:06:48 np0005480824 podman[305412]: 2025-10-11 04:06:48.740034232 +0000 UTC m=+0.045976836 container create b69cba32483b9ff14061ec9a08337fdaff05f0a1df9c4f281c9fa39475070e0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 11 00:06:48 np0005480824 systemd[1]: Started libpod-conmon-b69cba32483b9ff14061ec9a08337fdaff05f0a1df9c4f281c9fa39475070e0d.scope.
Oct 11 00:06:48 np0005480824 systemd[1]: Started libcrun container.
Oct 11 00:06:48 np0005480824 podman[305412]: 2025-10-11 04:06:48.720120072 +0000 UTC m=+0.026062756 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 00:06:48 np0005480824 podman[305412]: 2025-10-11 04:06:48.813553706 +0000 UTC m=+0.119496310 container init b69cba32483b9ff14061ec9a08337fdaff05f0a1df9c4f281c9fa39475070e0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kapitsa, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 00:06:48 np0005480824 podman[305412]: 2025-10-11 04:06:48.822549808 +0000 UTC m=+0.128492412 container start b69cba32483b9ff14061ec9a08337fdaff05f0a1df9c4f281c9fa39475070e0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kapitsa, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 00:06:48 np0005480824 podman[305412]: 2025-10-11 04:06:48.826691966 +0000 UTC m=+0.132634610 container attach b69cba32483b9ff14061ec9a08337fdaff05f0a1df9c4f281c9fa39475070e0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kapitsa, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 11 00:06:48 np0005480824 xenodochial_kapitsa[305428]: 167 167
Oct 11 00:06:48 np0005480824 systemd[1]: libpod-b69cba32483b9ff14061ec9a08337fdaff05f0a1df9c4f281c9fa39475070e0d.scope: Deactivated successfully.
Oct 11 00:06:48 np0005480824 podman[305412]: 2025-10-11 04:06:48.830431424 +0000 UTC m=+0.136374038 container died b69cba32483b9ff14061ec9a08337fdaff05f0a1df9c4f281c9fa39475070e0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kapitsa, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 00:06:48 np0005480824 systemd[1]: var-lib-containers-storage-overlay-adcdf2fbabce05eea12f025b6c92ea9e88a77dada54d5ea7b0f0815d4582660c-merged.mount: Deactivated successfully.
Oct 11 00:06:48 np0005480824 podman[305412]: 2025-10-11 04:06:48.873029259 +0000 UTC m=+0.178971873 container remove b69cba32483b9ff14061ec9a08337fdaff05f0a1df9c4f281c9fa39475070e0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kapitsa, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 11 00:06:48 np0005480824 systemd[1]: libpod-conmon-b69cba32483b9ff14061ec9a08337fdaff05f0a1df9c4f281c9fa39475070e0d.scope: Deactivated successfully.
Oct 11 00:06:49 np0005480824 podman[305456]: 2025-10-11 04:06:49.052944243 +0000 UTC m=+0.044604063 container create 665ac9e95a6b6448d8a2251f750d485ea25d3c5258c3f013be4acb63450b4305 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 00:06:49 np0005480824 nova_compute[260089]: 2025-10-11 04:06:49.071 2 DEBUG oslo_concurrency.lockutils [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Acquiring lock "52296433-4344-4796-825b-6405fe5eae5d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:06:49 np0005480824 nova_compute[260089]: 2025-10-11 04:06:49.072 2 DEBUG oslo_concurrency.lockutils [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Lock "52296433-4344-4796-825b-6405fe5eae5d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:06:49 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1906: 321 pgs: 321 active+clean; 202 MiB data, 591 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Oct 11 00:06:49 np0005480824 systemd[1]: Started libpod-conmon-665ac9e95a6b6448d8a2251f750d485ea25d3c5258c3f013be4acb63450b4305.scope.
Oct 11 00:06:49 np0005480824 nova_compute[260089]: 2025-10-11 04:06:49.098 2 DEBUG nova.compute.manager [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 52296433-4344-4796-825b-6405fe5eae5d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 00:06:49 np0005480824 podman[305456]: 2025-10-11 04:06:49.030487944 +0000 UTC m=+0.022147764 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 00:06:49 np0005480824 systemd[1]: Started libcrun container.
Oct 11 00:06:49 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4787f7244e1653b1e34cc50ed389e17baefd495493cec484c8c2f6032af850b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 00:06:49 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4787f7244e1653b1e34cc50ed389e17baefd495493cec484c8c2f6032af850b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 00:06:49 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4787f7244e1653b1e34cc50ed389e17baefd495493cec484c8c2f6032af850b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 00:06:49 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4787f7244e1653b1e34cc50ed389e17baefd495493cec484c8c2f6032af850b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 00:06:49 np0005480824 podman[305456]: 2025-10-11 04:06:49.15754648 +0000 UTC m=+0.149206280 container init 665ac9e95a6b6448d8a2251f750d485ea25d3c5258c3f013be4acb63450b4305 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_swirles, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 11 00:06:49 np0005480824 podman[305456]: 2025-10-11 04:06:49.169539273 +0000 UTC m=+0.161199073 container start 665ac9e95a6b6448d8a2251f750d485ea25d3c5258c3f013be4acb63450b4305 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 11 00:06:49 np0005480824 nova_compute[260089]: 2025-10-11 04:06:49.171 2 DEBUG oslo_concurrency.lockutils [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:06:49 np0005480824 nova_compute[260089]: 2025-10-11 04:06:49.171 2 DEBUG oslo_concurrency.lockutils [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:06:49 np0005480824 podman[305456]: 2025-10-11 04:06:49.173083418 +0000 UTC m=+0.164743228 container attach 665ac9e95a6b6448d8a2251f750d485ea25d3c5258c3f013be4acb63450b4305 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 00:06:49 np0005480824 nova_compute[260089]: 2025-10-11 04:06:49.179 2 DEBUG nova.virt.hardware [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 00:06:49 np0005480824 nova_compute[260089]: 2025-10-11 04:06:49.179 2 INFO nova.compute.claims [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 52296433-4344-4796-825b-6405fe5eae5d] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 00:06:49 np0005480824 nova_compute[260089]: 2025-10-11 04:06:49.275 2 DEBUG oslo_concurrency.processutils [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:06:49 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 00:06:49 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/879283460' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 00:06:49 np0005480824 nova_compute[260089]: 2025-10-11 04:06:49.691 2 DEBUG oslo_concurrency.processutils [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:06:49 np0005480824 nova_compute[260089]: 2025-10-11 04:06:49.697 2 DEBUG nova.compute.provider_tree [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 00:06:49 np0005480824 nova_compute[260089]: 2025-10-11 04:06:49.718 2 DEBUG nova.scheduler.client.report [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 00:06:49 np0005480824 nova_compute[260089]: 2025-10-11 04:06:49.747 2 DEBUG oslo_concurrency.lockutils [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.575s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:06:49 np0005480824 nova_compute[260089]: 2025-10-11 04:06:49.747 2 DEBUG nova.compute.manager [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 52296433-4344-4796-825b-6405fe5eae5d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 00:06:49 np0005480824 nova_compute[260089]: 2025-10-11 04:06:49.818 2 DEBUG nova.compute.manager [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 52296433-4344-4796-825b-6405fe5eae5d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 00:06:49 np0005480824 nova_compute[260089]: 2025-10-11 04:06:49.819 2 DEBUG nova.network.neutron [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 52296433-4344-4796-825b-6405fe5eae5d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 00:06:49 np0005480824 nova_compute[260089]: 2025-10-11 04:06:49.846 2 INFO nova.virt.libvirt.driver [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 52296433-4344-4796-825b-6405fe5eae5d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 00:06:49 np0005480824 nova_compute[260089]: 2025-10-11 04:06:49.876 2 DEBUG nova.compute.manager [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 52296433-4344-4796-825b-6405fe5eae5d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]: {
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:    "0": [
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:        {
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:            "devices": [
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:                "/dev/loop3"
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:            ],
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:            "lv_name": "ceph_lv0",
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:            "lv_size": "21470642176",
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0d82ce-20ea-470d-959e-f67202028a60,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:            "lv_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:            "name": "ceph_lv0",
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:            "tags": {
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:                "ceph.block_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:                "ceph.cephx_lockbox_secret": "",
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:                "ceph.cluster_name": "ceph",
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:                "ceph.crush_device_class": "",
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:                "ceph.encrypted": "0",
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:                "ceph.osd_fsid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:                "ceph.osd_id": "0",
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:                "ceph.type": "block",
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:                "ceph.vdo": "0"
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:            },
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:            "type": "block",
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:            "vg_name": "ceph_vg0"
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:        }
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:    ],
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:    "1": [
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:        {
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:            "devices": [
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:                "/dev/loop4"
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:            ],
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:            "lv_name": "ceph_lv1",
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:            "lv_size": "21470642176",
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6875119e-c210-4ad1-aca9-6a8084a5ecc8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:            "lv_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:            "name": "ceph_lv1",
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:            "tags": {
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:                "ceph.block_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:                "ceph.cephx_lockbox_secret": "",
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:                "ceph.cluster_name": "ceph",
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:                "ceph.crush_device_class": "",
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:                "ceph.encrypted": "0",
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:                "ceph.osd_fsid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:                "ceph.osd_id": "1",
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:                "ceph.type": "block",
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:                "ceph.vdo": "0"
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:            },
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:            "type": "block",
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:            "vg_name": "ceph_vg1"
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:        }
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:    ],
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:    "2": [
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:        {
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:            "devices": [
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:                "/dev/loop5"
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:            ],
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:            "lv_name": "ceph_lv2",
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:            "lv_size": "21470642176",
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e86945e8-6909-4584-9098-cee0dfe9add4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:            "lv_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:            "name": "ceph_lv2",
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:            "tags": {
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:                "ceph.block_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:                "ceph.cephx_lockbox_secret": "",
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:                "ceph.cluster_name": "ceph",
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:                "ceph.crush_device_class": "",
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:                "ceph.encrypted": "0",
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:                "ceph.osd_fsid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:                "ceph.osd_id": "2",
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:                "ceph.type": "block",
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:                "ceph.vdo": "0"
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:            },
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:            "type": "block",
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:            "vg_name": "ceph_vg2"
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:        }
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]:    ]
Oct 11 00:06:49 np0005480824 agitated_swirles[305473]: }
Oct 11 00:06:49 np0005480824 systemd[1]: libpod-665ac9e95a6b6448d8a2251f750d485ea25d3c5258c3f013be4acb63450b4305.scope: Deactivated successfully.
Oct 11 00:06:49 np0005480824 conmon[305473]: conmon 665ac9e95a6b6448d8a2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-665ac9e95a6b6448d8a2251f750d485ea25d3c5258c3f013be4acb63450b4305.scope/container/memory.events
Oct 11 00:06:49 np0005480824 podman[305456]: 2025-10-11 04:06:49.920083348 +0000 UTC m=+0.911743148 container died 665ac9e95a6b6448d8a2251f750d485ea25d3c5258c3f013be4acb63450b4305 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Oct 11 00:06:49 np0005480824 nova_compute[260089]: 2025-10-11 04:06:49.936 2 INFO nova.virt.block_device [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 52296433-4344-4796-825b-6405fe5eae5d] Booting with volume d2b26540-d391-41fc-aff8-74d0350e04c9 at /dev/vda#033[00m
Oct 11 00:06:49 np0005480824 systemd[1]: var-lib-containers-storage-overlay-a4787f7244e1653b1e34cc50ed389e17baefd495493cec484c8c2f6032af850b-merged.mount: Deactivated successfully.
Oct 11 00:06:49 np0005480824 podman[305456]: 2025-10-11 04:06:49.969045863 +0000 UTC m=+0.960705663 container remove 665ac9e95a6b6448d8a2251f750d485ea25d3c5258c3f013be4acb63450b4305 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 11 00:06:49 np0005480824 systemd[1]: libpod-conmon-665ac9e95a6b6448d8a2251f750d485ea25d3c5258c3f013be4acb63450b4305.scope: Deactivated successfully.
Oct 11 00:06:50 np0005480824 nova_compute[260089]: 2025-10-11 04:06:50.072 2 DEBUG os_brick.utils [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct 11 00:06:50 np0005480824 nova_compute[260089]: 2025-10-11 04:06:50.074 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:06:50 np0005480824 nova_compute[260089]: 2025-10-11 04:06:50.089 676 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:06:50 np0005480824 nova_compute[260089]: 2025-10-11 04:06:50.090 676 DEBUG oslo.privsep.daemon [-] privsep: reply[dbbefc06-6203-4eed-927f-aca4d0d635f7]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:06:50 np0005480824 nova_compute[260089]: 2025-10-11 04:06:50.091 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:06:50 np0005480824 nova_compute[260089]: 2025-10-11 04:06:50.099 676 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:06:50 np0005480824 nova_compute[260089]: 2025-10-11 04:06:50.099 676 DEBUG oslo.privsep.daemon [-] privsep: reply[7a2d63d1-bdfa-40d8-99c0-1d49942f0125]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d5d671ddab5a', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:06:50 np0005480824 nova_compute[260089]: 2025-10-11 04:06:50.100 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:06:50 np0005480824 nova_compute[260089]: 2025-10-11 04:06:50.108 676 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:06:50 np0005480824 nova_compute[260089]: 2025-10-11 04:06:50.109 676 DEBUG oslo.privsep.daemon [-] privsep: reply[05cd9f73-3db8-44cd-bf59-bd67a4cf6f31]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:06:50 np0005480824 nova_compute[260089]: 2025-10-11 04:06:50.110 676 DEBUG oslo.privsep.daemon [-] privsep: reply[bf08d38e-1942-43c1-8a99-036738c51684]: (4, 'fb3a2fb1-9efa-43f0-a057-bf422ac6b8d7') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:06:50 np0005480824 nova_compute[260089]: 2025-10-11 04:06:50.110 2 DEBUG oslo_concurrency.processutils [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:06:50 np0005480824 nova_compute[260089]: 2025-10-11 04:06:50.132 2 DEBUG oslo_concurrency.processutils [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] CMD "nvme version" returned: 0 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:06:50 np0005480824 nova_compute[260089]: 2025-10-11 04:06:50.134 2 DEBUG os_brick.initiator.connectors.lightos [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct 11 00:06:50 np0005480824 nova_compute[260089]: 2025-10-11 04:06:50.135 2 DEBUG os_brick.initiator.connectors.lightos [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct 11 00:06:50 np0005480824 nova_compute[260089]: 2025-10-11 04:06:50.135 2 DEBUG os_brick.initiator.connectors.lightos [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct 11 00:06:50 np0005480824 nova_compute[260089]: 2025-10-11 04:06:50.135 2 DEBUG os_brick.utils [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] <== get_connector_properties: return (61ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d5d671ddab5a', 'do_local_attach': False, 'nvme_hostid': '83042a20-0f72-4c47-8453-e72ead378624', 'system uuid': 'fb3a2fb1-9efa-43f0-a057-bf422ac6b8d7', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct 11 00:06:50 np0005480824 nova_compute[260089]: 2025-10-11 04:06:50.136 2 DEBUG nova.virt.block_device [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 52296433-4344-4796-825b-6405fe5eae5d] Updating existing volume attachment record: ae1d9bf4-03a1-4589-9aa5-ee42700f765e _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct 11 00:06:50 np0005480824 nova_compute[260089]: 2025-10-11 04:06:50.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:06:50 np0005480824 podman[305664]: 2025-10-11 04:06:50.547360585 +0000 UTC m=+0.057046697 container create 8f66d620f69ba5f37e72492323aeb469f32f65fad51b827bfb7c5a8978687282 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 11 00:06:50 np0005480824 systemd[1]: Started libpod-conmon-8f66d620f69ba5f37e72492323aeb469f32f65fad51b827bfb7c5a8978687282.scope.
Oct 11 00:06:50 np0005480824 systemd[1]: Started libcrun container.
Oct 11 00:06:50 np0005480824 podman[305664]: 2025-10-11 04:06:50.525128621 +0000 UTC m=+0.034814693 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 00:06:50 np0005480824 podman[305664]: 2025-10-11 04:06:50.621523955 +0000 UTC m=+0.131209967 container init 8f66d620f69ba5f37e72492323aeb469f32f65fad51b827bfb7c5a8978687282 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_haslett, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 00:06:50 np0005480824 podman[305664]: 2025-10-11 04:06:50.627116077 +0000 UTC m=+0.136802059 container start 8f66d620f69ba5f37e72492323aeb469f32f65fad51b827bfb7c5a8978687282 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_haslett, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 00:06:50 np0005480824 podman[305664]: 2025-10-11 04:06:50.630412914 +0000 UTC m=+0.140098896 container attach 8f66d620f69ba5f37e72492323aeb469f32f65fad51b827bfb7c5a8978687282 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_haslett, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 00:06:50 np0005480824 epic_haslett[305680]: 167 167
Oct 11 00:06:50 np0005480824 systemd[1]: libpod-8f66d620f69ba5f37e72492323aeb469f32f65fad51b827bfb7c5a8978687282.scope: Deactivated successfully.
Oct 11 00:06:50 np0005480824 podman[305664]: 2025-10-11 04:06:50.633710102 +0000 UTC m=+0.143396124 container died 8f66d620f69ba5f37e72492323aeb469f32f65fad51b827bfb7c5a8978687282 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_haslett, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 00:06:50 np0005480824 systemd[1]: var-lib-containers-storage-overlay-962d5d037ddce6d57633fad9c2b508f08f78a0266787e1f6120d682c4fda142f-merged.mount: Deactivated successfully.
Oct 11 00:06:50 np0005480824 podman[305664]: 2025-10-11 04:06:50.683961948 +0000 UTC m=+0.193647970 container remove 8f66d620f69ba5f37e72492323aeb469f32f65fad51b827bfb7c5a8978687282 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 11 00:06:50 np0005480824 systemd[1]: libpod-conmon-8f66d620f69ba5f37e72492323aeb469f32f65fad51b827bfb7c5a8978687282.scope: Deactivated successfully.
Oct 11 00:06:50 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 00:06:50 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/946054408' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 00:06:50 np0005480824 nova_compute[260089]: 2025-10-11 04:06:50.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:06:50 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e495 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:06:50 np0005480824 nova_compute[260089]: 2025-10-11 04:06:50.859 2 DEBUG nova.policy [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f9202e7d8882475ba6a769d9c59c35fd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6f367c6c5e8f479399a2004c82cfaff0', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 00:06:50 np0005480824 podman[305706]: 2025-10-11 04:06:50.869669898 +0000 UTC m=+0.061388959 container create 16721b7926cbe411221795d209f9bbe26e43ebbbe4a933290ee9ebbfecc41996 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 00:06:50 np0005480824 systemd[1]: Started libpod-conmon-16721b7926cbe411221795d209f9bbe26e43ebbbe4a933290ee9ebbfecc41996.scope.
Oct 11 00:06:50 np0005480824 systemd[1]: Started libcrun container.
Oct 11 00:06:50 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7359eb5838eaa471cb27320e4d722cb206eb02342aeb067220bdf2500ec02c1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 00:06:50 np0005480824 podman[305706]: 2025-10-11 04:06:50.849733048 +0000 UTC m=+0.041452089 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 00:06:50 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7359eb5838eaa471cb27320e4d722cb206eb02342aeb067220bdf2500ec02c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 00:06:50 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7359eb5838eaa471cb27320e4d722cb206eb02342aeb067220bdf2500ec02c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 00:06:50 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7359eb5838eaa471cb27320e4d722cb206eb02342aeb067220bdf2500ec02c1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 00:06:50 np0005480824 podman[305706]: 2025-10-11 04:06:50.956128858 +0000 UTC m=+0.147847929 container init 16721b7926cbe411221795d209f9bbe26e43ebbbe4a933290ee9ebbfecc41996 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_mendeleev, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 11 00:06:50 np0005480824 podman[305706]: 2025-10-11 04:06:50.96343313 +0000 UTC m=+0.155152151 container start 16721b7926cbe411221795d209f9bbe26e43ebbbe4a933290ee9ebbfecc41996 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Oct 11 00:06:50 np0005480824 podman[305706]: 2025-10-11 04:06:50.966239386 +0000 UTC m=+0.157958457 container attach 16721b7926cbe411221795d209f9bbe26e43ebbbe4a933290ee9ebbfecc41996 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Oct 11 00:06:51 np0005480824 nova_compute[260089]: 2025-10-11 04:06:51.057 2 DEBUG nova.compute.manager [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 52296433-4344-4796-825b-6405fe5eae5d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 00:06:51 np0005480824 nova_compute[260089]: 2025-10-11 04:06:51.058 2 DEBUG nova.virt.libvirt.driver [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 52296433-4344-4796-825b-6405fe5eae5d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 00:06:51 np0005480824 nova_compute[260089]: 2025-10-11 04:06:51.059 2 INFO nova.virt.libvirt.driver [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 52296433-4344-4796-825b-6405fe5eae5d] Creating image(s)#033[00m
Oct 11 00:06:51 np0005480824 nova_compute[260089]: 2025-10-11 04:06:51.059 2 DEBUG nova.virt.libvirt.driver [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 52296433-4344-4796-825b-6405fe5eae5d] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Oct 11 00:06:51 np0005480824 nova_compute[260089]: 2025-10-11 04:06:51.059 2 DEBUG nova.virt.libvirt.driver [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 52296433-4344-4796-825b-6405fe5eae5d] Ensure instance console log exists: /var/lib/nova/instances/52296433-4344-4796-825b-6405fe5eae5d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 00:06:51 np0005480824 nova_compute[260089]: 2025-10-11 04:06:51.060 2 DEBUG oslo_concurrency.lockutils [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:06:51 np0005480824 nova_compute[260089]: 2025-10-11 04:06:51.060 2 DEBUG oslo_concurrency.lockutils [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:06:51 np0005480824 nova_compute[260089]: 2025-10-11 04:06:51.060 2 DEBUG oslo_concurrency.lockutils [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:06:51 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1907: 321 pgs: 321 active+clean; 202 MiB data, 591 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Oct 11 00:06:51 np0005480824 nova_compute[260089]: 2025-10-11 04:06:51.297 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:06:51 np0005480824 nova_compute[260089]: 2025-10-11 04:06:51.712 2 DEBUG nova.network.neutron [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 52296433-4344-4796-825b-6405fe5eae5d] Successfully created port: 7452b5ba-837b-463f-9388-b4139a5e9f4f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 00:06:52 np0005480824 hardcore_mendeleev[305723]: {
Oct 11 00:06:52 np0005480824 hardcore_mendeleev[305723]:    "1d0d82ce-20ea-470d-959e-f67202028a60": {
Oct 11 00:06:52 np0005480824 hardcore_mendeleev[305723]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 11 00:06:52 np0005480824 hardcore_mendeleev[305723]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 00:06:52 np0005480824 hardcore_mendeleev[305723]:        "osd_id": 0,
Oct 11 00:06:52 np0005480824 hardcore_mendeleev[305723]:        "osd_uuid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 11 00:06:52 np0005480824 hardcore_mendeleev[305723]:        "type": "bluestore"
Oct 11 00:06:52 np0005480824 hardcore_mendeleev[305723]:    },
Oct 11 00:06:52 np0005480824 hardcore_mendeleev[305723]:    "6875119e-c210-4ad1-aca9-6a8084a5ecc8": {
Oct 11 00:06:52 np0005480824 hardcore_mendeleev[305723]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 11 00:06:52 np0005480824 hardcore_mendeleev[305723]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 00:06:52 np0005480824 hardcore_mendeleev[305723]:        "osd_id": 1,
Oct 11 00:06:52 np0005480824 hardcore_mendeleev[305723]:        "osd_uuid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 11 00:06:52 np0005480824 hardcore_mendeleev[305723]:        "type": "bluestore"
Oct 11 00:06:52 np0005480824 hardcore_mendeleev[305723]:    },
Oct 11 00:06:52 np0005480824 hardcore_mendeleev[305723]:    "e86945e8-6909-4584-9098-cee0dfe9add4": {
Oct 11 00:06:52 np0005480824 hardcore_mendeleev[305723]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 11 00:06:52 np0005480824 hardcore_mendeleev[305723]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 00:06:52 np0005480824 hardcore_mendeleev[305723]:        "osd_id": 2,
Oct 11 00:06:52 np0005480824 hardcore_mendeleev[305723]:        "osd_uuid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 11 00:06:52 np0005480824 hardcore_mendeleev[305723]:        "type": "bluestore"
Oct 11 00:06:52 np0005480824 hardcore_mendeleev[305723]:    }
Oct 11 00:06:52 np0005480824 hardcore_mendeleev[305723]: }
Oct 11 00:06:52 np0005480824 systemd[1]: libpod-16721b7926cbe411221795d209f9bbe26e43ebbbe4a933290ee9ebbfecc41996.scope: Deactivated successfully.
Oct 11 00:06:52 np0005480824 systemd[1]: libpod-16721b7926cbe411221795d209f9bbe26e43ebbbe4a933290ee9ebbfecc41996.scope: Consumed 1.077s CPU time.
Oct 11 00:06:52 np0005480824 podman[305756]: 2025-10-11 04:06:52.070569537 +0000 UTC m=+0.022987303 container died 16721b7926cbe411221795d209f9bbe26e43ebbbe4a933290ee9ebbfecc41996 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_mendeleev, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 00:06:52 np0005480824 systemd[1]: var-lib-containers-storage-overlay-b7359eb5838eaa471cb27320e4d722cb206eb02342aeb067220bdf2500ec02c1-merged.mount: Deactivated successfully.
Oct 11 00:06:52 np0005480824 podman[305756]: 2025-10-11 04:06:52.14571441 +0000 UTC m=+0.098132176 container remove 16721b7926cbe411221795d209f9bbe26e43ebbbe4a933290ee9ebbfecc41996 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 11 00:06:52 np0005480824 systemd[1]: libpod-conmon-16721b7926cbe411221795d209f9bbe26e43ebbbe4a933290ee9ebbfecc41996.scope: Deactivated successfully.
Oct 11 00:06:52 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 00:06:52 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 11 00:06:52 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 00:06:52 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 11 00:06:52 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev b762e7d1-94b6-4f22-8114-6b88b42ae219 does not exist
Oct 11 00:06:52 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 7e059ce8-b8de-4334-8f7e-5f260d5c32e5 does not exist
Oct 11 00:06:52 np0005480824 nova_compute[260089]: 2025-10-11 04:06:52.480 2 DEBUG nova.network.neutron [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 52296433-4344-4796-825b-6405fe5eae5d] Successfully updated port: 7452b5ba-837b-463f-9388-b4139a5e9f4f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 00:06:52 np0005480824 nova_compute[260089]: 2025-10-11 04:06:52.508 2 DEBUG oslo_concurrency.lockutils [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Acquiring lock "refresh_cache-52296433-4344-4796-825b-6405fe5eae5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 00:06:52 np0005480824 nova_compute[260089]: 2025-10-11 04:06:52.508 2 DEBUG oslo_concurrency.lockutils [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Acquired lock "refresh_cache-52296433-4344-4796-825b-6405fe5eae5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 00:06:52 np0005480824 nova_compute[260089]: 2025-10-11 04:06:52.508 2 DEBUG nova.network.neutron [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 52296433-4344-4796-825b-6405fe5eae5d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 00:06:52 np0005480824 nova_compute[260089]: 2025-10-11 04:06:52.588 2 DEBUG nova.compute.manager [req-de70c5ba-b95d-4d0e-bd02-5aaca1f0c164 req-962328cd-d2e4-41b1-9378-61c5557d9df5 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 52296433-4344-4796-825b-6405fe5eae5d] Received event network-changed-7452b5ba-837b-463f-9388-b4139a5e9f4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 00:06:52 np0005480824 nova_compute[260089]: 2025-10-11 04:06:52.589 2 DEBUG nova.compute.manager [req-de70c5ba-b95d-4d0e-bd02-5aaca1f0c164 req-962328cd-d2e4-41b1-9378-61c5557d9df5 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 52296433-4344-4796-825b-6405fe5eae5d] Refreshing instance network info cache due to event network-changed-7452b5ba-837b-463f-9388-b4139a5e9f4f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 00:06:52 np0005480824 nova_compute[260089]: 2025-10-11 04:06:52.589 2 DEBUG oslo_concurrency.lockutils [req-de70c5ba-b95d-4d0e-bd02-5aaca1f0c164 req-962328cd-d2e4-41b1-9378-61c5557d9df5 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "refresh_cache-52296433-4344-4796-825b-6405fe5eae5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 00:06:52 np0005480824 nova_compute[260089]: 2025-10-11 04:06:52.623 2 DEBUG nova.network.neutron [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 52296433-4344-4796-825b-6405fe5eae5d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 00:06:53 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1908: 321 pgs: 321 active+clean; 202 MiB data, 591 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Oct 11 00:06:53 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 11 00:06:53 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 11 00:06:53 np0005480824 nova_compute[260089]: 2025-10-11 04:06:53.856 2 DEBUG nova.network.neutron [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 52296433-4344-4796-825b-6405fe5eae5d] Updating instance_info_cache with network_info: [{"id": "7452b5ba-837b-463f-9388-b4139a5e9f4f", "address": "fa:16:3e:dd:3b:81", "network": {"id": "abadcf46-9a41-4911-85e0-fbcde2d48b79", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-654501219-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f367c6c5e8f479399a2004c82cfaff0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7452b5ba-83", "ovs_interfaceid": "7452b5ba-837b-463f-9388-b4139a5e9f4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 00:06:53 np0005480824 nova_compute[260089]: 2025-10-11 04:06:53.873 2 DEBUG oslo_concurrency.lockutils [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Releasing lock "refresh_cache-52296433-4344-4796-825b-6405fe5eae5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 00:06:53 np0005480824 nova_compute[260089]: 2025-10-11 04:06:53.874 2 DEBUG nova.compute.manager [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 52296433-4344-4796-825b-6405fe5eae5d] Instance network_info: |[{"id": "7452b5ba-837b-463f-9388-b4139a5e9f4f", "address": "fa:16:3e:dd:3b:81", "network": {"id": "abadcf46-9a41-4911-85e0-fbcde2d48b79", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-654501219-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f367c6c5e8f479399a2004c82cfaff0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7452b5ba-83", "ovs_interfaceid": "7452b5ba-837b-463f-9388-b4139a5e9f4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 00:06:53 np0005480824 nova_compute[260089]: 2025-10-11 04:06:53.874 2 DEBUG oslo_concurrency.lockutils [req-de70c5ba-b95d-4d0e-bd02-5aaca1f0c164 req-962328cd-d2e4-41b1-9378-61c5557d9df5 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquired lock "refresh_cache-52296433-4344-4796-825b-6405fe5eae5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 00:06:53 np0005480824 nova_compute[260089]: 2025-10-11 04:06:53.874 2 DEBUG nova.network.neutron [req-de70c5ba-b95d-4d0e-bd02-5aaca1f0c164 req-962328cd-d2e4-41b1-9378-61c5557d9df5 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 52296433-4344-4796-825b-6405fe5eae5d] Refreshing network info cache for port 7452b5ba-837b-463f-9388-b4139a5e9f4f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 00:06:53 np0005480824 nova_compute[260089]: 2025-10-11 04:06:53.877 2 DEBUG nova.virt.libvirt.driver [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 52296433-4344-4796-825b-6405fe5eae5d] Start _get_guest_xml network_info=[{"id": "7452b5ba-837b-463f-9388-b4139a5e9f4f", "address": "fa:16:3e:dd:3b:81", "network": {"id": "abadcf46-9a41-4911-85e0-fbcde2d48b79", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-654501219-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f367c6c5e8f479399a2004c82cfaff0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7452b5ba-83", "ovs_interfaceid": "7452b5ba-837b-463f-9388-b4139a5e9f4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'attachment_id': 'ae1d9bf4-03a1-4589-9aa5-ee42700f765e', 'mount_device': '/dev/vda', 'delete_on_termination': False, 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-d2b26540-d391-41fc-aff8-74d0350e04c9', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'd2b26540-d391-41fc-aff8-74d0350e04c9', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '52296433-4344-4796-825b-6405fe5eae5d', 'attached_at': '', 'detached_at': '', 'volume_id': 'd2b26540-d391-41fc-aff8-74d0350e04c9', 'serial': 'd2b26540-d391-41fc-aff8-74d0350e04c9'}, 'device_type': 'disk', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 00:06:53 np0005480824 nova_compute[260089]: 2025-10-11 04:06:53.882 2 WARNING nova.virt.libvirt.driver [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 00:06:53 np0005480824 nova_compute[260089]: 2025-10-11 04:06:53.889 2 DEBUG nova.virt.libvirt.host [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 00:06:53 np0005480824 nova_compute[260089]: 2025-10-11 04:06:53.889 2 DEBUG nova.virt.libvirt.host [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 00:06:53 np0005480824 nova_compute[260089]: 2025-10-11 04:06:53.896 2 DEBUG nova.virt.libvirt.host [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 00:06:53 np0005480824 nova_compute[260089]: 2025-10-11 04:06:53.896 2 DEBUG nova.virt.libvirt.host [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 00:06:53 np0005480824 nova_compute[260089]: 2025-10-11 04:06:53.897 2 DEBUG nova.virt.libvirt.driver [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 00:06:53 np0005480824 nova_compute[260089]: 2025-10-11 04:06:53.897 2 DEBUG nova.virt.hardware [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T03:44:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6707ecae-2ae2-4c2d-86dc-409bac38f6a5',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 00:06:53 np0005480824 nova_compute[260089]: 2025-10-11 04:06:53.897 2 DEBUG nova.virt.hardware [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 00:06:53 np0005480824 nova_compute[260089]: 2025-10-11 04:06:53.898 2 DEBUG nova.virt.hardware [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 00:06:53 np0005480824 nova_compute[260089]: 2025-10-11 04:06:53.898 2 DEBUG nova.virt.hardware [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 00:06:53 np0005480824 nova_compute[260089]: 2025-10-11 04:06:53.898 2 DEBUG nova.virt.hardware [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 00:06:53 np0005480824 nova_compute[260089]: 2025-10-11 04:06:53.899 2 DEBUG nova.virt.hardware [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 00:06:53 np0005480824 nova_compute[260089]: 2025-10-11 04:06:53.899 2 DEBUG nova.virt.hardware [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 00:06:53 np0005480824 nova_compute[260089]: 2025-10-11 04:06:53.899 2 DEBUG nova.virt.hardware [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 00:06:53 np0005480824 nova_compute[260089]: 2025-10-11 04:06:53.899 2 DEBUG nova.virt.hardware [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 00:06:53 np0005480824 nova_compute[260089]: 2025-10-11 04:06:53.900 2 DEBUG nova.virt.hardware [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 00:06:53 np0005480824 nova_compute[260089]: 2025-10-11 04:06:53.900 2 DEBUG nova.virt.hardware [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 00:06:53 np0005480824 nova_compute[260089]: 2025-10-11 04:06:53.924 2 DEBUG nova.storage.rbd_utils [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] rbd image 52296433-4344-4796-825b-6405fe5eae5d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 00:06:53 np0005480824 nova_compute[260089]: 2025-10-11 04:06:53.927 2 DEBUG oslo_concurrency.processutils [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:06:54 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 00:06:54 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1218314938' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 00:06:54 np0005480824 nova_compute[260089]: 2025-10-11 04:06:54.432 2 DEBUG oslo_concurrency.processutils [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:06:54 np0005480824 nova_compute[260089]: 2025-10-11 04:06:54.582 2 DEBUG os_brick.encryptors [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Using volume encryption metadata '{'encryption_key_id': '53379318-e944-46e1-9db7-bde32a1b9da5', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-d2b26540-d391-41fc-aff8-74d0350e04c9', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'd2b26540-d391-41fc-aff8-74d0350e04c9', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '52296433-4344-4796-825b-6405fe5eae5d', 'attached_at': '', 'detached_at': '', 'volume_id': 'd2b26540-d391-41fc-aff8-74d0350e04c9', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Oct 11 00:06:54 np0005480824 nova_compute[260089]: 2025-10-11 04:06:54.585 2 DEBUG barbicanclient.client [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163#033[00m
Oct 11 00:06:54 np0005480824 nova_compute[260089]: 2025-10-11 04:06:54.602 2 DEBUG barbicanclient.v1.secrets [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/53379318-e944-46e1-9db7-bde32a1b9da5 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514#033[00m
Oct 11 00:06:54 np0005480824 nova_compute[260089]: 2025-10-11 04:06:54.602 2 INFO barbicanclient.base [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Calculated Secrets uuid ref: secrets/53379318-e944-46e1-9db7-bde32a1b9da5#033[00m
Oct 11 00:06:54 np0005480824 nova_compute[260089]: 2025-10-11 04:06:54.629 2 DEBUG barbicanclient.client [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:06:54 np0005480824 nova_compute[260089]: 2025-10-11 04:06:54.629 2 INFO barbicanclient.base [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Calculated Secrets uuid ref: secrets/53379318-e944-46e1-9db7-bde32a1b9da5#033[00m
Oct 11 00:06:54 np0005480824 nova_compute[260089]: 2025-10-11 04:06:54.723 2 DEBUG barbicanclient.client [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:06:54 np0005480824 nova_compute[260089]: 2025-10-11 04:06:54.724 2 INFO barbicanclient.base [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Calculated Secrets uuid ref: secrets/53379318-e944-46e1-9db7-bde32a1b9da5#033[00m
Oct 11 00:06:54 np0005480824 nova_compute[260089]: 2025-10-11 04:06:54.744 2 DEBUG barbicanclient.client [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:06:54 np0005480824 nova_compute[260089]: 2025-10-11 04:06:54.744 2 INFO barbicanclient.base [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Calculated Secrets uuid ref: secrets/53379318-e944-46e1-9db7-bde32a1b9da5#033[00m
Oct 11 00:06:54 np0005480824 nova_compute[260089]: 2025-10-11 04:06:54.767 2 DEBUG barbicanclient.client [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:06:54 np0005480824 nova_compute[260089]: 2025-10-11 04:06:54.767 2 INFO barbicanclient.base [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Calculated Secrets uuid ref: secrets/53379318-e944-46e1-9db7-bde32a1b9da5#033[00m
Oct 11 00:06:54 np0005480824 nova_compute[260089]: 2025-10-11 04:06:54.800 2 DEBUG barbicanclient.client [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:06:54 np0005480824 nova_compute[260089]: 2025-10-11 04:06:54.800 2 INFO barbicanclient.base [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Calculated Secrets uuid ref: secrets/53379318-e944-46e1-9db7-bde32a1b9da5#033[00m
Oct 11 00:06:54 np0005480824 nova_compute[260089]: 2025-10-11 04:06:54.851 2 DEBUG barbicanclient.client [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:06:54 np0005480824 nova_compute[260089]: 2025-10-11 04:06:54.851 2 INFO barbicanclient.base [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Calculated Secrets uuid ref: secrets/53379318-e944-46e1-9db7-bde32a1b9da5#033[00m
Oct 11 00:06:54 np0005480824 nova_compute[260089]: 2025-10-11 04:06:54.881 2 DEBUG barbicanclient.client [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:06:54 np0005480824 nova_compute[260089]: 2025-10-11 04:06:54.881 2 INFO barbicanclient.base [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Calculated Secrets uuid ref: secrets/53379318-e944-46e1-9db7-bde32a1b9da5#033[00m
Oct 11 00:06:54 np0005480824 nova_compute[260089]: 2025-10-11 04:06:54.920 2 DEBUG barbicanclient.client [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:06:54 np0005480824 nova_compute[260089]: 2025-10-11 04:06:54.920 2 INFO barbicanclient.base [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Calculated Secrets uuid ref: secrets/53379318-e944-46e1-9db7-bde32a1b9da5#033[00m
Oct 11 00:06:55 np0005480824 nova_compute[260089]: 2025-10-11 04:06:55.014 2 DEBUG barbicanclient.client [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:06:55 np0005480824 nova_compute[260089]: 2025-10-11 04:06:55.015 2 INFO barbicanclient.base [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Calculated Secrets uuid ref: secrets/53379318-e944-46e1-9db7-bde32a1b9da5#033[00m
Oct 11 00:06:55 np0005480824 nova_compute[260089]: 2025-10-11 04:06:55.032 2 DEBUG barbicanclient.client [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:06:55 np0005480824 nova_compute[260089]: 2025-10-11 04:06:55.033 2 INFO barbicanclient.base [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Calculated Secrets uuid ref: secrets/53379318-e944-46e1-9db7-bde32a1b9da5#033[00m
Oct 11 00:06:55 np0005480824 nova_compute[260089]: 2025-10-11 04:06:55.060 2 DEBUG barbicanclient.client [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:06:55 np0005480824 nova_compute[260089]: 2025-10-11 04:06:55.061 2 INFO barbicanclient.base [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Calculated Secrets uuid ref: secrets/53379318-e944-46e1-9db7-bde32a1b9da5#033[00m
Oct 11 00:06:55 np0005480824 nova_compute[260089]: 2025-10-11 04:06:55.077 2 DEBUG barbicanclient.client [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:06:55 np0005480824 nova_compute[260089]: 2025-10-11 04:06:55.078 2 INFO barbicanclient.base [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Calculated Secrets uuid ref: secrets/53379318-e944-46e1-9db7-bde32a1b9da5#033[00m
Oct 11 00:06:55 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1909: 321 pgs: 321 active+clean; 202 MiB data, 591 MiB used, 59 GiB / 60 GiB avail
Oct 11 00:06:55 np0005480824 nova_compute[260089]: 2025-10-11 04:06:55.106 2 DEBUG barbicanclient.client [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:06:55 np0005480824 nova_compute[260089]: 2025-10-11 04:06:55.107 2 INFO barbicanclient.base [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Calculated Secrets uuid ref: secrets/53379318-e944-46e1-9db7-bde32a1b9da5#033[00m
Oct 11 00:06:55 np0005480824 nova_compute[260089]: 2025-10-11 04:06:55.123 2 DEBUG barbicanclient.client [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:06:55 np0005480824 nova_compute[260089]: 2025-10-11 04:06:55.124 2 INFO barbicanclient.base [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Calculated Secrets uuid ref: secrets/53379318-e944-46e1-9db7-bde32a1b9da5#033[00m
Oct 11 00:06:55 np0005480824 nova_compute[260089]: 2025-10-11 04:06:55.141 2 DEBUG barbicanclient.client [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:06:55 np0005480824 nova_compute[260089]: 2025-10-11 04:06:55.141 2 DEBUG nova.virt.libvirt.host [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Secret XML: <secret ephemeral="no" private="no">
Oct 11 00:06:55 np0005480824 nova_compute[260089]:  <usage type="volume">
Oct 11 00:06:55 np0005480824 nova_compute[260089]:    <volume>d2b26540-d391-41fc-aff8-74d0350e04c9</volume>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:  </usage>
Oct 11 00:06:55 np0005480824 nova_compute[260089]: </secret>
Oct 11 00:06:55 np0005480824 nova_compute[260089]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131#033[00m
Oct 11 00:06:55 np0005480824 nova_compute[260089]: 2025-10-11 04:06:55.173 2 DEBUG nova.virt.libvirt.vif [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T04:06:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-771287187',display_name='tempest-TestEncryptedCinderVolumes-server-771287187',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-771287187',id=29,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNT5nRmfgUQpJQihppMhJ/PJtl2PXt4LF+4fCTR7CvYlNKAHH53rCj1YReitA5DOkjFToqvFLFWF74Q9GO2rD7zoT+ufORFGj1sd+RhwvHNqWv6rQH+IM1H5SH+IwmGWpQ==',key_name='tempest-TestEncryptedCinderVolumes-517148477',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f367c6c5e8f479399a2004c82cfaff0',ramdisk_id='',reservation_id='r-r2rls36l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-781713731',owner_user_name='tempest-TestEncryptedCinderVolumes-781713731-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T04:06:49Z,user_data=None,user_id='f9202e7d8882475ba6a769d9c59c35fd',uuid=52296433-4344-4796-825b-6405fe5eae5d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7452b5ba-837b-463f-9388-b4139a5e9f4f", "address": "fa:16:3e:dd:3b:81", "network": {"id": "abadcf46-9a41-4911-85e0-fbcde2d48b79", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-654501219-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"6f367c6c5e8f479399a2004c82cfaff0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7452b5ba-83", "ovs_interfaceid": "7452b5ba-837b-463f-9388-b4139a5e9f4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 00:06:55 np0005480824 nova_compute[260089]: 2025-10-11 04:06:55.174 2 DEBUG nova.network.os_vif_util [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Converting VIF {"id": "7452b5ba-837b-463f-9388-b4139a5e9f4f", "address": "fa:16:3e:dd:3b:81", "network": {"id": "abadcf46-9a41-4911-85e0-fbcde2d48b79", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-654501219-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f367c6c5e8f479399a2004c82cfaff0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7452b5ba-83", "ovs_interfaceid": "7452b5ba-837b-463f-9388-b4139a5e9f4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 00:06:55 np0005480824 nova_compute[260089]: 2025-10-11 04:06:55.176 2 DEBUG nova.network.os_vif_util [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:3b:81,bridge_name='br-int',has_traffic_filtering=True,id=7452b5ba-837b-463f-9388-b4139a5e9f4f,network=Network(abadcf46-9a41-4911-85e0-fbcde2d48b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7452b5ba-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 00:06:55 np0005480824 nova_compute[260089]: 2025-10-11 04:06:55.177 2 DEBUG nova.objects.instance [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 52296433-4344-4796-825b-6405fe5eae5d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 00:06:55 np0005480824 nova_compute[260089]: 2025-10-11 04:06:55.195 2 DEBUG nova.virt.libvirt.driver [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 52296433-4344-4796-825b-6405fe5eae5d] End _get_guest_xml xml=<domain type="kvm">
Oct 11 00:06:55 np0005480824 nova_compute[260089]:  <uuid>52296433-4344-4796-825b-6405fe5eae5d</uuid>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:  <name>instance-0000001d</name>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:  <memory>131072</memory>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:  <vcpu>1</vcpu>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:  <metadata>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 00:06:55 np0005480824 nova_compute[260089]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:      <nova:name>tempest-TestEncryptedCinderVolumes-server-771287187</nova:name>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:      <nova:creationTime>2025-10-11 04:06:53</nova:creationTime>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:      <nova:flavor name="m1.nano">
Oct 11 00:06:55 np0005480824 nova_compute[260089]:        <nova:memory>128</nova:memory>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:        <nova:disk>1</nova:disk>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:        <nova:swap>0</nova:swap>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:        <nova:vcpus>1</nova:vcpus>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:      </nova:flavor>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:      <nova:owner>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:        <nova:user uuid="f9202e7d8882475ba6a769d9c59c35fd">tempest-TestEncryptedCinderVolumes-781713731-project-member</nova:user>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:        <nova:project uuid="6f367c6c5e8f479399a2004c82cfaff0">tempest-TestEncryptedCinderVolumes-781713731</nova:project>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:      </nova:owner>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:      <nova:ports>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:        <nova:port uuid="7452b5ba-837b-463f-9388-b4139a5e9f4f">
Oct 11 00:06:55 np0005480824 nova_compute[260089]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:        </nova:port>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:      </nova:ports>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:    </nova:instance>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:  </metadata>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:  <sysinfo type="smbios">
Oct 11 00:06:55 np0005480824 nova_compute[260089]:    <system>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:      <entry name="manufacturer">RDO</entry>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:      <entry name="product">OpenStack Compute</entry>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:      <entry name="serial">52296433-4344-4796-825b-6405fe5eae5d</entry>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:      <entry name="uuid">52296433-4344-4796-825b-6405fe5eae5d</entry>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:      <entry name="family">Virtual Machine</entry>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:    </system>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:  </sysinfo>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:  <os>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:    <boot dev="hd"/>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:    <smbios mode="sysinfo"/>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:  </os>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:  <features>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:    <acpi/>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:    <apic/>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:    <vmcoreinfo/>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:  </features>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:  <clock offset="utc">
Oct 11 00:06:55 np0005480824 nova_compute[260089]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:    <timer name="hpet" present="no"/>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:  </clock>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:  <cpu mode="host-model" match="exact">
Oct 11 00:06:55 np0005480824 nova_compute[260089]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:  </cpu>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:  <devices>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:    <disk type="network" device="cdrom">
Oct 11 00:06:55 np0005480824 nova_compute[260089]:      <driver type="raw" cache="none"/>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:      <source protocol="rbd" name="vms/52296433-4344-4796-825b-6405fe5eae5d_disk.config">
Oct 11 00:06:55 np0005480824 nova_compute[260089]:        <host name="192.168.122.100" port="6789"/>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:      </source>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:      <auth username="openstack">
Oct 11 00:06:55 np0005480824 nova_compute[260089]:        <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:      </auth>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:      <target dev="sda" bus="sata"/>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:    </disk>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:    <disk type="network" device="disk">
Oct 11 00:06:55 np0005480824 nova_compute[260089]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:      <source protocol="rbd" name="volumes/volume-d2b26540-d391-41fc-aff8-74d0350e04c9">
Oct 11 00:06:55 np0005480824 nova_compute[260089]:        <host name="192.168.122.100" port="6789"/>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:      </source>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:      <auth username="openstack">
Oct 11 00:06:55 np0005480824 nova_compute[260089]:        <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:      </auth>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:      <target dev="vda" bus="virtio"/>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:      <serial>d2b26540-d391-41fc-aff8-74d0350e04c9</serial>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:      <encryption format="luks">
Oct 11 00:06:55 np0005480824 nova_compute[260089]:        <secret type="passphrase" uuid="89f5897d-8807-4311-a5c4-259b453698a3"/>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:      </encryption>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:    </disk>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:    <interface type="ethernet">
Oct 11 00:06:55 np0005480824 nova_compute[260089]:      <mac address="fa:16:3e:dd:3b:81"/>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:      <model type="virtio"/>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:      <mtu size="1442"/>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:      <target dev="tap7452b5ba-83"/>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:    </interface>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:    <serial type="pty">
Oct 11 00:06:55 np0005480824 nova_compute[260089]:      <log file="/var/lib/nova/instances/52296433-4344-4796-825b-6405fe5eae5d/console.log" append="off"/>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:    </serial>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:    <video>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:      <model type="virtio"/>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:    </video>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:    <input type="tablet" bus="usb"/>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:    <rng model="virtio">
Oct 11 00:06:55 np0005480824 nova_compute[260089]:      <backend model="random">/dev/urandom</backend>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:    </rng>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root"/>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:    <controller type="usb" index="0"/>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:    <memballoon model="virtio">
Oct 11 00:06:55 np0005480824 nova_compute[260089]:      <stats period="10"/>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:    </memballoon>
Oct 11 00:06:55 np0005480824 nova_compute[260089]:  </devices>
Oct 11 00:06:55 np0005480824 nova_compute[260089]: </domain>
Oct 11 00:06:55 np0005480824 nova_compute[260089]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 00:06:55 np0005480824 nova_compute[260089]: 2025-10-11 04:06:55.196 2 DEBUG nova.compute.manager [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 52296433-4344-4796-825b-6405fe5eae5d] Preparing to wait for external event network-vif-plugged-7452b5ba-837b-463f-9388-b4139a5e9f4f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 00:06:55 np0005480824 nova_compute[260089]: 2025-10-11 04:06:55.197 2 DEBUG oslo_concurrency.lockutils [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Acquiring lock "52296433-4344-4796-825b-6405fe5eae5d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:06:55 np0005480824 nova_compute[260089]: 2025-10-11 04:06:55.197 2 DEBUG oslo_concurrency.lockutils [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Lock "52296433-4344-4796-825b-6405fe5eae5d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:06:55 np0005480824 nova_compute[260089]: 2025-10-11 04:06:55.197 2 DEBUG oslo_concurrency.lockutils [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Lock "52296433-4344-4796-825b-6405fe5eae5d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:06:55 np0005480824 nova_compute[260089]: 2025-10-11 04:06:55.198 2 DEBUG nova.virt.libvirt.vif [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T04:06:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-771287187',display_name='tempest-TestEncryptedCinderVolumes-server-771287187',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-771287187',id=29,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNT5nRmfgUQpJQihppMhJ/PJtl2PXt4LF+4fCTR7CvYlNKAHH53rCj1YReitA5DOkjFToqvFLFWF74Q9GO2rD7zoT+ufORFGj1sd+RhwvHNqWv6rQH+IM1H5SH+IwmGWpQ==',key_name='tempest-TestEncryptedCinderVolumes-517148477',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f367c6c5e8f479399a2004c82cfaff0',ramdisk_id='',reservation_id='r-r2rls36l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-781713731',owner_user_name='tempest-TestEncryptedCinderVolumes-781713731-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T04:06:49Z,user_data=None,user_id='f9202e7d8882475ba6a769d9c59c35fd',uuid=52296433-4344-4796-825b-6405fe5eae5d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7452b5ba-837b-463f-9388-b4139a5e9f4f", "address": "fa:16:3e:dd:3b:81", "network": {"id": "abadcf46-9a41-4911-85e0-fbcde2d48b79", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-654501219-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "6f367c6c5e8f479399a2004c82cfaff0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7452b5ba-83", "ovs_interfaceid": "7452b5ba-837b-463f-9388-b4139a5e9f4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 00:06:55 np0005480824 nova_compute[260089]: 2025-10-11 04:06:55.198 2 DEBUG nova.network.os_vif_util [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Converting VIF {"id": "7452b5ba-837b-463f-9388-b4139a5e9f4f", "address": "fa:16:3e:dd:3b:81", "network": {"id": "abadcf46-9a41-4911-85e0-fbcde2d48b79", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-654501219-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f367c6c5e8f479399a2004c82cfaff0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7452b5ba-83", "ovs_interfaceid": "7452b5ba-837b-463f-9388-b4139a5e9f4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 00:06:55 np0005480824 nova_compute[260089]: 2025-10-11 04:06:55.199 2 DEBUG nova.network.os_vif_util [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:3b:81,bridge_name='br-int',has_traffic_filtering=True,id=7452b5ba-837b-463f-9388-b4139a5e9f4f,network=Network(abadcf46-9a41-4911-85e0-fbcde2d48b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7452b5ba-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 00:06:55 np0005480824 nova_compute[260089]: 2025-10-11 04:06:55.199 2 DEBUG os_vif [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:3b:81,bridge_name='br-int',has_traffic_filtering=True,id=7452b5ba-837b-463f-9388-b4139a5e9f4f,network=Network(abadcf46-9a41-4911-85e0-fbcde2d48b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7452b5ba-83') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 00:06:55 np0005480824 nova_compute[260089]: 2025-10-11 04:06:55.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:06:55 np0005480824 nova_compute[260089]: 2025-10-11 04:06:55.200 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 00:06:55 np0005480824 nova_compute[260089]: 2025-10-11 04:06:55.200 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 00:06:55 np0005480824 nova_compute[260089]: 2025-10-11 04:06:55.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:06:55 np0005480824 nova_compute[260089]: 2025-10-11 04:06:55.203 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7452b5ba-83, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 00:06:55 np0005480824 nova_compute[260089]: 2025-10-11 04:06:55.203 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7452b5ba-83, col_values=(('external_ids', {'iface-id': '7452b5ba-837b-463f-9388-b4139a5e9f4f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:dd:3b:81', 'vm-uuid': '52296433-4344-4796-825b-6405fe5eae5d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 00:06:55 np0005480824 nova_compute[260089]: 2025-10-11 04:06:55.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:06:55 np0005480824 NetworkManager[44969]: <info>  [1760155615.2285] manager: (tap7452b5ba-83): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/142)
Oct 11 00:06:55 np0005480824 nova_compute[260089]: 2025-10-11 04:06:55.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 00:06:55 np0005480824 nova_compute[260089]: 2025-10-11 04:06:55.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:06:55 np0005480824 nova_compute[260089]: 2025-10-11 04:06:55.234 2 INFO os_vif [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:3b:81,bridge_name='br-int',has_traffic_filtering=True,id=7452b5ba-837b-463f-9388-b4139a5e9f4f,network=Network(abadcf46-9a41-4911-85e0-fbcde2d48b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7452b5ba-83')#033[00m
Oct 11 00:06:55 np0005480824 nova_compute[260089]: 2025-10-11 04:06:55.302 2 DEBUG nova.virt.libvirt.driver [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 00:06:55 np0005480824 nova_compute[260089]: 2025-10-11 04:06:55.303 2 DEBUG nova.virt.libvirt.driver [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 00:06:55 np0005480824 nova_compute[260089]: 2025-10-11 04:06:55.303 2 DEBUG nova.virt.libvirt.driver [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] No VIF found with MAC fa:16:3e:dd:3b:81, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 00:06:55 np0005480824 nova_compute[260089]: 2025-10-11 04:06:55.304 2 INFO nova.virt.libvirt.driver [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 52296433-4344-4796-825b-6405fe5eae5d] Using config drive#033[00m
Oct 11 00:06:55 np0005480824 nova_compute[260089]: 2025-10-11 04:06:55.330 2 DEBUG nova.storage.rbd_utils [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] rbd image 52296433-4344-4796-825b-6405fe5eae5d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 00:06:55 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e495 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:06:55 np0005480824 nova_compute[260089]: 2025-10-11 04:06:55.964 2 INFO nova.virt.libvirt.driver [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 52296433-4344-4796-825b-6405fe5eae5d] Creating config drive at /var/lib/nova/instances/52296433-4344-4796-825b-6405fe5eae5d/disk.config#033[00m
Oct 11 00:06:55 np0005480824 nova_compute[260089]: 2025-10-11 04:06:55.968 2 DEBUG oslo_concurrency.processutils [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/52296433-4344-4796-825b-6405fe5eae5d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpd6vl7wne execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:06:56 np0005480824 nova_compute[260089]: 2025-10-11 04:06:56.020 2 DEBUG nova.network.neutron [req-de70c5ba-b95d-4d0e-bd02-5aaca1f0c164 req-962328cd-d2e4-41b1-9378-61c5557d9df5 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 52296433-4344-4796-825b-6405fe5eae5d] Updated VIF entry in instance network info cache for port 7452b5ba-837b-463f-9388-b4139a5e9f4f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 00:06:56 np0005480824 nova_compute[260089]: 2025-10-11 04:06:56.022 2 DEBUG nova.network.neutron [req-de70c5ba-b95d-4d0e-bd02-5aaca1f0c164 req-962328cd-d2e4-41b1-9378-61c5557d9df5 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 52296433-4344-4796-825b-6405fe5eae5d] Updating instance_info_cache with network_info: [{"id": "7452b5ba-837b-463f-9388-b4139a5e9f4f", "address": "fa:16:3e:dd:3b:81", "network": {"id": "abadcf46-9a41-4911-85e0-fbcde2d48b79", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-654501219-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f367c6c5e8f479399a2004c82cfaff0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7452b5ba-83", "ovs_interfaceid": "7452b5ba-837b-463f-9388-b4139a5e9f4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 00:06:56 np0005480824 nova_compute[260089]: 2025-10-11 04:06:56.039 2 DEBUG oslo_concurrency.lockutils [req-de70c5ba-b95d-4d0e-bd02-5aaca1f0c164 req-962328cd-d2e4-41b1-9378-61c5557d9df5 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Releasing lock "refresh_cache-52296433-4344-4796-825b-6405fe5eae5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 00:06:56 np0005480824 podman[305881]: 2025-10-11 04:06:56.071404794 +0000 UTC m=+0.117678707 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251009, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 11 00:06:56 np0005480824 nova_compute[260089]: 2025-10-11 04:06:56.103 2 DEBUG oslo_concurrency.processutils [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/52296433-4344-4796-825b-6405fe5eae5d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpd6vl7wne" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:06:56 np0005480824 nova_compute[260089]: 2025-10-11 04:06:56.130 2 DEBUG nova.storage.rbd_utils [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] rbd image 52296433-4344-4796-825b-6405fe5eae5d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 00:06:56 np0005480824 nova_compute[260089]: 2025-10-11 04:06:56.134 2 DEBUG oslo_concurrency.processutils [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/52296433-4344-4796-825b-6405fe5eae5d/disk.config 52296433-4344-4796-825b-6405fe5eae5d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:06:56 np0005480824 nova_compute[260089]: 2025-10-11 04:06:56.296 2 DEBUG oslo_concurrency.processutils [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/52296433-4344-4796-825b-6405fe5eae5d/disk.config 52296433-4344-4796-825b-6405fe5eae5d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:06:56 np0005480824 nova_compute[260089]: 2025-10-11 04:06:56.297 2 INFO nova.virt.libvirt.driver [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 52296433-4344-4796-825b-6405fe5eae5d] Deleting local config drive /var/lib/nova/instances/52296433-4344-4796-825b-6405fe5eae5d/disk.config because it was imported into RBD.#033[00m
Oct 11 00:06:56 np0005480824 kernel: tap7452b5ba-83: entered promiscuous mode
Oct 11 00:06:56 np0005480824 NetworkManager[44969]: <info>  [1760155616.3669] manager: (tap7452b5ba-83): new Tun device (/org/freedesktop/NetworkManager/Devices/143)
Oct 11 00:06:56 np0005480824 ovn_controller[152667]: 2025-10-11T04:06:56Z|00267|binding|INFO|Claiming lport 7452b5ba-837b-463f-9388-b4139a5e9f4f for this chassis.
Oct 11 00:06:56 np0005480824 ovn_controller[152667]: 2025-10-11T04:06:56Z|00268|binding|INFO|7452b5ba-837b-463f-9388-b4139a5e9f4f: Claiming fa:16:3e:dd:3b:81 10.100.0.13
Oct 11 00:06:56 np0005480824 nova_compute[260089]: 2025-10-11 04:06:56.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:06:56 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:06:56.431 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:3b:81 10.100.0.13'], port_security=['fa:16:3e:dd:3b:81 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '52296433-4344-4796-825b-6405fe5eae5d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-abadcf46-9a41-4911-85e0-fbcde2d48b79', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f367c6c5e8f479399a2004c82cfaff0', 'neutron:revision_number': '2', 'neutron:security_group_ids': '78484826-fa6d-47e8-af8c-1b198aee6eb8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b37e59a3-7c4f-47c2-acd9-d3f9dd8c5f52, chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], logical_port=7452b5ba-837b-463f-9388-b4139a5e9f4f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 00:06:56 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:06:56.432 162245 INFO neutron.agent.ovn.metadata.agent [-] Port 7452b5ba-837b-463f-9388-b4139a5e9f4f in datapath abadcf46-9a41-4911-85e0-fbcde2d48b79 bound to our chassis#033[00m
Oct 11 00:06:56 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:06:56.433 162245 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network abadcf46-9a41-4911-85e0-fbcde2d48b79#033[00m
Oct 11 00:06:56 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:06:56.442 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[09773f7f-21ff-4229-997c-d969b54b7d52]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:06:56 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:06:56.443 162245 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapabadcf46-91 in ovnmeta-abadcf46-9a41-4911-85e0-fbcde2d48b79 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 00:06:56 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:06:56.447 267859 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapabadcf46-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 00:06:56 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:06:56.447 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[bc5bb059-dfc3-4d2d-801f-b13079ded4b5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:06:56 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:06:56.447 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[c249a1f0-1cbc-4c9e-9f2a-9cdb93a140df]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:06:56 np0005480824 systemd-machined[215071]: New machine qemu-29-instance-0000001d.
Oct 11 00:06:56 np0005480824 systemd-udevd[305961]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 00:06:56 np0005480824 ovn_controller[152667]: 2025-10-11T04:06:56Z|00269|binding|INFO|Setting lport 7452b5ba-837b-463f-9388-b4139a5e9f4f ovn-installed in OVS
Oct 11 00:06:56 np0005480824 ovn_controller[152667]: 2025-10-11T04:06:56Z|00270|binding|INFO|Setting lport 7452b5ba-837b-463f-9388-b4139a5e9f4f up in Southbound
Oct 11 00:06:56 np0005480824 nova_compute[260089]: 2025-10-11 04:06:56.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:06:56 np0005480824 systemd[1]: Started Virtual Machine qemu-29-instance-0000001d.
Oct 11 00:06:56 np0005480824 NetworkManager[44969]: <info>  [1760155616.4620] device (tap7452b5ba-83): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 00:06:56 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:06:56.461 162666 DEBUG oslo.privsep.daemon [-] privsep: reply[2e69bd40-83e2-4d05-b83b-1bdfbe112de3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:06:56 np0005480824 NetworkManager[44969]: <info>  [1760155616.4641] device (tap7452b5ba-83): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 00:06:56 np0005480824 nova_compute[260089]: 2025-10-11 04:06:56.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:06:56 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:06:56.477 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[94ab4f6d-af15-4b8c-aff0-ff00abcb8c6d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:06:56 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:06:56.508 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[ff7d1d9f-840d-4f44-8a50-2fd7362a439a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:06:56 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:06:56.512 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[16be6ee3-c417-4406-8b0e-bb2a8f94a439]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:06:56 np0005480824 NetworkManager[44969]: <info>  [1760155616.5136] manager: (tapabadcf46-90): new Veth device (/org/freedesktop/NetworkManager/Devices/144)
Oct 11 00:06:56 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:06:56.544 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[6290e0ef-721e-4a68-9054-0e254117caed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:06:56 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:06:56.547 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[97a903dc-ddde-41e2-be2e-c3c88d2011a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:06:56 np0005480824 NetworkManager[44969]: <info>  [1760155616.5695] device (tapabadcf46-90): carrier: link connected
Oct 11 00:06:56 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:06:56.574 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[0a04b6de-b56c-4a51-8443-8aeaf5642cd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:06:56 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:06:56.589 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[80aacd4a-fcaf-4c0d-907a-089518086d7e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapabadcf46-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:72:c9:bc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 93], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 508423, 'reachable_time': 28494, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305993, 'error': None, 'target': 'ovnmeta-abadcf46-9a41-4911-85e0-fbcde2d48b79', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:06:56 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:06:56.602 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[bedaff65-83a8-4e14-9586-100b28655dbd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe72:c9bc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508423, 'tstamp': 508423}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305994, 'error': None, 'target': 'ovnmeta-abadcf46-9a41-4911-85e0-fbcde2d48b79', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:06:56 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:06:56.614 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[ea127de7-ffc3-4109-945e-e3db17b21ec1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapabadcf46-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:72:c9:bc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 93], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 508423, 'reachable_time': 28494, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 305995, 'error': None, 'target': 'ovnmeta-abadcf46-9a41-4911-85e0-fbcde2d48b79', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:06:56 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:06:56.638 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[8c4808a1-90ce-448c-b7d7-3c54c90df766]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:06:56 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:06:56.692 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[c0e86338-002a-43fe-b5c6-0748b9dcaa71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:06:56 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:06:56.693 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapabadcf46-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 00:06:56 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:06:56.694 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 00:06:56 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:06:56.694 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapabadcf46-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 00:06:56 np0005480824 nova_compute[260089]: 2025-10-11 04:06:56.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:06:56 np0005480824 kernel: tapabadcf46-90: entered promiscuous mode
Oct 11 00:06:56 np0005480824 NetworkManager[44969]: <info>  [1760155616.6967] manager: (tapabadcf46-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/145)
Oct 11 00:06:56 np0005480824 nova_compute[260089]: 2025-10-11 04:06:56.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:06:56 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:06:56.699 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapabadcf46-90, col_values=(('external_ids', {'iface-id': '7b1d2367-bac7-4671-94ac-6b3206b5485c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 00:06:56 np0005480824 nova_compute[260089]: 2025-10-11 04:06:56.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:06:56 np0005480824 ovn_controller[152667]: 2025-10-11T04:06:56Z|00271|binding|INFO|Releasing lport 7b1d2367-bac7-4671-94ac-6b3206b5485c from this chassis (sb_readonly=0)
Oct 11 00:06:56 np0005480824 nova_compute[260089]: 2025-10-11 04:06:56.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:06:56 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:06:56.713 162245 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/abadcf46-9a41-4911-85e0-fbcde2d48b79.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/abadcf46-9a41-4911-85e0-fbcde2d48b79.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 00:06:56 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:06:56.714 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[64c0933c-9594-4109-9fe8-ec20a8635a88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:06:56 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:06:56.715 162245 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 00:06:56 np0005480824 ovn_metadata_agent[162240]: global
Oct 11 00:06:56 np0005480824 ovn_metadata_agent[162240]:    log         /dev/log local0 debug
Oct 11 00:06:56 np0005480824 ovn_metadata_agent[162240]:    log-tag     haproxy-metadata-proxy-abadcf46-9a41-4911-85e0-fbcde2d48b79
Oct 11 00:06:56 np0005480824 ovn_metadata_agent[162240]:    user        root
Oct 11 00:06:56 np0005480824 ovn_metadata_agent[162240]:    group       root
Oct 11 00:06:56 np0005480824 ovn_metadata_agent[162240]:    maxconn     1024
Oct 11 00:06:56 np0005480824 ovn_metadata_agent[162240]:    pidfile     /var/lib/neutron/external/pids/abadcf46-9a41-4911-85e0-fbcde2d48b79.pid.haproxy
Oct 11 00:06:56 np0005480824 ovn_metadata_agent[162240]:    daemon
Oct 11 00:06:56 np0005480824 ovn_metadata_agent[162240]: 
Oct 11 00:06:56 np0005480824 ovn_metadata_agent[162240]: defaults
Oct 11 00:06:56 np0005480824 ovn_metadata_agent[162240]:    log global
Oct 11 00:06:56 np0005480824 ovn_metadata_agent[162240]:    mode http
Oct 11 00:06:56 np0005480824 ovn_metadata_agent[162240]:    option httplog
Oct 11 00:06:56 np0005480824 ovn_metadata_agent[162240]:    option dontlognull
Oct 11 00:06:56 np0005480824 ovn_metadata_agent[162240]:    option http-server-close
Oct 11 00:06:56 np0005480824 ovn_metadata_agent[162240]:    option forwardfor
Oct 11 00:06:56 np0005480824 ovn_metadata_agent[162240]:    retries                 3
Oct 11 00:06:56 np0005480824 ovn_metadata_agent[162240]:    timeout http-request    30s
Oct 11 00:06:56 np0005480824 ovn_metadata_agent[162240]:    timeout connect         30s
Oct 11 00:06:56 np0005480824 ovn_metadata_agent[162240]:    timeout client          32s
Oct 11 00:06:56 np0005480824 ovn_metadata_agent[162240]:    timeout server          32s
Oct 11 00:06:56 np0005480824 ovn_metadata_agent[162240]:    timeout http-keep-alive 30s
Oct 11 00:06:56 np0005480824 ovn_metadata_agent[162240]: 
Oct 11 00:06:56 np0005480824 ovn_metadata_agent[162240]: 
Oct 11 00:06:56 np0005480824 ovn_metadata_agent[162240]: listen listener
Oct 11 00:06:56 np0005480824 ovn_metadata_agent[162240]:    bind 169.254.169.254:80
Oct 11 00:06:56 np0005480824 ovn_metadata_agent[162240]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 00:06:56 np0005480824 ovn_metadata_agent[162240]:    http-request add-header X-OVN-Network-ID abadcf46-9a41-4911-85e0-fbcde2d48b79
Oct 11 00:06:56 np0005480824 ovn_metadata_agent[162240]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 00:06:56 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:06:56.716 162245 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-abadcf46-9a41-4911-85e0-fbcde2d48b79', 'env', 'PROCESS_TAG=haproxy-abadcf46-9a41-4911-85e0-fbcde2d48b79', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/abadcf46-9a41-4911-85e0-fbcde2d48b79.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 00:06:56 np0005480824 nova_compute[260089]: 2025-10-11 04:06:56.945 2 DEBUG nova.compute.manager [req-97d622f1-ea91-4aae-8446-11c9d56c301e req-2c52f4ed-dfb4-4517-aa3b-0428f19f3a61 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 52296433-4344-4796-825b-6405fe5eae5d] Received event network-vif-plugged-7452b5ba-837b-463f-9388-b4139a5e9f4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 00:06:56 np0005480824 nova_compute[260089]: 2025-10-11 04:06:56.945 2 DEBUG oslo_concurrency.lockutils [req-97d622f1-ea91-4aae-8446-11c9d56c301e req-2c52f4ed-dfb4-4517-aa3b-0428f19f3a61 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "52296433-4344-4796-825b-6405fe5eae5d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:06:56 np0005480824 nova_compute[260089]: 2025-10-11 04:06:56.945 2 DEBUG oslo_concurrency.lockutils [req-97d622f1-ea91-4aae-8446-11c9d56c301e req-2c52f4ed-dfb4-4517-aa3b-0428f19f3a61 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "52296433-4344-4796-825b-6405fe5eae5d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:06:56 np0005480824 nova_compute[260089]: 2025-10-11 04:06:56.946 2 DEBUG oslo_concurrency.lockutils [req-97d622f1-ea91-4aae-8446-11c9d56c301e req-2c52f4ed-dfb4-4517-aa3b-0428f19f3a61 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "52296433-4344-4796-825b-6405fe5eae5d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:06:56 np0005480824 nova_compute[260089]: 2025-10-11 04:06:56.946 2 DEBUG nova.compute.manager [req-97d622f1-ea91-4aae-8446-11c9d56c301e req-2c52f4ed-dfb4-4517-aa3b-0428f19f3a61 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 52296433-4344-4796-825b-6405fe5eae5d] Processing event network-vif-plugged-7452b5ba-837b-463f-9388-b4139a5e9f4f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 00:06:57 np0005480824 podman[306027]: 2025-10-11 04:06:57.039728886 +0000 UTC m=+0.041612082 container create 981af86b39fd8e33172fc3f662f1bad99543554a0b9125e6022f16e42ea55e03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abadcf46-9a41-4911-85e0-fbcde2d48b79, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 00:06:57 np0005480824 systemd[1]: Started libpod-conmon-981af86b39fd8e33172fc3f662f1bad99543554a0b9125e6022f16e42ea55e03.scope.
Oct 11 00:06:57 np0005480824 systemd[1]: Started libcrun container.
Oct 11 00:06:57 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1616b1026d3bef457c502ae8f7dd2455aba09c4416f8922023350a93a000e32/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 00:06:57 np0005480824 podman[306027]: 2025-10-11 04:06:57.092956132 +0000 UTC m=+0.094839368 container init 981af86b39fd8e33172fc3f662f1bad99543554a0b9125e6022f16e42ea55e03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abadcf46-9a41-4911-85e0-fbcde2d48b79, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 11 00:06:57 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1910: 321 pgs: 321 active+clean; 202 MiB data, 591 MiB used, 59 GiB / 60 GiB avail; 767 B/s rd, 341 B/s wr, 1 op/s
Oct 11 00:06:57 np0005480824 podman[306027]: 2025-10-11 04:06:57.099962347 +0000 UTC m=+0.101845553 container start 981af86b39fd8e33172fc3f662f1bad99543554a0b9125e6022f16e42ea55e03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abadcf46-9a41-4911-85e0-fbcde2d48b79, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 00:06:57 np0005480824 podman[306027]: 2025-10-11 04:06:57.016022097 +0000 UTC m=+0.017905313 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 11 00:06:57 np0005480824 neutron-haproxy-ovnmeta-abadcf46-9a41-4911-85e0-fbcde2d48b79[306042]: [NOTICE]   (306046) : New worker (306048) forked
Oct 11 00:06:57 np0005480824 neutron-haproxy-ovnmeta-abadcf46-9a41-4911-85e0-fbcde2d48b79[306042]: [NOTICE]   (306046) : Loading success.
Oct 11 00:06:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 00:06:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 00:06:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 00:06:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 00:06:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 00:06:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 00:06:59 np0005480824 nova_compute[260089]: 2025-10-11 04:06:59.028 2 DEBUG nova.compute.manager [req-708fa88f-1c72-46ac-924d-992e724a7028 req-9314b929-58c5-4924-ab00-98cb804d15ed 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 52296433-4344-4796-825b-6405fe5eae5d] Received event network-vif-plugged-7452b5ba-837b-463f-9388-b4139a5e9f4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 00:06:59 np0005480824 nova_compute[260089]: 2025-10-11 04:06:59.030 2 DEBUG oslo_concurrency.lockutils [req-708fa88f-1c72-46ac-924d-992e724a7028 req-9314b929-58c5-4924-ab00-98cb804d15ed 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "52296433-4344-4796-825b-6405fe5eae5d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:06:59 np0005480824 nova_compute[260089]: 2025-10-11 04:06:59.030 2 DEBUG oslo_concurrency.lockutils [req-708fa88f-1c72-46ac-924d-992e724a7028 req-9314b929-58c5-4924-ab00-98cb804d15ed 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "52296433-4344-4796-825b-6405fe5eae5d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:06:59 np0005480824 nova_compute[260089]: 2025-10-11 04:06:59.031 2 DEBUG oslo_concurrency.lockutils [req-708fa88f-1c72-46ac-924d-992e724a7028 req-9314b929-58c5-4924-ab00-98cb804d15ed 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "52296433-4344-4796-825b-6405fe5eae5d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:06:59 np0005480824 nova_compute[260089]: 2025-10-11 04:06:59.032 2 DEBUG nova.compute.manager [req-708fa88f-1c72-46ac-924d-992e724a7028 req-9314b929-58c5-4924-ab00-98cb804d15ed 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 52296433-4344-4796-825b-6405fe5eae5d] No waiting events found dispatching network-vif-plugged-7452b5ba-837b-463f-9388-b4139a5e9f4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 00:06:59 np0005480824 nova_compute[260089]: 2025-10-11 04:06:59.032 2 WARNING nova.compute.manager [req-708fa88f-1c72-46ac-924d-992e724a7028 req-9314b929-58c5-4924-ab00-98cb804d15ed 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 52296433-4344-4796-825b-6405fe5eae5d] Received unexpected event network-vif-plugged-7452b5ba-837b-463f-9388-b4139a5e9f4f for instance with vm_state building and task_state spawning.#033[00m
Oct 11 00:06:59 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1911: 321 pgs: 321 active+clean; 202 MiB data, 592 MiB used, 59 GiB / 60 GiB avail; 5.2 KiB/s rd, 12 KiB/s wr, 7 op/s
Oct 11 00:07:00 np0005480824 nova_compute[260089]: 2025-10-11 04:07:00.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:07:00 np0005480824 nova_compute[260089]: 2025-10-11 04:07:00.264 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760155620.264145, 52296433-4344-4796-825b-6405fe5eae5d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 00:07:00 np0005480824 nova_compute[260089]: 2025-10-11 04:07:00.265 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 52296433-4344-4796-825b-6405fe5eae5d] VM Started (Lifecycle Event)#033[00m
Oct 11 00:07:00 np0005480824 nova_compute[260089]: 2025-10-11 04:07:00.267 2 DEBUG nova.compute.manager [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 52296433-4344-4796-825b-6405fe5eae5d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 00:07:00 np0005480824 nova_compute[260089]: 2025-10-11 04:07:00.271 2 DEBUG nova.virt.libvirt.driver [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 52296433-4344-4796-825b-6405fe5eae5d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 00:07:00 np0005480824 nova_compute[260089]: 2025-10-11 04:07:00.274 2 INFO nova.virt.libvirt.driver [-] [instance: 52296433-4344-4796-825b-6405fe5eae5d] Instance spawned successfully.#033[00m
Oct 11 00:07:00 np0005480824 nova_compute[260089]: 2025-10-11 04:07:00.274 2 DEBUG nova.virt.libvirt.driver [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 52296433-4344-4796-825b-6405fe5eae5d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 00:07:00 np0005480824 nova_compute[260089]: 2025-10-11 04:07:00.298 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 52296433-4344-4796-825b-6405fe5eae5d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 00:07:00 np0005480824 nova_compute[260089]: 2025-10-11 04:07:00.302 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 52296433-4344-4796-825b-6405fe5eae5d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 00:07:00 np0005480824 nova_compute[260089]: 2025-10-11 04:07:00.307 2 DEBUG nova.virt.libvirt.driver [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 52296433-4344-4796-825b-6405fe5eae5d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 00:07:00 np0005480824 nova_compute[260089]: 2025-10-11 04:07:00.307 2 DEBUG nova.virt.libvirt.driver [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 52296433-4344-4796-825b-6405fe5eae5d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 00:07:00 np0005480824 nova_compute[260089]: 2025-10-11 04:07:00.307 2 DEBUG nova.virt.libvirt.driver [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 52296433-4344-4796-825b-6405fe5eae5d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 00:07:00 np0005480824 nova_compute[260089]: 2025-10-11 04:07:00.308 2 DEBUG nova.virt.libvirt.driver [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 52296433-4344-4796-825b-6405fe5eae5d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 00:07:00 np0005480824 nova_compute[260089]: 2025-10-11 04:07:00.308 2 DEBUG nova.virt.libvirt.driver [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 52296433-4344-4796-825b-6405fe5eae5d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 00:07:00 np0005480824 nova_compute[260089]: 2025-10-11 04:07:00.308 2 DEBUG nova.virt.libvirt.driver [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 52296433-4344-4796-825b-6405fe5eae5d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 00:07:00 np0005480824 nova_compute[260089]: 2025-10-11 04:07:00.344 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 52296433-4344-4796-825b-6405fe5eae5d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 00:07:00 np0005480824 nova_compute[260089]: 2025-10-11 04:07:00.345 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760155620.2643776, 52296433-4344-4796-825b-6405fe5eae5d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 00:07:00 np0005480824 nova_compute[260089]: 2025-10-11 04:07:00.345 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 52296433-4344-4796-825b-6405fe5eae5d] VM Paused (Lifecycle Event)#033[00m
Oct 11 00:07:00 np0005480824 nova_compute[260089]: 2025-10-11 04:07:00.374 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 52296433-4344-4796-825b-6405fe5eae5d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 00:07:00 np0005480824 nova_compute[260089]: 2025-10-11 04:07:00.377 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760155620.2700875, 52296433-4344-4796-825b-6405fe5eae5d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 00:07:00 np0005480824 nova_compute[260089]: 2025-10-11 04:07:00.377 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 52296433-4344-4796-825b-6405fe5eae5d] VM Resumed (Lifecycle Event)#033[00m
Oct 11 00:07:00 np0005480824 nova_compute[260089]: 2025-10-11 04:07:00.389 2 INFO nova.compute.manager [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 52296433-4344-4796-825b-6405fe5eae5d] Took 9.33 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 00:07:00 np0005480824 nova_compute[260089]: 2025-10-11 04:07:00.390 2 DEBUG nova.compute.manager [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 52296433-4344-4796-825b-6405fe5eae5d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 00:07:00 np0005480824 nova_compute[260089]: 2025-10-11 04:07:00.401 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 52296433-4344-4796-825b-6405fe5eae5d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 00:07:00 np0005480824 nova_compute[260089]: 2025-10-11 04:07:00.404 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 52296433-4344-4796-825b-6405fe5eae5d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 00:07:00 np0005480824 nova_compute[260089]: 2025-10-11 04:07:00.436 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 52296433-4344-4796-825b-6405fe5eae5d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 00:07:00 np0005480824 nova_compute[260089]: 2025-10-11 04:07:00.462 2 INFO nova.compute.manager [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 52296433-4344-4796-825b-6405fe5eae5d] Took 11.31 seconds to build instance.#033[00m
Oct 11 00:07:00 np0005480824 nova_compute[260089]: 2025-10-11 04:07:00.478 2 DEBUG oslo_concurrency.lockutils [None req-1259ca6b-369b-43fd-aea9-78fae8372201 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Lock "52296433-4344-4796-825b-6405fe5eae5d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.406s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:07:00 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e495 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:07:01 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1912: 321 pgs: 321 active+clean; 202 MiB data, 592 MiB used, 59 GiB / 60 GiB avail; 5.2 KiB/s rd, 12 KiB/s wr, 7 op/s
Oct 11 00:07:03 np0005480824 podman[306100]: 2025-10-11 04:07:03.004464709 +0000 UTC m=+0.061159474 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 11 00:07:03 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1913: 321 pgs: 321 active+clean; 202 MiB data, 592 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 00:07:05 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1914: 321 pgs: 321 active+clean; 202 MiB data, 592 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 00:07:05 np0005480824 nova_compute[260089]: 2025-10-11 04:07:05.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 00:07:05 np0005480824 nova_compute[260089]: 2025-10-11 04:07:05.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 00:07:05 np0005480824 nova_compute[260089]: 2025-10-11 04:07:05.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 11 00:07:05 np0005480824 nova_compute[260089]: 2025-10-11 04:07:05.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 00:07:05 np0005480824 nova_compute[260089]: 2025-10-11 04:07:05.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:07:05 np0005480824 nova_compute[260089]: 2025-10-11 04:07:05.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 00:07:05 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e495 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:07:06 np0005480824 nova_compute[260089]: 2025-10-11 04:07:06.287 2 DEBUG nova.compute.manager [req-986d8378-2d78-4b71-a2fc-462d84aa85db req-8dbc126f-1c9e-4e49-9f34-5e2fe49e1ef9 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 52296433-4344-4796-825b-6405fe5eae5d] Received event network-changed-7452b5ba-837b-463f-9388-b4139a5e9f4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 00:07:06 np0005480824 nova_compute[260089]: 2025-10-11 04:07:06.288 2 DEBUG nova.compute.manager [req-986d8378-2d78-4b71-a2fc-462d84aa85db req-8dbc126f-1c9e-4e49-9f34-5e2fe49e1ef9 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 52296433-4344-4796-825b-6405fe5eae5d] Refreshing instance network info cache due to event network-changed-7452b5ba-837b-463f-9388-b4139a5e9f4f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 00:07:06 np0005480824 nova_compute[260089]: 2025-10-11 04:07:06.288 2 DEBUG oslo_concurrency.lockutils [req-986d8378-2d78-4b71-a2fc-462d84aa85db req-8dbc126f-1c9e-4e49-9f34-5e2fe49e1ef9 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "refresh_cache-52296433-4344-4796-825b-6405fe5eae5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 00:07:06 np0005480824 nova_compute[260089]: 2025-10-11 04:07:06.289 2 DEBUG oslo_concurrency.lockutils [req-986d8378-2d78-4b71-a2fc-462d84aa85db req-8dbc126f-1c9e-4e49-9f34-5e2fe49e1ef9 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquired lock "refresh_cache-52296433-4344-4796-825b-6405fe5eae5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 00:07:06 np0005480824 nova_compute[260089]: 2025-10-11 04:07:06.289 2 DEBUG nova.network.neutron [req-986d8378-2d78-4b71-a2fc-462d84aa85db req-8dbc126f-1c9e-4e49-9f34-5e2fe49e1ef9 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 52296433-4344-4796-825b-6405fe5eae5d] Refreshing network info cache for port 7452b5ba-837b-463f-9388-b4139a5e9f4f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 00:07:07 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1915: 321 pgs: 321 active+clean; 202 MiB data, 592 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 00:07:07 np0005480824 nova_compute[260089]: 2025-10-11 04:07:07.935 2 DEBUG nova.network.neutron [req-986d8378-2d78-4b71-a2fc-462d84aa85db req-8dbc126f-1c9e-4e49-9f34-5e2fe49e1ef9 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 52296433-4344-4796-825b-6405fe5eae5d] Updated VIF entry in instance network info cache for port 7452b5ba-837b-463f-9388-b4139a5e9f4f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 00:07:07 np0005480824 nova_compute[260089]: 2025-10-11 04:07:07.936 2 DEBUG nova.network.neutron [req-986d8378-2d78-4b71-a2fc-462d84aa85db req-8dbc126f-1c9e-4e49-9f34-5e2fe49e1ef9 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 52296433-4344-4796-825b-6405fe5eae5d] Updating instance_info_cache with network_info: [{"id": "7452b5ba-837b-463f-9388-b4139a5e9f4f", "address": "fa:16:3e:dd:3b:81", "network": {"id": "abadcf46-9a41-4911-85e0-fbcde2d48b79", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-654501219-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f367c6c5e8f479399a2004c82cfaff0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7452b5ba-83", "ovs_interfaceid": "7452b5ba-837b-463f-9388-b4139a5e9f4f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 00:07:07 np0005480824 nova_compute[260089]: 2025-10-11 04:07:07.955 2 DEBUG oslo_concurrency.lockutils [req-986d8378-2d78-4b71-a2fc-462d84aa85db req-8dbc126f-1c9e-4e49-9f34-5e2fe49e1ef9 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Releasing lock "refresh_cache-52296433-4344-4796-825b-6405fe5eae5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 00:07:09 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1916: 321 pgs: 321 active+clean; 202 MiB data, 592 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 72 op/s
Oct 11 00:07:10 np0005480824 nova_compute[260089]: 2025-10-11 04:07:10.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 00:07:10 np0005480824 nova_compute[260089]: 2025-10-11 04:07:10.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 00:07:10 np0005480824 nova_compute[260089]: 2025-10-11 04:07:10.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 11 00:07:10 np0005480824 nova_compute[260089]: 2025-10-11 04:07:10.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 00:07:10 np0005480824 nova_compute[260089]: 2025-10-11 04:07:10.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 11 00:07:10 np0005480824 nova_compute[260089]: 2025-10-11 04:07:10.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:07:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:07:10.512 162245 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:07:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:07:10.512 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:07:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:07:10.513 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:07:10 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e495 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:07:11 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1917: 321 pgs: 321 active+clean; 202 MiB data, 592 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 66 op/s
Oct 11 00:07:11 np0005480824 ovn_controller[152667]: 2025-10-11T04:07:11Z|00072|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:dd:3b:81 10.100.0.13
Oct 11 00:07:11 np0005480824 ovn_controller[152667]: 2025-10-11T04:07:11Z|00073|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:dd:3b:81 10.100.0.13
Oct 11 00:07:12 np0005480824 ceph-osd[90443]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Oct 11 00:07:13 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1918: 321 pgs: 321 active+clean; 210 MiB data, 596 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.5 MiB/s wr, 114 op/s
Oct 11 00:07:15 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1919: 321 pgs: 321 active+clean; 210 MiB data, 596 MiB used, 59 GiB / 60 GiB avail; 487 KiB/s rd, 2.5 MiB/s wr, 48 op/s
Oct 11 00:07:15 np0005480824 nova_compute[260089]: 2025-10-11 04:07:15.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:07:15 np0005480824 nova_compute[260089]: 2025-10-11 04:07:15.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:07:15 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e495 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:07:17 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1920: 321 pgs: 321 active+clean; 263 MiB data, 635 MiB used, 59 GiB / 60 GiB avail; 547 KiB/s rd, 5.6 MiB/s wr, 75 op/s
Oct 11 00:07:18 np0005480824 podman[306121]: 2025-10-11 04:07:18.992443793 +0000 UTC m=+0.049710334 container health_status f2b19cad22d0fbdd185b264c3c8b6443d09a02319dc9fb0585dc69a0f24a758d (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible)
Oct 11 00:07:18 np0005480824 podman[306120]: 2025-10-11 04:07:18.992942004 +0000 UTC m=+0.053262386 container health_status 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, managed_by=edpm_ansible, org.label-schema.build-date=20251009, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 11 00:07:19 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1921: 321 pgs: 321 active+clean; 271 MiB data, 656 MiB used, 59 GiB / 60 GiB avail; 547 KiB/s rd, 5.8 MiB/s wr, 76 op/s
Oct 11 00:07:20 np0005480824 nova_compute[260089]: 2025-10-11 04:07:20.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:07:20 np0005480824 nova_compute[260089]: 2025-10-11 04:07:20.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:07:20 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e495 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:07:21 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1922: 321 pgs: 321 active+clean; 271 MiB data, 656 MiB used, 59 GiB / 60 GiB avail; 547 KiB/s rd, 5.8 MiB/s wr, 76 op/s
Oct 11 00:07:22 np0005480824 nova_compute[260089]: 2025-10-11 04:07:22.164 2 DEBUG oslo_concurrency.lockutils [None req-af5dd935-eaed-404b-847b-f8d0319cdab3 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Acquiring lock "52296433-4344-4796-825b-6405fe5eae5d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:07:22 np0005480824 nova_compute[260089]: 2025-10-11 04:07:22.165 2 DEBUG oslo_concurrency.lockutils [None req-af5dd935-eaed-404b-847b-f8d0319cdab3 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Lock "52296433-4344-4796-825b-6405fe5eae5d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:07:22 np0005480824 nova_compute[260089]: 2025-10-11 04:07:22.165 2 DEBUG oslo_concurrency.lockutils [None req-af5dd935-eaed-404b-847b-f8d0319cdab3 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Acquiring lock "52296433-4344-4796-825b-6405fe5eae5d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:07:22 np0005480824 nova_compute[260089]: 2025-10-11 04:07:22.165 2 DEBUG oslo_concurrency.lockutils [None req-af5dd935-eaed-404b-847b-f8d0319cdab3 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Lock "52296433-4344-4796-825b-6405fe5eae5d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:07:22 np0005480824 nova_compute[260089]: 2025-10-11 04:07:22.166 2 DEBUG oslo_concurrency.lockutils [None req-af5dd935-eaed-404b-847b-f8d0319cdab3 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Lock "52296433-4344-4796-825b-6405fe5eae5d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:07:22 np0005480824 nova_compute[260089]: 2025-10-11 04:07:22.167 2 INFO nova.compute.manager [None req-af5dd935-eaed-404b-847b-f8d0319cdab3 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 52296433-4344-4796-825b-6405fe5eae5d] Terminating instance#033[00m
Oct 11 00:07:22 np0005480824 nova_compute[260089]: 2025-10-11 04:07:22.168 2 DEBUG nova.compute.manager [None req-af5dd935-eaed-404b-847b-f8d0319cdab3 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 52296433-4344-4796-825b-6405fe5eae5d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 00:07:22 np0005480824 kernel: tap7452b5ba-83 (unregistering): left promiscuous mode
Oct 11 00:07:22 np0005480824 NetworkManager[44969]: <info>  [1760155642.2315] device (tap7452b5ba-83): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 00:07:22 np0005480824 ovn_controller[152667]: 2025-10-11T04:07:22Z|00272|binding|INFO|Releasing lport 7452b5ba-837b-463f-9388-b4139a5e9f4f from this chassis (sb_readonly=0)
Oct 11 00:07:22 np0005480824 ovn_controller[152667]: 2025-10-11T04:07:22Z|00273|binding|INFO|Setting lport 7452b5ba-837b-463f-9388-b4139a5e9f4f down in Southbound
Oct 11 00:07:22 np0005480824 nova_compute[260089]: 2025-10-11 04:07:22.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:07:22 np0005480824 ovn_controller[152667]: 2025-10-11T04:07:22Z|00274|binding|INFO|Removing iface tap7452b5ba-83 ovn-installed in OVS
Oct 11 00:07:22 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:07:22.254 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:3b:81 10.100.0.13'], port_security=['fa:16:3e:dd:3b:81 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '52296433-4344-4796-825b-6405fe5eae5d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-abadcf46-9a41-4911-85e0-fbcde2d48b79', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f367c6c5e8f479399a2004c82cfaff0', 'neutron:revision_number': '4', 'neutron:security_group_ids': '78484826-fa6d-47e8-af8c-1b198aee6eb8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.191'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b37e59a3-7c4f-47c2-acd9-d3f9dd8c5f52, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], logical_port=7452b5ba-837b-463f-9388-b4139a5e9f4f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 00:07:22 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:07:22.255 162245 INFO neutron.agent.ovn.metadata.agent [-] Port 7452b5ba-837b-463f-9388-b4139a5e9f4f in datapath abadcf46-9a41-4911-85e0-fbcde2d48b79 unbound from our chassis#033[00m
Oct 11 00:07:22 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:07:22.256 162245 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network abadcf46-9a41-4911-85e0-fbcde2d48b79, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 00:07:22 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:07:22.256 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[3d9e0524-0f77-43e8-a09a-fbfcdea53c1b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:07:22 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:07:22.257 162245 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-abadcf46-9a41-4911-85e0-fbcde2d48b79 namespace which is not needed anymore#033[00m
Oct 11 00:07:22 np0005480824 nova_compute[260089]: 2025-10-11 04:07:22.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:07:22 np0005480824 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d0000001d.scope: Deactivated successfully.
Oct 11 00:07:22 np0005480824 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d0000001d.scope: Consumed 16.416s CPU time.
Oct 11 00:07:22 np0005480824 systemd-machined[215071]: Machine qemu-29-instance-0000001d terminated.
Oct 11 00:07:22 np0005480824 neutron-haproxy-ovnmeta-abadcf46-9a41-4911-85e0-fbcde2d48b79[306042]: [NOTICE]   (306046) : haproxy version is 2.8.14-c23fe91
Oct 11 00:07:22 np0005480824 neutron-haproxy-ovnmeta-abadcf46-9a41-4911-85e0-fbcde2d48b79[306042]: [NOTICE]   (306046) : path to executable is /usr/sbin/haproxy
Oct 11 00:07:22 np0005480824 neutron-haproxy-ovnmeta-abadcf46-9a41-4911-85e0-fbcde2d48b79[306042]: [WARNING]  (306046) : Exiting Master process...
Oct 11 00:07:22 np0005480824 neutron-haproxy-ovnmeta-abadcf46-9a41-4911-85e0-fbcde2d48b79[306042]: [ALERT]    (306046) : Current worker (306048) exited with code 143 (Terminated)
Oct 11 00:07:22 np0005480824 neutron-haproxy-ovnmeta-abadcf46-9a41-4911-85e0-fbcde2d48b79[306042]: [WARNING]  (306046) : All workers exited. Exiting... (0)
Oct 11 00:07:22 np0005480824 systemd[1]: libpod-981af86b39fd8e33172fc3f662f1bad99543554a0b9125e6022f16e42ea55e03.scope: Deactivated successfully.
Oct 11 00:07:22 np0005480824 podman[306181]: 2025-10-11 04:07:22.392065819 +0000 UTC m=+0.049159711 container died 981af86b39fd8e33172fc3f662f1bad99543554a0b9125e6022f16e42ea55e03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abadcf46-9a41-4911-85e0-fbcde2d48b79, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 00:07:22 np0005480824 nova_compute[260089]: 2025-10-11 04:07:22.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:07:22 np0005480824 nova_compute[260089]: 2025-10-11 04:07:22.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:07:22 np0005480824 nova_compute[260089]: 2025-10-11 04:07:22.413 2 INFO nova.virt.libvirt.driver [-] [instance: 52296433-4344-4796-825b-6405fe5eae5d] Instance destroyed successfully.#033[00m
Oct 11 00:07:22 np0005480824 nova_compute[260089]: 2025-10-11 04:07:22.413 2 DEBUG nova.objects.instance [None req-af5dd935-eaed-404b-847b-f8d0319cdab3 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Lazy-loading 'resources' on Instance uuid 52296433-4344-4796-825b-6405fe5eae5d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 00:07:22 np0005480824 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-981af86b39fd8e33172fc3f662f1bad99543554a0b9125e6022f16e42ea55e03-userdata-shm.mount: Deactivated successfully.
Oct 11 00:07:22 np0005480824 nova_compute[260089]: 2025-10-11 04:07:22.431 2 DEBUG nova.virt.libvirt.vif [None req-af5dd935-eaed-404b-847b-f8d0319cdab3 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T04:06:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-771287187',display_name='tempest-TestEncryptedCinderVolumes-server-771287187',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-771287187',id=29,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNT5nRmfgUQpJQihppMhJ/PJtl2PXt4LF+4fCTR7CvYlNKAHH53rCj1YReitA5DOkjFToqvFLFWF74Q9GO2rD7zoT+ufORFGj1sd+RhwvHNqWv6rQH+IM1H5SH+IwmGWpQ==',key_name='tempest-TestEncryptedCinderVolumes-517148477',keypairs=<?>,launch_index=0,launched_at=2025-10-11T04:07:00Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6f367c6c5e8f479399a2004c82cfaff0',ramdisk_id='',reservation_id='r-r2rls36l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestEncryptedCinderVolumes-781713731',owner_user_name='tempest-TestEncryptedCinderVolumes-781713731-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T04:07:00Z,user_data=None,user_id='f9202e7d8882475ba6a769d9c59c35fd',uuid=52296433-4344-4796-825b-6405fe5eae5d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7452b5ba-837b-463f-9388-b4139a5e9f4f", "address": "fa:16:3e:dd:3b:81", "network": {"id": "abadcf46-9a41-4911-85e0-fbcde2d48b79", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-654501219-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f367c6c5e8f479399a2004c82cfaff0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7452b5ba-83", "ovs_interfaceid": "7452b5ba-837b-463f-9388-b4139a5e9f4f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 00:07:22 np0005480824 nova_compute[260089]: 2025-10-11 04:07:22.433 2 DEBUG nova.network.os_vif_util [None req-af5dd935-eaed-404b-847b-f8d0319cdab3 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Converting VIF {"id": "7452b5ba-837b-463f-9388-b4139a5e9f4f", "address": "fa:16:3e:dd:3b:81", "network": {"id": "abadcf46-9a41-4911-85e0-fbcde2d48b79", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-654501219-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f367c6c5e8f479399a2004c82cfaff0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7452b5ba-83", "ovs_interfaceid": "7452b5ba-837b-463f-9388-b4139a5e9f4f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 00:07:22 np0005480824 systemd[1]: var-lib-containers-storage-overlay-e1616b1026d3bef457c502ae8f7dd2455aba09c4416f8922023350a93a000e32-merged.mount: Deactivated successfully.
Oct 11 00:07:22 np0005480824 nova_compute[260089]: 2025-10-11 04:07:22.434 2 DEBUG nova.network.os_vif_util [None req-af5dd935-eaed-404b-847b-f8d0319cdab3 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:dd:3b:81,bridge_name='br-int',has_traffic_filtering=True,id=7452b5ba-837b-463f-9388-b4139a5e9f4f,network=Network(abadcf46-9a41-4911-85e0-fbcde2d48b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7452b5ba-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 00:07:22 np0005480824 nova_compute[260089]: 2025-10-11 04:07:22.435 2 DEBUG os_vif [None req-af5dd935-eaed-404b-847b-f8d0319cdab3 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:dd:3b:81,bridge_name='br-int',has_traffic_filtering=True,id=7452b5ba-837b-463f-9388-b4139a5e9f4f,network=Network(abadcf46-9a41-4911-85e0-fbcde2d48b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7452b5ba-83') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 00:07:22 np0005480824 nova_compute[260089]: 2025-10-11 04:07:22.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:07:22 np0005480824 nova_compute[260089]: 2025-10-11 04:07:22.438 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7452b5ba-83, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 00:07:22 np0005480824 nova_compute[260089]: 2025-10-11 04:07:22.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:07:22 np0005480824 nova_compute[260089]: 2025-10-11 04:07:22.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:07:22 np0005480824 podman[306181]: 2025-10-11 04:07:22.442793086 +0000 UTC m=+0.099886978 container cleanup 981af86b39fd8e33172fc3f662f1bad99543554a0b9125e6022f16e42ea55e03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abadcf46-9a41-4911-85e0-fbcde2d48b79, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 11 00:07:22 np0005480824 nova_compute[260089]: 2025-10-11 04:07:22.444 2 INFO os_vif [None req-af5dd935-eaed-404b-847b-f8d0319cdab3 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:dd:3b:81,bridge_name='br-int',has_traffic_filtering=True,id=7452b5ba-837b-463f-9388-b4139a5e9f4f,network=Network(abadcf46-9a41-4911-85e0-fbcde2d48b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7452b5ba-83')#033[00m
Oct 11 00:07:22 np0005480824 systemd[1]: libpod-conmon-981af86b39fd8e33172fc3f662f1bad99543554a0b9125e6022f16e42ea55e03.scope: Deactivated successfully.
Oct 11 00:07:22 np0005480824 podman[306217]: 2025-10-11 04:07:22.511467809 +0000 UTC m=+0.045173419 container remove 981af86b39fd8e33172fc3f662f1bad99543554a0b9125e6022f16e42ea55e03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abadcf46-9a41-4911-85e0-fbcde2d48b79, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 11 00:07:22 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:07:22.517 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[792b2718-7d24-478f-92eb-0014235a121e]: (4, ('Sat Oct 11 04:07:22 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-abadcf46-9a41-4911-85e0-fbcde2d48b79 (981af86b39fd8e33172fc3f662f1bad99543554a0b9125e6022f16e42ea55e03)\n981af86b39fd8e33172fc3f662f1bad99543554a0b9125e6022f16e42ea55e03\nSat Oct 11 04:07:22 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-abadcf46-9a41-4911-85e0-fbcde2d48b79 (981af86b39fd8e33172fc3f662f1bad99543554a0b9125e6022f16e42ea55e03)\n981af86b39fd8e33172fc3f662f1bad99543554a0b9125e6022f16e42ea55e03\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:07:22 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:07:22.519 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[83ccf0e2-065f-4899-833e-74c469e38df8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:07:22 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:07:22.521 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapabadcf46-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 00:07:22 np0005480824 nova_compute[260089]: 2025-10-11 04:07:22.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:07:22 np0005480824 kernel: tapabadcf46-90: left promiscuous mode
Oct 11 00:07:22 np0005480824 nova_compute[260089]: 2025-10-11 04:07:22.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:07:22 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:07:22.539 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[c2992eae-6d41-42fe-9b8b-809d9b3cd3c1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:07:22 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:07:22.569 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[88f1dba9-27fb-4cf5-bfe9-4ca709fd529d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:07:22 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:07:22.570 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[4b170095-1781-4985-9dfc-f37afb25339e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:07:22 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:07:22.585 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[74b3fea1-9838-49cd-8165-5f4f53ad32ec]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 508416, 'reachable_time': 15557, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306251, 'error': None, 'target': 'ovnmeta-abadcf46-9a41-4911-85e0-fbcde2d48b79', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:07:22 np0005480824 systemd[1]: run-netns-ovnmeta\x2dabadcf46\x2d9a41\x2d4911\x2d85e0\x2dfbcde2d48b79.mount: Deactivated successfully.
Oct 11 00:07:22 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:07:22.588 162666 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-abadcf46-9a41-4911-85e0-fbcde2d48b79 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 00:07:22 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:07:22.588 162666 DEBUG oslo.privsep.daemon [-] privsep: reply[b162f0a7-5b79-4a07-ae16-6576b582727a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:07:22 np0005480824 nova_compute[260089]: 2025-10-11 04:07:22.639 2 INFO nova.virt.libvirt.driver [None req-af5dd935-eaed-404b-847b-f8d0319cdab3 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 52296433-4344-4796-825b-6405fe5eae5d] Deleting instance files /var/lib/nova/instances/52296433-4344-4796-825b-6405fe5eae5d_del#033[00m
Oct 11 00:07:22 np0005480824 nova_compute[260089]: 2025-10-11 04:07:22.640 2 INFO nova.virt.libvirt.driver [None req-af5dd935-eaed-404b-847b-f8d0319cdab3 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 52296433-4344-4796-825b-6405fe5eae5d] Deletion of /var/lib/nova/instances/52296433-4344-4796-825b-6405fe5eae5d_del complete#033[00m
Oct 11 00:07:22 np0005480824 nova_compute[260089]: 2025-10-11 04:07:22.686 2 INFO nova.compute.manager [None req-af5dd935-eaed-404b-847b-f8d0319cdab3 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 52296433-4344-4796-825b-6405fe5eae5d] Took 0.52 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 00:07:22 np0005480824 nova_compute[260089]: 2025-10-11 04:07:22.687 2 DEBUG oslo.service.loopingcall [None req-af5dd935-eaed-404b-847b-f8d0319cdab3 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 00:07:22 np0005480824 nova_compute[260089]: 2025-10-11 04:07:22.689 2 DEBUG nova.compute.manager [-] [instance: 52296433-4344-4796-825b-6405fe5eae5d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 00:07:22 np0005480824 nova_compute[260089]: 2025-10-11 04:07:22.689 2 DEBUG nova.network.neutron [-] [instance: 52296433-4344-4796-825b-6405fe5eae5d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 00:07:23 np0005480824 nova_compute[260089]: 2025-10-11 04:07:23.087 2 DEBUG nova.compute.manager [req-88472425-23f6-41f7-aa26-f356bc2ddc80 req-cb8d4443-94e1-4bab-a84d-ffdb6d8584cb 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 52296433-4344-4796-825b-6405fe5eae5d] Received event network-vif-unplugged-7452b5ba-837b-463f-9388-b4139a5e9f4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 00:07:23 np0005480824 nova_compute[260089]: 2025-10-11 04:07:23.087 2 DEBUG oslo_concurrency.lockutils [req-88472425-23f6-41f7-aa26-f356bc2ddc80 req-cb8d4443-94e1-4bab-a84d-ffdb6d8584cb 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "52296433-4344-4796-825b-6405fe5eae5d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:07:23 np0005480824 nova_compute[260089]: 2025-10-11 04:07:23.088 2 DEBUG oslo_concurrency.lockutils [req-88472425-23f6-41f7-aa26-f356bc2ddc80 req-cb8d4443-94e1-4bab-a84d-ffdb6d8584cb 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "52296433-4344-4796-825b-6405fe5eae5d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:07:23 np0005480824 nova_compute[260089]: 2025-10-11 04:07:23.088 2 DEBUG oslo_concurrency.lockutils [req-88472425-23f6-41f7-aa26-f356bc2ddc80 req-cb8d4443-94e1-4bab-a84d-ffdb6d8584cb 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "52296433-4344-4796-825b-6405fe5eae5d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:07:23 np0005480824 nova_compute[260089]: 2025-10-11 04:07:23.089 2 DEBUG nova.compute.manager [req-88472425-23f6-41f7-aa26-f356bc2ddc80 req-cb8d4443-94e1-4bab-a84d-ffdb6d8584cb 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 52296433-4344-4796-825b-6405fe5eae5d] No waiting events found dispatching network-vif-unplugged-7452b5ba-837b-463f-9388-b4139a5e9f4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 00:07:23 np0005480824 nova_compute[260089]: 2025-10-11 04:07:23.089 2 DEBUG nova.compute.manager [req-88472425-23f6-41f7-aa26-f356bc2ddc80 req-cb8d4443-94e1-4bab-a84d-ffdb6d8584cb 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 52296433-4344-4796-825b-6405fe5eae5d] Received event network-vif-unplugged-7452b5ba-837b-463f-9388-b4139a5e9f4f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 00:07:23 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1923: 321 pgs: 321 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 556 KiB/s rd, 5.8 MiB/s wr, 86 op/s
Oct 11 00:07:23 np0005480824 nova_compute[260089]: 2025-10-11 04:07:23.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:07:23 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:07:23.211 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:30:f4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'fe:89:7c:57:3f:71'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 00:07:23 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:07:23.211 162245 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 11 00:07:23 np0005480824 nova_compute[260089]: 2025-10-11 04:07:23.575 2 DEBUG nova.network.neutron [-] [instance: 52296433-4344-4796-825b-6405fe5eae5d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 00:07:23 np0005480824 nova_compute[260089]: 2025-10-11 04:07:23.594 2 INFO nova.compute.manager [-] [instance: 52296433-4344-4796-825b-6405fe5eae5d] Took 0.90 seconds to deallocate network for instance.#033[00m
Oct 11 00:07:23 np0005480824 nova_compute[260089]: 2025-10-11 04:07:23.666 2 DEBUG nova.compute.manager [req-13066bf8-527a-4965-9c6c-435a05935d89 req-76f29c83-1cd1-4487-ac40-8ac8d6052e29 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 52296433-4344-4796-825b-6405fe5eae5d] Received event network-vif-deleted-7452b5ba-837b-463f-9388-b4139a5e9f4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 00:07:23 np0005480824 nova_compute[260089]: 2025-10-11 04:07:23.814 2 INFO nova.compute.manager [None req-af5dd935-eaed-404b-847b-f8d0319cdab3 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 52296433-4344-4796-825b-6405fe5eae5d] Took 0.22 seconds to detach 1 volumes for instance.#033[00m
Oct 11 00:07:23 np0005480824 nova_compute[260089]: 2025-10-11 04:07:23.857 2 DEBUG oslo_concurrency.lockutils [None req-af5dd935-eaed-404b-847b-f8d0319cdab3 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:07:23 np0005480824 nova_compute[260089]: 2025-10-11 04:07:23.858 2 DEBUG oslo_concurrency.lockutils [None req-af5dd935-eaed-404b-847b-f8d0319cdab3 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:07:23 np0005480824 nova_compute[260089]: 2025-10-11 04:07:23.909 2 DEBUG oslo_concurrency.processutils [None req-af5dd935-eaed-404b-847b-f8d0319cdab3 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:07:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 00:07:24 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4222261044' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 00:07:24 np0005480824 nova_compute[260089]: 2025-10-11 04:07:24.323 2 DEBUG oslo_concurrency.processutils [None req-af5dd935-eaed-404b-847b-f8d0319cdab3 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:07:24 np0005480824 nova_compute[260089]: 2025-10-11 04:07:24.329 2 DEBUG nova.compute.provider_tree [None req-af5dd935-eaed-404b-847b-f8d0319cdab3 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 00:07:24 np0005480824 nova_compute[260089]: 2025-10-11 04:07:24.348 2 DEBUG nova.scheduler.client.report [None req-af5dd935-eaed-404b-847b-f8d0319cdab3 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 00:07:24 np0005480824 nova_compute[260089]: 2025-10-11 04:07:24.425 2 DEBUG oslo_concurrency.lockutils [None req-af5dd935-eaed-404b-847b-f8d0319cdab3 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.567s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:07:24 np0005480824 nova_compute[260089]: 2025-10-11 04:07:24.460 2 INFO nova.scheduler.client.report [None req-af5dd935-eaed-404b-847b-f8d0319cdab3 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Deleted allocations for instance 52296433-4344-4796-825b-6405fe5eae5d#033[00m
Oct 11 00:07:24 np0005480824 nova_compute[260089]: 2025-10-11 04:07:24.565 2 DEBUG oslo_concurrency.lockutils [None req-af5dd935-eaed-404b-847b-f8d0319cdab3 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Lock "52296433-4344-4796-825b-6405fe5eae5d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.400s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:07:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 00:07:24 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1467348354' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 00:07:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 00:07:24 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1467348354' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 00:07:25 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1924: 321 pgs: 321 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 69 KiB/s rd, 3.3 MiB/s wr, 37 op/s
Oct 11 00:07:25 np0005480824 nova_compute[260089]: 2025-10-11 04:07:25.210 2 DEBUG nova.compute.manager [req-63896d3c-f567-487c-aa99-ef117d935f6c req-b107dbad-9e65-4bbe-938a-5b37c334c869 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 52296433-4344-4796-825b-6405fe5eae5d] Received event network-vif-plugged-7452b5ba-837b-463f-9388-b4139a5e9f4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 00:07:25 np0005480824 nova_compute[260089]: 2025-10-11 04:07:25.211 2 DEBUG oslo_concurrency.lockutils [req-63896d3c-f567-487c-aa99-ef117d935f6c req-b107dbad-9e65-4bbe-938a-5b37c334c869 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "52296433-4344-4796-825b-6405fe5eae5d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:07:25 np0005480824 nova_compute[260089]: 2025-10-11 04:07:25.211 2 DEBUG oslo_concurrency.lockutils [req-63896d3c-f567-487c-aa99-ef117d935f6c req-b107dbad-9e65-4bbe-938a-5b37c334c869 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "52296433-4344-4796-825b-6405fe5eae5d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:07:25 np0005480824 nova_compute[260089]: 2025-10-11 04:07:25.212 2 DEBUG oslo_concurrency.lockutils [req-63896d3c-f567-487c-aa99-ef117d935f6c req-b107dbad-9e65-4bbe-938a-5b37c334c869 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "52296433-4344-4796-825b-6405fe5eae5d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:07:25 np0005480824 nova_compute[260089]: 2025-10-11 04:07:25.212 2 DEBUG nova.compute.manager [req-63896d3c-f567-487c-aa99-ef117d935f6c req-b107dbad-9e65-4bbe-938a-5b37c334c869 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 52296433-4344-4796-825b-6405fe5eae5d] No waiting events found dispatching network-vif-plugged-7452b5ba-837b-463f-9388-b4139a5e9f4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 00:07:25 np0005480824 nova_compute[260089]: 2025-10-11 04:07:25.212 2 WARNING nova.compute.manager [req-63896d3c-f567-487c-aa99-ef117d935f6c req-b107dbad-9e65-4bbe-938a-5b37c334c869 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 52296433-4344-4796-825b-6405fe5eae5d] Received unexpected event network-vif-plugged-7452b5ba-837b-463f-9388-b4139a5e9f4f for instance with vm_state deleted and task_state None.#033[00m
Oct 11 00:07:25 np0005480824 nova_compute[260089]: 2025-10-11 04:07:25.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:07:25 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e495 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:07:27 np0005480824 podman[306275]: 2025-10-11 04:07:27.087939822 +0000 UTC m=+0.135564275 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 11 00:07:27 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1925: 321 pgs: 321 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 75 KiB/s rd, 3.3 MiB/s wr, 45 op/s
Oct 11 00:07:27 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e495 do_prune osdmap full prune enabled
Oct 11 00:07:27 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e496 e496: 3 total, 3 up, 3 in
Oct 11 00:07:27 np0005480824 ceph-mon[74326]: log_channel(cluster) log [DBG] : osdmap e496: 3 total, 3 up, 3 in
Oct 11 00:07:27 np0005480824 nova_compute[260089]: 2025-10-11 04:07:27.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:07:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 00:07:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 00:07:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 00:07:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 00:07:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 00:07:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 00:07:27 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 00:07:27 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/953588267' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 00:07:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Optimize plan auto_2025-10-11_04:07:27
Oct 11 00:07:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 00:07:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] do_upmap
Oct 11 00:07:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', 'cephfs.cephfs.meta', 'default.rgw.meta', '.rgw.root', 'default.rgw.log', '.mgr', 'volumes', 'vms', 'backups', 'default.rgw.control']
Oct 11 00:07:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] prepared 0/10 changes
Oct 11 00:07:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 00:07:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 00:07:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 00:07:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 00:07:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 00:07:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 00:07:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 00:07:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 00:07:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 00:07:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 00:07:29 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1927: 321 pgs: 321 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 18 KiB/s wr, 22 op/s
Oct 11 00:07:30 np0005480824 nova_compute[260089]: 2025-10-11 04:07:30.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:07:30 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e496 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:07:31 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1928: 321 pgs: 321 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 18 KiB/s wr, 22 op/s
Oct 11 00:07:31 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:07:31.214 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=14b06507-d00b-4e27-a47d-46a5c2644635, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 00:07:31 np0005480824 nova_compute[260089]: 2025-10-11 04:07:31.893 2 DEBUG oslo_concurrency.lockutils [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Acquiring lock "3f57226b-3e1a-4f83-9b96-9b5a7ff37910" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:07:31 np0005480824 nova_compute[260089]: 2025-10-11 04:07:31.893 2 DEBUG oslo_concurrency.lockutils [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Lock "3f57226b-3e1a-4f83-9b96-9b5a7ff37910" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:07:31 np0005480824 nova_compute[260089]: 2025-10-11 04:07:31.926 2 DEBUG nova.compute.manager [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 11 00:07:32 np0005480824 nova_compute[260089]: 2025-10-11 04:07:32.040 2 DEBUG oslo_concurrency.lockutils [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:07:32 np0005480824 nova_compute[260089]: 2025-10-11 04:07:32.041 2 DEBUG oslo_concurrency.lockutils [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:07:32 np0005480824 nova_compute[260089]: 2025-10-11 04:07:32.050 2 DEBUG nova.virt.hardware [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 11 00:07:32 np0005480824 nova_compute[260089]: 2025-10-11 04:07:32.050 2 INFO nova.compute.claims [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct 11 00:07:32 np0005480824 nova_compute[260089]: 2025-10-11 04:07:32.215 2 DEBUG oslo_concurrency.processutils [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:07:32 np0005480824 nova_compute[260089]: 2025-10-11 04:07:32.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:07:32 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 00:07:32 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3113352310' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 00:07:32 np0005480824 nova_compute[260089]: 2025-10-11 04:07:32.637 2 DEBUG oslo_concurrency.processutils [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:07:32 np0005480824 nova_compute[260089]: 2025-10-11 04:07:32.641 2 DEBUG nova.compute.provider_tree [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 00:07:32 np0005480824 nova_compute[260089]: 2025-10-11 04:07:32.673 2 DEBUG nova.scheduler.client.report [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 00:07:32 np0005480824 nova_compute[260089]: 2025-10-11 04:07:32.740 2 DEBUG oslo_concurrency.lockutils [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.699s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:07:32 np0005480824 nova_compute[260089]: 2025-10-11 04:07:32.741 2 DEBUG nova.compute.manager [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 11 00:07:32 np0005480824 nova_compute[260089]: 2025-10-11 04:07:32.801 2 DEBUG nova.compute.manager [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 11 00:07:32 np0005480824 nova_compute[260089]: 2025-10-11 04:07:32.802 2 DEBUG nova.network.neutron [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 11 00:07:32 np0005480824 nova_compute[260089]: 2025-10-11 04:07:32.825 2 INFO nova.virt.libvirt.driver [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 11 00:07:32 np0005480824 nova_compute[260089]: 2025-10-11 04:07:32.846 2 DEBUG nova.compute.manager [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 11 00:07:32 np0005480824 nova_compute[260089]: 2025-10-11 04:07:32.897 2 INFO nova.virt.block_device [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Booting with volume f09bbfa0-0d4a-4b1f-8cbb-7a3fdb92cde9 at /dev/vda#033[00m
Oct 11 00:07:33 np0005480824 nova_compute[260089]: 2025-10-11 04:07:33.101 2 DEBUG os_brick.utils [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct 11 00:07:33 np0005480824 nova_compute[260089]: 2025-10-11 04:07:33.102 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:07:33 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1929: 321 pgs: 321 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 3.1 KiB/s wr, 35 op/s
Oct 11 00:07:33 np0005480824 nova_compute[260089]: 2025-10-11 04:07:33.120 676 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:07:33 np0005480824 nova_compute[260089]: 2025-10-11 04:07:33.121 676 DEBUG oslo.privsep.daemon [-] privsep: reply[60262d9a-d143-4516-afac-8b5afda3a707]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:07:33 np0005480824 nova_compute[260089]: 2025-10-11 04:07:33.123 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:07:33 np0005480824 nova_compute[260089]: 2025-10-11 04:07:33.137 676 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:07:33 np0005480824 nova_compute[260089]: 2025-10-11 04:07:33.137 676 DEBUG oslo.privsep.daemon [-] privsep: reply[75c32c6e-768c-4eee-ad88-0b09c9c0e347]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d5d671ddab5a', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:07:33 np0005480824 nova_compute[260089]: 2025-10-11 04:07:33.139 676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:07:33 np0005480824 nova_compute[260089]: 2025-10-11 04:07:33.148 676 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:07:33 np0005480824 nova_compute[260089]: 2025-10-11 04:07:33.148 676 DEBUG oslo.privsep.daemon [-] privsep: reply[53477ddc-9491-430f-aca0-771640c0afe1]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:07:33 np0005480824 nova_compute[260089]: 2025-10-11 04:07:33.150 676 DEBUG oslo.privsep.daemon [-] privsep: reply[06c86b09-3e48-434e-9c19-07b5b4862deb]: (4, 'fb3a2fb1-9efa-43f0-a057-bf422ac6b8d7') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:07:33 np0005480824 nova_compute[260089]: 2025-10-11 04:07:33.151 2 DEBUG oslo_concurrency.processutils [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:07:33 np0005480824 nova_compute[260089]: 2025-10-11 04:07:33.177 2 DEBUG oslo_concurrency.processutils [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] CMD "nvme version" returned: 0 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:07:33 np0005480824 nova_compute[260089]: 2025-10-11 04:07:33.181 2 DEBUG os_brick.initiator.connectors.lightos [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct 11 00:07:33 np0005480824 nova_compute[260089]: 2025-10-11 04:07:33.182 2 DEBUG os_brick.initiator.connectors.lightos [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct 11 00:07:33 np0005480824 nova_compute[260089]: 2025-10-11 04:07:33.183 2 DEBUG os_brick.initiator.connectors.lightos [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct 11 00:07:33 np0005480824 nova_compute[260089]: 2025-10-11 04:07:33.183 2 DEBUG os_brick.utils [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] <== get_connector_properties: return (82ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d5d671ddab5a', 'do_local_attach': False, 'nvme_hostid': '83042a20-0f72-4c47-8453-e72ead378624', 'system uuid': 'fb3a2fb1-9efa-43f0-a057-bf422ac6b8d7', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:83042a20-0f72-4c47-8453-e72ead378624', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct 11 00:07:33 np0005480824 nova_compute[260089]: 2025-10-11 04:07:33.184 2 DEBUG nova.virt.block_device [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Updating existing volume attachment record: 2b76a4a3-a0c1-4c1e-bb3d-e1d6372c3a35 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct 11 00:07:33 np0005480824 ceph-mon[74326]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Oct 11 00:07:33 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:07:33.235070) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 00:07:33 np0005480824 ceph-mon[74326]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Oct 11 00:07:33 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155653235144, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 2090, "num_deletes": 252, "total_data_size": 3347838, "memory_usage": 3410896, "flush_reason": "Manual Compaction"}
Oct 11 00:07:33 np0005480824 ceph-mon[74326]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Oct 11 00:07:33 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155653250464, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 3291786, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36965, "largest_seqno": 39054, "table_properties": {"data_size": 3282295, "index_size": 5985, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19596, "raw_average_key_size": 20, "raw_value_size": 3263185, "raw_average_value_size": 3385, "num_data_blocks": 265, "num_entries": 964, "num_filter_entries": 964, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760155436, "oldest_key_time": 1760155436, "file_creation_time": 1760155653, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc2c00b6-74ab-4bd1-957a-6c6a75fb61ca", "db_session_id": "RJ9TM4FJNNQ2AWDFT4YB", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Oct 11 00:07:33 np0005480824 ceph-mon[74326]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 15444 microseconds, and 7599 cpu microseconds.
Oct 11 00:07:33 np0005480824 ceph-mon[74326]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 00:07:33 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:07:33.250521) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 3291786 bytes OK
Oct 11 00:07:33 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:07:33.250543) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Oct 11 00:07:33 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:07:33.252388) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Oct 11 00:07:33 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:07:33.252401) EVENT_LOG_v1 {"time_micros": 1760155653252397, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 00:07:33 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:07:33.252419) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 00:07:33 np0005480824 ceph-mon[74326]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 3339040, prev total WAL file size 3339040, number of live WAL files 2.
Oct 11 00:07:33 np0005480824 ceph-mon[74326]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 00:07:33 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:07:33.253429) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Oct 11 00:07:33 np0005480824 ceph-mon[74326]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 00:07:33 np0005480824 ceph-mon[74326]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(3214KB)], [77(10047KB)]
Oct 11 00:07:33 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155653253492, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 13580723, "oldest_snapshot_seqno": -1}
Oct 11 00:07:33 np0005480824 ceph-mon[74326]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 7097 keys, 11841326 bytes, temperature: kUnknown
Oct 11 00:07:33 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155653309753, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 11841326, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11786665, "index_size": 35797, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17797, "raw_key_size": 178969, "raw_average_key_size": 25, "raw_value_size": 11652086, "raw_average_value_size": 1641, "num_data_blocks": 1428, "num_entries": 7097, "num_filter_entries": 7097, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760152715, "oldest_key_time": 0, "file_creation_time": 1760155653, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc2c00b6-74ab-4bd1-957a-6c6a75fb61ca", "db_session_id": "RJ9TM4FJNNQ2AWDFT4YB", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Oct 11 00:07:33 np0005480824 ceph-mon[74326]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 00:07:33 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:07:33.309996) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 11841326 bytes
Oct 11 00:07:33 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:07:33.311441) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 241.1 rd, 210.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 9.8 +0.0 blob) out(11.3 +0.0 blob), read-write-amplify(7.7) write-amplify(3.6) OK, records in: 7620, records dropped: 523 output_compression: NoCompression
Oct 11 00:07:33 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:07:33.311458) EVENT_LOG_v1 {"time_micros": 1760155653311449, "job": 44, "event": "compaction_finished", "compaction_time_micros": 56318, "compaction_time_cpu_micros": 25179, "output_level": 6, "num_output_files": 1, "total_output_size": 11841326, "num_input_records": 7620, "num_output_records": 7097, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 00:07:33 np0005480824 ceph-mon[74326]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 00:07:33 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155653312008, "job": 44, "event": "table_file_deletion", "file_number": 79}
Oct 11 00:07:33 np0005480824 ceph-mon[74326]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 00:07:33 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155653313479, "job": 44, "event": "table_file_deletion", "file_number": 77}
Oct 11 00:07:33 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:07:33.253321) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 00:07:33 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:07:33.313546) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 00:07:33 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:07:33.313552) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 00:07:33 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:07:33.313554) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 00:07:33 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:07:33.313556) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 00:07:33 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:07:33.313558) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 00:07:33 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 00:07:33 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2938635501' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 00:07:33 np0005480824 nova_compute[260089]: 2025-10-11 04:07:33.859 2 DEBUG nova.policy [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f9202e7d8882475ba6a769d9c59c35fd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6f367c6c5e8f479399a2004c82cfaff0', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 11 00:07:33 np0005480824 podman[306330]: 2025-10-11 04:07:33.996524889 +0000 UTC m=+0.056344847 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 00:07:34 np0005480824 nova_compute[260089]: 2025-10-11 04:07:34.133 2 DEBUG nova.compute.manager [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 11 00:07:34 np0005480824 nova_compute[260089]: 2025-10-11 04:07:34.136 2 DEBUG nova.virt.libvirt.driver [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 11 00:07:34 np0005480824 nova_compute[260089]: 2025-10-11 04:07:34.137 2 INFO nova.virt.libvirt.driver [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Creating image(s)#033[00m
Oct 11 00:07:34 np0005480824 nova_compute[260089]: 2025-10-11 04:07:34.138 2 DEBUG nova.virt.libvirt.driver [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Oct 11 00:07:34 np0005480824 nova_compute[260089]: 2025-10-11 04:07:34.139 2 DEBUG nova.virt.libvirt.driver [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Ensure instance console log exists: /var/lib/nova/instances/3f57226b-3e1a-4f83-9b96-9b5a7ff37910/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 11 00:07:34 np0005480824 nova_compute[260089]: 2025-10-11 04:07:34.140 2 DEBUG oslo_concurrency.lockutils [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:07:34 np0005480824 nova_compute[260089]: 2025-10-11 04:07:34.141 2 DEBUG oslo_concurrency.lockutils [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:07:34 np0005480824 nova_compute[260089]: 2025-10-11 04:07:34.141 2 DEBUG oslo_concurrency.lockutils [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:07:34 np0005480824 nova_compute[260089]: 2025-10-11 04:07:34.595 2 DEBUG nova.network.neutron [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Successfully created port: 1cecff65-5dca-4e92-9f18-a4729f87c434 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 11 00:07:35 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1930: 321 pgs: 321 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 3.1 KiB/s wr, 35 op/s
Oct 11 00:07:35 np0005480824 nova_compute[260089]: 2025-10-11 04:07:35.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:07:35 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e496 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:07:36 np0005480824 nova_compute[260089]: 2025-10-11 04:07:36.941 2 DEBUG nova.network.neutron [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Successfully updated port: 1cecff65-5dca-4e92-9f18-a4729f87c434 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 11 00:07:36 np0005480824 nova_compute[260089]: 2025-10-11 04:07:36.966 2 DEBUG oslo_concurrency.lockutils [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Acquiring lock "refresh_cache-3f57226b-3e1a-4f83-9b96-9b5a7ff37910" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 00:07:36 np0005480824 nova_compute[260089]: 2025-10-11 04:07:36.966 2 DEBUG oslo_concurrency.lockutils [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Acquired lock "refresh_cache-3f57226b-3e1a-4f83-9b96-9b5a7ff37910" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 00:07:36 np0005480824 nova_compute[260089]: 2025-10-11 04:07:36.966 2 DEBUG nova.network.neutron [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 11 00:07:37 np0005480824 nova_compute[260089]: 2025-10-11 04:07:37.064 2 DEBUG nova.compute.manager [req-c8df7a92-d3f3-428e-8215-b3e7f745a768 req-8a8c4498-d0a1-4a27-b041-2bd0c3723a79 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Received event network-changed-1cecff65-5dca-4e92-9f18-a4729f87c434 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 00:07:37 np0005480824 nova_compute[260089]: 2025-10-11 04:07:37.065 2 DEBUG nova.compute.manager [req-c8df7a92-d3f3-428e-8215-b3e7f745a768 req-8a8c4498-d0a1-4a27-b041-2bd0c3723a79 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Refreshing instance network info cache due to event network-changed-1cecff65-5dca-4e92-9f18-a4729f87c434. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 00:07:37 np0005480824 nova_compute[260089]: 2025-10-11 04:07:37.065 2 DEBUG oslo_concurrency.lockutils [req-c8df7a92-d3f3-428e-8215-b3e7f745a768 req-8a8c4498-d0a1-4a27-b041-2bd0c3723a79 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "refresh_cache-3f57226b-3e1a-4f83-9b96-9b5a7ff37910" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 00:07:37 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1931: 321 pgs: 321 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.6 KiB/s wr, 26 op/s
Oct 11 00:07:37 np0005480824 nova_compute[260089]: 2025-10-11 04:07:37.408 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760155642.4053888, 52296433-4344-4796-825b-6405fe5eae5d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 00:07:37 np0005480824 nova_compute[260089]: 2025-10-11 04:07:37.409 2 INFO nova.compute.manager [-] [instance: 52296433-4344-4796-825b-6405fe5eae5d] VM Stopped (Lifecycle Event)#033[00m
Oct 11 00:07:37 np0005480824 nova_compute[260089]: 2025-10-11 04:07:37.441 2 DEBUG nova.compute.manager [None req-c5fc9bfa-dd62-48c3-90b6-b0ee842ecbd1 - - - - - -] [instance: 52296433-4344-4796-825b-6405fe5eae5d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 00:07:37 np0005480824 nova_compute[260089]: 2025-10-11 04:07:37.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:07:37 np0005480824 nova_compute[260089]: 2025-10-11 04:07:37.698 2 DEBUG nova.network.neutron [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 11 00:07:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 00:07:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:07:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 00:07:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:07:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 11 00:07:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:07:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002894585429283063 of space, bias 1.0, pg target 0.8683756287849189 quantized to 32 (current 32)
Oct 11 00:07:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:07:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 11 00:07:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:07:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Oct 11 00:07:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:07:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 00:07:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:07:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 00:07:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:07:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 00:07:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:07:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 00:07:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:07:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 00:07:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:07:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 00:07:38 np0005480824 nova_compute[260089]: 2025-10-11 04:07:38.803 2 DEBUG nova.network.neutron [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Updating instance_info_cache with network_info: [{"id": "1cecff65-5dca-4e92-9f18-a4729f87c434", "address": "fa:16:3e:55:04:a0", "network": {"id": "abadcf46-9a41-4911-85e0-fbcde2d48b79", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-654501219-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f367c6c5e8f479399a2004c82cfaff0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cecff65-5d", "ovs_interfaceid": "1cecff65-5dca-4e92-9f18-a4729f87c434", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 00:07:38 np0005480824 nova_compute[260089]: 2025-10-11 04:07:38.827 2 DEBUG oslo_concurrency.lockutils [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Releasing lock "refresh_cache-3f57226b-3e1a-4f83-9b96-9b5a7ff37910" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 00:07:38 np0005480824 nova_compute[260089]: 2025-10-11 04:07:38.827 2 DEBUG nova.compute.manager [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Instance network_info: |[{"id": "1cecff65-5dca-4e92-9f18-a4729f87c434", "address": "fa:16:3e:55:04:a0", "network": {"id": "abadcf46-9a41-4911-85e0-fbcde2d48b79", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-654501219-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f367c6c5e8f479399a2004c82cfaff0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cecff65-5d", "ovs_interfaceid": "1cecff65-5dca-4e92-9f18-a4729f87c434", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 11 00:07:38 np0005480824 nova_compute[260089]: 2025-10-11 04:07:38.828 2 DEBUG oslo_concurrency.lockutils [req-c8df7a92-d3f3-428e-8215-b3e7f745a768 req-8a8c4498-d0a1-4a27-b041-2bd0c3723a79 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquired lock "refresh_cache-3f57226b-3e1a-4f83-9b96-9b5a7ff37910" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 00:07:38 np0005480824 nova_compute[260089]: 2025-10-11 04:07:38.829 2 DEBUG nova.network.neutron [req-c8df7a92-d3f3-428e-8215-b3e7f745a768 req-8a8c4498-d0a1-4a27-b041-2bd0c3723a79 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Refreshing network info cache for port 1cecff65-5dca-4e92-9f18-a4729f87c434 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 00:07:38 np0005480824 nova_compute[260089]: 2025-10-11 04:07:38.834 2 DEBUG nova.virt.libvirt.driver [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Start _get_guest_xml network_info=[{"id": "1cecff65-5dca-4e92-9f18-a4729f87c434", "address": "fa:16:3e:55:04:a0", "network": {"id": "abadcf46-9a41-4911-85e0-fbcde2d48b79", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-654501219-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f367c6c5e8f479399a2004c82cfaff0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cecff65-5d", "ovs_interfaceid": "1cecff65-5dca-4e92-9f18-a4729f87c434", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'attachment_id': '2b76a4a3-a0c1-4c1e-bb3d-e1d6372c3a35', 'mount_device': '/dev/vda', 'delete_on_termination': False, 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-f09bbfa0-0d4a-4b1f-8cbb-7a3fdb92cde9', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'f09bbfa0-0d4a-4b1f-8cbb-7a3fdb92cde9', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '3f57226b-3e1a-4f83-9b96-9b5a7ff37910', 'attached_at': '', 'detached_at': '', 'volume_id': 'f09bbfa0-0d4a-4b1f-8cbb-7a3fdb92cde9', 'serial': 'f09bbfa0-0d4a-4b1f-8cbb-7a3fdb92cde9'}, 'device_type': 'disk', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 11 00:07:38 np0005480824 nova_compute[260089]: 2025-10-11 04:07:38.842 2 WARNING nova.virt.libvirt.driver [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 00:07:38 np0005480824 nova_compute[260089]: 2025-10-11 04:07:38.858 2 DEBUG nova.virt.libvirt.host [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 11 00:07:38 np0005480824 nova_compute[260089]: 2025-10-11 04:07:38.859 2 DEBUG nova.virt.libvirt.host [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 11 00:07:38 np0005480824 nova_compute[260089]: 2025-10-11 04:07:38.864 2 DEBUG nova.virt.libvirt.host [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 11 00:07:38 np0005480824 nova_compute[260089]: 2025-10-11 04:07:38.865 2 DEBUG nova.virt.libvirt.host [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 11 00:07:38 np0005480824 nova_compute[260089]: 2025-10-11 04:07:38.866 2 DEBUG nova.virt.libvirt.driver [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 11 00:07:38 np0005480824 nova_compute[260089]: 2025-10-11 04:07:38.867 2 DEBUG nova.virt.hardware [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T03:44:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6707ecae-2ae2-4c2d-86dc-409bac38f6a5',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 11 00:07:38 np0005480824 nova_compute[260089]: 2025-10-11 04:07:38.868 2 DEBUG nova.virt.hardware [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 11 00:07:38 np0005480824 nova_compute[260089]: 2025-10-11 04:07:38.868 2 DEBUG nova.virt.hardware [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 11 00:07:38 np0005480824 nova_compute[260089]: 2025-10-11 04:07:38.868 2 DEBUG nova.virt.hardware [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 11 00:07:38 np0005480824 nova_compute[260089]: 2025-10-11 04:07:38.869 2 DEBUG nova.virt.hardware [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 11 00:07:38 np0005480824 nova_compute[260089]: 2025-10-11 04:07:38.869 2 DEBUG nova.virt.hardware [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 11 00:07:38 np0005480824 nova_compute[260089]: 2025-10-11 04:07:38.869 2 DEBUG nova.virt.hardware [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 11 00:07:38 np0005480824 nova_compute[260089]: 2025-10-11 04:07:38.870 2 DEBUG nova.virt.hardware [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 11 00:07:38 np0005480824 nova_compute[260089]: 2025-10-11 04:07:38.870 2 DEBUG nova.virt.hardware [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 11 00:07:38 np0005480824 nova_compute[260089]: 2025-10-11 04:07:38.871 2 DEBUG nova.virt.hardware [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 11 00:07:38 np0005480824 nova_compute[260089]: 2025-10-11 04:07:38.871 2 DEBUG nova.virt.hardware [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 11 00:07:38 np0005480824 nova_compute[260089]: 2025-10-11 04:07:38.899 2 DEBUG nova.storage.rbd_utils [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] rbd image 3f57226b-3e1a-4f83-9b96-9b5a7ff37910_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 00:07:38 np0005480824 nova_compute[260089]: 2025-10-11 04:07:38.903 2 DEBUG oslo_concurrency.processutils [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:07:39 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1932: 321 pgs: 321 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.3 KiB/s wr, 21 op/s
Oct 11 00:07:39 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 00:07:39 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4064922926' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 00:07:39 np0005480824 nova_compute[260089]: 2025-10-11 04:07:39.321 2 DEBUG oslo_concurrency.processutils [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:07:39 np0005480824 nova_compute[260089]: 2025-10-11 04:07:39.481 2 DEBUG os_brick.encryptors [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Using volume encryption metadata '{'encryption_key_id': 'a4f53daa-2015-4c8f-9c81-fcaac905bb90', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-f09bbfa0-0d4a-4b1f-8cbb-7a3fdb92cde9', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'f09bbfa0-0d4a-4b1f-8cbb-7a3fdb92cde9', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '3f57226b-3e1a-4f83-9b96-9b5a7ff37910', 'attached_at': '', 'detached_at': '', 'volume_id': 'f09bbfa0-0d4a-4b1f-8cbb-7a3fdb92cde9', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Oct 11 00:07:39 np0005480824 nova_compute[260089]: 2025-10-11 04:07:39.483 2 DEBUG barbicanclient.client [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163#033[00m
Oct 11 00:07:39 np0005480824 nova_compute[260089]: 2025-10-11 04:07:39.499 2 DEBUG barbicanclient.v1.secrets [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/a4f53daa-2015-4c8f-9c81-fcaac905bb90 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514#033[00m
Oct 11 00:07:39 np0005480824 nova_compute[260089]: 2025-10-11 04:07:39.499 2 INFO barbicanclient.base [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Calculated Secrets uuid ref: secrets/a4f53daa-2015-4c8f-9c81-fcaac905bb90#033[00m
Oct 11 00:07:39 np0005480824 nova_compute[260089]: 2025-10-11 04:07:39.528 2 DEBUG barbicanclient.client [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:07:39 np0005480824 nova_compute[260089]: 2025-10-11 04:07:39.529 2 INFO barbicanclient.base [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Calculated Secrets uuid ref: secrets/a4f53daa-2015-4c8f-9c81-fcaac905bb90#033[00m
Oct 11 00:07:39 np0005480824 nova_compute[260089]: 2025-10-11 04:07:39.564 2 DEBUG barbicanclient.client [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:07:39 np0005480824 nova_compute[260089]: 2025-10-11 04:07:39.564 2 INFO barbicanclient.base [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Calculated Secrets uuid ref: secrets/a4f53daa-2015-4c8f-9c81-fcaac905bb90#033[00m
Oct 11 00:07:39 np0005480824 nova_compute[260089]: 2025-10-11 04:07:39.589 2 DEBUG barbicanclient.client [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:07:39 np0005480824 nova_compute[260089]: 2025-10-11 04:07:39.590 2 INFO barbicanclient.base [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Calculated Secrets uuid ref: secrets/a4f53daa-2015-4c8f-9c81-fcaac905bb90#033[00m
Oct 11 00:07:39 np0005480824 nova_compute[260089]: 2025-10-11 04:07:39.627 2 DEBUG barbicanclient.client [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:07:39 np0005480824 nova_compute[260089]: 2025-10-11 04:07:39.628 2 INFO barbicanclient.base [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Calculated Secrets uuid ref: secrets/a4f53daa-2015-4c8f-9c81-fcaac905bb90#033[00m
Oct 11 00:07:39 np0005480824 nova_compute[260089]: 2025-10-11 04:07:39.667 2 DEBUG barbicanclient.client [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:07:39 np0005480824 nova_compute[260089]: 2025-10-11 04:07:39.668 2 INFO barbicanclient.base [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Calculated Secrets uuid ref: secrets/a4f53daa-2015-4c8f-9c81-fcaac905bb90#033[00m
Oct 11 00:07:39 np0005480824 nova_compute[260089]: 2025-10-11 04:07:39.709 2 DEBUG barbicanclient.client [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:07:39 np0005480824 nova_compute[260089]: 2025-10-11 04:07:39.710 2 INFO barbicanclient.base [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Calculated Secrets uuid ref: secrets/a4f53daa-2015-4c8f-9c81-fcaac905bb90#033[00m
Oct 11 00:07:39 np0005480824 nova_compute[260089]: 2025-10-11 04:07:39.736 2 DEBUG barbicanclient.client [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:07:39 np0005480824 nova_compute[260089]: 2025-10-11 04:07:39.737 2 INFO barbicanclient.base [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Calculated Secrets uuid ref: secrets/a4f53daa-2015-4c8f-9c81-fcaac905bb90#033[00m
Oct 11 00:07:39 np0005480824 nova_compute[260089]: 2025-10-11 04:07:39.770 2 DEBUG barbicanclient.client [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:07:39 np0005480824 nova_compute[260089]: 2025-10-11 04:07:39.771 2 INFO barbicanclient.base [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Calculated Secrets uuid ref: secrets/a4f53daa-2015-4c8f-9c81-fcaac905bb90#033[00m
Oct 11 00:07:39 np0005480824 nova_compute[260089]: 2025-10-11 04:07:39.795 2 DEBUG barbicanclient.client [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:07:39 np0005480824 nova_compute[260089]: 2025-10-11 04:07:39.796 2 INFO barbicanclient.base [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Calculated Secrets uuid ref: secrets/a4f53daa-2015-4c8f-9c81-fcaac905bb90#033[00m
Oct 11 00:07:39 np0005480824 nova_compute[260089]: 2025-10-11 04:07:39.828 2 DEBUG barbicanclient.client [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:07:39 np0005480824 nova_compute[260089]: 2025-10-11 04:07:39.829 2 INFO barbicanclient.base [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Calculated Secrets uuid ref: secrets/a4f53daa-2015-4c8f-9c81-fcaac905bb90#033[00m
Oct 11 00:07:39 np0005480824 nova_compute[260089]: 2025-10-11 04:07:39.857 2 DEBUG barbicanclient.client [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:07:39 np0005480824 nova_compute[260089]: 2025-10-11 04:07:39.858 2 INFO barbicanclient.base [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Calculated Secrets uuid ref: secrets/a4f53daa-2015-4c8f-9c81-fcaac905bb90#033[00m
Oct 11 00:07:39 np0005480824 nova_compute[260089]: 2025-10-11 04:07:39.896 2 DEBUG barbicanclient.client [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:07:39 np0005480824 nova_compute[260089]: 2025-10-11 04:07:39.897 2 INFO barbicanclient.base [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Calculated Secrets uuid ref: secrets/a4f53daa-2015-4c8f-9c81-fcaac905bb90#033[00m
Oct 11 00:07:39 np0005480824 nova_compute[260089]: 2025-10-11 04:07:39.937 2 DEBUG barbicanclient.client [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:07:39 np0005480824 nova_compute[260089]: 2025-10-11 04:07:39.938 2 INFO barbicanclient.base [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Calculated Secrets uuid ref: secrets/a4f53daa-2015-4c8f-9c81-fcaac905bb90#033[00m
Oct 11 00:07:39 np0005480824 nova_compute[260089]: 2025-10-11 04:07:39.960 2 DEBUG barbicanclient.client [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:07:39 np0005480824 nova_compute[260089]: 2025-10-11 04:07:39.961 2 INFO barbicanclient.base [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Calculated Secrets uuid ref: secrets/a4f53daa-2015-4c8f-9c81-fcaac905bb90#033[00m
Oct 11 00:07:39 np0005480824 nova_compute[260089]: 2025-10-11 04:07:39.987 2 DEBUG barbicanclient.client [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Oct 11 00:07:39 np0005480824 nova_compute[260089]: 2025-10-11 04:07:39.988 2 DEBUG nova.virt.libvirt.host [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Secret XML: <secret ephemeral="no" private="no">
Oct 11 00:07:39 np0005480824 nova_compute[260089]:  <usage type="volume">
Oct 11 00:07:39 np0005480824 nova_compute[260089]:    <volume>f09bbfa0-0d4a-4b1f-8cbb-7a3fdb92cde9</volume>
Oct 11 00:07:39 np0005480824 nova_compute[260089]:  </usage>
Oct 11 00:07:39 np0005480824 nova_compute[260089]: </secret>
Oct 11 00:07:39 np0005480824 nova_compute[260089]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131#033[00m
Oct 11 00:07:40 np0005480824 nova_compute[260089]: 2025-10-11 04:07:40.082 2 DEBUG nova.virt.libvirt.vif [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T04:07:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-2054399998',display_name='tempest-TestEncryptedCinderVolumes-server-2054399998',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-2054399998',id=30,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNT5nRmfgUQpJQihppMhJ/PJtl2PXt4LF+4fCTR7CvYlNKAHH53rCj1YReitA5DOkjFToqvFLFWF74Q9GO2rD7zoT+ufORFGj1sd+RhwvHNqWv6rQH+IM1H5SH+IwmGWpQ==',key_name='tempest-TestEncryptedCinderVolumes-517148477',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f367c6c5e8f479399a2004c82cfaff0',ramdisk_id='',reservation_id='r-ug5fzirc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-781713731',owner_user_name='tempest-TestEncryptedCinderVolumes-781713731-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T04:07:32Z,user_data=None,user_id='f9202e7d8882475ba6a769d9c59c35fd',uuid=3f57226b-3e1a-4f83-9b96-9b5a7ff37910,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1cecff65-5dca-4e92-9f18-a4729f87c434", "address": "fa:16:3e:55:04:a0", "network": {"id": "abadcf46-9a41-4911-85e0-fbcde2d48b79", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-654501219-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"6f367c6c5e8f479399a2004c82cfaff0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cecff65-5d", "ovs_interfaceid": "1cecff65-5dca-4e92-9f18-a4729f87c434", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 11 00:07:40 np0005480824 nova_compute[260089]: 2025-10-11 04:07:40.083 2 DEBUG nova.network.os_vif_util [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Converting VIF {"id": "1cecff65-5dca-4e92-9f18-a4729f87c434", "address": "fa:16:3e:55:04:a0", "network": {"id": "abadcf46-9a41-4911-85e0-fbcde2d48b79", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-654501219-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f367c6c5e8f479399a2004c82cfaff0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cecff65-5d", "ovs_interfaceid": "1cecff65-5dca-4e92-9f18-a4729f87c434", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 00:07:40 np0005480824 nova_compute[260089]: 2025-10-11 04:07:40.083 2 DEBUG nova.network.os_vif_util [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:55:04:a0,bridge_name='br-int',has_traffic_filtering=True,id=1cecff65-5dca-4e92-9f18-a4729f87c434,network=Network(abadcf46-9a41-4911-85e0-fbcde2d48b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1cecff65-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 00:07:40 np0005480824 nova_compute[260089]: 2025-10-11 04:07:40.085 2 DEBUG nova.objects.instance [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3f57226b-3e1a-4f83-9b96-9b5a7ff37910 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 00:07:40 np0005480824 nova_compute[260089]: 2025-10-11 04:07:40.098 2 DEBUG nova.virt.libvirt.driver [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] End _get_guest_xml xml=<domain type="kvm">
Oct 11 00:07:40 np0005480824 nova_compute[260089]:  <uuid>3f57226b-3e1a-4f83-9b96-9b5a7ff37910</uuid>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:  <name>instance-0000001e</name>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:  <memory>131072</memory>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:  <vcpu>1</vcpu>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:  <metadata>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 00:07:40 np0005480824 nova_compute[260089]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:      <nova:name>tempest-TestEncryptedCinderVolumes-server-2054399998</nova:name>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:      <nova:creationTime>2025-10-11 04:07:38</nova:creationTime>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:      <nova:flavor name="m1.nano">
Oct 11 00:07:40 np0005480824 nova_compute[260089]:        <nova:memory>128</nova:memory>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:        <nova:disk>1</nova:disk>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:        <nova:swap>0</nova:swap>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:        <nova:ephemeral>0</nova:ephemeral>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:        <nova:vcpus>1</nova:vcpus>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:      </nova:flavor>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:      <nova:owner>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:        <nova:user uuid="f9202e7d8882475ba6a769d9c59c35fd">tempest-TestEncryptedCinderVolumes-781713731-project-member</nova:user>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:        <nova:project uuid="6f367c6c5e8f479399a2004c82cfaff0">tempest-TestEncryptedCinderVolumes-781713731</nova:project>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:      </nova:owner>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:      <nova:ports>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:        <nova:port uuid="1cecff65-5dca-4e92-9f18-a4729f87c434">
Oct 11 00:07:40 np0005480824 nova_compute[260089]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:        </nova:port>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:      </nova:ports>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:    </nova:instance>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:  </metadata>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:  <sysinfo type="smbios">
Oct 11 00:07:40 np0005480824 nova_compute[260089]:    <system>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:      <entry name="manufacturer">RDO</entry>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:      <entry name="product">OpenStack Compute</entry>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:      <entry name="serial">3f57226b-3e1a-4f83-9b96-9b5a7ff37910</entry>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:      <entry name="uuid">3f57226b-3e1a-4f83-9b96-9b5a7ff37910</entry>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:      <entry name="family">Virtual Machine</entry>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:    </system>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:  </sysinfo>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:  <os>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:    <boot dev="hd"/>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:    <smbios mode="sysinfo"/>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:  </os>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:  <features>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:    <acpi/>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:    <apic/>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:    <vmcoreinfo/>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:  </features>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:  <clock offset="utc">
Oct 11 00:07:40 np0005480824 nova_compute[260089]:    <timer name="pit" tickpolicy="delay"/>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:    <timer name="hpet" present="no"/>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:  </clock>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:  <cpu mode="host-model" match="exact">
Oct 11 00:07:40 np0005480824 nova_compute[260089]:    <topology sockets="1" cores="1" threads="1"/>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:  </cpu>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:  <devices>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:    <disk type="network" device="cdrom">
Oct 11 00:07:40 np0005480824 nova_compute[260089]:      <driver type="raw" cache="none"/>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:      <source protocol="rbd" name="vms/3f57226b-3e1a-4f83-9b96-9b5a7ff37910_disk.config">
Oct 11 00:07:40 np0005480824 nova_compute[260089]:        <host name="192.168.122.100" port="6789"/>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:      </source>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:      <auth username="openstack">
Oct 11 00:07:40 np0005480824 nova_compute[260089]:        <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:      </auth>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:      <target dev="sda" bus="sata"/>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:    </disk>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:    <disk type="network" device="disk">
Oct 11 00:07:40 np0005480824 nova_compute[260089]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:      <source protocol="rbd" name="volumes/volume-f09bbfa0-0d4a-4b1f-8cbb-7a3fdb92cde9">
Oct 11 00:07:40 np0005480824 nova_compute[260089]:        <host name="192.168.122.100" port="6789"/>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:      </source>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:      <auth username="openstack">
Oct 11 00:07:40 np0005480824 nova_compute[260089]:        <secret type="ceph" uuid="92cfe4d4-4917-5be1-9d00-73758793a62b"/>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:      </auth>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:      <target dev="vda" bus="virtio"/>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:      <serial>f09bbfa0-0d4a-4b1f-8cbb-7a3fdb92cde9</serial>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:      <encryption format="luks">
Oct 11 00:07:40 np0005480824 nova_compute[260089]:        <secret type="passphrase" uuid="c24432b9-a795-4106-a8ac-073a7925ebb0"/>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:      </encryption>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:    </disk>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:    <interface type="ethernet">
Oct 11 00:07:40 np0005480824 nova_compute[260089]:      <mac address="fa:16:3e:55:04:a0"/>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:      <model type="virtio"/>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:      <driver name="vhost" rx_queue_size="512"/>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:      <mtu size="1442"/>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:      <target dev="tap1cecff65-5d"/>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:    </interface>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:    <serial type="pty">
Oct 11 00:07:40 np0005480824 nova_compute[260089]:      <log file="/var/lib/nova/instances/3f57226b-3e1a-4f83-9b96-9b5a7ff37910/console.log" append="off"/>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:    </serial>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:    <video>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:      <model type="virtio"/>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:    </video>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:    <input type="tablet" bus="usb"/>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:    <rng model="virtio">
Oct 11 00:07:40 np0005480824 nova_compute[260089]:      <backend model="random">/dev/urandom</backend>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:    </rng>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root"/>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:    <controller type="pci" model="pcie-root-port"/>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:    <controller type="usb" index="0"/>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:    <memballoon model="virtio">
Oct 11 00:07:40 np0005480824 nova_compute[260089]:      <stats period="10"/>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:    </memballoon>
Oct 11 00:07:40 np0005480824 nova_compute[260089]:  </devices>
Oct 11 00:07:40 np0005480824 nova_compute[260089]: </domain>
Oct 11 00:07:40 np0005480824 nova_compute[260089]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 11 00:07:40 np0005480824 nova_compute[260089]: 2025-10-11 04:07:40.100 2 DEBUG nova.compute.manager [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Preparing to wait for external event network-vif-plugged-1cecff65-5dca-4e92-9f18-a4729f87c434 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 11 00:07:40 np0005480824 nova_compute[260089]: 2025-10-11 04:07:40.100 2 DEBUG oslo_concurrency.lockutils [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Acquiring lock "3f57226b-3e1a-4f83-9b96-9b5a7ff37910-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:07:40 np0005480824 nova_compute[260089]: 2025-10-11 04:07:40.101 2 DEBUG oslo_concurrency.lockutils [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Lock "3f57226b-3e1a-4f83-9b96-9b5a7ff37910-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:07:40 np0005480824 nova_compute[260089]: 2025-10-11 04:07:40.101 2 DEBUG oslo_concurrency.lockutils [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Lock "3f57226b-3e1a-4f83-9b96-9b5a7ff37910-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:07:40 np0005480824 nova_compute[260089]: 2025-10-11 04:07:40.101 2 DEBUG nova.virt.libvirt.vif [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T04:07:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-2054399998',display_name='tempest-TestEncryptedCinderVolumes-server-2054399998',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-2054399998',id=30,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNT5nRmfgUQpJQihppMhJ/PJtl2PXt4LF+4fCTR7CvYlNKAHH53rCj1YReitA5DOkjFToqvFLFWF74Q9GO2rD7zoT+ufORFGj1sd+RhwvHNqWv6rQH+IM1H5SH+IwmGWpQ==',key_name='tempest-TestEncryptedCinderVolumes-517148477',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f367c6c5e8f479399a2004c82cfaff0',ramdisk_id='',reservation_id='r-ug5fzirc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-781713731',owner_user_name='tempest-TestEncryptedCinderVolumes-781713731-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T04:07:32Z,user_data=None,user_id='f9202e7d8882475ba6a769d9c59c35fd',uuid=3f57226b-3e1a-4f83-9b96-9b5a7ff37910,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1cecff65-5dca-4e92-9f18-a4729f87c434", "address": "fa:16:3e:55:04:a0", "network": {"id": "abadcf46-9a41-4911-85e0-fbcde2d48b79", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-654501219-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "6f367c6c5e8f479399a2004c82cfaff0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cecff65-5d", "ovs_interfaceid": "1cecff65-5dca-4e92-9f18-a4729f87c434", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 11 00:07:40 np0005480824 nova_compute[260089]: 2025-10-11 04:07:40.102 2 DEBUG nova.network.os_vif_util [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Converting VIF {"id": "1cecff65-5dca-4e92-9f18-a4729f87c434", "address": "fa:16:3e:55:04:a0", "network": {"id": "abadcf46-9a41-4911-85e0-fbcde2d48b79", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-654501219-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f367c6c5e8f479399a2004c82cfaff0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cecff65-5d", "ovs_interfaceid": "1cecff65-5dca-4e92-9f18-a4729f87c434", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 00:07:40 np0005480824 nova_compute[260089]: 2025-10-11 04:07:40.102 2 DEBUG nova.network.os_vif_util [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:55:04:a0,bridge_name='br-int',has_traffic_filtering=True,id=1cecff65-5dca-4e92-9f18-a4729f87c434,network=Network(abadcf46-9a41-4911-85e0-fbcde2d48b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1cecff65-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 00:07:40 np0005480824 nova_compute[260089]: 2025-10-11 04:07:40.103 2 DEBUG os_vif [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:55:04:a0,bridge_name='br-int',has_traffic_filtering=True,id=1cecff65-5dca-4e92-9f18-a4729f87c434,network=Network(abadcf46-9a41-4911-85e0-fbcde2d48b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1cecff65-5d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 11 00:07:40 np0005480824 nova_compute[260089]: 2025-10-11 04:07:40.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:07:40 np0005480824 nova_compute[260089]: 2025-10-11 04:07:40.104 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 00:07:40 np0005480824 nova_compute[260089]: 2025-10-11 04:07:40.104 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 00:07:40 np0005480824 nova_compute[260089]: 2025-10-11 04:07:40.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:07:40 np0005480824 nova_compute[260089]: 2025-10-11 04:07:40.107 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1cecff65-5d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 00:07:40 np0005480824 nova_compute[260089]: 2025-10-11 04:07:40.107 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1cecff65-5d, col_values=(('external_ids', {'iface-id': '1cecff65-5dca-4e92-9f18-a4729f87c434', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:55:04:a0', 'vm-uuid': '3f57226b-3e1a-4f83-9b96-9b5a7ff37910'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 00:07:40 np0005480824 NetworkManager[44969]: <info>  [1760155660.1104] manager: (tap1cecff65-5d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/146)
Oct 11 00:07:40 np0005480824 nova_compute[260089]: 2025-10-11 04:07:40.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:07:40 np0005480824 nova_compute[260089]: 2025-10-11 04:07:40.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 00:07:40 np0005480824 nova_compute[260089]: 2025-10-11 04:07:40.116 2 INFO os_vif [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:55:04:a0,bridge_name='br-int',has_traffic_filtering=True,id=1cecff65-5dca-4e92-9f18-a4729f87c434,network=Network(abadcf46-9a41-4911-85e0-fbcde2d48b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1cecff65-5d')#033[00m
Oct 11 00:07:40 np0005480824 nova_compute[260089]: 2025-10-11 04:07:40.157 2 DEBUG nova.virt.libvirt.driver [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 00:07:40 np0005480824 nova_compute[260089]: 2025-10-11 04:07:40.157 2 DEBUG nova.virt.libvirt.driver [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 11 00:07:40 np0005480824 nova_compute[260089]: 2025-10-11 04:07:40.157 2 DEBUG nova.virt.libvirt.driver [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] No VIF found with MAC fa:16:3e:55:04:a0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 11 00:07:40 np0005480824 nova_compute[260089]: 2025-10-11 04:07:40.158 2 INFO nova.virt.libvirt.driver [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Using config drive#033[00m
Oct 11 00:07:40 np0005480824 nova_compute[260089]: 2025-10-11 04:07:40.178 2 DEBUG nova.storage.rbd_utils [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] rbd image 3f57226b-3e1a-4f83-9b96-9b5a7ff37910_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 00:07:40 np0005480824 nova_compute[260089]: 2025-10-11 04:07:40.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:07:40 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e496 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:07:40 np0005480824 nova_compute[260089]: 2025-10-11 04:07:40.854 2 DEBUG nova.network.neutron [req-c8df7a92-d3f3-428e-8215-b3e7f745a768 req-8a8c4498-d0a1-4a27-b041-2bd0c3723a79 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Updated VIF entry in instance network info cache for port 1cecff65-5dca-4e92-9f18-a4729f87c434. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 00:07:40 np0005480824 nova_compute[260089]: 2025-10-11 04:07:40.855 2 DEBUG nova.network.neutron [req-c8df7a92-d3f3-428e-8215-b3e7f745a768 req-8a8c4498-d0a1-4a27-b041-2bd0c3723a79 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Updating instance_info_cache with network_info: [{"id": "1cecff65-5dca-4e92-9f18-a4729f87c434", "address": "fa:16:3e:55:04:a0", "network": {"id": "abadcf46-9a41-4911-85e0-fbcde2d48b79", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-654501219-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f367c6c5e8f479399a2004c82cfaff0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cecff65-5d", "ovs_interfaceid": "1cecff65-5dca-4e92-9f18-a4729f87c434", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 00:07:40 np0005480824 nova_compute[260089]: 2025-10-11 04:07:40.879 2 DEBUG oslo_concurrency.lockutils [req-c8df7a92-d3f3-428e-8215-b3e7f745a768 req-8a8c4498-d0a1-4a27-b041-2bd0c3723a79 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Releasing lock "refresh_cache-3f57226b-3e1a-4f83-9b96-9b5a7ff37910" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 00:07:40 np0005480824 nova_compute[260089]: 2025-10-11 04:07:40.939 2 INFO nova.virt.libvirt.driver [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Creating config drive at /var/lib/nova/instances/3f57226b-3e1a-4f83-9b96-9b5a7ff37910/disk.config#033[00m
Oct 11 00:07:40 np0005480824 nova_compute[260089]: 2025-10-11 04:07:40.944 2 DEBUG oslo_concurrency.processutils [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3f57226b-3e1a-4f83-9b96-9b5a7ff37910/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3hcqhb5n execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:07:41 np0005480824 nova_compute[260089]: 2025-10-11 04:07:41.069 2 DEBUG oslo_concurrency.processutils [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3f57226b-3e1a-4f83-9b96-9b5a7ff37910/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3hcqhb5n" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:07:41 np0005480824 nova_compute[260089]: 2025-10-11 04:07:41.091 2 DEBUG nova.storage.rbd_utils [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] rbd image 3f57226b-3e1a-4f83-9b96-9b5a7ff37910_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 11 00:07:41 np0005480824 nova_compute[260089]: 2025-10-11 04:07:41.096 2 DEBUG oslo_concurrency.processutils [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3f57226b-3e1a-4f83-9b96-9b5a7ff37910/disk.config 3f57226b-3e1a-4f83-9b96-9b5a7ff37910_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:07:41 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1933: 321 pgs: 321 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.2 KiB/s wr, 20 op/s
Oct 11 00:07:41 np0005480824 nova_compute[260089]: 2025-10-11 04:07:41.302 2 DEBUG oslo_concurrency.processutils [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3f57226b-3e1a-4f83-9b96-9b5a7ff37910/disk.config 3f57226b-3e1a-4f83-9b96-9b5a7ff37910_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.205s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:07:41 np0005480824 nova_compute[260089]: 2025-10-11 04:07:41.303 2 INFO nova.virt.libvirt.driver [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Deleting local config drive /var/lib/nova/instances/3f57226b-3e1a-4f83-9b96-9b5a7ff37910/disk.config because it was imported into RBD.#033[00m
Oct 11 00:07:41 np0005480824 kernel: tap1cecff65-5d: entered promiscuous mode
Oct 11 00:07:41 np0005480824 NetworkManager[44969]: <info>  [1760155661.3642] manager: (tap1cecff65-5d): new Tun device (/org/freedesktop/NetworkManager/Devices/147)
Oct 11 00:07:41 np0005480824 systemd-udevd[306461]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 00:07:41 np0005480824 nova_compute[260089]: 2025-10-11 04:07:41.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:07:41 np0005480824 ovn_controller[152667]: 2025-10-11T04:07:41Z|00275|binding|INFO|Claiming lport 1cecff65-5dca-4e92-9f18-a4729f87c434 for this chassis.
Oct 11 00:07:41 np0005480824 ovn_controller[152667]: 2025-10-11T04:07:41Z|00276|binding|INFO|1cecff65-5dca-4e92-9f18-a4729f87c434: Claiming fa:16:3e:55:04:a0 10.100.0.14
Oct 11 00:07:41 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:07:41.429 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:55:04:a0 10.100.0.14'], port_security=['fa:16:3e:55:04:a0 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '3f57226b-3e1a-4f83-9b96-9b5a7ff37910', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-abadcf46-9a41-4911-85e0-fbcde2d48b79', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f367c6c5e8f479399a2004c82cfaff0', 'neutron:revision_number': '2', 'neutron:security_group_ids': '78484826-fa6d-47e8-af8c-1b198aee6eb8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b37e59a3-7c4f-47c2-acd9-d3f9dd8c5f52, chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], logical_port=1cecff65-5dca-4e92-9f18-a4729f87c434) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 00:07:41 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:07:41.430 162245 INFO neutron.agent.ovn.metadata.agent [-] Port 1cecff65-5dca-4e92-9f18-a4729f87c434 in datapath abadcf46-9a41-4911-85e0-fbcde2d48b79 bound to our chassis#033[00m
Oct 11 00:07:41 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:07:41.431 162245 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network abadcf46-9a41-4911-85e0-fbcde2d48b79#033[00m
Oct 11 00:07:41 np0005480824 NetworkManager[44969]: <info>  [1760155661.4409] device (tap1cecff65-5d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 00:07:41 np0005480824 NetworkManager[44969]: <info>  [1760155661.4420] device (tap1cecff65-5d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 00:07:41 np0005480824 ovn_controller[152667]: 2025-10-11T04:07:41Z|00277|binding|INFO|Setting lport 1cecff65-5dca-4e92-9f18-a4729f87c434 ovn-installed in OVS
Oct 11 00:07:41 np0005480824 ovn_controller[152667]: 2025-10-11T04:07:41Z|00278|binding|INFO|Setting lport 1cecff65-5dca-4e92-9f18-a4729f87c434 up in Southbound
Oct 11 00:07:41 np0005480824 nova_compute[260089]: 2025-10-11 04:07:41.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:07:41 np0005480824 nova_compute[260089]: 2025-10-11 04:07:41.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:07:41 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:07:41.449 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[2f4a662b-69e8-42da-b1e2-fa5bece13586]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:07:41 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:07:41.450 162245 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapabadcf46-91 in ovnmeta-abadcf46-9a41-4911-85e0-fbcde2d48b79 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 11 00:07:41 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:07:41.451 267859 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapabadcf46-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 11 00:07:41 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:07:41.452 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[fa5c63a7-3463-4942-b7b9-d0ff900258f7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:07:41 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:07:41.452 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[35182e4c-27a7-40c1-8bec-ae509460991d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:07:41 np0005480824 systemd-machined[215071]: New machine qemu-30-instance-0000001e.
Oct 11 00:07:41 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:07:41.464 162666 DEBUG oslo.privsep.daemon [-] privsep: reply[450eed72-dd15-4b45-9252-24aacb0a102e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:07:41 np0005480824 systemd[1]: Started Virtual Machine qemu-30-instance-0000001e.
Oct 11 00:07:41 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:07:41.478 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[55590ce9-fc1c-4bab-b783-926717fa9dee]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:07:41 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:07:41.505 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[dec7e7cf-c7ca-4626-8be2-bc38e2efc1bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:07:41 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:07:41.509 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[6a7dab26-f1e7-436e-b9e4-a4835f7f2336]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:07:41 np0005480824 NetworkManager[44969]: <info>  [1760155661.5099] manager: (tapabadcf46-90): new Veth device (/org/freedesktop/NetworkManager/Devices/148)
Oct 11 00:07:41 np0005480824 systemd-udevd[306466]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 00:07:41 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:07:41.535 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[071e0bda-316a-4f76-afaf-91e226af4313]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:07:41 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:07:41.540 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[c5389ec9-fc65-44af-9eb3-5663cbf028b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:07:41 np0005480824 NetworkManager[44969]: <info>  [1760155661.5589] device (tapabadcf46-90): carrier: link connected
Oct 11 00:07:41 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:07:41.563 268023 DEBUG oslo.privsep.daemon [-] privsep: reply[1de8dfe2-8c9b-4218-984c-00e8dffabd49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:07:41 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:07:41.577 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[ab602708-b6fd-4364-ab9f-6715ec2d40af]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapabadcf46-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:72:c9:bc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 96], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 512921, 'reachable_time': 41312, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306497, 'error': None, 'target': 'ovnmeta-abadcf46-9a41-4911-85e0-fbcde2d48b79', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:07:41 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:07:41.591 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[98ec4b93-c96f-40f1-890c-78548ffc3690]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe72:c9bc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 512921, 'tstamp': 512921}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306498, 'error': None, 'target': 'ovnmeta-abadcf46-9a41-4911-85e0-fbcde2d48b79', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:07:41 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:07:41.605 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[ddaeae32-e98e-4ed4-ac4e-59e8d1649456]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapabadcf46-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:72:c9:bc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 96], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 512921, 'reachable_time': 41312, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 306499, 'error': None, 'target': 'ovnmeta-abadcf46-9a41-4911-85e0-fbcde2d48b79', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:07:41 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:07:41.630 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[018398fc-c64b-4731-9c76-697653966f32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:07:41 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:07:41.696 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[bb615ab7-81a3-4b04-a9ce-6d079bb825ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:07:41 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:07:41.698 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapabadcf46-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 00:07:41 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:07:41.698 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 11 00:07:41 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:07:41.698 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapabadcf46-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 00:07:41 np0005480824 nova_compute[260089]: 2025-10-11 04:07:41.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:07:41 np0005480824 kernel: tapabadcf46-90: entered promiscuous mode
Oct 11 00:07:41 np0005480824 nova_compute[260089]: 2025-10-11 04:07:41.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:07:41 np0005480824 NetworkManager[44969]: <info>  [1760155661.7014] manager: (tapabadcf46-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/149)
Oct 11 00:07:41 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:07:41.704 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapabadcf46-90, col_values=(('external_ids', {'iface-id': '7b1d2367-bac7-4671-94ac-6b3206b5485c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 00:07:41 np0005480824 nova_compute[260089]: 2025-10-11 04:07:41.704 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:07:41 np0005480824 ovn_controller[152667]: 2025-10-11T04:07:41Z|00279|binding|INFO|Releasing lport 7b1d2367-bac7-4671-94ac-6b3206b5485c from this chassis (sb_readonly=0)
Oct 11 00:07:41 np0005480824 nova_compute[260089]: 2025-10-11 04:07:41.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:07:41 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:07:41.707 162245 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/abadcf46-9a41-4911-85e0-fbcde2d48b79.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/abadcf46-9a41-4911-85e0-fbcde2d48b79.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 11 00:07:41 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:07:41.708 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[56fda1cf-b803-4916-9386-c831bd4c6949]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:07:41 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:07:41.709 162245 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 00:07:41 np0005480824 ovn_metadata_agent[162240]: global
Oct 11 00:07:41 np0005480824 ovn_metadata_agent[162240]:    log         /dev/log local0 debug
Oct 11 00:07:41 np0005480824 ovn_metadata_agent[162240]:    log-tag     haproxy-metadata-proxy-abadcf46-9a41-4911-85e0-fbcde2d48b79
Oct 11 00:07:41 np0005480824 ovn_metadata_agent[162240]:    user        root
Oct 11 00:07:41 np0005480824 ovn_metadata_agent[162240]:    group       root
Oct 11 00:07:41 np0005480824 ovn_metadata_agent[162240]:    maxconn     1024
Oct 11 00:07:41 np0005480824 ovn_metadata_agent[162240]:    pidfile     /var/lib/neutron/external/pids/abadcf46-9a41-4911-85e0-fbcde2d48b79.pid.haproxy
Oct 11 00:07:41 np0005480824 ovn_metadata_agent[162240]:    daemon
Oct 11 00:07:41 np0005480824 ovn_metadata_agent[162240]: 
Oct 11 00:07:41 np0005480824 ovn_metadata_agent[162240]: defaults
Oct 11 00:07:41 np0005480824 ovn_metadata_agent[162240]:    log global
Oct 11 00:07:41 np0005480824 ovn_metadata_agent[162240]:    mode http
Oct 11 00:07:41 np0005480824 ovn_metadata_agent[162240]:    option httplog
Oct 11 00:07:41 np0005480824 ovn_metadata_agent[162240]:    option dontlognull
Oct 11 00:07:41 np0005480824 ovn_metadata_agent[162240]:    option http-server-close
Oct 11 00:07:41 np0005480824 ovn_metadata_agent[162240]:    option forwardfor
Oct 11 00:07:41 np0005480824 ovn_metadata_agent[162240]:    retries                 3
Oct 11 00:07:41 np0005480824 ovn_metadata_agent[162240]:    timeout http-request    30s
Oct 11 00:07:41 np0005480824 ovn_metadata_agent[162240]:    timeout connect         30s
Oct 11 00:07:41 np0005480824 ovn_metadata_agent[162240]:    timeout client          32s
Oct 11 00:07:41 np0005480824 ovn_metadata_agent[162240]:    timeout server          32s
Oct 11 00:07:41 np0005480824 ovn_metadata_agent[162240]:    timeout http-keep-alive 30s
Oct 11 00:07:41 np0005480824 ovn_metadata_agent[162240]: 
Oct 11 00:07:41 np0005480824 ovn_metadata_agent[162240]: 
Oct 11 00:07:41 np0005480824 ovn_metadata_agent[162240]: listen listener
Oct 11 00:07:41 np0005480824 ovn_metadata_agent[162240]:    bind 169.254.169.254:80
Oct 11 00:07:41 np0005480824 ovn_metadata_agent[162240]:    server metadata /var/lib/neutron/metadata_proxy
Oct 11 00:07:41 np0005480824 ovn_metadata_agent[162240]:    http-request add-header X-OVN-Network-ID abadcf46-9a41-4911-85e0-fbcde2d48b79
Oct 11 00:07:41 np0005480824 ovn_metadata_agent[162240]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 11 00:07:41 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:07:41.709 162245 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-abadcf46-9a41-4911-85e0-fbcde2d48b79', 'env', 'PROCESS_TAG=haproxy-abadcf46-9a41-4911-85e0-fbcde2d48b79', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/abadcf46-9a41-4911-85e0-fbcde2d48b79.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 11 00:07:41 np0005480824 nova_compute[260089]: 2025-10-11 04:07:41.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:07:41 np0005480824 nova_compute[260089]: 2025-10-11 04:07:41.805 2 DEBUG nova.compute.manager [req-579bb193-1dfe-45bb-8493-f64bc36e314e req-05ad1243-4de0-4b4a-8b5c-3071ece46dda 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Received event network-vif-plugged-1cecff65-5dca-4e92-9f18-a4729f87c434 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 00:07:41 np0005480824 nova_compute[260089]: 2025-10-11 04:07:41.806 2 DEBUG oslo_concurrency.lockutils [req-579bb193-1dfe-45bb-8493-f64bc36e314e req-05ad1243-4de0-4b4a-8b5c-3071ece46dda 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "3f57226b-3e1a-4f83-9b96-9b5a7ff37910-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:07:41 np0005480824 nova_compute[260089]: 2025-10-11 04:07:41.806 2 DEBUG oslo_concurrency.lockutils [req-579bb193-1dfe-45bb-8493-f64bc36e314e req-05ad1243-4de0-4b4a-8b5c-3071ece46dda 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "3f57226b-3e1a-4f83-9b96-9b5a7ff37910-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:07:41 np0005480824 nova_compute[260089]: 2025-10-11 04:07:41.806 2 DEBUG oslo_concurrency.lockutils [req-579bb193-1dfe-45bb-8493-f64bc36e314e req-05ad1243-4de0-4b4a-8b5c-3071ece46dda 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "3f57226b-3e1a-4f83-9b96-9b5a7ff37910-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:07:41 np0005480824 nova_compute[260089]: 2025-10-11 04:07:41.807 2 DEBUG nova.compute.manager [req-579bb193-1dfe-45bb-8493-f64bc36e314e req-05ad1243-4de0-4b4a-8b5c-3071ece46dda 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Processing event network-vif-plugged-1cecff65-5dca-4e92-9f18-a4729f87c434 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 11 00:07:42 np0005480824 podman[306564]: 2025-10-11 04:07:42.082271752 +0000 UTC m=+0.051552107 container create d829d03ab28983488ced206da705bf8a9cc88efbc2dfc4a5cfce244aab6eb513 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abadcf46-9a41-4911-85e0-fbcde2d48b79, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 00:07:42 np0005480824 systemd[1]: Started libpod-conmon-d829d03ab28983488ced206da705bf8a9cc88efbc2dfc4a5cfce244aab6eb513.scope.
Oct 11 00:07:42 np0005480824 systemd[1]: Started libcrun container.
Oct 11 00:07:42 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5666eefa3b84cc149df7e147f6001c723e1e56260e5b49e7958fc95c78659db/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 00:07:42 np0005480824 podman[306564]: 2025-10-11 04:07:42.051415406 +0000 UTC m=+0.020695771 image pull 1061e4fafe13e0b9aa1ef2c904ba4ad70c44f3e87b1d831f16c6db34937f4022 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 11 00:07:42 np0005480824 podman[306564]: 2025-10-11 04:07:42.172077744 +0000 UTC m=+0.141358119 container init d829d03ab28983488ced206da705bf8a9cc88efbc2dfc4a5cfce244aab6eb513 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abadcf46-9a41-4911-85e0-fbcde2d48b79, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251009)
Oct 11 00:07:42 np0005480824 podman[306564]: 2025-10-11 04:07:42.177141692 +0000 UTC m=+0.146422047 container start d829d03ab28983488ced206da705bf8a9cc88efbc2dfc4a5cfce244aab6eb513 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abadcf46-9a41-4911-85e0-fbcde2d48b79, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 11 00:07:42 np0005480824 neutron-haproxy-ovnmeta-abadcf46-9a41-4911-85e0-fbcde2d48b79[306582]: [NOTICE]   (306586) : New worker (306588) forked
Oct 11 00:07:42 np0005480824 neutron-haproxy-ovnmeta-abadcf46-9a41-4911-85e0-fbcde2d48b79[306582]: [NOTICE]   (306586) : Loading success.
Oct 11 00:07:42 np0005480824 nova_compute[260089]: 2025-10-11 04:07:42.292 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:07:42 np0005480824 nova_compute[260089]: 2025-10-11 04:07:42.296 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:07:42 np0005480824 nova_compute[260089]: 2025-10-11 04:07:42.296 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:07:43 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1934: 321 pgs: 321 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 14 KiB/s wr, 31 op/s
Oct 11 00:07:43 np0005480824 nova_compute[260089]: 2025-10-11 04:07:43.937 2 DEBUG nova.compute.manager [req-23851f29-11fe-44d8-9aa7-ab72d096eee1 req-931d31a4-c83e-49d2-a7a2-c9b861ac2bda 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Received event network-vif-plugged-1cecff65-5dca-4e92-9f18-a4729f87c434 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 00:07:43 np0005480824 nova_compute[260089]: 2025-10-11 04:07:43.938 2 DEBUG oslo_concurrency.lockutils [req-23851f29-11fe-44d8-9aa7-ab72d096eee1 req-931d31a4-c83e-49d2-a7a2-c9b861ac2bda 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "3f57226b-3e1a-4f83-9b96-9b5a7ff37910-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:07:43 np0005480824 nova_compute[260089]: 2025-10-11 04:07:43.939 2 DEBUG oslo_concurrency.lockutils [req-23851f29-11fe-44d8-9aa7-ab72d096eee1 req-931d31a4-c83e-49d2-a7a2-c9b861ac2bda 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "3f57226b-3e1a-4f83-9b96-9b5a7ff37910-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:07:43 np0005480824 nova_compute[260089]: 2025-10-11 04:07:43.939 2 DEBUG oslo_concurrency.lockutils [req-23851f29-11fe-44d8-9aa7-ab72d096eee1 req-931d31a4-c83e-49d2-a7a2-c9b861ac2bda 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "3f57226b-3e1a-4f83-9b96-9b5a7ff37910-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:07:43 np0005480824 nova_compute[260089]: 2025-10-11 04:07:43.940 2 DEBUG nova.compute.manager [req-23851f29-11fe-44d8-9aa7-ab72d096eee1 req-931d31a4-c83e-49d2-a7a2-c9b861ac2bda 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] No waiting events found dispatching network-vif-plugged-1cecff65-5dca-4e92-9f18-a4729f87c434 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 00:07:43 np0005480824 nova_compute[260089]: 2025-10-11 04:07:43.940 2 WARNING nova.compute.manager [req-23851f29-11fe-44d8-9aa7-ab72d096eee1 req-931d31a4-c83e-49d2-a7a2-c9b861ac2bda 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Received unexpected event network-vif-plugged-1cecff65-5dca-4e92-9f18-a4729f87c434 for instance with vm_state building and task_state spawning.#033[00m
Oct 11 00:07:44 np0005480824 nova_compute[260089]: 2025-10-11 04:07:44.832 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760155664.8313775, 3f57226b-3e1a-4f83-9b96-9b5a7ff37910 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 00:07:44 np0005480824 nova_compute[260089]: 2025-10-11 04:07:44.832 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] VM Started (Lifecycle Event)#033[00m
Oct 11 00:07:44 np0005480824 nova_compute[260089]: 2025-10-11 04:07:44.834 2 DEBUG nova.compute.manager [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 11 00:07:44 np0005480824 nova_compute[260089]: 2025-10-11 04:07:44.838 2 DEBUG nova.virt.libvirt.driver [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 11 00:07:44 np0005480824 nova_compute[260089]: 2025-10-11 04:07:44.842 2 INFO nova.virt.libvirt.driver [-] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Instance spawned successfully.#033[00m
Oct 11 00:07:44 np0005480824 nova_compute[260089]: 2025-10-11 04:07:44.842 2 DEBUG nova.virt.libvirt.driver [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 11 00:07:44 np0005480824 nova_compute[260089]: 2025-10-11 04:07:44.869 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 00:07:44 np0005480824 nova_compute[260089]: 2025-10-11 04:07:44.877 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 00:07:44 np0005480824 nova_compute[260089]: 2025-10-11 04:07:44.885 2 DEBUG nova.virt.libvirt.driver [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 00:07:44 np0005480824 nova_compute[260089]: 2025-10-11 04:07:44.886 2 DEBUG nova.virt.libvirt.driver [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 00:07:44 np0005480824 nova_compute[260089]: 2025-10-11 04:07:44.886 2 DEBUG nova.virt.libvirt.driver [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 00:07:44 np0005480824 nova_compute[260089]: 2025-10-11 04:07:44.887 2 DEBUG nova.virt.libvirt.driver [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 00:07:44 np0005480824 nova_compute[260089]: 2025-10-11 04:07:44.887 2 DEBUG nova.virt.libvirt.driver [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 00:07:44 np0005480824 nova_compute[260089]: 2025-10-11 04:07:44.888 2 DEBUG nova.virt.libvirt.driver [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 11 00:07:44 np0005480824 nova_compute[260089]: 2025-10-11 04:07:44.923 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 00:07:44 np0005480824 nova_compute[260089]: 2025-10-11 04:07:44.924 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760155664.831654, 3f57226b-3e1a-4f83-9b96-9b5a7ff37910 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 00:07:44 np0005480824 nova_compute[260089]: 2025-10-11 04:07:44.924 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] VM Paused (Lifecycle Event)#033[00m
Oct 11 00:07:44 np0005480824 nova_compute[260089]: 2025-10-11 04:07:44.951 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 00:07:44 np0005480824 nova_compute[260089]: 2025-10-11 04:07:44.956 2 DEBUG nova.virt.driver [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] Emitting event <LifecycleEvent: 1760155664.8374164, 3f57226b-3e1a-4f83-9b96-9b5a7ff37910 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 00:07:44 np0005480824 nova_compute[260089]: 2025-10-11 04:07:44.956 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] VM Resumed (Lifecycle Event)#033[00m
Oct 11 00:07:44 np0005480824 nova_compute[260089]: 2025-10-11 04:07:44.978 2 INFO nova.compute.manager [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Took 10.84 seconds to spawn the instance on the hypervisor.#033[00m
Oct 11 00:07:44 np0005480824 nova_compute[260089]: 2025-10-11 04:07:44.979 2 DEBUG nova.compute.manager [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 00:07:44 np0005480824 nova_compute[260089]: 2025-10-11 04:07:44.981 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 00:07:44 np0005480824 nova_compute[260089]: 2025-10-11 04:07:44.992 2 DEBUG nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 11 00:07:45 np0005480824 nova_compute[260089]: 2025-10-11 04:07:45.023 2 INFO nova.compute.manager [None req-88a6441e-7be1-4e8c-8b2d-f6d0d554a1cc - - - - - -] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 11 00:07:45 np0005480824 nova_compute[260089]: 2025-10-11 04:07:45.079 2 INFO nova.compute.manager [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Took 13.08 seconds to build instance.#033[00m
Oct 11 00:07:45 np0005480824 nova_compute[260089]: 2025-10-11 04:07:45.095 2 DEBUG oslo_concurrency.lockutils [None req-e16b30f0-de49-4ec3-bbaf-3bf73b0f5255 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Lock "3f57226b-3e1a-4f83-9b96-9b5a7ff37910" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.202s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:07:45 np0005480824 nova_compute[260089]: 2025-10-11 04:07:45.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:07:45 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1935: 321 pgs: 321 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 7.8 KiB/s rd, 13 KiB/s wr, 10 op/s
Oct 11 00:07:45 np0005480824 nova_compute[260089]: 2025-10-11 04:07:45.296 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:07:45 np0005480824 nova_compute[260089]: 2025-10-11 04:07:45.326 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:07:45 np0005480824 nova_compute[260089]: 2025-10-11 04:07:45.326 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:07:45 np0005480824 nova_compute[260089]: 2025-10-11 04:07:45.326 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:07:45 np0005480824 nova_compute[260089]: 2025-10-11 04:07:45.327 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 00:07:45 np0005480824 nova_compute[260089]: 2025-10-11 04:07:45.327 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:07:45 np0005480824 nova_compute[260089]: 2025-10-11 04:07:45.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:07:45 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 00:07:45 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4105243909' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 00:07:45 np0005480824 nova_compute[260089]: 2025-10-11 04:07:45.753 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:07:45 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e496 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:07:45 np0005480824 nova_compute[260089]: 2025-10-11 04:07:45.855 2 DEBUG nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] skipping disk for instance-0000001e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 00:07:45 np0005480824 nova_compute[260089]: 2025-10-11 04:07:45.855 2 DEBUG nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] skipping disk for instance-0000001e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 11 00:07:46 np0005480824 nova_compute[260089]: 2025-10-11 04:07:46.025 2 WARNING nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 00:07:46 np0005480824 nova_compute[260089]: 2025-10-11 04:07:46.026 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4233MB free_disk=59.98813247680664GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 00:07:46 np0005480824 nova_compute[260089]: 2025-10-11 04:07:46.027 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:07:46 np0005480824 nova_compute[260089]: 2025-10-11 04:07:46.027 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:07:46 np0005480824 nova_compute[260089]: 2025-10-11 04:07:46.113 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Instance 3f57226b-3e1a-4f83-9b96-9b5a7ff37910 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 11 00:07:46 np0005480824 nova_compute[260089]: 2025-10-11 04:07:46.114 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 00:07:46 np0005480824 nova_compute[260089]: 2025-10-11 04:07:46.115 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 00:07:46 np0005480824 nova_compute[260089]: 2025-10-11 04:07:46.154 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:07:46 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 00:07:46 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3103374087' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 00:07:46 np0005480824 nova_compute[260089]: 2025-10-11 04:07:46.581 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:07:46 np0005480824 nova_compute[260089]: 2025-10-11 04:07:46.588 2 DEBUG nova.compute.provider_tree [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 00:07:46 np0005480824 nova_compute[260089]: 2025-10-11 04:07:46.608 2 DEBUG nova.scheduler.client.report [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 00:07:46 np0005480824 nova_compute[260089]: 2025-10-11 04:07:46.636 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 00:07:46 np0005480824 nova_compute[260089]: 2025-10-11 04:07:46.636 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.609s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:07:47 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1936: 321 pgs: 321 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 13 KiB/s wr, 57 op/s
Oct 11 00:07:48 np0005480824 nova_compute[260089]: 2025-10-11 04:07:48.638 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:07:48 np0005480824 nova_compute[260089]: 2025-10-11 04:07:48.639 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 00:07:48 np0005480824 nova_compute[260089]: 2025-10-11 04:07:48.640 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 11 00:07:48 np0005480824 nova_compute[260089]: 2025-10-11 04:07:48.870 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "refresh_cache-3f57226b-3e1a-4f83-9b96-9b5a7ff37910" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 00:07:48 np0005480824 nova_compute[260089]: 2025-10-11 04:07:48.872 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquired lock "refresh_cache-3f57226b-3e1a-4f83-9b96-9b5a7ff37910" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 00:07:48 np0005480824 nova_compute[260089]: 2025-10-11 04:07:48.872 2 DEBUG nova.network.neutron [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 11 00:07:48 np0005480824 nova_compute[260089]: 2025-10-11 04:07:48.873 2 DEBUG nova.objects.instance [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3f57226b-3e1a-4f83-9b96-9b5a7ff37910 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 00:07:48 np0005480824 nova_compute[260089]: 2025-10-11 04:07:48.879 2 DEBUG nova.compute.manager [req-7097fe7f-3f1a-4188-aef2-36ba5ae50954 req-4f858867-5300-4789-a8f7-fc224676f68e 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Received event network-changed-1cecff65-5dca-4e92-9f18-a4729f87c434 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 00:07:48 np0005480824 nova_compute[260089]: 2025-10-11 04:07:48.880 2 DEBUG nova.compute.manager [req-7097fe7f-3f1a-4188-aef2-36ba5ae50954 req-4f858867-5300-4789-a8f7-fc224676f68e 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Refreshing instance network info cache due to event network-changed-1cecff65-5dca-4e92-9f18-a4729f87c434. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 11 00:07:48 np0005480824 nova_compute[260089]: 2025-10-11 04:07:48.881 2 DEBUG oslo_concurrency.lockutils [req-7097fe7f-3f1a-4188-aef2-36ba5ae50954 req-4f858867-5300-4789-a8f7-fc224676f68e 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "refresh_cache-3f57226b-3e1a-4f83-9b96-9b5a7ff37910" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 11 00:07:49 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1937: 321 pgs: 321 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 77 op/s
Oct 11 00:07:50 np0005480824 podman[306649]: 2025-10-11 04:07:50.023385549 +0000 UTC m=+0.081450370 container health_status 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd)
Oct 11 00:07:50 np0005480824 podman[306650]: 2025-10-11 04:07:50.026133542 +0000 UTC m=+0.072181535 container health_status f2b19cad22d0fbdd185b264c3c8b6443d09a02319dc9fb0585dc69a0f24a758d (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 11 00:07:50 np0005480824 nova_compute[260089]: 2025-10-11 04:07:50.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:07:50 np0005480824 nova_compute[260089]: 2025-10-11 04:07:50.308 2 DEBUG nova.network.neutron [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Updating instance_info_cache with network_info: [{"id": "1cecff65-5dca-4e92-9f18-a4729f87c434", "address": "fa:16:3e:55:04:a0", "network": {"id": "abadcf46-9a41-4911-85e0-fbcde2d48b79", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-654501219-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f367c6c5e8f479399a2004c82cfaff0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cecff65-5d", "ovs_interfaceid": "1cecff65-5dca-4e92-9f18-a4729f87c434", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 00:07:50 np0005480824 nova_compute[260089]: 2025-10-11 04:07:50.335 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Releasing lock "refresh_cache-3f57226b-3e1a-4f83-9b96-9b5a7ff37910" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 00:07:50 np0005480824 nova_compute[260089]: 2025-10-11 04:07:50.335 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 11 00:07:50 np0005480824 nova_compute[260089]: 2025-10-11 04:07:50.336 2 DEBUG oslo_concurrency.lockutils [req-7097fe7f-3f1a-4188-aef2-36ba5ae50954 req-4f858867-5300-4789-a8f7-fc224676f68e 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquired lock "refresh_cache-3f57226b-3e1a-4f83-9b96-9b5a7ff37910" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 11 00:07:50 np0005480824 nova_compute[260089]: 2025-10-11 04:07:50.336 2 DEBUG nova.network.neutron [req-7097fe7f-3f1a-4188-aef2-36ba5ae50954 req-4f858867-5300-4789-a8f7-fc224676f68e 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Refreshing network info cache for port 1cecff65-5dca-4e92-9f18-a4729f87c434 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 11 00:07:50 np0005480824 nova_compute[260089]: 2025-10-11 04:07:50.338 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:07:50 np0005480824 nova_compute[260089]: 2025-10-11 04:07:50.339 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:07:50 np0005480824 nova_compute[260089]: 2025-10-11 04:07:50.340 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:07:50 np0005480824 nova_compute[260089]: 2025-10-11 04:07:50.341 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 00:07:50 np0005480824 nova_compute[260089]: 2025-10-11 04:07:50.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:07:50 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e496 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:07:51 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1938: 321 pgs: 321 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 77 op/s
Oct 11 00:07:51 np0005480824 nova_compute[260089]: 2025-10-11 04:07:51.535 2 DEBUG nova.network.neutron [req-7097fe7f-3f1a-4188-aef2-36ba5ae50954 req-4f858867-5300-4789-a8f7-fc224676f68e 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Updated VIF entry in instance network info cache for port 1cecff65-5dca-4e92-9f18-a4729f87c434. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 11 00:07:51 np0005480824 nova_compute[260089]: 2025-10-11 04:07:51.536 2 DEBUG nova.network.neutron [req-7097fe7f-3f1a-4188-aef2-36ba5ae50954 req-4f858867-5300-4789-a8f7-fc224676f68e 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Updating instance_info_cache with network_info: [{"id": "1cecff65-5dca-4e92-9f18-a4729f87c434", "address": "fa:16:3e:55:04:a0", "network": {"id": "abadcf46-9a41-4911-85e0-fbcde2d48b79", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-654501219-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f367c6c5e8f479399a2004c82cfaff0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cecff65-5d", "ovs_interfaceid": "1cecff65-5dca-4e92-9f18-a4729f87c434", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 00:07:51 np0005480824 nova_compute[260089]: 2025-10-11 04:07:51.553 2 DEBUG oslo_concurrency.lockutils [req-7097fe7f-3f1a-4188-aef2-36ba5ae50954 req-4f858867-5300-4789-a8f7-fc224676f68e 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Releasing lock "refresh_cache-3f57226b-3e1a-4f83-9b96-9b5a7ff37910" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 11 00:07:52 np0005480824 nova_compute[260089]: 2025-10-11 04:07:52.300 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:07:53 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1939: 321 pgs: 321 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 77 op/s
Oct 11 00:07:53 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 00:07:53 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 00:07:53 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 00:07:53 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 00:07:53 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 00:07:53 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 11 00:07:53 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 83fe575c-50cf-49b0-9c4d-36cd34afb586 does not exist
Oct 11 00:07:53 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 883aeaab-e7f4-4b4a-b676-4a40771825ce does not exist
Oct 11 00:07:53 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev f3b95003-f712-4c7a-b4b4-7df5983942b1 does not exist
Oct 11 00:07:53 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 00:07:53 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 00:07:53 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 00:07:53 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 00:07:53 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 00:07:53 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 00:07:53 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 00:07:53 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 11 00:07:53 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 00:07:53 np0005480824 podman[306962]: 2025-10-11 04:07:53.695578636 +0000 UTC m=+0.042317433 container create 70f3c75badf8d49837b9134b4f7dc90bbc57faa51a0f033f18c095c6da6f90c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_benz, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Oct 11 00:07:53 np0005480824 systemd[1]: Started libpod-conmon-70f3c75badf8d49837b9134b4f7dc90bbc57faa51a0f033f18c095c6da6f90c2.scope.
Oct 11 00:07:53 np0005480824 systemd[1]: Started libcrun container.
Oct 11 00:07:53 np0005480824 podman[306962]: 2025-10-11 04:07:53.675705685 +0000 UTC m=+0.022444472 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 00:07:53 np0005480824 podman[306962]: 2025-10-11 04:07:53.86300222 +0000 UTC m=+0.209740997 container init 70f3c75badf8d49837b9134b4f7dc90bbc57faa51a0f033f18c095c6da6f90c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_benz, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 00:07:53 np0005480824 podman[306962]: 2025-10-11 04:07:53.872222104 +0000 UTC m=+0.218960861 container start 70f3c75badf8d49837b9134b4f7dc90bbc57faa51a0f033f18c095c6da6f90c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_benz, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Oct 11 00:07:53 np0005480824 vigorous_benz[306978]: 167 167
Oct 11 00:07:53 np0005480824 systemd[1]: libpod-70f3c75badf8d49837b9134b4f7dc90bbc57faa51a0f033f18c095c6da6f90c2.scope: Deactivated successfully.
Oct 11 00:07:53 np0005480824 podman[306962]: 2025-10-11 04:07:53.922150931 +0000 UTC m=+0.268889688 container attach 70f3c75badf8d49837b9134b4f7dc90bbc57faa51a0f033f18c095c6da6f90c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_benz, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 11 00:07:53 np0005480824 podman[306962]: 2025-10-11 04:07:53.923121084 +0000 UTC m=+0.269859851 container died 70f3c75badf8d49837b9134b4f7dc90bbc57faa51a0f033f18c095c6da6f90c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_benz, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 00:07:54 np0005480824 systemd[1]: var-lib-containers-storage-overlay-9ba1c006bca50cfbb7708b9cfeda94e93ef4812bec26eff0dec1ddc5fb177a08-merged.mount: Deactivated successfully.
Oct 11 00:07:54 np0005480824 nova_compute[260089]: 2025-10-11 04:07:54.293 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:07:54 np0005480824 podman[306962]: 2025-10-11 04:07:54.436553494 +0000 UTC m=+0.783292271 container remove 70f3c75badf8d49837b9134b4f7dc90bbc57faa51a0f033f18c095c6da6f90c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_benz, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 11 00:07:54 np0005480824 systemd[1]: libpod-conmon-70f3c75badf8d49837b9134b4f7dc90bbc57faa51a0f033f18c095c6da6f90c2.scope: Deactivated successfully.
Oct 11 00:07:54 np0005480824 podman[307002]: 2025-10-11 04:07:54.65239282 +0000 UTC m=+0.052823507 container create cb02a26c33ce7267d918e3c1c881e862ca4b4178873a35bb461b03d945d61e3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 00:07:54 np0005480824 systemd[1]: Started libpod-conmon-cb02a26c33ce7267d918e3c1c881e862ca4b4178873a35bb461b03d945d61e3b.scope.
Oct 11 00:07:54 np0005480824 systemd[1]: Started libcrun container.
Oct 11 00:07:54 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a56f704c95b8dcb7398048dcf1407e599f11f9f4ae5b40e319571431612f1681/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 00:07:54 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a56f704c95b8dcb7398048dcf1407e599f11f9f4ae5b40e319571431612f1681/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 00:07:54 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a56f704c95b8dcb7398048dcf1407e599f11f9f4ae5b40e319571431612f1681/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 00:07:54 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a56f704c95b8dcb7398048dcf1407e599f11f9f4ae5b40e319571431612f1681/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 00:07:54 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a56f704c95b8dcb7398048dcf1407e599f11f9f4ae5b40e319571431612f1681/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 00:07:54 np0005480824 podman[307002]: 2025-10-11 04:07:54.7248372 +0000 UTC m=+0.125267937 container init cb02a26c33ce7267d918e3c1c881e862ca4b4178873a35bb461b03d945d61e3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_robinson, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 11 00:07:54 np0005480824 podman[307002]: 2025-10-11 04:07:54.631571727 +0000 UTC m=+0.032002454 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 00:07:54 np0005480824 podman[307002]: 2025-10-11 04:07:54.732935858 +0000 UTC m=+0.133366545 container start cb02a26c33ce7267d918e3c1c881e862ca4b4178873a35bb461b03d945d61e3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 11 00:07:54 np0005480824 podman[307002]: 2025-10-11 04:07:54.73865168 +0000 UTC m=+0.139082387 container attach cb02a26c33ce7267d918e3c1c881e862ca4b4178873a35bb461b03d945d61e3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_robinson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 11 00:07:55 np0005480824 nova_compute[260089]: 2025-10-11 04:07:55.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:07:55 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1940: 321 pgs: 321 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 66 op/s
Oct 11 00:07:55 np0005480824 nova_compute[260089]: 2025-10-11 04:07:55.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:07:55 np0005480824 heuristic_robinson[307018]: --> passed data devices: 0 physical, 3 LVM
Oct 11 00:07:55 np0005480824 heuristic_robinson[307018]: --> relative data size: 1.0
Oct 11 00:07:55 np0005480824 heuristic_robinson[307018]: --> All data devices are unavailable
Oct 11 00:07:55 np0005480824 systemd[1]: libpod-cb02a26c33ce7267d918e3c1c881e862ca4b4178873a35bb461b03d945d61e3b.scope: Deactivated successfully.
Oct 11 00:07:55 np0005480824 podman[307002]: 2025-10-11 04:07:55.778412958 +0000 UTC m=+1.178843665 container died cb02a26c33ce7267d918e3c1c881e862ca4b4178873a35bb461b03d945d61e3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 00:07:55 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e496 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:07:55 np0005480824 systemd[1]: var-lib-containers-storage-overlay-a56f704c95b8dcb7398048dcf1407e599f11f9f4ae5b40e319571431612f1681-merged.mount: Deactivated successfully.
Oct 11 00:07:55 np0005480824 podman[307002]: 2025-10-11 04:07:55.834897438 +0000 UTC m=+1.235328125 container remove cb02a26c33ce7267d918e3c1c881e862ca4b4178873a35bb461b03d945d61e3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 11 00:07:55 np0005480824 systemd[1]: libpod-conmon-cb02a26c33ce7267d918e3c1c881e862ca4b4178873a35bb461b03d945d61e3b.scope: Deactivated successfully.
Oct 11 00:07:56 np0005480824 podman[307198]: 2025-10-11 04:07:56.495136473 +0000 UTC m=+0.049889909 container create 3377ece0a45cd78afd55de9c63e9ddb6d58168f98f2a010451114f15baee7242 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_albattani, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 00:07:56 np0005480824 systemd[1]: Started libpod-conmon-3377ece0a45cd78afd55de9c63e9ddb6d58168f98f2a010451114f15baee7242.scope.
Oct 11 00:07:56 np0005480824 systemd[1]: Started libcrun container.
Oct 11 00:07:56 np0005480824 podman[307198]: 2025-10-11 04:07:56.469376566 +0000 UTC m=+0.024130052 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 00:07:56 np0005480824 podman[307198]: 2025-10-11 04:07:56.58725302 +0000 UTC m=+0.142006476 container init 3377ece0a45cd78afd55de9c63e9ddb6d58168f98f2a010451114f15baee7242 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_albattani, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 00:07:56 np0005480824 podman[307198]: 2025-10-11 04:07:56.59504758 +0000 UTC m=+0.149801016 container start 3377ece0a45cd78afd55de9c63e9ddb6d58168f98f2a010451114f15baee7242 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_albattani, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 00:07:56 np0005480824 magical_albattani[307214]: 167 167
Oct 11 00:07:56 np0005480824 systemd[1]: libpod-3377ece0a45cd78afd55de9c63e9ddb6d58168f98f2a010451114f15baee7242.scope: Deactivated successfully.
Oct 11 00:07:56 np0005480824 podman[307198]: 2025-10-11 04:07:56.61096155 +0000 UTC m=+0.165715016 container attach 3377ece0a45cd78afd55de9c63e9ddb6d58168f98f2a010451114f15baee7242 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_albattani, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 00:07:56 np0005480824 podman[307198]: 2025-10-11 04:07:56.611351358 +0000 UTC m=+0.166104794 container died 3377ece0a45cd78afd55de9c63e9ddb6d58168f98f2a010451114f15baee7242 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_albattani, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 11 00:07:56 np0005480824 systemd[1]: var-lib-containers-storage-overlay-b35a8baf77340e19857fd83573665794c14080b440d4a0ac2aec25bd414ea7a2-merged.mount: Deactivated successfully.
Oct 11 00:07:56 np0005480824 podman[307198]: 2025-10-11 04:07:56.69031673 +0000 UTC m=+0.245070166 container remove 3377ece0a45cd78afd55de9c63e9ddb6d58168f98f2a010451114f15baee7242 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_albattani, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 00:07:56 np0005480824 systemd[1]: libpod-conmon-3377ece0a45cd78afd55de9c63e9ddb6d58168f98f2a010451114f15baee7242.scope: Deactivated successfully.
Oct 11 00:07:56 np0005480824 podman[307240]: 2025-10-11 04:07:56.96150021 +0000 UTC m=+0.110212757 container create f4ad284713babc28404ff42762774ef08e115a8b814ff3e56bc3c711000050dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_black, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Oct 11 00:07:56 np0005480824 podman[307240]: 2025-10-11 04:07:56.87914819 +0000 UTC m=+0.027860747 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 00:07:57 np0005480824 systemd[1]: Started libpod-conmon-f4ad284713babc28404ff42762774ef08e115a8b814ff3e56bc3c711000050dc.scope.
Oct 11 00:07:57 np0005480824 systemd[1]: Started libcrun container.
Oct 11 00:07:57 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9968fe04482394c367434a3ad927a4fa11183f6f6e04dc4675b0888ff284548f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 00:07:57 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9968fe04482394c367434a3ad927a4fa11183f6f6e04dc4675b0888ff284548f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 00:07:57 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9968fe04482394c367434a3ad927a4fa11183f6f6e04dc4675b0888ff284548f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 00:07:57 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9968fe04482394c367434a3ad927a4fa11183f6f6e04dc4675b0888ff284548f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 00:07:57 np0005480824 podman[307240]: 2025-10-11 04:07:57.071102482 +0000 UTC m=+0.219815039 container init f4ad284713babc28404ff42762774ef08e115a8b814ff3e56bc3c711000050dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_black, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 00:07:57 np0005480824 podman[307240]: 2025-10-11 04:07:57.07743872 +0000 UTC m=+0.226151267 container start f4ad284713babc28404ff42762774ef08e115a8b814ff3e56bc3c711000050dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_black, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 00:07:57 np0005480824 podman[307240]: 2025-10-11 04:07:57.104101938 +0000 UTC m=+0.252814505 container attach f4ad284713babc28404ff42762774ef08e115a8b814ff3e56bc3c711000050dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_black, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 11 00:07:57 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1941: 321 pgs: 321 active+clean; 283 MiB data, 665 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 1.0 MiB/s wr, 98 op/s
Oct 11 00:07:57 np0005480824 ovn_controller[152667]: 2025-10-11T04:07:57Z|00074|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.13 does not match offer 10.100.0.14
Oct 11 00:07:57 np0005480824 ovn_controller[152667]: 2025-10-11T04:07:57Z|00075|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:55:04:a0 10.100.0.14
Oct 11 00:07:57 np0005480824 strange_black[307256]: {
Oct 11 00:07:57 np0005480824 strange_black[307256]:    "0": [
Oct 11 00:07:57 np0005480824 strange_black[307256]:        {
Oct 11 00:07:57 np0005480824 strange_black[307256]:            "devices": [
Oct 11 00:07:57 np0005480824 strange_black[307256]:                "/dev/loop3"
Oct 11 00:07:57 np0005480824 strange_black[307256]:            ],
Oct 11 00:07:57 np0005480824 strange_black[307256]:            "lv_name": "ceph_lv0",
Oct 11 00:07:57 np0005480824 strange_black[307256]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 00:07:57 np0005480824 strange_black[307256]:            "lv_size": "21470642176",
Oct 11 00:07:57 np0005480824 strange_black[307256]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0d82ce-20ea-470d-959e-f67202028a60,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 00:07:57 np0005480824 strange_black[307256]:            "lv_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 11 00:07:57 np0005480824 strange_black[307256]:            "name": "ceph_lv0",
Oct 11 00:07:57 np0005480824 strange_black[307256]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 00:07:57 np0005480824 strange_black[307256]:            "tags": {
Oct 11 00:07:57 np0005480824 strange_black[307256]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 00:07:57 np0005480824 strange_black[307256]:                "ceph.block_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 11 00:07:57 np0005480824 strange_black[307256]:                "ceph.cephx_lockbox_secret": "",
Oct 11 00:07:57 np0005480824 strange_black[307256]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 11 00:07:57 np0005480824 strange_black[307256]:                "ceph.cluster_name": "ceph",
Oct 11 00:07:57 np0005480824 strange_black[307256]:                "ceph.crush_device_class": "",
Oct 11 00:07:57 np0005480824 strange_black[307256]:                "ceph.encrypted": "0",
Oct 11 00:07:57 np0005480824 strange_black[307256]:                "ceph.osd_fsid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 11 00:07:57 np0005480824 strange_black[307256]:                "ceph.osd_id": "0",
Oct 11 00:07:57 np0005480824 strange_black[307256]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 00:07:57 np0005480824 strange_black[307256]:                "ceph.type": "block",
Oct 11 00:07:57 np0005480824 strange_black[307256]:                "ceph.vdo": "0"
Oct 11 00:07:57 np0005480824 strange_black[307256]:            },
Oct 11 00:07:57 np0005480824 strange_black[307256]:            "type": "block",
Oct 11 00:07:57 np0005480824 strange_black[307256]:            "vg_name": "ceph_vg0"
Oct 11 00:07:57 np0005480824 strange_black[307256]:        }
Oct 11 00:07:57 np0005480824 strange_black[307256]:    ],
Oct 11 00:07:57 np0005480824 strange_black[307256]:    "1": [
Oct 11 00:07:57 np0005480824 strange_black[307256]:        {
Oct 11 00:07:57 np0005480824 strange_black[307256]:            "devices": [
Oct 11 00:07:57 np0005480824 strange_black[307256]:                "/dev/loop4"
Oct 11 00:07:57 np0005480824 strange_black[307256]:            ],
Oct 11 00:07:57 np0005480824 strange_black[307256]:            "lv_name": "ceph_lv1",
Oct 11 00:07:57 np0005480824 strange_black[307256]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 00:07:57 np0005480824 strange_black[307256]:            "lv_size": "21470642176",
Oct 11 00:07:57 np0005480824 strange_black[307256]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6875119e-c210-4ad1-aca9-6a8084a5ecc8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 00:07:57 np0005480824 strange_black[307256]:            "lv_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 11 00:07:57 np0005480824 strange_black[307256]:            "name": "ceph_lv1",
Oct 11 00:07:57 np0005480824 strange_black[307256]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 00:07:57 np0005480824 strange_black[307256]:            "tags": {
Oct 11 00:07:57 np0005480824 strange_black[307256]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 00:07:57 np0005480824 strange_black[307256]:                "ceph.block_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 11 00:07:57 np0005480824 strange_black[307256]:                "ceph.cephx_lockbox_secret": "",
Oct 11 00:07:57 np0005480824 strange_black[307256]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 11 00:07:57 np0005480824 strange_black[307256]:                "ceph.cluster_name": "ceph",
Oct 11 00:07:57 np0005480824 strange_black[307256]:                "ceph.crush_device_class": "",
Oct 11 00:07:57 np0005480824 strange_black[307256]:                "ceph.encrypted": "0",
Oct 11 00:07:57 np0005480824 strange_black[307256]:                "ceph.osd_fsid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 11 00:07:57 np0005480824 strange_black[307256]:                "ceph.osd_id": "1",
Oct 11 00:07:57 np0005480824 strange_black[307256]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 00:07:57 np0005480824 strange_black[307256]:                "ceph.type": "block",
Oct 11 00:07:57 np0005480824 strange_black[307256]:                "ceph.vdo": "0"
Oct 11 00:07:57 np0005480824 strange_black[307256]:            },
Oct 11 00:07:57 np0005480824 strange_black[307256]:            "type": "block",
Oct 11 00:07:57 np0005480824 strange_black[307256]:            "vg_name": "ceph_vg1"
Oct 11 00:07:57 np0005480824 strange_black[307256]:        }
Oct 11 00:07:57 np0005480824 strange_black[307256]:    ],
Oct 11 00:07:57 np0005480824 strange_black[307256]:    "2": [
Oct 11 00:07:57 np0005480824 strange_black[307256]:        {
Oct 11 00:07:57 np0005480824 strange_black[307256]:            "devices": [
Oct 11 00:07:57 np0005480824 strange_black[307256]:                "/dev/loop5"
Oct 11 00:07:57 np0005480824 strange_black[307256]:            ],
Oct 11 00:07:57 np0005480824 strange_black[307256]:            "lv_name": "ceph_lv2",
Oct 11 00:07:57 np0005480824 strange_black[307256]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 00:07:57 np0005480824 strange_black[307256]:            "lv_size": "21470642176",
Oct 11 00:07:57 np0005480824 strange_black[307256]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e86945e8-6909-4584-9098-cee0dfe9add4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 00:07:57 np0005480824 strange_black[307256]:            "lv_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 11 00:07:57 np0005480824 strange_black[307256]:            "name": "ceph_lv2",
Oct 11 00:07:57 np0005480824 strange_black[307256]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 00:07:57 np0005480824 strange_black[307256]:            "tags": {
Oct 11 00:07:57 np0005480824 strange_black[307256]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 00:07:57 np0005480824 strange_black[307256]:                "ceph.block_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 11 00:07:57 np0005480824 strange_black[307256]:                "ceph.cephx_lockbox_secret": "",
Oct 11 00:07:57 np0005480824 strange_black[307256]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 11 00:07:57 np0005480824 strange_black[307256]:                "ceph.cluster_name": "ceph",
Oct 11 00:07:57 np0005480824 strange_black[307256]:                "ceph.crush_device_class": "",
Oct 11 00:07:57 np0005480824 strange_black[307256]:                "ceph.encrypted": "0",
Oct 11 00:07:57 np0005480824 strange_black[307256]:                "ceph.osd_fsid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 11 00:07:57 np0005480824 strange_black[307256]:                "ceph.osd_id": "2",
Oct 11 00:07:57 np0005480824 strange_black[307256]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 00:07:57 np0005480824 strange_black[307256]:                "ceph.type": "block",
Oct 11 00:07:57 np0005480824 strange_black[307256]:                "ceph.vdo": "0"
Oct 11 00:07:57 np0005480824 strange_black[307256]:            },
Oct 11 00:07:57 np0005480824 strange_black[307256]:            "type": "block",
Oct 11 00:07:57 np0005480824 strange_black[307256]:            "vg_name": "ceph_vg2"
Oct 11 00:07:57 np0005480824 strange_black[307256]:        }
Oct 11 00:07:57 np0005480824 strange_black[307256]:    ]
Oct 11 00:07:57 np0005480824 strange_black[307256]: }
Oct 11 00:07:57 np0005480824 systemd[1]: libpod-f4ad284713babc28404ff42762774ef08e115a8b814ff3e56bc3c711000050dc.scope: Deactivated successfully.
Oct 11 00:07:57 np0005480824 podman[307240]: 2025-10-11 04:07:57.859995981 +0000 UTC m=+1.008708568 container died f4ad284713babc28404ff42762774ef08e115a8b814ff3e56bc3c711000050dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_black, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 00:07:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 00:07:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 00:07:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 00:07:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 00:07:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 00:07:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 00:07:57 np0005480824 systemd[1]: var-lib-containers-storage-overlay-9968fe04482394c367434a3ad927a4fa11183f6f6e04dc4675b0888ff284548f-merged.mount: Deactivated successfully.
Oct 11 00:07:58 np0005480824 podman[307240]: 2025-10-11 04:07:58.052779252 +0000 UTC m=+1.201491799 container remove f4ad284713babc28404ff42762774ef08e115a8b814ff3e56bc3c711000050dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_black, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 11 00:07:58 np0005480824 systemd[1]: libpod-conmon-f4ad284713babc28404ff42762774ef08e115a8b814ff3e56bc3c711000050dc.scope: Deactivated successfully.
Oct 11 00:07:58 np0005480824 podman[307266]: 2025-10-11 04:07:58.103461589 +0000 UTC m=+0.196938510 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 00:07:58 np0005480824 podman[307445]: 2025-10-11 04:07:58.717239735 +0000 UTC m=+0.042679670 container create 8c57a990d90b8b00f3c3d6da3b0cd21df2f24049c7b5619b47b33b6e67263e49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 00:07:58 np0005480824 systemd[1]: Started libpod-conmon-8c57a990d90b8b00f3c3d6da3b0cd21df2f24049c7b5619b47b33b6e67263e49.scope.
Oct 11 00:07:58 np0005480824 systemd[1]: Started libcrun container.
Oct 11 00:07:58 np0005480824 podman[307445]: 2025-10-11 04:07:58.700255341 +0000 UTC m=+0.025695286 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 00:07:58 np0005480824 podman[307445]: 2025-10-11 04:07:58.805206496 +0000 UTC m=+0.130646431 container init 8c57a990d90b8b00f3c3d6da3b0cd21df2f24049c7b5619b47b33b6e67263e49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_kare, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 11 00:07:58 np0005480824 podman[307445]: 2025-10-11 04:07:58.813093468 +0000 UTC m=+0.138533413 container start 8c57a990d90b8b00f3c3d6da3b0cd21df2f24049c7b5619b47b33b6e67263e49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_kare, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 11 00:07:58 np0005480824 recursing_kare[307461]: 167 167
Oct 11 00:07:58 np0005480824 systemd[1]: libpod-8c57a990d90b8b00f3c3d6da3b0cd21df2f24049c7b5619b47b33b6e67263e49.scope: Deactivated successfully.
Oct 11 00:07:58 np0005480824 podman[307445]: 2025-10-11 04:07:58.839794508 +0000 UTC m=+0.165234443 container attach 8c57a990d90b8b00f3c3d6da3b0cd21df2f24049c7b5619b47b33b6e67263e49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_kare, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 11 00:07:58 np0005480824 podman[307445]: 2025-10-11 04:07:58.840414212 +0000 UTC m=+0.165854157 container died 8c57a990d90b8b00f3c3d6da3b0cd21df2f24049c7b5619b47b33b6e67263e49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_kare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 00:07:58 np0005480824 systemd[1]: var-lib-containers-storage-overlay-b6d0520e0e917e9fea0b791018f7bf9615473eae87ff65d2c03726c6f5044b82-merged.mount: Deactivated successfully.
Oct 11 00:07:58 np0005480824 podman[307445]: 2025-10-11 04:07:58.948264624 +0000 UTC m=+0.273704559 container remove 8c57a990d90b8b00f3c3d6da3b0cd21df2f24049c7b5619b47b33b6e67263e49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_kare, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 00:07:58 np0005480824 systemd[1]: libpod-conmon-8c57a990d90b8b00f3c3d6da3b0cd21df2f24049c7b5619b47b33b6e67263e49.scope: Deactivated successfully.
Oct 11 00:07:59 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1942: 321 pgs: 321 active+clean; 283 MiB data, 665 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.0 MiB/s wr, 61 op/s
Oct 11 00:07:59 np0005480824 podman[307487]: 2025-10-11 04:07:59.183364827 +0000 UTC m=+0.078451040 container create bc9cd26d3bd7eaab3ad455ab9e60a3aa8a3e8d40413aa8f853f004c8866c8cae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_black, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 11 00:07:59 np0005480824 podman[307487]: 2025-10-11 04:07:59.145128951 +0000 UTC m=+0.040215214 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 00:07:59 np0005480824 systemd[1]: Started libpod-conmon-bc9cd26d3bd7eaab3ad455ab9e60a3aa8a3e8d40413aa8f853f004c8866c8cae.scope.
Oct 11 00:07:59 np0005480824 systemd[1]: Started libcrun container.
Oct 11 00:07:59 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4091ed4f0b337f2bebad4e15551699c870df174bb81997c6e0128463f4ffbd21/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 00:07:59 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4091ed4f0b337f2bebad4e15551699c870df174bb81997c6e0128463f4ffbd21/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 00:07:59 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4091ed4f0b337f2bebad4e15551699c870df174bb81997c6e0128463f4ffbd21/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 00:07:59 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4091ed4f0b337f2bebad4e15551699c870df174bb81997c6e0128463f4ffbd21/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 00:07:59 np0005480824 podman[307487]: 2025-10-11 04:07:59.309047093 +0000 UTC m=+0.204133296 container init bc9cd26d3bd7eaab3ad455ab9e60a3aa8a3e8d40413aa8f853f004c8866c8cae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_black, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 11 00:07:59 np0005480824 podman[307487]: 2025-10-11 04:07:59.322364531 +0000 UTC m=+0.217450714 container start bc9cd26d3bd7eaab3ad455ab9e60a3aa8a3e8d40413aa8f853f004c8866c8cae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_black, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 00:07:59 np0005480824 podman[307487]: 2025-10-11 04:07:59.327502941 +0000 UTC m=+0.222589144 container attach bc9cd26d3bd7eaab3ad455ab9e60a3aa8a3e8d40413aa8f853f004c8866c8cae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_black, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 11 00:08:00 np0005480824 nova_compute[260089]: 2025-10-11 04:08:00.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:08:00 np0005480824 hopeful_black[307504]: {
Oct 11 00:08:00 np0005480824 hopeful_black[307504]:    "1d0d82ce-20ea-470d-959e-f67202028a60": {
Oct 11 00:08:00 np0005480824 hopeful_black[307504]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 11 00:08:00 np0005480824 hopeful_black[307504]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 00:08:00 np0005480824 hopeful_black[307504]:        "osd_id": 0,
Oct 11 00:08:00 np0005480824 hopeful_black[307504]:        "osd_uuid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 11 00:08:00 np0005480824 hopeful_black[307504]:        "type": "bluestore"
Oct 11 00:08:00 np0005480824 hopeful_black[307504]:    },
Oct 11 00:08:00 np0005480824 hopeful_black[307504]:    "6875119e-c210-4ad1-aca9-6a8084a5ecc8": {
Oct 11 00:08:00 np0005480824 hopeful_black[307504]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 11 00:08:00 np0005480824 hopeful_black[307504]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 00:08:00 np0005480824 hopeful_black[307504]:        "osd_id": 1,
Oct 11 00:08:00 np0005480824 hopeful_black[307504]:        "osd_uuid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 11 00:08:00 np0005480824 hopeful_black[307504]:        "type": "bluestore"
Oct 11 00:08:00 np0005480824 hopeful_black[307504]:    },
Oct 11 00:08:00 np0005480824 hopeful_black[307504]:    "e86945e8-6909-4584-9098-cee0dfe9add4": {
Oct 11 00:08:00 np0005480824 hopeful_black[307504]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 11 00:08:00 np0005480824 hopeful_black[307504]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 00:08:00 np0005480824 hopeful_black[307504]:        "osd_id": 2,
Oct 11 00:08:00 np0005480824 hopeful_black[307504]:        "osd_uuid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 11 00:08:00 np0005480824 hopeful_black[307504]:        "type": "bluestore"
Oct 11 00:08:00 np0005480824 hopeful_black[307504]:    }
Oct 11 00:08:00 np0005480824 hopeful_black[307504]: }
Oct 11 00:08:00 np0005480824 systemd[1]: libpod-bc9cd26d3bd7eaab3ad455ab9e60a3aa8a3e8d40413aa8f853f004c8866c8cae.scope: Deactivated successfully.
Oct 11 00:08:00 np0005480824 conmon[307504]: conmon bc9cd26d3bd7eaab3ad4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bc9cd26d3bd7eaab3ad455ab9e60a3aa8a3e8d40413aa8f853f004c8866c8cae.scope/container/memory.events
Oct 11 00:08:00 np0005480824 podman[307487]: 2025-10-11 04:08:00.264879913 +0000 UTC m=+1.159966096 container died bc9cd26d3bd7eaab3ad455ab9e60a3aa8a3e8d40413aa8f853f004c8866c8cae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_black, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 11 00:08:00 np0005480824 systemd[1]: var-lib-containers-storage-overlay-4091ed4f0b337f2bebad4e15551699c870df174bb81997c6e0128463f4ffbd21-merged.mount: Deactivated successfully.
Oct 11 00:08:00 np0005480824 podman[307487]: 2025-10-11 04:08:00.316653644 +0000 UTC m=+1.211739827 container remove bc9cd26d3bd7eaab3ad455ab9e60a3aa8a3e8d40413aa8f853f004c8866c8cae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_black, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 11 00:08:00 np0005480824 systemd[1]: libpod-conmon-bc9cd26d3bd7eaab3ad455ab9e60a3aa8a3e8d40413aa8f853f004c8866c8cae.scope: Deactivated successfully.
Oct 11 00:08:00 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 00:08:00 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 11 00:08:00 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 00:08:00 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 11 00:08:00 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 08bbda49-e9e3-4c44-bbd0-8d5110206cf1 does not exist
Oct 11 00:08:00 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 8dc0e6fe-f77e-4956-a461-2df0b4aca48e does not exist
Oct 11 00:08:00 np0005480824 nova_compute[260089]: 2025-10-11 04:08:00.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:08:00 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 11 00:08:00 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 11 00:08:00 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e496 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:08:01 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1943: 321 pgs: 321 active+clean; 283 MiB data, 665 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.0 MiB/s wr, 42 op/s
Oct 11 00:08:01 np0005480824 ovn_controller[152667]: 2025-10-11T04:08:01Z|00076|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.13 does not match offer 10.100.0.14
Oct 11 00:08:01 np0005480824 ovn_controller[152667]: 2025-10-11T04:08:01Z|00077|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:55:04:a0 10.100.0.14
Oct 11 00:08:02 np0005480824 ovn_controller[152667]: 2025-10-11T04:08:02Z|00078|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:55:04:a0 10.100.0.14
Oct 11 00:08:02 np0005480824 ovn_controller[152667]: 2025-10-11T04:08:02Z|00079|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:55:04:a0 10.100.0.14
Oct 11 00:08:03 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1944: 321 pgs: 321 active+clean; 287 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 50 op/s
Oct 11 00:08:04 np0005480824 podman[307600]: 2025-10-11 04:08:04.987395892 +0000 UTC m=+0.050180175 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 11 00:08:05 np0005480824 nova_compute[260089]: 2025-10-11 04:08:05.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:08:05 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1945: 321 pgs: 321 active+clean; 287 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 50 op/s
Oct 11 00:08:05 np0005480824 nova_compute[260089]: 2025-10-11 04:08:05.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:08:05 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e496 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:08:07 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1946: 321 pgs: 321 active+clean; 287 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 50 op/s
Oct 11 00:08:09 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1947: 321 pgs: 321 active+clean; 287 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 936 KiB/s rd, 355 KiB/s wr, 18 op/s
Oct 11 00:08:10 np0005480824 nova_compute[260089]: 2025-10-11 04:08:10.123 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:08:10 np0005480824 nova_compute[260089]: 2025-10-11 04:08:10.443 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:08:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:08:10.512 162245 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:08:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:08:10.513 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:08:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:08:10.513 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:08:10 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e496 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:08:11 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1948: 321 pgs: 321 active+clean; 287 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 415 KiB/s rd, 355 KiB/s wr, 8 op/s
Oct 11 00:08:13 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1949: 321 pgs: 321 active+clean; 295 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 845 KiB/s rd, 793 KiB/s wr, 12 op/s
Oct 11 00:08:15 np0005480824 nova_compute[260089]: 2025-10-11 04:08:15.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:08:15 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1950: 321 pgs: 321 active+clean; 295 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 430 KiB/s rd, 438 KiB/s wr, 4 op/s
Oct 11 00:08:15 np0005480824 nova_compute[260089]: 2025-10-11 04:08:15.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:08:15 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e496 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:08:17 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1951: 321 pgs: 321 active+clean; 295 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 430 KiB/s rd, 440 KiB/s wr, 4 op/s
Oct 11 00:08:18 np0005480824 ovn_controller[152667]: 2025-10-11T04:08:18Z|00280|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Oct 11 00:08:19 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1952: 321 pgs: 321 active+clean; 295 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 430 KiB/s rd, 440 KiB/s wr, 4 op/s
Oct 11 00:08:20 np0005480824 nova_compute[260089]: 2025-10-11 04:08:20.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:08:20 np0005480824 nova_compute[260089]: 2025-10-11 04:08:20.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:08:20 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e496 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:08:20 np0005480824 podman[307619]: 2025-10-11 04:08:20.992315512 +0000 UTC m=+0.053919151 container health_status 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd)
Oct 11 00:08:21 np0005480824 podman[307620]: 2025-10-11 04:08:21.006402489 +0000 UTC m=+0.062635203 container health_status f2b19cad22d0fbdd185b264c3c8b6443d09a02319dc9fb0585dc69a0f24a758d (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid)
Oct 11 00:08:21 np0005480824 nova_compute[260089]: 2025-10-11 04:08:21.061 2 DEBUG oslo_concurrency.lockutils [None req-e83648db-6fcb-49bb-b0b7-e5e10d459eb3 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Acquiring lock "3f57226b-3e1a-4f83-9b96-9b5a7ff37910" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:08:21 np0005480824 nova_compute[260089]: 2025-10-11 04:08:21.062 2 DEBUG oslo_concurrency.lockutils [None req-e83648db-6fcb-49bb-b0b7-e5e10d459eb3 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Lock "3f57226b-3e1a-4f83-9b96-9b5a7ff37910" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:08:21 np0005480824 nova_compute[260089]: 2025-10-11 04:08:21.063 2 DEBUG oslo_concurrency.lockutils [None req-e83648db-6fcb-49bb-b0b7-e5e10d459eb3 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Acquiring lock "3f57226b-3e1a-4f83-9b96-9b5a7ff37910-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:08:21 np0005480824 nova_compute[260089]: 2025-10-11 04:08:21.063 2 DEBUG oslo_concurrency.lockutils [None req-e83648db-6fcb-49bb-b0b7-e5e10d459eb3 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Lock "3f57226b-3e1a-4f83-9b96-9b5a7ff37910-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:08:21 np0005480824 nova_compute[260089]: 2025-10-11 04:08:21.063 2 DEBUG oslo_concurrency.lockutils [None req-e83648db-6fcb-49bb-b0b7-e5e10d459eb3 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Lock "3f57226b-3e1a-4f83-9b96-9b5a7ff37910-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:08:21 np0005480824 nova_compute[260089]: 2025-10-11 04:08:21.065 2 INFO nova.compute.manager [None req-e83648db-6fcb-49bb-b0b7-e5e10d459eb3 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Terminating instance#033[00m
Oct 11 00:08:21 np0005480824 nova_compute[260089]: 2025-10-11 04:08:21.066 2 DEBUG nova.compute.manager [None req-e83648db-6fcb-49bb-b0b7-e5e10d459eb3 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 11 00:08:21 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1953: 321 pgs: 321 active+clean; 295 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 430 KiB/s rd, 440 KiB/s wr, 4 op/s
Oct 11 00:08:21 np0005480824 kernel: tap1cecff65-5d (unregistering): left promiscuous mode
Oct 11 00:08:21 np0005480824 NetworkManager[44969]: <info>  [1760155701.1451] device (tap1cecff65-5d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 00:08:21 np0005480824 ovn_controller[152667]: 2025-10-11T04:08:21Z|00281|binding|INFO|Releasing lport 1cecff65-5dca-4e92-9f18-a4729f87c434 from this chassis (sb_readonly=0)
Oct 11 00:08:21 np0005480824 ovn_controller[152667]: 2025-10-11T04:08:21Z|00282|binding|INFO|Setting lport 1cecff65-5dca-4e92-9f18-a4729f87c434 down in Southbound
Oct 11 00:08:21 np0005480824 ovn_controller[152667]: 2025-10-11T04:08:21Z|00283|binding|INFO|Removing iface tap1cecff65-5d ovn-installed in OVS
Oct 11 00:08:21 np0005480824 nova_compute[260089]: 2025-10-11 04:08:21.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:08:21 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:08:21.161 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:55:04:a0 10.100.0.14'], port_security=['fa:16:3e:55:04:a0 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '3f57226b-3e1a-4f83-9b96-9b5a7ff37910', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-abadcf46-9a41-4911-85e0-fbcde2d48b79', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f367c6c5e8f479399a2004c82cfaff0', 'neutron:revision_number': '4', 'neutron:security_group_ids': '78484826-fa6d-47e8-af8c-1b198aee6eb8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.211'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b37e59a3-7c4f-47c2-acd9-d3f9dd8c5f52, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>], logical_port=1cecff65-5dca-4e92-9f18-a4729f87c434) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f22d3ab6a00>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 00:08:21 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:08:21.162 162245 INFO neutron.agent.ovn.metadata.agent [-] Port 1cecff65-5dca-4e92-9f18-a4729f87c434 in datapath abadcf46-9a41-4911-85e0-fbcde2d48b79 unbound from our chassis#033[00m
Oct 11 00:08:21 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:08:21.163 162245 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network abadcf46-9a41-4911-85e0-fbcde2d48b79, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 11 00:08:21 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:08:21.164 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[651bd0a1-6687-4bba-a62d-c386c4f0bf40]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:08:21 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:08:21.164 162245 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-abadcf46-9a41-4911-85e0-fbcde2d48b79 namespace which is not needed anymore#033[00m
Oct 11 00:08:21 np0005480824 nova_compute[260089]: 2025-10-11 04:08:21.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:08:21 np0005480824 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d0000001e.scope: Deactivated successfully.
Oct 11 00:08:21 np0005480824 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d0000001e.scope: Consumed 16.636s CPU time.
Oct 11 00:08:21 np0005480824 systemd-machined[215071]: Machine qemu-30-instance-0000001e terminated.
Oct 11 00:08:21 np0005480824 neutron-haproxy-ovnmeta-abadcf46-9a41-4911-85e0-fbcde2d48b79[306582]: [NOTICE]   (306586) : haproxy version is 2.8.14-c23fe91
Oct 11 00:08:21 np0005480824 neutron-haproxy-ovnmeta-abadcf46-9a41-4911-85e0-fbcde2d48b79[306582]: [NOTICE]   (306586) : path to executable is /usr/sbin/haproxy
Oct 11 00:08:21 np0005480824 neutron-haproxy-ovnmeta-abadcf46-9a41-4911-85e0-fbcde2d48b79[306582]: [WARNING]  (306586) : Exiting Master process...
Oct 11 00:08:21 np0005480824 neutron-haproxy-ovnmeta-abadcf46-9a41-4911-85e0-fbcde2d48b79[306582]: [ALERT]    (306586) : Current worker (306588) exited with code 143 (Terminated)
Oct 11 00:08:21 np0005480824 neutron-haproxy-ovnmeta-abadcf46-9a41-4911-85e0-fbcde2d48b79[306582]: [WARNING]  (306586) : All workers exited. Exiting... (0)
Oct 11 00:08:21 np0005480824 systemd[1]: libpod-d829d03ab28983488ced206da705bf8a9cc88efbc2dfc4a5cfce244aab6eb513.scope: Deactivated successfully.
Oct 11 00:08:21 np0005480824 conmon[306582]: conmon d829d03ab28983488ced <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d829d03ab28983488ced206da705bf8a9cc88efbc2dfc4a5cfce244aab6eb513.scope/container/memory.events
Oct 11 00:08:21 np0005480824 nova_compute[260089]: 2025-10-11 04:08:21.301 2 INFO nova.virt.libvirt.driver [-] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Instance destroyed successfully.#033[00m
Oct 11 00:08:21 np0005480824 nova_compute[260089]: 2025-10-11 04:08:21.302 2 DEBUG nova.objects.instance [None req-e83648db-6fcb-49bb-b0b7-e5e10d459eb3 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Lazy-loading 'resources' on Instance uuid 3f57226b-3e1a-4f83-9b96-9b5a7ff37910 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 11 00:08:21 np0005480824 podman[307684]: 2025-10-11 04:08:21.304101595 +0000 UTC m=+0.052002817 container died d829d03ab28983488ced206da705bf8a9cc88efbc2dfc4a5cfce244aab6eb513 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abadcf46-9a41-4911-85e0-fbcde2d48b79, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 11 00:08:21 np0005480824 nova_compute[260089]: 2025-10-11 04:08:21.319 2 DEBUG nova.virt.libvirt.vif [None req-e83648db-6fcb-49bb-b0b7-e5e10d459eb3 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T04:07:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-2054399998',display_name='tempest-TestEncryptedCinderVolumes-server-2054399998',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-2054399998',id=30,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNT5nRmfgUQpJQihppMhJ/PJtl2PXt4LF+4fCTR7CvYlNKAHH53rCj1YReitA5DOkjFToqvFLFWF74Q9GO2rD7zoT+ufORFGj1sd+RhwvHNqWv6rQH+IM1H5SH+IwmGWpQ==',key_name='tempest-TestEncryptedCinderVolumes-517148477',keypairs=<?>,launch_index=0,launched_at=2025-10-11T04:07:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6f367c6c5e8f479399a2004c82cfaff0',ramdisk_id='',reservation_id='r-ug5fzirc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestEncryptedCinderVolumes-781713731',owner_user_name='tempest-TestEncryptedCinderVolumes-781713731-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T04:07:45Z,user_data=None,user_id='f9202e7d8882475ba6a769d9c59c35fd',uuid=3f57226b-3e1a-4f83-9b96-9b5a7ff37910,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1cecff65-5dca-4e92-9f18-a4729f87c434", "address": "fa:16:3e:55:04:a0", "network": {"id": "abadcf46-9a41-4911-85e0-fbcde2d48b79", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-654501219-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f367c6c5e8f479399a2004c82cfaff0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cecff65-5d", "ovs_interfaceid": "1cecff65-5dca-4e92-9f18-a4729f87c434", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 11 00:08:21 np0005480824 nova_compute[260089]: 2025-10-11 04:08:21.319 2 DEBUG nova.network.os_vif_util [None req-e83648db-6fcb-49bb-b0b7-e5e10d459eb3 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Converting VIF {"id": "1cecff65-5dca-4e92-9f18-a4729f87c434", "address": "fa:16:3e:55:04:a0", "network": {"id": "abadcf46-9a41-4911-85e0-fbcde2d48b79", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-654501219-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f367c6c5e8f479399a2004c82cfaff0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cecff65-5d", "ovs_interfaceid": "1cecff65-5dca-4e92-9f18-a4729f87c434", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 11 00:08:21 np0005480824 nova_compute[260089]: 2025-10-11 04:08:21.320 2 DEBUG nova.network.os_vif_util [None req-e83648db-6fcb-49bb-b0b7-e5e10d459eb3 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:55:04:a0,bridge_name='br-int',has_traffic_filtering=True,id=1cecff65-5dca-4e92-9f18-a4729f87c434,network=Network(abadcf46-9a41-4911-85e0-fbcde2d48b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1cecff65-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 11 00:08:21 np0005480824 nova_compute[260089]: 2025-10-11 04:08:21.320 2 DEBUG os_vif [None req-e83648db-6fcb-49bb-b0b7-e5e10d459eb3 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:55:04:a0,bridge_name='br-int',has_traffic_filtering=True,id=1cecff65-5dca-4e92-9f18-a4729f87c434,network=Network(abadcf46-9a41-4911-85e0-fbcde2d48b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1cecff65-5d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 11 00:08:21 np0005480824 nova_compute[260089]: 2025-10-11 04:08:21.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:08:21 np0005480824 nova_compute[260089]: 2025-10-11 04:08:21.322 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1cecff65-5d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 00:08:21 np0005480824 nova_compute[260089]: 2025-10-11 04:08:21.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:08:21 np0005480824 nova_compute[260089]: 2025-10-11 04:08:21.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 11 00:08:21 np0005480824 nova_compute[260089]: 2025-10-11 04:08:21.328 2 INFO os_vif [None req-e83648db-6fcb-49bb-b0b7-e5e10d459eb3 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:55:04:a0,bridge_name='br-int',has_traffic_filtering=True,id=1cecff65-5dca-4e92-9f18-a4729f87c434,network=Network(abadcf46-9a41-4911-85e0-fbcde2d48b79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1cecff65-5d')#033[00m
Oct 11 00:08:21 np0005480824 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d829d03ab28983488ced206da705bf8a9cc88efbc2dfc4a5cfce244aab6eb513-userdata-shm.mount: Deactivated successfully.
Oct 11 00:08:21 np0005480824 systemd[1]: var-lib-containers-storage-overlay-a5666eefa3b84cc149df7e147f6001c723e1e56260e5b49e7958fc95c78659db-merged.mount: Deactivated successfully.
Oct 11 00:08:21 np0005480824 podman[307684]: 2025-10-11 04:08:21.346021427 +0000 UTC m=+0.093922649 container cleanup d829d03ab28983488ced206da705bf8a9cc88efbc2dfc4a5cfce244aab6eb513 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abadcf46-9a41-4911-85e0-fbcde2d48b79, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 11 00:08:21 np0005480824 systemd[1]: libpod-conmon-d829d03ab28983488ced206da705bf8a9cc88efbc2dfc4a5cfce244aab6eb513.scope: Deactivated successfully.
Oct 11 00:08:21 np0005480824 nova_compute[260089]: 2025-10-11 04:08:21.405 2 DEBUG nova.compute.manager [req-5751e50c-9e0a-4936-92d3-99f0310e1752 req-2556701e-a159-4a10-be16-29ad8d09ae4a 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Received event network-vif-unplugged-1cecff65-5dca-4e92-9f18-a4729f87c434 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 00:08:21 np0005480824 nova_compute[260089]: 2025-10-11 04:08:21.405 2 DEBUG oslo_concurrency.lockutils [req-5751e50c-9e0a-4936-92d3-99f0310e1752 req-2556701e-a159-4a10-be16-29ad8d09ae4a 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "3f57226b-3e1a-4f83-9b96-9b5a7ff37910-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:08:21 np0005480824 nova_compute[260089]: 2025-10-11 04:08:21.406 2 DEBUG oslo_concurrency.lockutils [req-5751e50c-9e0a-4936-92d3-99f0310e1752 req-2556701e-a159-4a10-be16-29ad8d09ae4a 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "3f57226b-3e1a-4f83-9b96-9b5a7ff37910-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:08:21 np0005480824 nova_compute[260089]: 2025-10-11 04:08:21.406 2 DEBUG oslo_concurrency.lockutils [req-5751e50c-9e0a-4936-92d3-99f0310e1752 req-2556701e-a159-4a10-be16-29ad8d09ae4a 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "3f57226b-3e1a-4f83-9b96-9b5a7ff37910-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:08:21 np0005480824 nova_compute[260089]: 2025-10-11 04:08:21.407 2 DEBUG nova.compute.manager [req-5751e50c-9e0a-4936-92d3-99f0310e1752 req-2556701e-a159-4a10-be16-29ad8d09ae4a 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] No waiting events found dispatching network-vif-unplugged-1cecff65-5dca-4e92-9f18-a4729f87c434 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 00:08:21 np0005480824 nova_compute[260089]: 2025-10-11 04:08:21.407 2 DEBUG nova.compute.manager [req-5751e50c-9e0a-4936-92d3-99f0310e1752 req-2556701e-a159-4a10-be16-29ad8d09ae4a 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Received event network-vif-unplugged-1cecff65-5dca-4e92-9f18-a4729f87c434 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 11 00:08:21 np0005480824 podman[307737]: 2025-10-11 04:08:21.415392116 +0000 UTC m=+0.046958410 container remove d829d03ab28983488ced206da705bf8a9cc88efbc2dfc4a5cfce244aab6eb513 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abadcf46-9a41-4911-85e0-fbcde2d48b79, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 11 00:08:21 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:08:21.421 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[2b45d9f8-e9be-4891-9a6e-4fd6781ef931]: (4, ('Sat Oct 11 04:08:21 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-abadcf46-9a41-4911-85e0-fbcde2d48b79 (d829d03ab28983488ced206da705bf8a9cc88efbc2dfc4a5cfce244aab6eb513)\nd829d03ab28983488ced206da705bf8a9cc88efbc2dfc4a5cfce244aab6eb513\nSat Oct 11 04:08:21 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-abadcf46-9a41-4911-85e0-fbcde2d48b79 (d829d03ab28983488ced206da705bf8a9cc88efbc2dfc4a5cfce244aab6eb513)\nd829d03ab28983488ced206da705bf8a9cc88efbc2dfc4a5cfce244aab6eb513\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:08:21 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:08:21.423 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[8f629b07-cf31-49f1-b172-a69ceec39f02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:08:21 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:08:21.424 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapabadcf46-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 00:08:21 np0005480824 nova_compute[260089]: 2025-10-11 04:08:21.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:08:21 np0005480824 kernel: tapabadcf46-90: left promiscuous mode
Oct 11 00:08:21 np0005480824 nova_compute[260089]: 2025-10-11 04:08:21.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:08:21 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:08:21.442 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[23da1ebb-017e-4a16-be80-f74419cb86c8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:08:21 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:08:21.469 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[e61a25f7-59ea-4d3f-bee0-04666f3f3e2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:08:21 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:08:21.470 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[a3b2a7d3-7ccd-4084-b782-60eca0d893f3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:08:21 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:08:21.485 267859 DEBUG oslo.privsep.daemon [-] privsep: reply[ffcbcc20-5d2c-46a3-a156-bebbaba9ecf6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 512916, 'reachable_time': 42814, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307755, 'error': None, 'target': 'ovnmeta-abadcf46-9a41-4911-85e0-fbcde2d48b79', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:08:21 np0005480824 systemd[1]: run-netns-ovnmeta\x2dabadcf46\x2d9a41\x2d4911\x2d85e0\x2dfbcde2d48b79.mount: Deactivated successfully.
Oct 11 00:08:21 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:08:21.489 162666 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-abadcf46-9a41-4911-85e0-fbcde2d48b79 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 11 00:08:21 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:08:21.489 162666 DEBUG oslo.privsep.daemon [-] privsep: reply[4de7e29b-51f1-4248-94f0-1dbbf3209e18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 11 00:08:21 np0005480824 nova_compute[260089]: 2025-10-11 04:08:21.554 2 INFO nova.virt.libvirt.driver [None req-e83648db-6fcb-49bb-b0b7-e5e10d459eb3 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Deleting instance files /var/lib/nova/instances/3f57226b-3e1a-4f83-9b96-9b5a7ff37910_del#033[00m
Oct 11 00:08:21 np0005480824 nova_compute[260089]: 2025-10-11 04:08:21.555 2 INFO nova.virt.libvirt.driver [None req-e83648db-6fcb-49bb-b0b7-e5e10d459eb3 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Deletion of /var/lib/nova/instances/3f57226b-3e1a-4f83-9b96-9b5a7ff37910_del complete#033[00m
Oct 11 00:08:21 np0005480824 nova_compute[260089]: 2025-10-11 04:08:21.634 2 INFO nova.compute.manager [None req-e83648db-6fcb-49bb-b0b7-e5e10d459eb3 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Took 0.57 seconds to destroy the instance on the hypervisor.#033[00m
Oct 11 00:08:21 np0005480824 nova_compute[260089]: 2025-10-11 04:08:21.634 2 DEBUG oslo.service.loopingcall [None req-e83648db-6fcb-49bb-b0b7-e5e10d459eb3 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 11 00:08:21 np0005480824 nova_compute[260089]: 2025-10-11 04:08:21.635 2 DEBUG nova.compute.manager [-] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 11 00:08:21 np0005480824 nova_compute[260089]: 2025-10-11 04:08:21.635 2 DEBUG nova.network.neutron [-] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 11 00:08:23 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1954: 321 pgs: 321 active+clean; 295 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 649 KiB/s rd, 441 KiB/s wr, 23 op/s
Oct 11 00:08:23 np0005480824 nova_compute[260089]: 2025-10-11 04:08:23.218 2 DEBUG nova.network.neutron [-] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 11 00:08:23 np0005480824 nova_compute[260089]: 2025-10-11 04:08:23.236 2 INFO nova.compute.manager [-] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Took 1.60 seconds to deallocate network for instance.#033[00m
Oct 11 00:08:23 np0005480824 nova_compute[260089]: 2025-10-11 04:08:23.413 2 INFO nova.compute.manager [None req-e83648db-6fcb-49bb-b0b7-e5e10d459eb3 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Took 0.18 seconds to detach 1 volumes for instance.#033[00m
Oct 11 00:08:23 np0005480824 nova_compute[260089]: 2025-10-11 04:08:23.471 2 DEBUG oslo_concurrency.lockutils [None req-e83648db-6fcb-49bb-b0b7-e5e10d459eb3 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:08:23 np0005480824 nova_compute[260089]: 2025-10-11 04:08:23.471 2 DEBUG oslo_concurrency.lockutils [None req-e83648db-6fcb-49bb-b0b7-e5e10d459eb3 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:08:23 np0005480824 nova_compute[260089]: 2025-10-11 04:08:23.499 2 DEBUG nova.compute.manager [req-faafb404-5dfc-498e-8298-938ccd72169a req-656a93fb-3e64-4543-adde-58af9a71b684 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Received event network-vif-plugged-1cecff65-5dca-4e92-9f18-a4729f87c434 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 00:08:23 np0005480824 nova_compute[260089]: 2025-10-11 04:08:23.499 2 DEBUG oslo_concurrency.lockutils [req-faafb404-5dfc-498e-8298-938ccd72169a req-656a93fb-3e64-4543-adde-58af9a71b684 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Acquiring lock "3f57226b-3e1a-4f83-9b96-9b5a7ff37910-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:08:23 np0005480824 nova_compute[260089]: 2025-10-11 04:08:23.499 2 DEBUG oslo_concurrency.lockutils [req-faafb404-5dfc-498e-8298-938ccd72169a req-656a93fb-3e64-4543-adde-58af9a71b684 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "3f57226b-3e1a-4f83-9b96-9b5a7ff37910-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:08:23 np0005480824 nova_compute[260089]: 2025-10-11 04:08:23.499 2 DEBUG oslo_concurrency.lockutils [req-faafb404-5dfc-498e-8298-938ccd72169a req-656a93fb-3e64-4543-adde-58af9a71b684 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] Lock "3f57226b-3e1a-4f83-9b96-9b5a7ff37910-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:08:23 np0005480824 nova_compute[260089]: 2025-10-11 04:08:23.500 2 DEBUG nova.compute.manager [req-faafb404-5dfc-498e-8298-938ccd72169a req-656a93fb-3e64-4543-adde-58af9a71b684 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] No waiting events found dispatching network-vif-plugged-1cecff65-5dca-4e92-9f18-a4729f87c434 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 11 00:08:23 np0005480824 nova_compute[260089]: 2025-10-11 04:08:23.500 2 WARNING nova.compute.manager [req-faafb404-5dfc-498e-8298-938ccd72169a req-656a93fb-3e64-4543-adde-58af9a71b684 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Received unexpected event network-vif-plugged-1cecff65-5dca-4e92-9f18-a4729f87c434 for instance with vm_state deleted and task_state None.#033[00m
Oct 11 00:08:23 np0005480824 nova_compute[260089]: 2025-10-11 04:08:23.500 2 DEBUG nova.compute.manager [req-faafb404-5dfc-498e-8298-938ccd72169a req-656a93fb-3e64-4543-adde-58af9a71b684 286269366cc14e178a1545c37cb39497 138b87de781b4b829b248ab2d1714fea - - default default] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Received event network-vif-deleted-1cecff65-5dca-4e92-9f18-a4729f87c434 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 11 00:08:23 np0005480824 nova_compute[260089]: 2025-10-11 04:08:23.532 2 DEBUG oslo_concurrency.processutils [None req-e83648db-6fcb-49bb-b0b7-e5e10d459eb3 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:08:23 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 00:08:23 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/391513453' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 00:08:23 np0005480824 nova_compute[260089]: 2025-10-11 04:08:23.955 2 DEBUG oslo_concurrency.processutils [None req-e83648db-6fcb-49bb-b0b7-e5e10d459eb3 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:08:23 np0005480824 nova_compute[260089]: 2025-10-11 04:08:23.961 2 DEBUG nova.compute.provider_tree [None req-e83648db-6fcb-49bb-b0b7-e5e10d459eb3 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 00:08:23 np0005480824 nova_compute[260089]: 2025-10-11 04:08:23.976 2 DEBUG nova.scheduler.client.report [None req-e83648db-6fcb-49bb-b0b7-e5e10d459eb3 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 00:08:23 np0005480824 nova_compute[260089]: 2025-10-11 04:08:23.993 2 DEBUG oslo_concurrency.lockutils [None req-e83648db-6fcb-49bb-b0b7-e5e10d459eb3 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.522s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:08:24 np0005480824 nova_compute[260089]: 2025-10-11 04:08:24.014 2 INFO nova.scheduler.client.report [None req-e83648db-6fcb-49bb-b0b7-e5e10d459eb3 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Deleted allocations for instance 3f57226b-3e1a-4f83-9b96-9b5a7ff37910#033[00m
Oct 11 00:08:24 np0005480824 nova_compute[260089]: 2025-10-11 04:08:24.095 2 DEBUG oslo_concurrency.lockutils [None req-e83648db-6fcb-49bb-b0b7-e5e10d459eb3 f9202e7d8882475ba6a769d9c59c35fd 6f367c6c5e8f479399a2004c82cfaff0 - - default default] Lock "3f57226b-3e1a-4f83-9b96-9b5a7ff37910" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.033s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:08:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 00:08:24 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2246104556' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 00:08:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 00:08:24 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2246104556' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 00:08:25 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:08:25.069 162245 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:30:f4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'fe:89:7c:57:3f:71'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 11 00:08:25 np0005480824 nova_compute[260089]: 2025-10-11 04:08:25.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:08:25 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:08:25.070 162245 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 11 00:08:25 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1955: 321 pgs: 321 active+clean; 295 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 220 KiB/s rd, 2.6 KiB/s wr, 18 op/s
Oct 11 00:08:25 np0005480824 nova_compute[260089]: 2025-10-11 04:08:25.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:08:25 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e496 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:08:26 np0005480824 nova_compute[260089]: 2025-10-11 04:08:26.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:08:27 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1956: 321 pgs: 321 active+clean; 295 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 230 KiB/s rd, 2.7 KiB/s wr, 30 op/s
Oct 11 00:08:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 00:08:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 00:08:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 00:08:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 00:08:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 00:08:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 00:08:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Optimize plan auto_2025-10-11_04:08:27
Oct 11 00:08:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 00:08:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] do_upmap
Oct 11 00:08:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.log', 'vms', 'default.rgw.control', '.mgr', 'images', 'backups', 'default.rgw.meta', 'volumes', '.rgw.root']
Oct 11 00:08:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] prepared 0/10 changes
Oct 11 00:08:28 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 00:08:28 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/631816692' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 00:08:28 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 00:08:28 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/631816692' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 00:08:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 00:08:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 00:08:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 00:08:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 00:08:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 00:08:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 00:08:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 00:08:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 00:08:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 00:08:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 00:08:29 np0005480824 podman[307779]: 2025-10-11 04:08:29.023063546 +0000 UTC m=+0.076953926 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 11 00:08:29 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1957: 321 pgs: 321 active+clean; 291 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 230 KiB/s rd, 682 B/s wr, 31 op/s
Oct 11 00:08:29 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 00:08:29 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1210139570' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 00:08:29 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 00:08:29 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1210139570' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 00:08:30 np0005480824 nova_compute[260089]: 2025-10-11 04:08:30.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:08:30 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e496 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:08:31 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1958: 321 pgs: 321 active+clean; 291 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 230 KiB/s rd, 682 B/s wr, 31 op/s
Oct 11 00:08:31 np0005480824 nova_compute[260089]: 2025-10-11 04:08:31.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:08:32 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:08:32.072 162245 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=14b06507-d00b-4e27-a47d-46a5c2644635, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 11 00:08:33 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1959: 321 pgs: 321 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 240 KiB/s rd, 1.5 KiB/s wr, 45 op/s
Oct 11 00:08:35 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1960: 321 pgs: 321 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 938 B/s wr, 26 op/s
Oct 11 00:08:35 np0005480824 nova_compute[260089]: 2025-10-11 04:08:35.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:08:35 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e496 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:08:36 np0005480824 podman[307806]: 2025-10-11 04:08:35.99965444 +0000 UTC m=+0.060164777 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Oct 11 00:08:36 np0005480824 nova_compute[260089]: 2025-10-11 04:08:36.301 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760155701.2992456, 3f57226b-3e1a-4f83-9b96-9b5a7ff37910 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 11 00:08:36 np0005480824 nova_compute[260089]: 2025-10-11 04:08:36.302 2 INFO nova.compute.manager [-] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] VM Stopped (Lifecycle Event)#033[00m
Oct 11 00:08:36 np0005480824 nova_compute[260089]: 2025-10-11 04:08:36.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:08:36 np0005480824 nova_compute[260089]: 2025-10-11 04:08:36.333 2 DEBUG nova.compute.manager [None req-543d20f1-6595-42c4-bb8e-009894c9031f - - - - - -] [instance: 3f57226b-3e1a-4f83-9b96-9b5a7ff37910] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 11 00:08:37 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1961: 321 pgs: 321 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 938 B/s wr, 26 op/s
Oct 11 00:08:37 np0005480824 nova_compute[260089]: 2025-10-11 04:08:37.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:08:37 np0005480824 nova_compute[260089]: 2025-10-11 04:08:37.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:08:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 00:08:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:08:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 00:08:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:08:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 11 00:08:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:08:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002894458247867422 of space, bias 1.0, pg target 0.8683374743602266 quantized to 32 (current 32)
Oct 11 00:08:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:08:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 11 00:08:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:08:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Oct 11 00:08:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:08:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 00:08:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:08:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 00:08:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:08:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 00:08:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:08:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 00:08:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:08:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 00:08:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:08:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 00:08:38 np0005480824 ceph-mon[74326]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 00:08:38 np0005480824 ceph-mon[74326]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.0 total, 600.0 interval#012Cumulative writes: 8657 writes, 39K keys, 8657 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s#012Cumulative WAL: 8657 writes, 8657 syncs, 1.00 writes per sync, written: 0.05 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1692 writes, 7619 keys, 1692 commit groups, 1.0 writes per commit group, ingest: 10.27 MB, 0.02 MB/s#012Interval WAL: 1692 writes, 1692 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    124.3      0.37              0.16        22    0.017       0      0       0.0       0.0#012  L6      1/0   11.29 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   3.8    157.1    131.1      1.35              0.68        21    0.064    115K    12K       0.0       0.0#012 Sum      1/0   11.29 MB   0.0      0.2     0.0      0.2       0.2      0.1       0.0   4.8    123.1    129.6      1.73              0.84        43    0.040    115K    12K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.5    134.6    137.5      0.45              0.19        10    0.045     36K   2635       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0    157.1    131.1      1.35              0.68        21    0.064    115K    12K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    125.6      0.37              0.16        21    0.018       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.2      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 3000.0 total, 600.0 interval#012Flush(GB): cumulative 0.045, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.22 GB write, 0.07 MB/s write, 0.21 GB read, 0.07 MB/s read, 1.7 seconds#012Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.5 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5617dbc851f0#2 capacity: 304.00 MB usage: 23.96 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.000244 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1665,22.99 MB,7.56189%) FilterBlock(44,337.67 KB,0.108473%) IndexBlock(44,653.95 KB,0.210074%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct 11 00:08:39 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1962: 321 pgs: 321 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 852 B/s wr, 14 op/s
Oct 11 00:08:40 np0005480824 nova_compute[260089]: 2025-10-11 04:08:40.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:08:40 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e496 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:08:41 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1963: 321 pgs: 321 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 852 B/s wr, 14 op/s
Oct 11 00:08:41 np0005480824 nova_compute[260089]: 2025-10-11 04:08:41.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:08:43 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1964: 321 pgs: 321 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 852 B/s wr, 14 op/s
Oct 11 00:08:43 np0005480824 nova_compute[260089]: 2025-10-11 04:08:43.296 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:08:43 np0005480824 nova_compute[260089]: 2025-10-11 04:08:43.297 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:08:44 np0005480824 nova_compute[260089]: 2025-10-11 04:08:44.296 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:08:45 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1965: 321 pgs: 321 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail
Oct 11 00:08:45 np0005480824 nova_compute[260089]: 2025-10-11 04:08:45.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:08:45 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e496 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:08:46 np0005480824 nova_compute[260089]: 2025-10-11 04:08:46.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:08:47 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1966: 321 pgs: 321 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail
Oct 11 00:08:47 np0005480824 nova_compute[260089]: 2025-10-11 04:08:47.296 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:08:47 np0005480824 nova_compute[260089]: 2025-10-11 04:08:47.329 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:08:47 np0005480824 nova_compute[260089]: 2025-10-11 04:08:47.329 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:08:47 np0005480824 nova_compute[260089]: 2025-10-11 04:08:47.329 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:08:47 np0005480824 nova_compute[260089]: 2025-10-11 04:08:47.329 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 00:08:47 np0005480824 nova_compute[260089]: 2025-10-11 04:08:47.330 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:08:47 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 00:08:47 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1313439386' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 00:08:47 np0005480824 nova_compute[260089]: 2025-10-11 04:08:47.749 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:08:47 np0005480824 nova_compute[260089]: 2025-10-11 04:08:47.931 2 WARNING nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 00:08:47 np0005480824 nova_compute[260089]: 2025-10-11 04:08:47.933 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4299MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 00:08:47 np0005480824 nova_compute[260089]: 2025-10-11 04:08:47.933 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:08:47 np0005480824 nova_compute[260089]: 2025-10-11 04:08:47.933 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:08:48 np0005480824 nova_compute[260089]: 2025-10-11 04:08:48.003 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 00:08:48 np0005480824 nova_compute[260089]: 2025-10-11 04:08:48.004 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 00:08:48 np0005480824 nova_compute[260089]: 2025-10-11 04:08:48.026 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:08:48 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 00:08:48 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3536300882' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 00:08:48 np0005480824 nova_compute[260089]: 2025-10-11 04:08:48.431 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.405s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:08:48 np0005480824 nova_compute[260089]: 2025-10-11 04:08:48.436 2 DEBUG nova.compute.provider_tree [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 00:08:48 np0005480824 nova_compute[260089]: 2025-10-11 04:08:48.451 2 DEBUG nova.scheduler.client.report [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 00:08:48 np0005480824 nova_compute[260089]: 2025-10-11 04:08:48.480 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 00:08:48 np0005480824 nova_compute[260089]: 2025-10-11 04:08:48.480 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.547s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:08:49 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1967: 321 pgs: 321 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail
Oct 11 00:08:49 np0005480824 nova_compute[260089]: 2025-10-11 04:08:49.481 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 00:08:49 np0005480824 nova_compute[260089]: 2025-10-11 04:08:49.481 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 00:08:49 np0005480824 nova_compute[260089]: 2025-10-11 04:08:49.482 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 00:08:49 np0005480824 nova_compute[260089]: 2025-10-11 04:08:49.504 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 11 00:08:49 np0005480824 nova_compute[260089]: 2025-10-11 04:08:49.504 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 00:08:50 np0005480824 nova_compute[260089]: 2025-10-11 04:08:50.296 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 00:08:50 np0005480824 nova_compute[260089]: 2025-10-11 04:08:50.297 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 00:08:50 np0005480824 nova_compute[260089]: 2025-10-11 04:08:50.297 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 00:08:50 np0005480824 nova_compute[260089]: 2025-10-11 04:08:50.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 00:08:50 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e496 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:08:51 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1968: 321 pgs: 321 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail
Oct 11 00:08:51 np0005480824 nova_compute[260089]: 2025-10-11 04:08:51.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 00:08:52 np0005480824 podman[307873]: 2025-10-11 04:08:52.003560365 +0000 UTC m=+0.058585640 container health_status f2b19cad22d0fbdd185b264c3c8b6443d09a02319dc9fb0585dc69a0f24a758d (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, container_name=iscsid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, tcib_managed=true)
Oct 11 00:08:52 np0005480824 podman[307872]: 2025-10-11 04:08:52.003653367 +0000 UTC m=+0.064117438 container health_status 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 11 00:08:53 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1969: 321 pgs: 321 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail
Oct 11 00:08:54 np0005480824 nova_compute[260089]: 2025-10-11 04:08:54.298 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 00:08:55 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1970: 321 pgs: 321 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail
Oct 11 00:08:55 np0005480824 nova_compute[260089]: 2025-10-11 04:08:55.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 00:08:55 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e496 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:08:56 np0005480824 nova_compute[260089]: 2025-10-11 04:08:56.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 00:08:57 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1971: 321 pgs: 321 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail
Oct 11 00:08:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 00:08:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 00:08:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 00:08:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 00:08:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 00:08:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 00:08:59 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1972: 321 pgs: 321 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail
Oct 11 00:09:00 np0005480824 podman[307909]: 2025-10-11 04:09:00.030395482 +0000 UTC m=+0.092481116 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, container_name=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 11 00:09:00 np0005480824 nova_compute[260089]: 2025-10-11 04:09:00.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 00:09:00 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e496 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:09:01 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1973: 321 pgs: 321 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail
Oct 11 00:09:01 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 00:09:01 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 00:09:01 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 00:09:01 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 00:09:01 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 00:09:01 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 11 00:09:01 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev d4c262b5-d8f9-4dab-a0a4-024f65026d49 does not exist
Oct 11 00:09:01 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 1c688dc1-9acd-4955-a7fd-88ddbb065463 does not exist
Oct 11 00:09:01 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 4335a3d8-cfbc-4ddf-8231-6cb2a421cb49 does not exist
Oct 11 00:09:01 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 00:09:01 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 00:09:01 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 00:09:01 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 00:09:01 np0005480824 nova_compute[260089]: 2025-10-11 04:09:01.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 00:09:01 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 00:09:01 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 00:09:01 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 00:09:01 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 11 00:09:01 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 00:09:02 np0005480824 podman[308206]: 2025-10-11 04:09:02.022183551 +0000 UTC m=+0.039614950 container create b1d74d7ce3c655483445735afbf47846da7f893deed1c086d2be0f198bca3b70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_turing, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 00:09:02 np0005480824 systemd[1]: Started libpod-conmon-b1d74d7ce3c655483445735afbf47846da7f893deed1c086d2be0f198bca3b70.scope.
Oct 11 00:09:02 np0005480824 podman[308206]: 2025-10-11 04:09:02.004127022 +0000 UTC m=+0.021558441 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 00:09:02 np0005480824 systemd[1]: Started libcrun container.
Oct 11 00:09:02 np0005480824 podman[308206]: 2025-10-11 04:09:02.120922872 +0000 UTC m=+0.138354291 container init b1d74d7ce3c655483445735afbf47846da7f893deed1c086d2be0f198bca3b70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_turing, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 11 00:09:02 np0005480824 podman[308206]: 2025-10-11 04:09:02.129429869 +0000 UTC m=+0.146861268 container start b1d74d7ce3c655483445735afbf47846da7f893deed1c086d2be0f198bca3b70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 00:09:02 np0005480824 podman[308206]: 2025-10-11 04:09:02.133958274 +0000 UTC m=+0.151389663 container attach b1d74d7ce3c655483445735afbf47846da7f893deed1c086d2be0f198bca3b70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 00:09:02 np0005480824 compassionate_turing[308223]: 167 167
Oct 11 00:09:02 np0005480824 systemd[1]: libpod-b1d74d7ce3c655483445735afbf47846da7f893deed1c086d2be0f198bca3b70.scope: Deactivated successfully.
Oct 11 00:09:02 np0005480824 podman[308206]: 2025-10-11 04:09:02.140164598 +0000 UTC m=+0.157596007 container died b1d74d7ce3c655483445735afbf47846da7f893deed1c086d2be0f198bca3b70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_turing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 00:09:02 np0005480824 systemd[1]: var-lib-containers-storage-overlay-dbfa0f0968cb587a7fceaf5b693f5bd00e80da959a71e63b2785f053a4a0f309-merged.mount: Deactivated successfully.
Oct 11 00:09:02 np0005480824 podman[308206]: 2025-10-11 04:09:02.182574742 +0000 UTC m=+0.200006161 container remove b1d74d7ce3c655483445735afbf47846da7f893deed1c086d2be0f198bca3b70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 00:09:02 np0005480824 systemd[1]: libpod-conmon-b1d74d7ce3c655483445735afbf47846da7f893deed1c086d2be0f198bca3b70.scope: Deactivated successfully.
Oct 11 00:09:02 np0005480824 podman[308248]: 2025-10-11 04:09:02.346567785 +0000 UTC m=+0.039189090 container create 49d7cc9339a8088c4ea3746e4c7da6a3297859fa75c74e9726ae1246e4ff360c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 11 00:09:02 np0005480824 systemd[1]: Started libpod-conmon-49d7cc9339a8088c4ea3746e4c7da6a3297859fa75c74e9726ae1246e4ff360c.scope.
Oct 11 00:09:02 np0005480824 systemd[1]: Started libcrun container.
Oct 11 00:09:02 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a014f745c00fa1f923014256ac65e051622e06fd4b2f484e3cec6cb617cfd00c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 00:09:02 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a014f745c00fa1f923014256ac65e051622e06fd4b2f484e3cec6cb617cfd00c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 00:09:02 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a014f745c00fa1f923014256ac65e051622e06fd4b2f484e3cec6cb617cfd00c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 00:09:02 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a014f745c00fa1f923014256ac65e051622e06fd4b2f484e3cec6cb617cfd00c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 00:09:02 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a014f745c00fa1f923014256ac65e051622e06fd4b2f484e3cec6cb617cfd00c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 00:09:02 np0005480824 podman[308248]: 2025-10-11 04:09:02.331868025 +0000 UTC m=+0.024489350 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 00:09:02 np0005480824 podman[308248]: 2025-10-11 04:09:02.438465807 +0000 UTC m=+0.131087132 container init 49d7cc9339a8088c4ea3746e4c7da6a3297859fa75c74e9726ae1246e4ff360c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_haslett, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 11 00:09:02 np0005480824 podman[308248]: 2025-10-11 04:09:02.444989708 +0000 UTC m=+0.137611013 container start 49d7cc9339a8088c4ea3746e4c7da6a3297859fa75c74e9726ae1246e4ff360c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 11 00:09:02 np0005480824 podman[308248]: 2025-10-11 04:09:02.44894715 +0000 UTC m=+0.141568475 container attach 49d7cc9339a8088c4ea3746e4c7da6a3297859fa75c74e9726ae1246e4ff360c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_haslett, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 00:09:03 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1974: 321 pgs: 321 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail
Oct 11 00:09:03 np0005480824 angry_haslett[308264]: --> passed data devices: 0 physical, 3 LVM
Oct 11 00:09:03 np0005480824 angry_haslett[308264]: --> relative data size: 1.0
Oct 11 00:09:03 np0005480824 angry_haslett[308264]: --> All data devices are unavailable
Oct 11 00:09:03 np0005480824 systemd[1]: libpod-49d7cc9339a8088c4ea3746e4c7da6a3297859fa75c74e9726ae1246e4ff360c.scope: Deactivated successfully.
Oct 11 00:09:03 np0005480824 systemd[1]: libpod-49d7cc9339a8088c4ea3746e4c7da6a3297859fa75c74e9726ae1246e4ff360c.scope: Consumed 1.068s CPU time.
Oct 11 00:09:03 np0005480824 podman[308293]: 2025-10-11 04:09:03.602555688 +0000 UTC m=+0.027337454 container died 49d7cc9339a8088c4ea3746e4c7da6a3297859fa75c74e9726ae1246e4ff360c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 11 00:09:03 np0005480824 systemd[1]: var-lib-containers-storage-overlay-a014f745c00fa1f923014256ac65e051622e06fd4b2f484e3cec6cb617cfd00c-merged.mount: Deactivated successfully.
Oct 11 00:09:03 np0005480824 podman[308293]: 2025-10-11 04:09:03.657229617 +0000 UTC m=+0.082011313 container remove 49d7cc9339a8088c4ea3746e4c7da6a3297859fa75c74e9726ae1246e4ff360c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_haslett, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 11 00:09:03 np0005480824 systemd[1]: libpod-conmon-49d7cc9339a8088c4ea3746e4c7da6a3297859fa75c74e9726ae1246e4ff360c.scope: Deactivated successfully.
Oct 11 00:09:04 np0005480824 podman[308450]: 2025-10-11 04:09:04.208876372 +0000 UTC m=+0.043580441 container create d84d92e24ee7240f2ce5fe232028d1a84d9d924aefb92622369342140c68c9f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_spence, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 00:09:04 np0005480824 systemd[1]: Started libpod-conmon-d84d92e24ee7240f2ce5fe232028d1a84d9d924aefb92622369342140c68c9f6.scope.
Oct 11 00:09:04 np0005480824 systemd[1]: Started libcrun container.
Oct 11 00:09:04 np0005480824 podman[308450]: 2025-10-11 04:09:04.273791028 +0000 UTC m=+0.108495117 container init d84d92e24ee7240f2ce5fe232028d1a84d9d924aefb92622369342140c68c9f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_spence, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 11 00:09:04 np0005480824 podman[308450]: 2025-10-11 04:09:04.282007788 +0000 UTC m=+0.116711857 container start d84d92e24ee7240f2ce5fe232028d1a84d9d924aefb92622369342140c68c9f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_spence, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 11 00:09:04 np0005480824 podman[308450]: 2025-10-11 04:09:04.192007981 +0000 UTC m=+0.026712070 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 00:09:04 np0005480824 cranky_spence[308467]: 167 167
Oct 11 00:09:04 np0005480824 systemd[1]: libpod-d84d92e24ee7240f2ce5fe232028d1a84d9d924aefb92622369342140c68c9f6.scope: Deactivated successfully.
Oct 11 00:09:04 np0005480824 podman[308450]: 2025-10-11 04:09:04.28852363 +0000 UTC m=+0.123227709 container attach d84d92e24ee7240f2ce5fe232028d1a84d9d924aefb92622369342140c68c9f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_spence, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 11 00:09:04 np0005480824 podman[308450]: 2025-10-11 04:09:04.289019871 +0000 UTC m=+0.123723940 container died d84d92e24ee7240f2ce5fe232028d1a84d9d924aefb92622369342140c68c9f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_spence, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 11 00:09:04 np0005480824 systemd[1]: var-lib-containers-storage-overlay-c5ddb0d166fb38235a1abe5b55ff72623d767743fee01a9c4a1a800f7fda746d-merged.mount: Deactivated successfully.
Oct 11 00:09:04 np0005480824 podman[308450]: 2025-10-11 04:09:04.323936451 +0000 UTC m=+0.158640520 container remove d84d92e24ee7240f2ce5fe232028d1a84d9d924aefb92622369342140c68c9f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_spence, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 00:09:04 np0005480824 systemd[1]: libpod-conmon-d84d92e24ee7240f2ce5fe232028d1a84d9d924aefb92622369342140c68c9f6.scope: Deactivated successfully.
Oct 11 00:09:04 np0005480824 podman[308492]: 2025-10-11 04:09:04.499570245 +0000 UTC m=+0.051494195 container create 45c2785398046d1e42b9971a4c6d94bab2c34a2d1971ab220191c9a02c49501a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hertz, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 11 00:09:04 np0005480824 systemd[1]: Started libpod-conmon-45c2785398046d1e42b9971a4c6d94bab2c34a2d1971ab220191c9a02c49501a.scope.
Oct 11 00:09:04 np0005480824 podman[308492]: 2025-10-11 04:09:04.474713769 +0000 UTC m=+0.026637789 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 00:09:04 np0005480824 systemd[1]: Started libcrun container.
Oct 11 00:09:04 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c7574d47ba14c9e5e36a79e6598968239410a1de24262bf01211b8cf9270133/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 00:09:04 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c7574d47ba14c9e5e36a79e6598968239410a1de24262bf01211b8cf9270133/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 00:09:04 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c7574d47ba14c9e5e36a79e6598968239410a1de24262bf01211b8cf9270133/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 00:09:04 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c7574d47ba14c9e5e36a79e6598968239410a1de24262bf01211b8cf9270133/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 00:09:04 np0005480824 podman[308492]: 2025-10-11 04:09:04.60626192 +0000 UTC m=+0.158185870 container init 45c2785398046d1e42b9971a4c6d94bab2c34a2d1971ab220191c9a02c49501a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hertz, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 00:09:04 np0005480824 podman[308492]: 2025-10-11 04:09:04.616760913 +0000 UTC m=+0.168684863 container start 45c2785398046d1e42b9971a4c6d94bab2c34a2d1971ab220191c9a02c49501a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 11 00:09:04 np0005480824 podman[308492]: 2025-10-11 04:09:04.619582468 +0000 UTC m=+0.171506428 container attach 45c2785398046d1e42b9971a4c6d94bab2c34a2d1971ab220191c9a02c49501a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 11 00:09:05 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1975: 321 pgs: 321 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]: {
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:    "0": [
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:        {
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:            "devices": [
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:                "/dev/loop3"
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:            ],
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:            "lv_name": "ceph_lv0",
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:            "lv_size": "21470642176",
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0d82ce-20ea-470d-959e-f67202028a60,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:            "lv_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:            "name": "ceph_lv0",
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:            "tags": {
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:                "ceph.block_uuid": "TkjgGD-B7rJ-DOqO-7Nrb-56OY-GrPK-LhMT7w",
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:                "ceph.cephx_lockbox_secret": "",
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:                "ceph.cluster_name": "ceph",
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:                "ceph.crush_device_class": "",
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:                "ceph.encrypted": "0",
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:                "ceph.osd_fsid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:                "ceph.osd_id": "0",
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:                "ceph.type": "block",
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:                "ceph.vdo": "0"
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:            },
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:            "type": "block",
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:            "vg_name": "ceph_vg0"
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:        }
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:    ],
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:    "1": [
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:        {
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:            "devices": [
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:                "/dev/loop4"
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:            ],
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:            "lv_name": "ceph_lv1",
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:            "lv_size": "21470642176",
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6875119e-c210-4ad1-aca9-6a8084a5ecc8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:            "lv_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:            "name": "ceph_lv1",
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:            "tags": {
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:                "ceph.block_uuid": "kdDBXa-HGt0-oIJ2-8SFk-0yLW-XlpX-TisLKy",
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:                "ceph.cephx_lockbox_secret": "",
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:                "ceph.cluster_name": "ceph",
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:                "ceph.crush_device_class": "",
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:                "ceph.encrypted": "0",
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:                "ceph.osd_fsid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:                "ceph.osd_id": "1",
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:                "ceph.type": "block",
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:                "ceph.vdo": "0"
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:            },
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:            "type": "block",
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:            "vg_name": "ceph_vg1"
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:        }
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:    ],
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:    "2": [
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:        {
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:            "devices": [
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:                "/dev/loop5"
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:            ],
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:            "lv_name": "ceph_lv2",
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:            "lv_size": "21470642176",
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=92cfe4d4-4917-5be1-9d00-73758793a62b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e86945e8-6909-4584-9098-cee0dfe9add4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:            "lv_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:            "name": "ceph_lv2",
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:            "tags": {
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:                "ceph.block_uuid": "GduJyp-MYzk-OG0E-1pmc-DFS8-D4MG-C6qmeN",
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:                "ceph.cephx_lockbox_secret": "",
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:                "ceph.cluster_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:                "ceph.cluster_name": "ceph",
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:                "ceph.crush_device_class": "",
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:                "ceph.encrypted": "0",
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:                "ceph.osd_fsid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:                "ceph.osd_id": "2",
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:                "ceph.type": "block",
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:                "ceph.vdo": "0"
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:            },
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:            "type": "block",
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:            "vg_name": "ceph_vg2"
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:        }
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]:    ]
Oct 11 00:09:05 np0005480824 dazzling_hertz[308509]: }
Oct 11 00:09:05 np0005480824 systemd[1]: libpod-45c2785398046d1e42b9971a4c6d94bab2c34a2d1971ab220191c9a02c49501a.scope: Deactivated successfully.
Oct 11 00:09:05 np0005480824 podman[308492]: 2025-10-11 04:09:05.434693786 +0000 UTC m=+0.986617766 container died 45c2785398046d1e42b9971a4c6d94bab2c34a2d1971ab220191c9a02c49501a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hertz, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 00:09:05 np0005480824 systemd[1]: var-lib-containers-storage-overlay-6c7574d47ba14c9e5e36a79e6598968239410a1de24262bf01211b8cf9270133-merged.mount: Deactivated successfully.
Oct 11 00:09:05 np0005480824 podman[308492]: 2025-10-11 04:09:05.491927503 +0000 UTC m=+1.043851443 container remove 45c2785398046d1e42b9971a4c6d94bab2c34a2d1971ab220191c9a02c49501a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hertz, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 00:09:05 np0005480824 systemd[1]: libpod-conmon-45c2785398046d1e42b9971a4c6d94bab2c34a2d1971ab220191c9a02c49501a.scope: Deactivated successfully.
Oct 11 00:09:05 np0005480824 nova_compute[260089]: 2025-10-11 04:09:05.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:09:05 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e496 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:09:06 np0005480824 podman[308669]: 2025-10-11 04:09:06.172319285 +0000 UTC m=+0.067385744 container create 652f309307d04a3e18bd3731dd3b67f5cdd65bc79263f975bf2243a96a6c5c77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wozniak, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Oct 11 00:09:06 np0005480824 systemd[1]: Started libpod-conmon-652f309307d04a3e18bd3731dd3b67f5cdd65bc79263f975bf2243a96a6c5c77.scope.
Oct 11 00:09:06 np0005480824 systemd[1]: Started libcrun container.
Oct 11 00:09:06 np0005480824 podman[308669]: 2025-10-11 04:09:06.146563738 +0000 UTC m=+0.041630267 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 00:09:06 np0005480824 podman[308669]: 2025-10-11 04:09:06.249744571 +0000 UTC m=+0.144811040 container init 652f309307d04a3e18bd3731dd3b67f5cdd65bc79263f975bf2243a96a6c5c77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 00:09:06 np0005480824 podman[308669]: 2025-10-11 04:09:06.255945765 +0000 UTC m=+0.151012204 container start 652f309307d04a3e18bd3731dd3b67f5cdd65bc79263f975bf2243a96a6c5c77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wozniak, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 00:09:06 np0005480824 podman[308669]: 2025-10-11 04:09:06.259469687 +0000 UTC m=+0.154536126 container attach 652f309307d04a3e18bd3731dd3b67f5cdd65bc79263f975bf2243a96a6c5c77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 11 00:09:06 np0005480824 blissful_wozniak[308686]: 167 167
Oct 11 00:09:06 np0005480824 systemd[1]: libpod-652f309307d04a3e18bd3731dd3b67f5cdd65bc79263f975bf2243a96a6c5c77.scope: Deactivated successfully.
Oct 11 00:09:06 np0005480824 podman[308669]: 2025-10-11 04:09:06.262003655 +0000 UTC m=+0.157070134 container died 652f309307d04a3e18bd3731dd3b67f5cdd65bc79263f975bf2243a96a6c5c77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wozniak, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True)
Oct 11 00:09:06 np0005480824 podman[308685]: 2025-10-11 04:09:06.282383458 +0000 UTC m=+0.064603899 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, managed_by=edpm_ansible)
Oct 11 00:09:06 np0005480824 systemd[1]: var-lib-containers-storage-overlay-0f23a99f499e7a4a5757a3ee716a14759ed3baa3fd547b50bfe634d8869b7eff-merged.mount: Deactivated successfully.
Oct 11 00:09:06 np0005480824 podman[308669]: 2025-10-11 04:09:06.308203587 +0000 UTC m=+0.203270026 container remove 652f309307d04a3e18bd3731dd3b67f5cdd65bc79263f975bf2243a96a6c5c77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wozniak, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 11 00:09:06 np0005480824 systemd[1]: libpod-conmon-652f309307d04a3e18bd3731dd3b67f5cdd65bc79263f975bf2243a96a6c5c77.scope: Deactivated successfully.
Oct 11 00:09:06 np0005480824 nova_compute[260089]: 2025-10-11 04:09:06.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:09:06 np0005480824 podman[308729]: 2025-10-11 04:09:06.468628388 +0000 UTC m=+0.045508876 container create 087a763848a237e28d5d79cb93c6499a33dc04cccbca187a5172bcc4fa5a6f32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_turing, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 00:09:06 np0005480824 systemd[1]: Started libpod-conmon-087a763848a237e28d5d79cb93c6499a33dc04cccbca187a5172bcc4fa5a6f32.scope.
Oct 11 00:09:06 np0005480824 systemd[1]: Started libcrun container.
Oct 11 00:09:06 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/617a48f5c60402b35d2359d7c941be1fc31b0ea85dcc9c6c35ebca32003b0744/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 00:09:06 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/617a48f5c60402b35d2359d7c941be1fc31b0ea85dcc9c6c35ebca32003b0744/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 00:09:06 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/617a48f5c60402b35d2359d7c941be1fc31b0ea85dcc9c6c35ebca32003b0744/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 00:09:06 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/617a48f5c60402b35d2359d7c941be1fc31b0ea85dcc9c6c35ebca32003b0744/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 00:09:06 np0005480824 podman[308729]: 2025-10-11 04:09:06.451731806 +0000 UTC m=+0.028612304 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 00:09:06 np0005480824 podman[308729]: 2025-10-11 04:09:06.548423599 +0000 UTC m=+0.125304097 container init 087a763848a237e28d5d79cb93c6499a33dc04cccbca187a5172bcc4fa5a6f32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_turing, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 00:09:06 np0005480824 podman[308729]: 2025-10-11 04:09:06.557334635 +0000 UTC m=+0.134215113 container start 087a763848a237e28d5d79cb93c6499a33dc04cccbca187a5172bcc4fa5a6f32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_turing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 11 00:09:06 np0005480824 podman[308729]: 2025-10-11 04:09:06.561008211 +0000 UTC m=+0.137888739 container attach 087a763848a237e28d5d79cb93c6499a33dc04cccbca187a5172bcc4fa5a6f32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_turing, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True)
Oct 11 00:09:07 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1976: 321 pgs: 321 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail
Oct 11 00:09:07 np0005480824 fervent_turing[308746]: {
Oct 11 00:09:07 np0005480824 fervent_turing[308746]:    "1d0d82ce-20ea-470d-959e-f67202028a60": {
Oct 11 00:09:07 np0005480824 fervent_turing[308746]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 11 00:09:07 np0005480824 fervent_turing[308746]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 00:09:07 np0005480824 fervent_turing[308746]:        "osd_id": 0,
Oct 11 00:09:07 np0005480824 fervent_turing[308746]:        "osd_uuid": "1d0d82ce-20ea-470d-959e-f67202028a60",
Oct 11 00:09:07 np0005480824 fervent_turing[308746]:        "type": "bluestore"
Oct 11 00:09:07 np0005480824 fervent_turing[308746]:    },
Oct 11 00:09:07 np0005480824 fervent_turing[308746]:    "6875119e-c210-4ad1-aca9-6a8084a5ecc8": {
Oct 11 00:09:07 np0005480824 fervent_turing[308746]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 11 00:09:07 np0005480824 fervent_turing[308746]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 00:09:07 np0005480824 fervent_turing[308746]:        "osd_id": 1,
Oct 11 00:09:07 np0005480824 fervent_turing[308746]:        "osd_uuid": "6875119e-c210-4ad1-aca9-6a8084a5ecc8",
Oct 11 00:09:07 np0005480824 fervent_turing[308746]:        "type": "bluestore"
Oct 11 00:09:07 np0005480824 fervent_turing[308746]:    },
Oct 11 00:09:07 np0005480824 fervent_turing[308746]:    "e86945e8-6909-4584-9098-cee0dfe9add4": {
Oct 11 00:09:07 np0005480824 fervent_turing[308746]:        "ceph_fsid": "92cfe4d4-4917-5be1-9d00-73758793a62b",
Oct 11 00:09:07 np0005480824 fervent_turing[308746]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 00:09:07 np0005480824 fervent_turing[308746]:        "osd_id": 2,
Oct 11 00:09:07 np0005480824 fervent_turing[308746]:        "osd_uuid": "e86945e8-6909-4584-9098-cee0dfe9add4",
Oct 11 00:09:07 np0005480824 fervent_turing[308746]:        "type": "bluestore"
Oct 11 00:09:07 np0005480824 fervent_turing[308746]:    }
Oct 11 00:09:07 np0005480824 fervent_turing[308746]: }
Oct 11 00:09:07 np0005480824 systemd[1]: libpod-087a763848a237e28d5d79cb93c6499a33dc04cccbca187a5172bcc4fa5a6f32.scope: Deactivated successfully.
Oct 11 00:09:07 np0005480824 podman[308729]: 2025-10-11 04:09:07.621701274 +0000 UTC m=+1.198581772 container died 087a763848a237e28d5d79cb93c6499a33dc04cccbca187a5172bcc4fa5a6f32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_turing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 00:09:07 np0005480824 systemd[1]: libpod-087a763848a237e28d5d79cb93c6499a33dc04cccbca187a5172bcc4fa5a6f32.scope: Consumed 1.068s CPU time.
Oct 11 00:09:07 np0005480824 systemd[1]: var-lib-containers-storage-overlay-617a48f5c60402b35d2359d7c941be1fc31b0ea85dcc9c6c35ebca32003b0744-merged.mount: Deactivated successfully.
Oct 11 00:09:07 np0005480824 podman[308729]: 2025-10-11 04:09:07.69352442 +0000 UTC m=+1.270404898 container remove 087a763848a237e28d5d79cb93c6499a33dc04cccbca187a5172bcc4fa5a6f32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_turing, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 00:09:07 np0005480824 systemd[1]: libpod-conmon-087a763848a237e28d5d79cb93c6499a33dc04cccbca187a5172bcc4fa5a6f32.scope: Deactivated successfully.
Oct 11 00:09:07 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 00:09:07 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 11 00:09:07 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 00:09:07 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 11 00:09:07 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev d2f134a3-8f81-4810-a06f-2059f4c37db4 does not exist
Oct 11 00:09:07 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 7e70aed3-be8b-468e-87d9-94d17273cc2a does not exist
Oct 11 00:09:08 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 11 00:09:08 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 11 00:09:09 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1977: 321 pgs: 321 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail
Oct 11 00:09:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:09:10.513 162245 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:09:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:09:10.514 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:09:10 np0005480824 ovn_metadata_agent[162240]: 2025-10-11 04:09:10.514 162245 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:09:10 np0005480824 nova_compute[260089]: 2025-10-11 04:09:10.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:09:10 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e496 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:09:11 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1978: 321 pgs: 321 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail
Oct 11 00:09:11 np0005480824 nova_compute[260089]: 2025-10-11 04:09:11.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:09:12 np0005480824 ovn_controller[152667]: 2025-10-11T04:09:12Z|00284|memory_trim|INFO|Detected inactivity (last active 30009 ms ago): trimming memory
Oct 11 00:09:13 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1979: 321 pgs: 321 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail
Oct 11 00:09:15 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1980: 321 pgs: 321 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail
Oct 11 00:09:15 np0005480824 nova_compute[260089]: 2025-10-11 04:09:15.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:09:15 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e496 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:09:16 np0005480824 nova_compute[260089]: 2025-10-11 04:09:16.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:09:17 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1981: 321 pgs: 321 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail
Oct 11 00:09:19 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1982: 321 pgs: 321 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail
Oct 11 00:09:20 np0005480824 nova_compute[260089]: 2025-10-11 04:09:20.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:09:20 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e496 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:09:21 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1983: 321 pgs: 321 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail
Oct 11 00:09:21 np0005480824 nova_compute[260089]: 2025-10-11 04:09:21.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:09:23 np0005480824 podman[308844]: 2025-10-11 04:09:23.041221994 +0000 UTC m=+0.088339100 container health_status f2b19cad22d0fbdd185b264c3c8b6443d09a02319dc9fb0585dc69a0f24a758d (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.license=GPLv2, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct 11 00:09:23 np0005480824 podman[308843]: 2025-10-11 04:09:23.050652413 +0000 UTC m=+0.099870218 container health_status 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 11 00:09:23 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1984: 321 pgs: 321 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail
Oct 11 00:09:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 00:09:24 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/843664105' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 00:09:24 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 00:09:24 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/843664105' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 00:09:25 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1985: 321 pgs: 321 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail
Oct 11 00:09:25 np0005480824 nova_compute[260089]: 2025-10-11 04:09:25.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:09:25 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e496 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:09:26 np0005480824 nova_compute[260089]: 2025-10-11 04:09:26.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:09:27 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1986: 321 pgs: 321 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail
Oct 11 00:09:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 00:09:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 00:09:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 00:09:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 00:09:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 00:09:27 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 00:09:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Optimize plan auto_2025-10-11_04:09:27
Oct 11 00:09:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 00:09:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] do_upmap
Oct 11 00:09:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.rgw.root', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data', '.mgr', 'images', 'vms', 'default.rgw.control', 'volumes', 'backups']
Oct 11 00:09:27 np0005480824 ceph-mgr[74617]: [balancer INFO root] prepared 0/10 changes
Oct 11 00:09:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 00:09:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 00:09:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 00:09:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 00:09:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 00:09:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 00:09:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 00:09:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 00:09:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 00:09:28 np0005480824 ceph-mgr[74617]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 00:09:29 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1987: 321 pgs: 321 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail
Oct 11 00:09:30 np0005480824 nova_compute[260089]: 2025-10-11 04:09:30.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:09:30 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e496 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:09:31 np0005480824 podman[308880]: 2025-10-11 04:09:31.059372198 +0000 UTC m=+0.105381405 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 00:09:31 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1988: 321 pgs: 321 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail
Oct 11 00:09:31 np0005480824 nova_compute[260089]: 2025-10-11 04:09:31.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:09:33 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1989: 321 pgs: 321 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail
Oct 11 00:09:35 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1990: 321 pgs: 321 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail
Oct 11 00:09:35 np0005480824 nova_compute[260089]: 2025-10-11 04:09:35.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:09:35 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e496 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:09:36 np0005480824 nova_compute[260089]: 2025-10-11 04:09:36.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:09:37 np0005480824 podman[308906]: 2025-10-11 04:09:37.036002977 +0000 UTC m=+0.087306247 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 11 00:09:37 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1991: 321 pgs: 321 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail
Oct 11 00:09:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 00:09:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:09:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 00:09:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:09:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 11 00:09:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:09:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002894458247867422 of space, bias 1.0, pg target 0.8683374743602266 quantized to 32 (current 32)
Oct 11 00:09:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:09:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 11 00:09:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:09:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Oct 11 00:09:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:09:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 00:09:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:09:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 00:09:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:09:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 00:09:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:09:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 00:09:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:09:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 00:09:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 00:09:38 np0005480824 ceph-mgr[74617]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 00:09:39 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1992: 321 pgs: 321 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail
Oct 11 00:09:40 np0005480824 nova_compute[260089]: 2025-10-11 04:09:40.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:09:40 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e496 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:09:41 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1993: 321 pgs: 321 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail
Oct 11 00:09:41 np0005480824 nova_compute[260089]: 2025-10-11 04:09:41.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:09:43 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1994: 321 pgs: 321 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail
Oct 11 00:09:43 np0005480824 nova_compute[260089]: 2025-10-11 04:09:43.296 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:09:45 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1995: 321 pgs: 321 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail
Oct 11 00:09:45 np0005480824 nova_compute[260089]: 2025-10-11 04:09:45.292 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:09:45 np0005480824 nova_compute[260089]: 2025-10-11 04:09:45.296 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:09:45 np0005480824 nova_compute[260089]: 2025-10-11 04:09:45.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:09:45 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e496 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:09:46 np0005480824 nova_compute[260089]: 2025-10-11 04:09:46.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:09:47 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1996: 321 pgs: 321 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail
Oct 11 00:09:47 np0005480824 systemd-logind[782]: New session 53 of user zuul.
Oct 11 00:09:47 np0005480824 systemd[1]: Started Session 53 of User zuul.
Oct 11 00:09:49 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1997: 321 pgs: 321 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail
Oct 11 00:09:49 np0005480824 nova_compute[260089]: 2025-10-11 04:09:49.296 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:09:49 np0005480824 nova_compute[260089]: 2025-10-11 04:09:49.297 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 11 00:09:49 np0005480824 nova_compute[260089]: 2025-10-11 04:09:49.297 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 11 00:09:49 np0005480824 nova_compute[260089]: 2025-10-11 04:09:49.311 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 11 00:09:49 np0005480824 nova_compute[260089]: 2025-10-11 04:09:49.312 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:09:49 np0005480824 nova_compute[260089]: 2025-10-11 04:09:49.346 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:09:49 np0005480824 nova_compute[260089]: 2025-10-11 04:09:49.346 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:09:49 np0005480824 nova_compute[260089]: 2025-10-11 04:09:49.346 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:09:49 np0005480824 nova_compute[260089]: 2025-10-11 04:09:49.347 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 11 00:09:49 np0005480824 nova_compute[260089]: 2025-10-11 04:09:49.347 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:09:49 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 00:09:49 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2026977834' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 00:09:49 np0005480824 nova_compute[260089]: 2025-10-11 04:09:49.784 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:09:49 np0005480824 nova_compute[260089]: 2025-10-11 04:09:49.937 2 WARNING nova.virt.libvirt.driver [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 11 00:09:49 np0005480824 nova_compute[260089]: 2025-10-11 04:09:49.938 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4266MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 11 00:09:49 np0005480824 nova_compute[260089]: 2025-10-11 04:09:49.938 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 11 00:09:49 np0005480824 nova_compute[260089]: 2025-10-11 04:09:49.938 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 11 00:09:50 np0005480824 nova_compute[260089]: 2025-10-11 04:09:50.009 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 11 00:09:50 np0005480824 nova_compute[260089]: 2025-10-11 04:09:50.010 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 11 00:09:50 np0005480824 nova_compute[260089]: 2025-10-11 04:09:50.031 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 11 00:09:50 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 00:09:50 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2175943584' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 00:09:50 np0005480824 nova_compute[260089]: 2025-10-11 04:09:50.441 2 DEBUG oslo_concurrency.processutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 11 00:09:50 np0005480824 nova_compute[260089]: 2025-10-11 04:09:50.447 2 DEBUG nova.compute.provider_tree [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 11 00:09:50 np0005480824 nova_compute[260089]: 2025-10-11 04:09:50.474 2 DEBUG nova.scheduler.client.report [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Inventory has not changed for provider 6d73c5a7-f660-44d1-8cfb-ba2d16a9dc72 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 11 00:09:50 np0005480824 nova_compute[260089]: 2025-10-11 04:09:50.477 2 DEBUG nova.compute.resource_tracker [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 11 00:09:50 np0005480824 nova_compute[260089]: 2025-10-11 04:09:50.478 2 DEBUG oslo_concurrency.lockutils [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.539s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 11 00:09:50 np0005480824 nova_compute[260089]: 2025-10-11 04:09:50.479 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:09:50 np0005480824 ceph-mgr[74617]: log_channel(audit) log [DBG] : from='client.19189 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 00:09:50 np0005480824 nova_compute[260089]: 2025-10-11 04:09:50.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:09:50 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e496 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:09:51 np0005480824 ceph-mgr[74617]: log_channel(audit) log [DBG] : from='client.19191 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 00:09:51 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1998: 321 pgs: 321 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail
Oct 11 00:09:51 np0005480824 nova_compute[260089]: 2025-10-11 04:09:51.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:09:51 np0005480824 nova_compute[260089]: 2025-10-11 04:09:51.479 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:09:51 np0005480824 nova_compute[260089]: 2025-10-11 04:09:51.480 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:09:51 np0005480824 nova_compute[260089]: 2025-10-11 04:09:51.480 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 11 00:09:51 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Oct 11 00:09:51 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2328031693' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 11 00:09:52 np0005480824 nova_compute[260089]: 2025-10-11 04:09:52.297 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:09:53 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v1999: 321 pgs: 321 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail
Oct 11 00:09:54 np0005480824 podman[309255]: 2025-10-11 04:09:54.043366369 +0000 UTC m=+0.090848419 container health_status f2b19cad22d0fbdd185b264c3c8b6443d09a02319dc9fb0585dc69a0f24a758d (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 00:09:54 np0005480824 podman[309254]: 2025-10-11 04:09:54.050473533 +0000 UTC m=+0.097952253 container health_status 8b003d65c8e439e280409825aa37dacfb921ffdd0ada54278b9746654fdc0aa8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f)
Oct 11 00:09:55 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v2000: 321 pgs: 321 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail
Oct 11 00:09:55 np0005480824 nova_compute[260089]: 2025-10-11 04:09:55.296 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:09:55 np0005480824 nova_compute[260089]: 2025-10-11 04:09:55.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:09:55 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e496 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:09:56 np0005480824 nova_compute[260089]: 2025-10-11 04:09:56.369 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:09:57 np0005480824 ovs-vsctl[309342]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Oct 11 00:09:57 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v2001: 321 pgs: 321 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail
Oct 11 00:09:57 np0005480824 nova_compute[260089]: 2025-10-11 04:09:57.297 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:09:57 np0005480824 nova_compute[260089]: 2025-10-11 04:09:57.297 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct 11 00:09:57 np0005480824 nova_compute[260089]: 2025-10-11 04:09:57.470 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct 11 00:09:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 00:09:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 00:09:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 00:09:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 00:09:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 00:09:57 np0005480824 ceph-mgr[74617]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 00:09:58 np0005480824 virtqemud[259861]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Oct 11 00:09:58 np0005480824 virtqemud[259861]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Oct 11 00:09:58 np0005480824 virtqemud[259861]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct 11 00:09:58 np0005480824 ceph-mds[101067]: mds.cephfs.compute-0.uxaxgb asok_command: cache status {prefix=cache status} (starting...)
Oct 11 00:09:59 np0005480824 ceph-mds[101067]: mds.cephfs.compute-0.uxaxgb asok_command: client ls {prefix=client ls} (starting...)
Oct 11 00:09:59 np0005480824 lvm[309674]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 11 00:09:59 np0005480824 lvm[309674]: VG ceph_vg0 finished
Oct 11 00:09:59 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v2002: 321 pgs: 321 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail
Oct 11 00:09:59 np0005480824 lvm[309701]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct 11 00:09:59 np0005480824 lvm[309701]: VG ceph_vg1 finished
Oct 11 00:09:59 np0005480824 lvm[309723]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct 11 00:09:59 np0005480824 lvm[309723]: VG ceph_vg2 finished
Oct 11 00:09:59 np0005480824 ceph-mgr[74617]: log_channel(audit) log [DBG] : from='client.19195 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 00:09:59 np0005480824 nova_compute[260089]: 2025-10-11 04:09:59.465 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:09:59 np0005480824 kernel: block dm-2: the capability attribute has been deprecated.
Oct 11 00:09:59 np0005480824 ceph-mds[101067]: mds.cephfs.compute-0.uxaxgb asok_command: damage ls {prefix=damage ls} (starting...)
Oct 11 00:09:59 np0005480824 ceph-mgr[74617]: log_channel(audit) log [DBG] : from='client.19197 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 00:09:59 np0005480824 ceph-mds[101067]: mds.cephfs.compute-0.uxaxgb asok_command: dump loads {prefix=dump loads} (starting...)
Oct 11 00:09:59 np0005480824 ceph-mds[101067]: mds.cephfs.compute-0.uxaxgb asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Oct 11 00:10:00 np0005480824 ceph-mds[101067]: mds.cephfs.compute-0.uxaxgb asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Oct 11 00:10:00 np0005480824 ceph-mds[101067]: mds.cephfs.compute-0.uxaxgb asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Oct 11 00:10:00 np0005480824 ceph-mds[101067]: mds.cephfs.compute-0.uxaxgb asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Oct 11 00:10:00 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0) v1
Oct 11 00:10:00 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/748150578' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 11 00:10:00 np0005480824 ceph-mds[101067]: mds.cephfs.compute-0.uxaxgb asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Oct 11 00:10:00 np0005480824 ceph-mgr[74617]: log_channel(audit) log [DBG] : from='client.19203 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 00:10:00 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-pdyrua[74613]: 2025-10-11T04:10:00.598+0000 7fb7c0b48640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 11 00:10:00 np0005480824 ceph-mgr[74617]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 11 00:10:00 np0005480824 nova_compute[260089]: 2025-10-11 04:10:00.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:10:00 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 00:10:00 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2674845122' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 00:10:00 np0005480824 ceph-mds[101067]: mds.cephfs.compute-0.uxaxgb asok_command: get subtrees {prefix=get subtrees} (starting...)
Oct 11 00:10:00 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e496 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:10:00 np0005480824 ceph-mon[74326]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Oct 11 00:10:00 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:10:00.829851) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 00:10:00 np0005480824 ceph-mon[74326]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Oct 11 00:10:00 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155800829917, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 1414, "num_deletes": 250, "total_data_size": 2179684, "memory_usage": 2216248, "flush_reason": "Manual Compaction"}
Oct 11 00:10:00 np0005480824 ceph-mon[74326]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Oct 11 00:10:00 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155800836898, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 1267293, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 39055, "largest_seqno": 40468, "table_properties": {"data_size": 1262364, "index_size": 2261, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 13101, "raw_average_key_size": 20, "raw_value_size": 1251515, "raw_average_value_size": 1983, "num_data_blocks": 103, "num_entries": 631, "num_filter_entries": 631, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760155654, "oldest_key_time": 1760155654, "file_creation_time": 1760155800, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc2c00b6-74ab-4bd1-957a-6c6a75fb61ca", "db_session_id": "RJ9TM4FJNNQ2AWDFT4YB", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Oct 11 00:10:00 np0005480824 ceph-mon[74326]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 7080 microseconds, and 3684 cpu microseconds.
Oct 11 00:10:00 np0005480824 ceph-mon[74326]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 00:10:00 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:10:00.836939) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 1267293 bytes OK
Oct 11 00:10:00 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:10:00.836956) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Oct 11 00:10:00 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:10:00.838491) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Oct 11 00:10:00 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:10:00.838504) EVENT_LOG_v1 {"time_micros": 1760155800838500, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 00:10:00 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:10:00.838526) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 00:10:00 np0005480824 ceph-mon[74326]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 2173424, prev total WAL file size 2173424, number of live WAL files 2.
Oct 11 00:10:00 np0005480824 ceph-mon[74326]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 00:10:00 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:10:00.839279) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323531' seq:72057594037927935, type:22 .. '6D6772737461740031353032' seq:0, type:0; will stop at (end)
Oct 11 00:10:00 np0005480824 ceph-mon[74326]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 00:10:00 np0005480824 ceph-mon[74326]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(1237KB)], [80(11MB)]
Oct 11 00:10:00 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155800839326, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 13108619, "oldest_snapshot_seqno": -1}
Oct 11 00:10:00 np0005480824 ceph-mon[74326]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 7283 keys, 10687685 bytes, temperature: kUnknown
Oct 11 00:10:00 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155800906266, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 10687685, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10634710, "index_size": 33663, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18245, "raw_key_size": 183017, "raw_average_key_size": 25, "raw_value_size": 10499788, "raw_average_value_size": 1441, "num_data_blocks": 1344, "num_entries": 7283, "num_filter_entries": 7283, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760152715, "oldest_key_time": 0, "file_creation_time": 1760155800, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bc2c00b6-74ab-4bd1-957a-6c6a75fb61ca", "db_session_id": "RJ9TM4FJNNQ2AWDFT4YB", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Oct 11 00:10:00 np0005480824 ceph-mon[74326]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 00:10:00 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:10:00.906569) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 10687685 bytes
Oct 11 00:10:00 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:10:00.907933) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 195.5 rd, 159.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 11.3 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(18.8) write-amplify(8.4) OK, records in: 7728, records dropped: 445 output_compression: NoCompression
Oct 11 00:10:00 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:10:00.907954) EVENT_LOG_v1 {"time_micros": 1760155800907943, "job": 46, "event": "compaction_finished", "compaction_time_micros": 67049, "compaction_time_cpu_micros": 26901, "output_level": 6, "num_output_files": 1, "total_output_size": 10687685, "num_input_records": 7728, "num_output_records": 7283, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 00:10:00 np0005480824 ceph-mon[74326]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 00:10:00 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155800908275, "job": 46, "event": "table_file_deletion", "file_number": 82}
Oct 11 00:10:00 np0005480824 ceph-mon[74326]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 00:10:00 np0005480824 ceph-mon[74326]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760155800910120, "job": 46, "event": "table_file_deletion", "file_number": 80}
Oct 11 00:10:00 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:10:00.839173) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 00:10:00 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:10:00.910158) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 00:10:00 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:10:00.910164) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 00:10:00 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:10:00.910166) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 00:10:00 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:10:00.910168) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 00:10:00 np0005480824 ceph-mon[74326]: rocksdb: (Original Log Time 2025/10/11-04:10:00.910170) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 00:10:00 np0005480824 ceph-mds[101067]: mds.cephfs.compute-0.uxaxgb asok_command: ops {prefix=ops} (starting...)
Oct 11 00:10:01 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Oct 11 00:10:01 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/117775274' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 11 00:10:01 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0) v1
Oct 11 00:10:01 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3451699944' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 11 00:10:01 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v2003: 321 pgs: 321 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail
Oct 11 00:10:01 np0005480824 nova_compute[260089]: 2025-10-11 04:10:01.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:10:01 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct 11 00:10:01 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1521816608' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 11 00:10:01 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Oct 11 00:10:01 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3895280742' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 11 00:10:01 np0005480824 ceph-mds[101067]: mds.cephfs.compute-0.uxaxgb asok_command: session ls {prefix=session ls} (starting...)
Oct 11 00:10:01 np0005480824 ceph-mds[101067]: mds.cephfs.compute-0.uxaxgb asok_command: status {prefix=status} (starting...)
Oct 11 00:10:01 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct 11 00:10:01 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/434679147' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 11 00:10:02 np0005480824 ceph-mgr[74617]: log_channel(audit) log [DBG] : from='client.19217 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 00:10:02 np0005480824 podman[310074]: 2025-10-11 04:10:02.055962271 +0000 UTC m=+0.106073260 container health_status 65c3a3d72e1cba3c83fc771a841564f690b47cc0f5012ce0acf16e2d9f8e3fe2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, tcib_managed=true)
Oct 11 00:10:02 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct 11 00:10:02 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3428678660' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 11 00:10:02 np0005480824 ceph-mgr[74617]: log_channel(audit) log [DBG] : from='client.19221 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 00:10:02 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 11 00:10:02 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1659935582' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 11 00:10:02 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0) v1
Oct 11 00:10:02 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2955392106' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 11 00:10:03 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v2004: 321 pgs: 321 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail
Oct 11 00:10:03 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Oct 11 00:10:03 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/504051034' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 11 00:10:03 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Oct 11 00:10:03 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2388243706' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 11 00:10:03 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct 11 00:10:03 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3051735850' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 11 00:10:03 np0005480824 ceph-mgr[74617]: log_channel(audit) log [DBG] : from='client.19233 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 00:10:03 np0005480824 ceph-mgr[74617]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 11 00:10:03 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-pdyrua[74613]: 2025-10-11T04:10:03.622+0000 7fb7c0b48640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 11 00:10:03 np0005480824 ceph-mgr[74617]: log_channel(audit) log [DBG] : from='client.19235 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 00:10:04 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Oct 11 00:10:04 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2675043141' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 11 00:10:04 np0005480824 ceph-mgr[74617]: log_channel(audit) log [DBG] : from='client.19239 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 00:10:04 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Oct 11 00:10:04 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2107975176' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 11 00:10:04 np0005480824 ceph-mgr[74617]: log_channel(audit) log [DBG] : from='client.19243 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 00:10:04 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct 11 00:10:04 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1873138042' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 11 00:10:05 np0005480824 ceph-mgr[74617]: log_channel(audit) log [DBG] : from='client.19247 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 11 00:10:05 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v2005: 321 pgs: 321 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 158 heartbeat osd_stat(store_statfs(0x4fb0a3000/0x0/0x4ffc00000, data 0x8ef217/0x9da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 11010048 heap: 95903744 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 158 ms_handle_reset con 0x5607b31a9000 session 0x5607b248c1e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048857 data_alloc: 218103808 data_used: 339968
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 10993664 heap: 95903744 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 10993664 heap: 95903744 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.319562912s of 10.639280319s, submitted: 103
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 158 ms_handle_reset con 0x5607b31a9c00 session 0x5607b3314d20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 158 ms_handle_reset con 0x5607b3fa3400 session 0x5607b4a783c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 158 heartbeat osd_stat(store_statfs(0x4fb0a2000/0x0/0x4ffc00000, data 0x8ef279/0x9db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 84934656 unmapped: 10969088 heap: 95903744 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 84934656 unmapped: 10969088 heap: 95903744 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 158 handle_osd_map epochs [159,159], i have 158, src has [1,159]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 10944512 heap: 95903744 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 159 ms_handle_reset con 0x5607b3fa3c00 session 0x5607b422e3c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053997 data_alloc: 218103808 data_used: 348160
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 10936320 heap: 95903744 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 159 ms_handle_reset con 0x5607b1cbc000 session 0x5607b422ef00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 159 ms_handle_reset con 0x5607b31a9000 session 0x5607b2d72b40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 84721664 unmapped: 11182080 heap: 95903744 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 159 heartbeat osd_stat(store_statfs(0x4fb09f000/0x0/0x4ffc00000, data 0x8f0df6/0x9de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 84721664 unmapped: 11182080 heap: 95903744 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 159 ms_handle_reset con 0x5607b31a9c00 session 0x5607b3314f00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 84721664 unmapped: 11182080 heap: 95903744 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 159 heartbeat osd_stat(store_statfs(0x4fb09f000/0x0/0x4ffc00000, data 0x8f0df6/0x9de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 84721664 unmapped: 11182080 heap: 95903744 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 159 heartbeat osd_stat(store_statfs(0x4fb09f000/0x0/0x4ffc00000, data 0x8f0df6/0x9de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053428 data_alloc: 218103808 data_used: 348160
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 84721664 unmapped: 11182080 heap: 95903744 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 159 ms_handle_reset con 0x5607b3fa3400 session 0x5607b3314d20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 159 heartbeat osd_stat(store_statfs(0x4fb09e000/0x0/0x4ffc00000, data 0x8f0e06/0x9df000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 84721664 unmapped: 11182080 heap: 95903744 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 159 heartbeat osd_stat(store_statfs(0x4fb09e000/0x0/0x4ffc00000, data 0x8f0e06/0x9df000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 84721664 unmapped: 11182080 heap: 95903744 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 159 handle_osd_map epochs [160,160], i have 159, src has [1,160]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.057180405s of 11.213319778s, submitted: 48
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 160 ms_handle_reset con 0x5607b3296000 session 0x5607b248c1e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 160 ms_handle_reset con 0x5607b1cbc000 session 0x5607b214d680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 84721664 unmapped: 11182080 heap: 95903744 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 160 ms_handle_reset con 0x5607b31a9000 session 0x5607b3f1da40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 84721664 unmapped: 11182080 heap: 95903744 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 160 handle_osd_map epochs [161,161], i have 160, src has [1,161]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 161 ms_handle_reset con 0x5607b31a9c00 session 0x5607b4ac0f00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1061681 data_alloc: 218103808 data_used: 348160
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 84738048 unmapped: 11165696 heap: 95903744 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 84738048 unmapped: 11165696 heap: 95903744 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 161 ms_handle_reset con 0x5607b3296000 session 0x5607b24ca1e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb097000/0x0/0x4ffc00000, data 0x8f45a6/0x9e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 84738048 unmapped: 11165696 heap: 95903744 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 161 handle_osd_map epochs [161,162], i have 161, src has [1,162]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 11157504 heap: 95903744 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb096000/0x0/0x4ffc00000, data 0x8f5fa7/0x9e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 11157504 heap: 95903744 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1063430 data_alloc: 218103808 data_used: 356352
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 11157504 heap: 95903744 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 11157504 heap: 95903744 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 11157504 heap: 95903744 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 162 ms_handle_reset con 0x5607b3fa3400 session 0x5607b5b15c20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 11157504 heap: 95903744 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 162 ms_handle_reset con 0x5607b1cbc000 session 0x5607b4acfc20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.373348236s of 11.474268913s, submitted: 32
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb095000/0x0/0x4ffc00000, data 0x8f5fb7/0x9e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 162 ms_handle_reset con 0x5607b31a9000 session 0x5607b3f42780
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 84729856 unmapped: 11173888 heap: 95903744 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1063610 data_alloc: 218103808 data_used: 356352
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 84729856 unmapped: 11173888 heap: 95903744 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 84729856 unmapped: 11173888 heap: 95903744 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 84729856 unmapped: 11173888 heap: 95903744 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 84729856 unmapped: 11173888 heap: 95903744 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 84729856 unmapped: 11173888 heap: 95903744 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 162 ms_handle_reset con 0x5607b31a9c00 session 0x5607b48e12c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1065806 data_alloc: 218103808 data_used: 356352
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb096000/0x0/0x4ffc00000, data 0x8f5fa7/0x9e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 84795392 unmapped: 11108352 heap: 95903744 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 84795392 unmapped: 11108352 heap: 95903744 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 162 ms_handle_reset con 0x5607b3296000 session 0x5607b248c960
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 162 ms_handle_reset con 0x5607b401cc00 session 0x5607b423cf00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 84803584 unmapped: 11100160 heap: 95903744 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 84803584 unmapped: 11100160 heap: 95903744 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb097000/0x0/0x4ffc00000, data 0x8f5fa7/0x9e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 84803584 unmapped: 11100160 heap: 95903744 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1064364 data_alloc: 218103808 data_used: 356352
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 84803584 unmapped: 11100160 heap: 95903744 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 84803584 unmapped: 11100160 heap: 95903744 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 84803584 unmapped: 11100160 heap: 95903744 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.739091873s of 13.827895164s, submitted: 26
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 162 ms_handle_reset con 0x5607b1cbc000 session 0x5607b5b14960
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 162 ms_handle_reset con 0x5607b31a9000 session 0x5607b20a6780
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb097000/0x0/0x4ffc00000, data 0x8f5fa7/0x9e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 84811776 unmapped: 11091968 heap: 95903744 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 84811776 unmapped: 11091968 heap: 95903744 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1071063 data_alloc: 218103808 data_used: 360448
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 162 ms_handle_reset con 0x5607b3296000 session 0x5607b319fa40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 84819968 unmapped: 11083776 heap: 95903744 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 162 handle_osd_map epochs [162,163], i have 162, src has [1,163]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 163 handle_osd_map epochs [163,163], i have 163, src has [1,163]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 163 ms_handle_reset con 0x5607b31a9c00 session 0x5607b422ed20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fb095000/0x0/0x4ffc00000, data 0x8f6019/0x9e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 84836352 unmapped: 11067392 heap: 95903744 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 163 ms_handle_reset con 0x5607b453d800 session 0x5607b3eed680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 163 ms_handle_reset con 0x5607b1cbc000 session 0x5607b2448b40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 163 ms_handle_reset con 0x5607b31a9000 session 0x5607b44ae1e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 163 ms_handle_reset con 0x5607b31a9c00 session 0x5607b4471e00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fb091000/0x0/0x4ffc00000, data 0x8f7b96/0x9ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 10977280 heap: 95903744 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 163 handle_osd_map epochs [164,164], i have 163, src has [1,164]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 164 ms_handle_reset con 0x5607b453d800 session 0x5607b319f860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 164 ms_handle_reset con 0x5607b3296000 session 0x5607b248d0e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 164 ms_handle_reset con 0x5607b1cbc000 session 0x5607b3315680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 84992000 unmapped: 10911744 heap: 95903744 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 164 ms_handle_reset con 0x5607b31a9000 session 0x5607b48e0000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 164 ms_handle_reset con 0x5607b31a9c00 session 0x5607b4470000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 164 ms_handle_reset con 0x5607b453d800 session 0x5607b3f43680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 85090304 unmapped: 10813440 heap: 95903744 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 164 ms_handle_reset con 0x5607b401cc00 session 0x5607b20a7e00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1082788 data_alloc: 218103808 data_used: 368640
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 164 ms_handle_reset con 0x5607b1cbc000 session 0x5607b3284d20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 85123072 unmapped: 10780672 heap: 95903744 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 85139456 unmapped: 10764288 heap: 95903744 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 164 handle_osd_map epochs [164,165], i have 164, src has [1,165]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 165 ms_handle_reset con 0x5607b31a9000 session 0x5607b33150e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fb090000/0x0/0x4ffc00000, data 0x8f9705/0x9ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 10641408 heap: 95903744 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 165 handle_osd_map epochs [166,166], i have 165, src has [1,166]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.963880539s of 10.413516045s, submitted: 157
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 10625024 heap: 95903744 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 166 ms_handle_reset con 0x5607b31a9c00 session 0x5607b214d680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fb087000/0x0/0x4ffc00000, data 0x8fcd63/0x9f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 10641408 heap: 95903744 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fb087000/0x0/0x4ffc00000, data 0x8fcd63/0x9f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1091448 data_alloc: 218103808 data_used: 376832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 166 ms_handle_reset con 0x5607b453d800 session 0x5607b3315680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 10641408 heap: 95903744 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 166 ms_handle_reset con 0x5607b47d6000 session 0x5607b3315a40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 10633216 heap: 95903744 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fb087000/0x0/0x4ffc00000, data 0x8fcd63/0x9f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [1])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 166 ms_handle_reset con 0x5607b1cbc000 session 0x5607b3314d20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 10616832 heap: 95903744 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 166 ms_handle_reset con 0x5607b31a9000 session 0x5607b20a7e00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 166 ms_handle_reset con 0x5607b31a9c00 session 0x5607b319f860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 10600448 heap: 95903744 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 10600448 heap: 95903744 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088431 data_alloc: 218103808 data_used: 376832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 10600448 heap: 95903744 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 10600448 heap: 95903744 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fb08a000/0x0/0x4ffc00000, data 0x8fcd53/0x9f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 10600448 heap: 95903744 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fb08a000/0x0/0x4ffc00000, data 0x8fcd53/0x9f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 10600448 heap: 95903744 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 10600448 heap: 95903744 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088431 data_alloc: 218103808 data_used: 376832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fb08a000/0x0/0x4ffc00000, data 0x8fcd53/0x9f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 10600448 heap: 95903744 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.630556107s of 12.735671043s, submitted: 34
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 166 ms_handle_reset con 0x5607b453d800 session 0x5607b423d0e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 10592256 heap: 95903744 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 166 ms_handle_reset con 0x5607b4aee400 session 0x5607b423cd20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 10592256 heap: 95903744 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 166 ms_handle_reset con 0x5607b1cbc000 session 0x5607b248cd20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 166 ms_handle_reset con 0x5607b31a9000 session 0x5607b48e0000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 10584064 heap: 95903744 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 166 ms_handle_reset con 0x5607b31a9c00 session 0x5607b5b14960
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 166 ms_handle_reset con 0x5607b453d800 session 0x5607b3f1c3c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 15237120 heap: 101285888 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0xcf1dd5/0xdec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 166 ms_handle_reset con 0x5607b4aee400 session 0x5607b3f5b860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135773 data_alloc: 218103808 data_used: 376832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 15237120 heap: 101285888 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 166 ms_handle_reset con 0x5607b1cbc000 session 0x5607b319e780
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 15212544 heap: 101285888 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 166 ms_handle_reset con 0x5607b31a9000 session 0x5607b4470780
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 85680128 unmapped: 15605760 heap: 101285888 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 166 ms_handle_reset con 0x5607b31a9c00 session 0x5607b248be00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fa882000/0x0/0x4ffc00000, data 0xcf1dd5/0xdec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 166 ms_handle_reset con 0x5607b453d800 session 0x5607b44ab4a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 166 ms_handle_reset con 0x5607b4aee400 session 0x5607b4b2c960
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 166 ms_handle_reset con 0x5607b1cbc000 session 0x5607b44af860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 166 ms_handle_reset con 0x5607b31a9000 session 0x5607b422eb40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 166 ms_handle_reset con 0x5607b31a9c00 session 0x5607b3eeda40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 166 ms_handle_reset con 0x5607b453d800 session 0x5607b319f860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 25427968 heap: 111853568 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fa0b4000/0x0/0x4ffc00000, data 0x14c0d63/0x15b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 25427968 heap: 111853568 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fa0b4000/0x0/0x4ffc00000, data 0x14c0d63/0x15b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194390 data_alloc: 218103808 data_used: 380928
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 86433792 unmapped: 25419776 heap: 111853568 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fa0b4000/0x0/0x4ffc00000, data 0x14c0d63/0x15b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 86499328 unmapped: 25354240 heap: 111853568 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 86499328 unmapped: 25354240 heap: 111853568 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 86499328 unmapped: 25354240 heap: 111853568 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 166 ms_handle_reset con 0x5607b31a9800 session 0x5607b319e780
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 86499328 unmapped: 25354240 heap: 111853568 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194390 data_alloc: 218103808 data_used: 380928
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 166 ms_handle_reset con 0x5607b1cbc000 session 0x5607b3f1c3c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 86499328 unmapped: 25354240 heap: 111853568 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 166 ms_handle_reset con 0x5607b31a9000 session 0x5607b48e0000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.077644348s of 14.527206421s, submitted: 110
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 166 ms_handle_reset con 0x5607b31a9c00 session 0x5607b248cd20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fa0b4000/0x0/0x4ffc00000, data 0x14c0d63/0x15b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 86646784 unmapped: 25206784 heap: 111853568 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 87293952 unmapped: 24559616 heap: 111853568 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 93380608 unmapped: 18472960 heap: 111853568 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 166 ms_handle_reset con 0x5607b453d800 session 0x5607b3284d20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 166 ms_handle_reset con 0x5607b31a8000 session 0x5607b248bc20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 93380608 unmapped: 18472960 heap: 111853568 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fa091000/0x0/0x4ffc00000, data 0x14e4d63/0x15dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 166 ms_handle_reset con 0x5607b31a8000 session 0x5607b4b1ab40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1106430 data_alloc: 218103808 data_used: 380928
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 87810048 unmapped: 24043520 heap: 111853568 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 87810048 unmapped: 24043520 heap: 111853568 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 87810048 unmapped: 24043520 heap: 111853568 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fac79000/0x0/0x4ffc00000, data 0x8fcd63/0x9f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fac79000/0x0/0x4ffc00000, data 0x8fcd63/0x9f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 87810048 unmapped: 24043520 heap: 111853568 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fac79000/0x0/0x4ffc00000, data 0x8fcd63/0x9f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 166 ms_handle_reset con 0x5607b1cbc000 session 0x5607b4b1b2c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 87810048 unmapped: 24043520 heap: 111853568 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1107904 data_alloc: 218103808 data_used: 380928
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 166 handle_osd_map epochs [167,167], i have 166, src has [1,167]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 167 ms_handle_reset con 0x5607b31a9000 session 0x5607b4b1b4a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 87654400 unmapped: 24199168 heap: 111853568 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 87654400 unmapped: 24199168 heap: 111853568 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.991695404s of 11.047292709s, submitted: 17
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 167 ms_handle_reset con 0x5607b453d800 session 0x5607b248af00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 167 ms_handle_reset con 0x5607b31a9c00 session 0x5607b4a78780
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 167 ms_handle_reset con 0x5607b31a8400 session 0x5607b4a78d20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 167 ms_handle_reset con 0x5607b1cbc000 session 0x5607b423c3c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 167 ms_handle_reset con 0x5607b31a8000 session 0x5607b5b154a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 167 ms_handle_reset con 0x5607b31a9000 session 0x5607b5b141e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 167 ms_handle_reset con 0x5607b453d800 session 0x5607b2449c20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 87515136 unmapped: 24338432 heap: 111853568 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 167 ms_handle_reset con 0x5607b1cbc000 session 0x5607b539be00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 167 ms_handle_reset con 0x5607b31a8000 session 0x5607b539ba40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 167 ms_handle_reset con 0x5607b31a8400 session 0x5607b539b860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 167 ms_handle_reset con 0x5607b31a9000 session 0x5607b539ad20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 167 ms_handle_reset con 0x5607b4166400 session 0x5607b423c000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa740000/0x0/0x4ffc00000, data 0xe328f0/0xf2d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 167 ms_handle_reset con 0x5607b1cbc000 session 0x5607b24cbe00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 167 ms_handle_reset con 0x5607b31a8400 session 0x5607b4b2d4a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 167 ms_handle_reset con 0x5607b31a9000 session 0x5607b4b2d860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 23658496 heap: 111853568 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 167 ms_handle_reset con 0x5607b54fe400 session 0x5607b4acfe00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 167 handle_osd_map epochs [167,168], i have 167, src has [1,168]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 88219648 unmapped: 23633920 heap: 111853568 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186625 data_alloc: 218103808 data_used: 397312
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 168 heartbeat osd_stat(store_statfs(0x4fa436000/0x0/0x4ffc00000, data 0x113a475/0x1237000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 168 handle_osd_map epochs [168,169], i have 168, src has [1,169]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 92463104 unmapped: 19390464 heap: 111853568 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 169 ms_handle_reset con 0x5607b54fe800 session 0x5607b4aced20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 89448448 unmapped: 22405120 heap: 111853568 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 169 ms_handle_reset con 0x5607b1cbc000 session 0x5607b248ba40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 169 ms_handle_reset con 0x5607b31a8400 session 0x5607b4470d20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 26157056 heap: 116056064 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 169 ms_handle_reset con 0x5607b54fe800 session 0x5607b48e01e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 169 ms_handle_reset con 0x5607b54fec00 session 0x5607b48e03c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 99188736 unmapped: 21069824 heap: 120258560 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 91455488 unmapped: 28803072 heap: 120258560 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 169 heartbeat osd_stat(store_statfs(0x4edc0d000/0x0/0x4ffc00000, data 0xd960094/0xda61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2844574 data_alloc: 218103808 data_used: 3129344
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 97828864 unmapped: 22429696 heap: 120258560 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 169 ms_handle_reset con 0x5607b31a8000 session 0x5607b3284000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 169 ms_handle_reset con 0x5607b1cbc000 session 0x5607b422fe00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 169 ms_handle_reset con 0x5607b54fe000 session 0x5607b3316f00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 169 ms_handle_reset con 0x5607b31a8000 session 0x5607b4ac12c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 169 ms_handle_reset con 0x5607b31a9000 session 0x5607b4471680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 169 ms_handle_reset con 0x5607b54fe400 session 0x5607b20bc5a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 169 ms_handle_reset con 0x5607b31a8000 session 0x5607b4b1a000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 169 heartbeat osd_stat(store_statfs(0x4e940c000/0x0/0x4ffc00000, data 0x121600f6/0x12262000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,1,1,1])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 95084544 unmapped: 25174016 heap: 120258560 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.258764267s of 10.003879547s, submitted: 168
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 169 ms_handle_reset con 0x5607b1cbc000 session 0x5607b4b2de00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 169 ms_handle_reset con 0x5607b31a9000 session 0x5607b4acfc20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 169 ms_handle_reset con 0x5607b54fe000 session 0x5607b32e4b40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 93872128 unmapped: 26386432 heap: 120258560 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 169 handle_osd_map epochs [170,170], i have 169, src has [1,170]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 170 ms_handle_reset con 0x5607b54fe400 session 0x5607b32852c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 170 ms_handle_reset con 0x5607b1cbc000 session 0x5607b3282f00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 93175808 unmapped: 27082752 heap: 120258560 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 170 ms_handle_reset con 0x5607b54ff000 session 0x5607b24492c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 170 ms_handle_reset con 0x5607b54ff400 session 0x5607b2d73860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 93175808 unmapped: 27082752 heap: 120258560 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213986 data_alloc: 218103808 data_used: 413696
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 170 ms_handle_reset con 0x5607b31a8000 session 0x5607b4b2cd20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 90882048 unmapped: 29376512 heap: 120258560 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 170 ms_handle_reset con 0x5607b31a9000 session 0x5607b4b1ba40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 90292224 unmapped: 29966336 heap: 120258560 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 170 heartbeat osd_stat(store_statfs(0x4fac6c000/0x0/0x4ffc00000, data 0x903c4a/0xa02000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 170 ms_handle_reset con 0x5607b31a8000 session 0x5607b422f680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 170 ms_handle_reset con 0x5607b31a9000 session 0x5607b422f0e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 90308608 unmapped: 38346752 heap: 128655360 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 98705408 unmapped: 29949952 heap: 128655360 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 170 ms_handle_reset con 0x5607b54ff000 session 0x5607b422e000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 170 ms_handle_reset con 0x5607b54ff400 session 0x5607b4b1b2c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 90300416 unmapped: 38354944 heap: 128655360 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 170 ms_handle_reset con 0x5607b54fe000 session 0x5607b4b1a780
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 170 ms_handle_reset con 0x5607b54fe000 session 0x5607b4ac10e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1764882 data_alloc: 218103808 data_used: 409600
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 170 ms_handle_reset con 0x5607b31a8000 session 0x5607b4b2c000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 170 ms_handle_reset con 0x5607b31a9000 session 0x5607b4b2d860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 91283456 unmapped: 37371904 heap: 128655360 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 91217920 unmapped: 37437440 heap: 128655360 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 170 ms_handle_reset con 0x5607b54ff000 session 0x5607b4acfe00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.621793747s of 10.399069786s, submitted: 151
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 170 ms_handle_reset con 0x5607b54ff400 session 0x5607b32e4f00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 99631104 unmapped: 29024256 heap: 128655360 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f4e8f000/0x0/0x4ffc00000, data 0x66ddccc/0x67df000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 170 handle_osd_map epochs [170,171], i have 170, src has [1,171]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 171 ms_handle_reset con 0x5607b54ff400 session 0x5607b319e780
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 91250688 unmapped: 37404672 heap: 128655360 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 91283456 unmapped: 37371904 heap: 128655360 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 171 ms_handle_reset con 0x5607b31a8000 session 0x5607b3314b40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2426337 data_alloc: 218103808 data_used: 417792
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 91324416 unmapped: 37330944 heap: 128655360 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 171 heartbeat osd_stat(store_statfs(0x4ef68c000/0x0/0x4ffc00000, data 0xbedf72f/0xbfe2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,0,0,0,0,1,1])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 171 ms_handle_reset con 0x5607b31a9000 session 0x5607b423de00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 171 ms_handle_reset con 0x5607b54fe000 session 0x5607b3eed680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 171 ms_handle_reset con 0x5607b54ff000 session 0x5607b3282960
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 171 ms_handle_reset con 0x5607b31a8000 session 0x5607b33170e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 171 ms_handle_reset con 0x5607b31a9000 session 0x5607b539ba40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 91529216 unmapped: 37126144 heap: 128655360 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 99934208 unmapped: 28721152 heap: 128655360 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 171 heartbeat osd_stat(store_statfs(0x4ebd2d000/0x0/0x4ffc00000, data 0xf83e72f/0xf941000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 91627520 unmapped: 37027840 heap: 128655360 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 36872192 heap: 128655360 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 171 ms_handle_reset con 0x5607b54fe000 session 0x5607b248bc20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3374373 data_alloc: 218103808 data_used: 417792
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 100319232 unmapped: 28336128 heap: 128655360 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 171 ms_handle_reset con 0x5607b54ff400 session 0x5607b20a7e00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 171 heartbeat osd_stat(store_statfs(0x4e752d000/0x0/0x4ffc00000, data 0x1403e72f/0x14141000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 171 ms_handle_reset con 0x5607b54fe400 session 0x5607b2d73c20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 171 ms_handle_reset con 0x5607b31a8000 session 0x5607b44ae000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 91987968 unmapped: 36667392 heap: 128655360 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.732250214s of 10.551809311s, submitted: 69
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 100581376 unmapped: 28073984 heap: 128655360 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 92553216 unmapped: 36102144 heap: 128655360 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 93839360 unmapped: 34816000 heap: 128655360 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4165423 data_alloc: 234881024 data_used: 10170368
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 171 handle_osd_map epochs [171,172], i have 171, src has [1,172]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 97681408 unmapped: 30973952 heap: 128655360 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 172 ms_handle_reset con 0x5607b1cbc000 session 0x5607b539b860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 172 heartbeat osd_stat(store_statfs(0x4e0d01000/0x0/0x4ffc00000, data 0x1a868762/0x1a96d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 97681408 unmapped: 30973952 heap: 128655360 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 97681408 unmapped: 30973952 heap: 128655360 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 97681408 unmapped: 30973952 heap: 128655360 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 172 handle_osd_map epochs [173,173], i have 172, src has [1,173]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 97861632 unmapped: 30793728 heap: 128655360 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 173 ms_handle_reset con 0x5607b54ff400 session 0x5607b422eb40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1493058 data_alloc: 234881024 data_used: 10178560
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 173 ms_handle_reset con 0x5607b31a8400 session 0x5607b248b860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 173 ms_handle_reset con 0x5607b54fe800 session 0x5607b4b1a1e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 173 heartbeat osd_stat(store_statfs(0x4f9cfc000/0x0/0x4ffc00000, data 0x186be90/0x1971000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 97878016 unmapped: 30777344 heap: 128655360 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 97878016 unmapped: 30777344 heap: 128655360 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 97878016 unmapped: 30777344 heap: 128655360 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 173 handle_osd_map epochs [173,174], i have 173, src has [1,174]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.939133644s of 10.358599663s, submitted: 78
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 97910784 unmapped: 30744576 heap: 128655360 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 97910784 unmapped: 30744576 heap: 128655360 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1554996 data_alloc: 234881024 data_used: 10194944
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 174 heartbeat osd_stat(store_statfs(0x4f94be000/0x0/0x4ffc00000, data 0x20a98f3/0x21b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 105242624 unmapped: 23412736 heap: 128655360 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 174 heartbeat osd_stat(store_statfs(0x4f94be000/0x0/0x4ffc00000, data 0x20a98f3/0x21b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 105603072 unmapped: 23052288 heap: 128655360 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 105603072 unmapped: 23052288 heap: 128655360 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 174 ms_handle_reset con 0x5607b1cbc000 session 0x5607b4a79680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 174 handle_osd_map epochs [175,175], i have 174, src has [1,175]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 175 ms_handle_reset con 0x5607b54ff400 session 0x5607b319f2c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 104546304 unmapped: 24109056 heap: 128655360 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 175 handle_osd_map epochs [176,176], i have 175, src has [1,176]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 176 ms_handle_reset con 0x5607b31a8400 session 0x5607b423cd20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 176 ms_handle_reset con 0x5607b31a8000 session 0x5607b3f43e00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 104546304 unmapped: 24109056 heap: 128655360 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 176 heartbeat osd_stat(store_statfs(0x4f94aa000/0x0/0x4ffc00000, data 0x20ba4d2/0x21c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1575937 data_alloc: 234881024 data_used: 10964992
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 104546304 unmapped: 24109056 heap: 128655360 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 176 ms_handle_reset con 0x5607b54fec00 session 0x5607b423c780
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 104546304 unmapped: 24109056 heap: 128655360 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 176 heartbeat osd_stat(store_statfs(0x4f94a6000/0x0/0x4ffc00000, data 0x20bc04f/0x21c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 104546304 unmapped: 24109056 heap: 128655360 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 176 ms_handle_reset con 0x5607b31a8400 session 0x5607b4b1a960
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 176 ms_handle_reset con 0x5607b31a8000 session 0x5607b4ac1680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.080654144s of 10.391241074s, submitted: 93
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 176 ms_handle_reset con 0x5607b54ff400 session 0x5607b3316000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 176 ms_handle_reset con 0x5607b54ff800 session 0x5607b3f1c3c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 176 handle_osd_map epochs [176,177], i have 176, src has [1,177]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 177 ms_handle_reset con 0x5607b54ffc00 session 0x5607b248b4a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 177 ms_handle_reset con 0x5607b54ffc00 session 0x5607b4a78960
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 177 ms_handle_reset con 0x5607b1cbc000 session 0x5607b31c9e00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 103800832 unmapped: 24854528 heap: 128655360 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 103800832 unmapped: 24854528 heap: 128655360 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 177 ms_handle_reset con 0x5607b31a8400 session 0x5607b20bc960
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1582281 data_alloc: 234881024 data_used: 10973184
Oct 11 00:10:05 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct 11 00:10:05 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2524959268' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 177 ms_handle_reset con 0x5607b31a8000 session 0x5607b20bda40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 177 handle_osd_map epochs [178,178], i have 177, src has [1,178]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 24846336 heap: 128655360 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 178 ms_handle_reset con 0x5607b3fa2c00 session 0x5607b33145a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 178 handle_osd_map epochs [179,179], i have 178, src has [1,179]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 103202816 unmapped: 25452544 heap: 128655360 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 179 ms_handle_reset con 0x5607b5ea6800 session 0x5607b423c3c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 103194624 unmapped: 25460736 heap: 128655360 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 179 handle_osd_map epochs [180,180], i have 179, src has [1,180]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 180 ms_handle_reset con 0x5607b53ffc00 session 0x5607b422fe00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 180 ms_handle_reset con 0x5607b5ea6c00 session 0x5607b4a78b40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 180 heartbeat osd_stat(store_statfs(0x4f949e000/0x0/0x4ffc00000, data 0x20c139a/0x21d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 180 ms_handle_reset con 0x5607b1cbc000 session 0x5607b48e01e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 103235584 unmapped: 25419776 heap: 128655360 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 180 ms_handle_reset con 0x5607b3fa2000 session 0x5607b248a000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 180 ms_handle_reset con 0x5607b1cbc000 session 0x5607b248ba40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 180 ms_handle_reset con 0x5607b53ffc00 session 0x5607b248be00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 180 ms_handle_reset con 0x5607b5ea6800 session 0x5607b3314b40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 180 ms_handle_reset con 0x5607b5ea6c00 session 0x5607b3314960
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 103235584 unmapped: 25419776 heap: 128655360 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 180 handle_osd_map epochs [180,181], i have 180, src has [1,181]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 181 ms_handle_reset con 0x5607b3271000 session 0x5607b423cf00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1615447 data_alloc: 234881024 data_used: 11001856
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 181 heartbeat osd_stat(store_statfs(0x4f9198000/0x0/0x4ffc00000, data 0x23c4ada/0x24d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 103243776 unmapped: 25411584 heap: 128655360 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 181 handle_osd_map epochs [181,182], i have 181, src has [1,182]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 182 ms_handle_reset con 0x5607b1cbc000 session 0x5607b423d680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 182 ms_handle_reset con 0x5607b53ffc00 session 0x5607b423de00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 103292928 unmapped: 25362432 heap: 128655360 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 182 ms_handle_reset con 0x5607b5ea6800 session 0x5607b3f1c780
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 182 handle_osd_map epochs [183,183], i have 182, src has [1,183]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 183 ms_handle_reset con 0x5607b3270800 session 0x5607b248d860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 103301120 unmapped: 25354240 heap: 128655360 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 183 handle_osd_map epochs [184,184], i have 183, src has [1,184]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 184 ms_handle_reset con 0x5607b5ea6c00 session 0x5607b422f4a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 184 ms_handle_reset con 0x5607b3270c00 session 0x5607b4ac1a40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 184 ms_handle_reset con 0x5607b1cbc000 session 0x5607b32e5860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.675465584s of 10.073968887s, submitted: 104
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 184 ms_handle_reset con 0x5607b3270800 session 0x5607b422e3c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 103374848 unmapped: 25280512 heap: 128655360 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 184 ms_handle_reset con 0x5607b5ea6800 session 0x5607b4b2c1e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 184 heartbeat osd_stat(store_statfs(0x4f918b000/0x0/0x4ffc00000, data 0x23ca51f/0x24e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 103374848 unmapped: 25280512 heap: 128655360 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 184 handle_osd_map epochs [184,185], i have 184, src has [1,185]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 185 ms_handle_reset con 0x5607b3270400 session 0x5607b33163c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 185 ms_handle_reset con 0x5607b53ffc00 session 0x5607b24bb2c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1635544 data_alloc: 234881024 data_used: 11005952
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 185 ms_handle_reset con 0x5607b1cbc000 session 0x5607b44af860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 185 ms_handle_reset con 0x5607b3270800 session 0x5607b248c960
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 103448576 unmapped: 25206784 heap: 128655360 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 185 handle_osd_map epochs [186,186], i have 185, src has [1,186]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 186 ms_handle_reset con 0x5607b3270c00 session 0x5607b4a792c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 105930752 unmapped: 22724608 heap: 128655360 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 186 ms_handle_reset con 0x5607b53ff400 session 0x5607b3315a40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 186 handle_osd_map epochs [187,187], i have 186, src has [1,187]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 187 ms_handle_reset con 0x5607b1cbc000 session 0x5607b539a780
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 187 ms_handle_reset con 0x5607b31a9000 session 0x5607b539ad20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 187 ms_handle_reset con 0x5607b54fe000 session 0x5607b5b141e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 22691840 heap: 128655360 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 187 handle_osd_map epochs [188,188], i have 187, src has [1,188]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 188 ms_handle_reset con 0x5607b3270800 session 0x5607b48e1680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 101031936 unmapped: 27623424 heap: 128655360 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 101031936 unmapped: 27623424 heap: 128655360 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1458567 data_alloc: 218103808 data_used: 3624960
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa355000/0x0/0x4ffc00000, data 0x11fd047/0x1315000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 101031936 unmapped: 27623424 heap: 128655360 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 101031936 unmapped: 27623424 heap: 128655360 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 101031936 unmapped: 27623424 heap: 128655360 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 101031936 unmapped: 27623424 heap: 128655360 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 188 handle_osd_map epochs [189,189], i have 188, src has [1,189]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.125025749s of 10.572602272s, submitted: 140
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 189 ms_handle_reset con 0x5607b3270c00 session 0x5607b539b0e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 189 heartbeat osd_stat(store_statfs(0x4fa355000/0x0/0x4ffc00000, data 0x11fd047/0x1315000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 189 ms_handle_reset con 0x5607b3270c00 session 0x5607b539b680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 101048320 unmapped: 27607040 heap: 128655360 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1412721 data_alloc: 218103808 data_used: 3620864
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 101056512 unmapped: 27598848 heap: 128655360 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 189 handle_osd_map epochs [189,190], i have 189, src has [1,190]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 189 handle_osd_map epochs [190,190], i have 190, src has [1,190]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 190 ms_handle_reset con 0x5607b1cbc000 session 0x5607b248c1e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 101580800 unmapped: 27074560 heap: 128655360 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 190 ms_handle_reset con 0x5607b31a9000 session 0x5607b3f425a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 190 handle_osd_map epochs [190,191], i have 190, src has [1,191]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 191 ms_handle_reset con 0x5607b3270800 session 0x5607b3f5af00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 101048320 unmapped: 27607040 heap: 128655360 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 191 heartbeat osd_stat(store_statfs(0x4fa688000/0x0/0x4ffc00000, data 0xec91de/0xfe3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 101007360 unmapped: 27648000 heap: 128655360 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 191 handle_osd_map epochs [192,192], i have 191, src has [1,192]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 192 ms_handle_reset con 0x5607b54fe000 session 0x5607b423c5a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 100990976 unmapped: 27664384 heap: 128655360 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1448151 data_alloc: 218103808 data_used: 3641344
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 192 ms_handle_reset con 0x5607b54fe000 session 0x5607b2d72960
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 27656192 heap: 128655360 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 27656192 heap: 128655360 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 192 handle_osd_map epochs [192,193], i have 192, src has [1,193]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 101015552 unmapped: 27639808 heap: 128655360 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 193 ms_handle_reset con 0x5607b1cbc000 session 0x5607b44aad20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fa65d000/0x0/0x4ffc00000, data 0xef37da/0x1010000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 193 ms_handle_reset con 0x5607b31a9000 session 0x5607b4b2d680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 193 ms_handle_reset con 0x5607b3270800 session 0x5607b44aa3c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 101072896 unmapped: 27582464 heap: 128655360 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 101072896 unmapped: 27582464 heap: 128655360 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.925832748s of 11.384179115s, submitted: 138
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 193 ms_handle_reset con 0x5607b3270c00 session 0x5607b44ae780
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1489475 data_alloc: 218103808 data_used: 3641344
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 193 ms_handle_reset con 0x5607b1cbc000 session 0x5607b4aceb40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 193 ms_handle_reset con 0x5607b31a9000 session 0x5607b44aa5a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 193 ms_handle_reset con 0x5607b3270800 session 0x5607b422fe00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 101244928 unmapped: 31612928 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 193 ms_handle_reset con 0x5607b1ff2000 session 0x5607b422e960
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 193 ms_handle_reset con 0x5607b4868400 session 0x5607b319f860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 193 ms_handle_reset con 0x5607b4868400 session 0x5607b4150000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 101285888 unmapped: 31571968 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 101285888 unmapped: 31571968 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 193 heartbeat osd_stat(store_statfs(0x4f9aac000/0x0/0x4ffc00000, data 0x1aa483c/0x1bc2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 101318656 unmapped: 31539200 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 101318656 unmapped: 31539200 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 193 ms_handle_reset con 0x5607b1cbc000 session 0x5607b24ca000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1546235 data_alloc: 218103808 data_used: 3641344
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 193 ms_handle_reset con 0x5607b1ff2000 session 0x5607b5b152c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 101335040 unmapped: 31522816 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 101310464 unmapped: 31547392 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 193 handle_osd_map epochs [193,194], i have 193, src has [1,194]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 194 ms_handle_reset con 0x5607b4868000 session 0x5607b20a74a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 102834176 unmapped: 30023680 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 194 handle_osd_map epochs [195,195], i have 194, src has [1,195]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 195 ms_handle_reset con 0x5607b6008800 session 0x5607b422e960
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 107610112 unmapped: 25247744 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 195 heartbeat osd_stat(store_statfs(0x4f9aa7000/0x0/0x4ffc00000, data 0x1aa641b/0x1bc6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 195 ms_handle_reset con 0x5607b6008800 session 0x5607b4b1b2c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 195 handle_osd_map epochs [196,196], i have 195, src has [1,196]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 196 ms_handle_reset con 0x5607b1cbc000 session 0x5607b4b1b680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 196 ms_handle_reset con 0x5607b1ff2000 session 0x5607b4b1a3c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 107659264 unmapped: 25198592 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 196 heartbeat osd_stat(store_statfs(0x4f9aa4000/0x0/0x4ffc00000, data 0x1aa7f98/0x1bc9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 196 handle_osd_map epochs [196,197], i have 196, src has [1,197]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.703008652s of 10.029575348s, submitted: 84
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 197 ms_handle_reset con 0x5607b4868000 session 0x5607b4b1a000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1638358 data_alloc: 234881024 data_used: 14774272
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 197 ms_handle_reset con 0x5607b4868400 session 0x5607b4b2d680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 107692032 unmapped: 25165824 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 25141248 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 25141248 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 197 handle_osd_map epochs [198,198], i have 197, src has [1,198]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 25141248 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 107732992 unmapped: 25124864 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1641132 data_alloc: 234881024 data_used: 14770176
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 107732992 unmapped: 25124864 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 198 heartbeat osd_stat(store_statfs(0x4f9a99000/0x0/0x4ffc00000, data 0x1aad271/0x1bd1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 198 ms_handle_reset con 0x5607b1cbc000 session 0x5607b4b2c3c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 198 ms_handle_reset con 0x5607b1ff2000 session 0x5607b4b2c1e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 107790336 unmapped: 25067520 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 198 handle_osd_map epochs [198,199], i have 198, src has [1,199]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 199 handle_osd_map epochs [200,200], i have 199, src has [1,200]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 108912640 unmapped: 23945216 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 200 ms_handle_reset con 0x5607b6009800 session 0x5607b20bd4a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 113704960 unmapped: 19152896 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 200 handle_osd_map epochs [201,201], i have 200, src has [1,201]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 201 ms_handle_reset con 0x5607b6008800 session 0x5607b20bda40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 114302976 unmapped: 18554880 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 201 heartbeat osd_stat(store_statfs(0x4f9384000/0x0/0x4ffc00000, data 0x21c08d1/0x22e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 201 handle_osd_map epochs [202,202], i have 201, src has [1,202]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.692653656s of 10.092117310s, submitted: 116
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 202 ms_handle_reset con 0x5607b6008c00 session 0x5607b63f14a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1721615 data_alloc: 234881024 data_used: 15716352
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 202 ms_handle_reset con 0x5607b6009c00 session 0x5607b3f1c780
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 113557504 unmapped: 19300352 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 202 heartbeat osd_stat(store_statfs(0x4f937a000/0x0/0x4ffc00000, data 0x21ca003/0x22f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 19292160 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 202 handle_osd_map epochs [202,203], i have 202, src has [1,203]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 203 handle_osd_map epochs [203,203], i have 203, src has [1,203]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 203 ms_handle_reset con 0x5607b1cbc000 session 0x5607b4a78b40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 113582080 unmapped: 19275776 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 203 ms_handle_reset con 0x5607b1ff2000 session 0x5607b3fa8f00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 203 ms_handle_reset con 0x5607b4868000 session 0x5607b4b2d860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 113623040 unmapped: 19234816 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 203 ms_handle_reset con 0x5607b3270000 session 0x5607b44ab2c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 203 ms_handle_reset con 0x5607b1ff2800 session 0x5607b4ace1e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 113639424 unmapped: 19218432 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 203 heartbeat osd_stat(store_statfs(0x4f937c000/0x0/0x4ffc00000, data 0x1c00bf0/0x1d2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 203 ms_handle_reset con 0x5607b1ff2000 session 0x5607b423d860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 203 ms_handle_reset con 0x5607b1cbc000 session 0x5607b3fa8000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 203 handle_osd_map epochs [203,204], i have 203, src has [1,204]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1662317 data_alloc: 234881024 data_used: 12582912
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 204 ms_handle_reset con 0x5607b6009c00 session 0x5607b3eed860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 111714304 unmapped: 21143552 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 204 handle_osd_map epochs [205,205], i have 204, src has [1,205]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 205 ms_handle_reset con 0x5607b6008800 session 0x5607b2d72b40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 205 ms_handle_reset con 0x5607b6008800 session 0x5607b3f43e00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 205 ms_handle_reset con 0x5607b4868000 session 0x5607b3f5bc20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 111722496 unmapped: 21135360 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 205 handle_osd_map epochs [206,206], i have 205, src has [1,206]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 206 ms_handle_reset con 0x5607b1cbc000 session 0x5607b33154a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 111730688 unmapped: 21127168 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 206 handle_osd_map epochs [207,207], i have 206, src has [1,207]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 207 ms_handle_reset con 0x5607b1ff2800 session 0x5607b32850e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 111788032 unmapped: 21069824 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 207 ms_handle_reset con 0x5607b1ff2000 session 0x5607b44ab4a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 207 ms_handle_reset con 0x5607b6009c00 session 0x5607b5b154a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 207 ms_handle_reset con 0x5607b1cbc000 session 0x5607b2d73e00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 207 handle_osd_map epochs [208,208], i have 207, src has [1,208]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 208 ms_handle_reset con 0x5607b4aef400 session 0x5607b4b2d680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 112041984 unmapped: 20815872 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.434524536s of 10.003930092s, submitted: 198
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 208 ms_handle_reset con 0x5607b1ff2000 session 0x5607b4b1a000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1675471 data_alloc: 234881024 data_used: 12595200
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 208 heartbeat osd_stat(store_statfs(0x4f9524000/0x0/0x4ffc00000, data 0x1c098a2/0x1d39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 208 handle_osd_map epochs [208,209], i have 208, src has [1,209]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 209 ms_handle_reset con 0x5607b1ff2800 session 0x5607b3fa8d20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 209 ms_handle_reset con 0x5607b4868000 session 0x5607b539b680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 112033792 unmapped: 20824064 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 209 ms_handle_reset con 0x5607b1cbc000 session 0x5607b20bda40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 209 ms_handle_reset con 0x5607b1ff2000 session 0x5607b20bd860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 209 handle_osd_map epochs [210,210], i have 209, src has [1,210]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 210 ms_handle_reset con 0x5607b1ff2800 session 0x5607b423d860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 112123904 unmapped: 20733952 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 112156672 unmapped: 20701184 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 210 ms_handle_reset con 0x5607b4aef400 session 0x5607b4471680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 210 handle_osd_map epochs [211,211], i have 210, src has [1,211]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 211 ms_handle_reset con 0x5607b1cbc000 session 0x5607b4ef3860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 112181248 unmapped: 20676608 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 211 ms_handle_reset con 0x5607b1ff2000 session 0x5607b4b2cf00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 112230400 unmapped: 20627456 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 211 heartbeat osd_stat(store_statfs(0x4f951b000/0x0/0x4ffc00000, data 0x1c0ec87/0x1d42000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 211 ms_handle_reset con 0x5607b6009800 session 0x5607b3fa94a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1687197 data_alloc: 234881024 data_used: 12607488
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 211 ms_handle_reset con 0x5607b31a9000 session 0x5607b44ae780
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 211 ms_handle_reset con 0x5607b3270800 session 0x5607b319e3c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 111501312 unmapped: 21356544 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 211 ms_handle_reset con 0x5607b1cbc000 session 0x5607b3f72f00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 211 ms_handle_reset con 0x5607b1ff2000 session 0x5607b44aa960
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 211 ms_handle_reset con 0x5607b31a9000 session 0x5607b48e0b40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 103997440 unmapped: 28860416 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 211 ms_handle_reset con 0x5607b3270800 session 0x5607b3f73860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 211 ms_handle_reset con 0x5607b6009800 session 0x5607b319f2c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 103997440 unmapped: 28860416 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 211 handle_osd_map epochs [212,212], i have 211, src has [1,212]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 103997440 unmapped: 28860416 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 212 ms_handle_reset con 0x5607b1ff2000 session 0x5607b319f860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 212 ms_handle_reset con 0x5607b31a9000 session 0x5607b3f432c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 212 handle_osd_map epochs [212,213], i have 212, src has [1,213]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 104030208 unmapped: 28827648 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.490103722s of 10.030726433s, submitted: 156
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 213 ms_handle_reset con 0x5607b1ff2800 session 0x5607b4ac0780
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1560450 data_alloc: 218103808 data_used: 548864
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 213 handle_osd_map epochs [213,214], i have 213, src has [1,214]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 104103936 unmapped: 28753920 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 214 ms_handle_reset con 0x5607b3270800 session 0x5607b20a7c20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 214 heartbeat osd_stat(store_statfs(0x4fa1ad000/0x0/0x4ffc00000, data 0xf7928d/0x10af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 214 ms_handle_reset con 0x5607b4868000 session 0x5607b44abc20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 214 ms_handle_reset con 0x5607b1cbc000 session 0x5607b423c780
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 214 ms_handle_reset con 0x5607b1ff2000 session 0x5607b248c1e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 214 ms_handle_reset con 0x5607b1ff2800 session 0x5607b3f43680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 104128512 unmapped: 28729344 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 214 ms_handle_reset con 0x5607b31a9000 session 0x5607b2d72960
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 214 ms_handle_reset con 0x5607b4868000 session 0x5607b422e000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 214 ms_handle_reset con 0x5607b3270800 session 0x5607b4a79c20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 214 ms_handle_reset con 0x5607b4868000 session 0x5607b4b1af00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 104144896 unmapped: 28712960 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 104144896 unmapped: 28712960 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 214 handle_osd_map epochs [214,215], i have 214, src has [1,215]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 215 ms_handle_reset con 0x5607b1cbc000 session 0x5607b4a792c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 215 ms_handle_reset con 0x5607b6009c00 session 0x5607b3fa9c20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 215 ms_handle_reset con 0x5607b1ff2800 session 0x5607b48e0b40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 215 ms_handle_reset con 0x5607b31a9000 session 0x5607b4470780
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 105398272 unmapped: 27459584 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 215 ms_handle_reset con 0x5607b1ff2800 session 0x5607b4151c20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 215 ms_handle_reset con 0x5607b1cbc000 session 0x5607b4ace960
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 215 heartbeat osd_stat(store_statfs(0x4f9a98000/0x0/0x4ffc00000, data 0x168df6c/0x17c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 215 handle_osd_map epochs [216,216], i have 215, src has [1,216]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 216 ms_handle_reset con 0x5607b3270800 session 0x5607b4150000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1631624 data_alloc: 218103808 data_used: 573440
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 216 ms_handle_reset con 0x5607b4868000 session 0x5607b48e1860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 216 ms_handle_reset con 0x5607b1ff2000 session 0x5607b24ca1e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 216 ms_handle_reset con 0x5607b1cbc000 session 0x5607b48e0f00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 103833600 unmapped: 29024256 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 216 ms_handle_reset con 0x5607b31a9000 session 0x5607b4a790e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 216 ms_handle_reset con 0x5607b1ff2800 session 0x5607b48e1e00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 216 handle_osd_map epochs [217,217], i have 216, src has [1,217]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 217 ms_handle_reset con 0x5607b3270800 session 0x5607b4b2cb40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 29048832 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 217 ms_handle_reset con 0x5607b1ff2000 session 0x5607b32e45a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 217 ms_handle_reset con 0x5607b1ff2800 session 0x5607b4b2c960
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 217 handle_osd_map epochs [218,218], i have 217, src has [1,218]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 218 ms_handle_reset con 0x5607b1cbc000 session 0x5607b24ba3c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 218 ms_handle_reset con 0x5607b31a9000 session 0x5607b3f5a780
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 218 ms_handle_reset con 0x5607b4868000 session 0x5607b24bbc20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 103661568 unmapped: 29196288 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 10K writes, 44K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s#012Cumulative WAL: 10K writes, 3006 syncs, 3.54 writes per sync, written: 0.03 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4991 writes, 20K keys, 4991 commit groups, 1.0 writes per commit group, ingest: 10.91 MB, 0.02 MB/s#012Interval WAL: 4991 writes, 2137 syncs, 2.34 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 103677952 unmapped: 29179904 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 218 heartbeat osd_stat(store_statfs(0x4f9a08000/0x0/0x4ffc00000, data 0x1694e18/0x17d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 218 handle_osd_map epochs [219,219], i have 218, src has [1,219]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 219 ms_handle_reset con 0x5607b1cbc000 session 0x5607b4ef3860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 29147136 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1566503 data_alloc: 218103808 data_used: 581632
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 219 handle_osd_map epochs [220,220], i have 219, src has [1,220]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.546621323s of 10.216635704s, submitted: 158
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 220 ms_handle_reset con 0x5607b31a9000 session 0x5607b24baf00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 220 ms_handle_reset con 0x5607b1ff2000 session 0x5607b4ef25a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 103727104 unmapped: 29130752 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 220 ms_handle_reset con 0x5607b6009c00 session 0x5607b423de00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 220 ms_handle_reset con 0x5607b1ff2800 session 0x5607b539bc20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 220 handle_osd_map epochs [221,221], i have 220, src has [1,221]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 221 ms_handle_reset con 0x5607b1cbc000 session 0x5607b3f5a960
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 103751680 unmapped: 29106176 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 221 ms_handle_reset con 0x5607b1ff2000 session 0x5607b48e03c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 221 ms_handle_reset con 0x5607b31a9000 session 0x5607b44ab4a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 103768064 unmapped: 29089792 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 221 ms_handle_reset con 0x5607b6009c00 session 0x5607b44aa1e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 103768064 unmapped: 29089792 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 221 ms_handle_reset con 0x5607b6008800 session 0x5607b44af0e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 221 heartbeat osd_stat(store_statfs(0x4fa5b2000/0x0/0x4ffc00000, data 0xb6c219/0xcac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 221 ms_handle_reset con 0x5607b1ff2000 session 0x5607b44aa960
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 103784448 unmapped: 29073408 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 221 ms_handle_reset con 0x5607b31a9000 session 0x5607b63f0b40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: mgrc ms_handle_reset ms_handle_reset con 0x5607b1cbc800
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3841581780
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3841581780,v1:192.168.122.100:6801/3841581780]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1532993 data_alloc: 218103808 data_used: 593920
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: mgrc handle_mgr_configure stats_period=5
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 221 handle_osd_map epochs [222,222], i have 221, src has [1,222]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 222 ms_handle_reset con 0x5607b6009c00 session 0x5607b248c960
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 105029632 unmapped: 27828224 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 222 handle_osd_map epochs [222,223], i have 222, src has [1,223]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 223 ms_handle_reset con 0x5607b4aee000 session 0x5607b4470780
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 223 ms_handle_reset con 0x5607b60b4000 session 0x5607b63f10e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 223 ms_handle_reset con 0x5607b1cbc000 session 0x5607b319ef00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 223 ms_handle_reset con 0x5607b1ff2000 session 0x5607b423d4a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 105046016 unmapped: 27811840 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 105046016 unmapped: 27811840 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 223 heartbeat osd_stat(store_statfs(0x4fa5a9000/0x0/0x4ffc00000, data 0xb700ea/0xcb4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 223 handle_osd_map epochs [224,224], i have 223, src has [1,224]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 223 handle_osd_map epochs [224,224], i have 224, src has [1,224]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 105054208 unmapped: 27803648 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 224 ms_handle_reset con 0x5607b31a9000 session 0x5607b4b1b0e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 224 ms_handle_reset con 0x5607b4aee000 session 0x5607b44ab2c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 224 ms_handle_reset con 0x5607b60b4400 session 0x5607b3fa8000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 224 ms_handle_reset con 0x5607b6009c00 session 0x5607b3f42000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 224 ms_handle_reset con 0x5607b1ff2000 session 0x5607b4471860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 106119168 unmapped: 26738688 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1544979 data_alloc: 218103808 data_used: 610304
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 224 handle_osd_map epochs [224,225], i have 224, src has [1,225]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.839684486s of 10.281339645s, submitted: 117
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 225 ms_handle_reset con 0x5607b31a9000 session 0x5607b4b1ad20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 225 ms_handle_reset con 0x5607b4aee000 session 0x5607b5b141e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 225 ms_handle_reset con 0x5607b1cbc000 session 0x5607b539a3c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 106110976 unmapped: 26746880 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 106110976 unmapped: 26746880 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 225 heartbeat osd_stat(store_statfs(0x4fa5a3000/0x0/0x4ffc00000, data 0xb7375a/0xcba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 225 handle_osd_map epochs [226,227], i have 225, src has [1,227]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 225 handle_osd_map epochs [226,226], i have 227, src has [1,226]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 106160128 unmapped: 26697728 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 227 handle_osd_map epochs [227,228], i have 227, src has [1,228]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 106151936 unmapped: 26705920 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 228 ms_handle_reset con 0x5607b1ff2000 session 0x5607b5b14f00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 228 ms_handle_reset con 0x5607b31a9000 session 0x5607b4b2d4a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 106151936 unmapped: 26705920 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1555626 data_alloc: 218103808 data_used: 622592
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 228 handle_osd_map epochs [228,229], i have 228, src has [1,229]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 229 ms_handle_reset con 0x5607b4aee000 session 0x5607b4b2cb40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 106143744 unmapped: 26714112 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 229 ms_handle_reset con 0x5607b6009c00 session 0x5607b4b2cf00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 229 ms_handle_reset con 0x5607b60b4800 session 0x5607b3285a40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 229 ms_handle_reset con 0x5607b1ff2000 session 0x5607b3285860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 106143744 unmapped: 26714112 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 229 ms_handle_reset con 0x5607b31a9000 session 0x5607b3284960
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 229 heartbeat osd_stat(store_statfs(0x4fa598000/0x0/0x4ffc00000, data 0xb7a007/0xcc5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 229 handle_osd_map epochs [229,230], i have 229, src has [1,230]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 230 ms_handle_reset con 0x5607b4aee000 session 0x5607b3284780
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 106168320 unmapped: 26689536 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 106168320 unmapped: 26689536 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 230 heartbeat osd_stat(store_statfs(0x4fa592000/0x0/0x4ffc00000, data 0xb7bc7e/0xcca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [0,0,1])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 230 ms_handle_reset con 0x5607b6009c00 session 0x5607b63f05a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 230 ms_handle_reset con 0x5607b60b5000 session 0x5607b4a792c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 230 handle_osd_map epochs [231,231], i have 230, src has [1,231]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 231 ms_handle_reset con 0x5607b60b4c00 session 0x5607b44ae960
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 231 ms_handle_reset con 0x5607b60b5000 session 0x5607b4a790e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 106184704 unmapped: 26673152 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 231 ms_handle_reset con 0x5607b1ff2000 session 0x5607b3316960
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1619074 data_alloc: 218103808 data_used: 647168
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 231 ms_handle_reset con 0x5607b31a9000 session 0x5607b4ace5a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 231 ms_handle_reset con 0x5607b4aee000 session 0x5607b4a78b40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.328117371s of 10.032282829s, submitted: 157
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 231 ms_handle_reset con 0x5607b1ff2000 session 0x5607b4acfe00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 107839488 unmapped: 25018368 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 231 ms_handle_reset con 0x5607b31a9000 session 0x5607b63f1a40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 107839488 unmapped: 25018368 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 231 handle_osd_map epochs [231,232], i have 231, src has [1,232]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 25010176 heap: 132857856 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 232 ms_handle_reset con 0x5607b60b4c00 session 0x5607b3fa9680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 232 ms_handle_reset con 0x5607b6009c00 session 0x5607b4a781e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 232 ms_handle_reset con 0x5607b60b5000 session 0x5607b4ac1a40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 232 ms_handle_reset con 0x5607b60b5400 session 0x5607b4aceb40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 232 ms_handle_reset con 0x5607b1ff2000 session 0x5607b24bbe00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 232 ms_handle_reset con 0x5607b31a9000 session 0x5607b4470b40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 112156672 unmapped: 24903680 heap: 137060352 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 232 ms_handle_reset con 0x5607b6009c00 session 0x5607b422ed20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 232 ms_handle_reset con 0x5607b60b4c00 session 0x5607b4b1a1e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 232 ms_handle_reset con 0x5607b1ff2000 session 0x5607b32e5680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 232 ms_handle_reset con 0x5607b31a9000 session 0x5607b5b14000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 232 ms_handle_reset con 0x5607b6009c00 session 0x5607b31c9c20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 232 heartbeat osd_stat(store_statfs(0x4f9d8a000/0x0/0x4ffc00000, data 0x13852a0/0x14d3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 107962368 unmapped: 29097984 heap: 137060352 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1645864 data_alloc: 218103808 data_used: 647168
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 107962368 unmapped: 29097984 heap: 137060352 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 232 ms_handle_reset con 0x5607b60b5400 session 0x5607b248b4a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 107962368 unmapped: 29097984 heap: 137060352 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 232 heartbeat osd_stat(store_statfs(0x4f9d8a000/0x0/0x4ffc00000, data 0x13852a0/0x14d3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 232 handle_osd_map epochs [233,233], i have 232, src has [1,233]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 232 handle_osd_map epochs [233,233], i have 233, src has [1,233]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 233 ms_handle_reset con 0x5607b60b5800 session 0x5607b539b860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 233 ms_handle_reset con 0x5607b1ff2000 session 0x5607b248d0e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 233 ms_handle_reset con 0x5607b31a9000 session 0x5607b44ab0e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 109019136 unmapped: 28041216 heap: 137060352 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 233 ms_handle_reset con 0x5607b6009c00 session 0x5607b33163c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 109019136 unmapped: 28041216 heap: 137060352 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 109019136 unmapped: 28041216 heap: 137060352 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 233 handle_osd_map epochs [234,234], i have 233, src has [1,234]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1653961 data_alloc: 218103808 data_used: 647168
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 109748224 unmapped: 27312128 heap: 137060352 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 234 heartbeat osd_stat(store_statfs(0x4f9d86000/0x0/0x4ffc00000, data 0x1386e5c/0x14d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 110403584 unmapped: 26656768 heap: 137060352 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 234 handle_osd_map epochs [234,235], i have 234, src has [1,235]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.735495567s of 11.950993538s, submitted: 63
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 110403584 unmapped: 26656768 heap: 137060352 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 110403584 unmapped: 26656768 heap: 137060352 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 110403584 unmapped: 26656768 heap: 137060352 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 235 ms_handle_reset con 0x5607b60b6c00 session 0x5607b3315860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1717081 data_alloc: 218103808 data_used: 8876032
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 235 heartbeat osd_stat(store_statfs(0x4f9d7f000/0x0/0x4ffc00000, data 0x138a4bc/0x14dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 110403584 unmapped: 26656768 heap: 137060352 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 110403584 unmapped: 26656768 heap: 137060352 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 235 handle_osd_map epochs [235,236], i have 235, src has [1,236]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 110411776 unmapped: 26648576 heap: 137060352 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 236 heartbeat osd_stat(store_statfs(0x4f9d7f000/0x0/0x4ffc00000, data 0x138a4bc/0x14dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 236 heartbeat osd_stat(store_statfs(0x4f9d7d000/0x0/0x4ffc00000, data 0x138bf1f/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 110411776 unmapped: 26648576 heap: 137060352 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 236 ms_handle_reset con 0x5607b60b7000 session 0x5607b3314000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 236 ms_handle_reset con 0x5607b1ff2000 session 0x5607b3314d20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 236 ms_handle_reset con 0x5607b31a9000 session 0x5607b3314b40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 236 ms_handle_reset con 0x5607b6009c00 session 0x5607b24ba3c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 236 ms_handle_reset con 0x5607b60b6c00 session 0x5607b24bbc20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 236 ms_handle_reset con 0x5607b60b7400 session 0x5607b48e1860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 236 ms_handle_reset con 0x5607b1ff2000 session 0x5607b48e0f00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 236 ms_handle_reset con 0x5607b31a9000 session 0x5607b48e1e00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 236 ms_handle_reset con 0x5607b6009c00 session 0x5607b24ca1e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 110469120 unmapped: 30793728 heap: 141262848 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1789797 data_alloc: 218103808 data_used: 8888320
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 112484352 unmapped: 28778496 heap: 141262848 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 236 ms_handle_reset con 0x5607b60b6c00 session 0x5607b4150000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 115187712 unmapped: 26075136 heap: 141262848 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.757063866s of 10.082080841s, submitted: 88
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 113410048 unmapped: 27852800 heap: 141262848 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 236 ms_handle_reset con 0x5607b24b8400 session 0x5607b24bb4a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 118366208 unmapped: 22896640 heap: 141262848 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 236 heartbeat osd_stat(store_statfs(0x4f8cab000/0x0/0x4ffc00000, data 0x245df42/0x25b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [1])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 236 heartbeat osd_stat(store_statfs(0x4f8cab000/0x0/0x4ffc00000, data 0x245df42/0x25b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 122077184 unmapped: 19185664 heap: 141262848 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 236 ms_handle_reset con 0x5607b60b7800 session 0x5607b3fa9c20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 236 ms_handle_reset con 0x5607b60b7c00 session 0x5607b63f03c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1923718 data_alloc: 234881024 data_used: 19476480
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 236 ms_handle_reset con 0x5607b1ff2000 session 0x5607b44ab2c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 116858880 unmapped: 24403968 heap: 141262848 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 236 heartbeat osd_stat(store_statfs(0x4f96cf000/0x0/0x4ffc00000, data 0x1a39f1f/0x1b8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 116858880 unmapped: 24403968 heap: 141262848 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 236 handle_osd_map epochs [236,237], i have 236, src has [1,237]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 237 ms_handle_reset con 0x5607b31a9000 session 0x5607b63f10e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 116867072 unmapped: 24395776 heap: 141262848 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 237 ms_handle_reset con 0x5607b6009c00 session 0x5607b248d0e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 237 ms_handle_reset con 0x5607b1ff2000 session 0x5607b248c960
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 237 ms_handle_reset con 0x5607b31a9000 session 0x5607b5b141e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 237 ms_handle_reset con 0x5607b60b7800 session 0x5607b4acfe00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 237 ms_handle_reset con 0x5607b60b7c00 session 0x5607b4ace5a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 116875264 unmapped: 24387584 heap: 141262848 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 116875264 unmapped: 24387584 heap: 141262848 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1818925 data_alloc: 234881024 data_used: 9515008
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 237 ms_handle_reset con 0x5607b3302400 session 0x5607b4ef23c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 237 handle_osd_map epochs [238,238], i have 237, src has [1,238]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 116883456 unmapped: 24379392 heap: 141262848 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 238 ms_handle_reset con 0x5607b539fc00 session 0x5607b3284d20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 238 ms_handle_reset con 0x5607b1ff2000 session 0x5607b20bd4a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 238 ms_handle_reset con 0x5607b60b7800 session 0x5607b4ef2b40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 238 ms_handle_reset con 0x5607b31a9000 session 0x5607b3316f00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 238 handle_osd_map epochs [239,239], i have 238, src has [1,239]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 239 ms_handle_reset con 0x5607b4aeec00 session 0x5607b20a65a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 116908032 unmapped: 24354816 heap: 141262848 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 239 heartbeat osd_stat(store_statfs(0x4f91ba000/0x0/0x4ffc00000, data 0x1f4d66d/0x20a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 239 ms_handle_reset con 0x5607b1ff2000 session 0x5607b3283a40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 239 handle_osd_map epochs [239,240], i have 239, src has [1,240]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.010469437s of 10.242606163s, submitted: 55
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 116916224 unmapped: 24346624 heap: 141262848 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 240 ms_handle_reset con 0x5607b60b7800 session 0x5607b48e0000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 240 heartbeat osd_stat(store_statfs(0x4f91b2000/0x0/0x4ffc00000, data 0x1f50d9f/0x20aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 240 handle_osd_map epochs [241,241], i have 240, src has [1,241]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 241 ms_handle_reset con 0x5607b4aef800 session 0x5607b32e5680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 24313856 heap: 141262848 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 241 ms_handle_reset con 0x5607b60b7c00 session 0x5607b4ef21e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 241 ms_handle_reset con 0x5607b539fc00 session 0x5607b44ae000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 24313856 heap: 141262848 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1863305 data_alloc: 234881024 data_used: 10371072
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 241 heartbeat osd_stat(store_statfs(0x4f91b1000/0x0/0x4ffc00000, data 0x1f52ad2/0x20ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 241 ms_handle_reset con 0x5607b1ff2000 session 0x5607b4470000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 116957184 unmapped: 24305664 heap: 141262848 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 241 handle_osd_map epochs [241,242], i have 241, src has [1,242]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 242 ms_handle_reset con 0x5607b60b7800 session 0x5607b48e05a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 116957184 unmapped: 24305664 heap: 141262848 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 242 ms_handle_reset con 0x5607b4aee400 session 0x5607b3f1da40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 242 ms_handle_reset con 0x5607b60b7c00 session 0x5607b4acf860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 242 handle_osd_map epochs [243,243], i have 242, src has [1,243]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 243 ms_handle_reset con 0x5607b4af1c00 session 0x5607b33163c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 243 ms_handle_reset con 0x5607b4aef800 session 0x5607b4470d20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 116989952 unmapped: 24272896 heap: 141262848 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 116989952 unmapped: 24272896 heap: 141262848 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 243 handle_osd_map epochs [244,244], i have 243, src has [1,244]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 244 ms_handle_reset con 0x5607b1ff2000 session 0x5607b2449e00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 117030912 unmapped: 24231936 heap: 141262848 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1873850 data_alloc: 234881024 data_used: 10387456
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 244 heartbeat osd_stat(store_statfs(0x4f91a6000/0x0/0x4ffc00000, data 0x1f57e21/0x20b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 117030912 unmapped: 24231936 heap: 141262848 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 244 ms_handle_reset con 0x5607b4aee400 session 0x5607b2d72f00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 244 ms_handle_reset con 0x5607b60b7800 session 0x5607b539b860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 244 handle_osd_map epochs [245,245], i have 244, src has [1,245]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 117030912 unmapped: 24231936 heap: 141262848 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 245 ms_handle_reset con 0x5607b60b7c00 session 0x5607b4b2d0e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 117022720 unmapped: 24240128 heap: 141262848 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 245 handle_osd_map epochs [246,246], i have 245, src has [1,246]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.823098183s of 10.239237785s, submitted: 86
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 246 ms_handle_reset con 0x5607b60b8400 session 0x5607b48e1c20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 246 ms_handle_reset con 0x5607b1ff2000 session 0x5607b3eec780
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 246 heartbeat osd_stat(store_statfs(0x4f91a0000/0x0/0x4ffc00000, data 0x1f5b6b5/0x20bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [0,0,0,0,1])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 246 ms_handle_reset con 0x5607b4af1c00 session 0x5607b2d73e00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 117022720 unmapped: 24240128 heap: 141262848 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 24477696 heap: 141262848 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 246 handle_osd_map epochs [246,247], i have 246, src has [1,247]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 247 handle_osd_map epochs [247,247], i have 247, src has [1,247]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 247 ms_handle_reset con 0x5607b4aef800 session 0x5607b4a79c20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 247 ms_handle_reset con 0x5607b4aee400 session 0x5607b539be00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1934996 data_alloc: 234881024 data_used: 10412032
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 118407168 unmapped: 22855680 heap: 141262848 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 247 handle_osd_map epochs [247,248], i have 247, src has [1,248]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 247 handle_osd_map epochs [248,248], i have 248, src has [1,248]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 248 ms_handle_reset con 0x5607b4aee400 session 0x5607b3f73e00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 248 ms_handle_reset con 0x5607b1ff2000 session 0x5607b63f1680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 118562816 unmapped: 22700032 heap: 141262848 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 248 handle_osd_map epochs [248,249], i have 248, src has [1,249]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 249 heartbeat osd_stat(store_statfs(0x4f8b10000/0x0/0x4ffc00000, data 0x25e911b/0x274d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 118562816 unmapped: 22700032 heap: 141262848 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 249 ms_handle_reset con 0x5607b60b6c00 session 0x5607b4aceb40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 249 ms_handle_reset con 0x5607b4aef800 session 0x5607b4b2da40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 118562816 unmapped: 22700032 heap: 141262848 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 249 ms_handle_reset con 0x5607b60b5400 session 0x5607b5b14b40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 249 ms_handle_reset con 0x5607b60b5c00 session 0x5607b3fa8f00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 249 heartbeat osd_stat(store_statfs(0x4f8b0c000/0x0/0x4ffc00000, data 0x25eabaa/0x2750000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 249 ms_handle_reset con 0x5607b4af1c00 session 0x5607b539a000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 249 handle_osd_map epochs [250,250], i have 249, src has [1,250]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 250 ms_handle_reset con 0x5607b4aee400 session 0x5607b248c960
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 115564544 unmapped: 25698304 heap: 141262848 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 250 ms_handle_reset con 0x5607b1ff2000 session 0x5607b20bc960
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1777920 data_alloc: 218103808 data_used: 6156288
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 250 handle_osd_map epochs [250,251], i have 250, src has [1,251]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 251 ms_handle_reset con 0x5607b4aef800 session 0x5607b24ca1e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 115572736 unmapped: 25690112 heap: 141262848 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 251 ms_handle_reset con 0x5607b1ff2000 session 0x5607b3282960
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 251 ms_handle_reset con 0x5607b31a8c00 session 0x5607b24ba960
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 251 ms_handle_reset con 0x5607b4aee400 session 0x5607b33145a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 251 handle_osd_map epochs [252,252], i have 251, src has [1,252]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 25665536 heap: 141262848 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 252 heartbeat osd_stat(store_statfs(0x4f99ba000/0x0/0x4ffc00000, data 0x173a381/0x18a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 252 handle_osd_map epochs [253,253], i have 252, src has [1,253]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 253 ms_handle_reset con 0x5607b60b5c00 session 0x5607b423c5a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 253 ms_handle_reset con 0x5607b60b5400 session 0x5607b4b2c780
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 115654656 unmapped: 25608192 heap: 141262848 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.221049309s of 10.935215950s, submitted: 202
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 115662848 unmapped: 25600000 heap: 141262848 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 253 handle_osd_map epochs [253,254], i have 253, src has [1,254]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 115662848 unmapped: 25600000 heap: 141262848 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1790364 data_alloc: 218103808 data_used: 6164480
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 254 handle_osd_map epochs [255,255], i have 254, src has [1,255]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 255 heartbeat osd_stat(store_statfs(0x4f99b1000/0x0/0x4ffc00000, data 0x17415ee/0x18ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 115662848 unmapped: 25600000 heap: 141262848 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 255 handle_osd_map epochs [256,256], i have 255, src has [1,256]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 256 ms_handle_reset con 0x5607b60b6c00 session 0x5607b4a79a40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 256 ms_handle_reset con 0x5607b60b6c00 session 0x5607b5b145a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 116719616 unmapped: 24543232 heap: 141262848 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 256 handle_osd_map epochs [256,257], i have 256, src has [1,257]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 116760576 unmapped: 24502272 heap: 141262848 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 257 heartbeat osd_stat(store_statfs(0x4f99ac000/0x0/0x4ffc00000, data 0x1744d3e/0x18b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 117907456 unmapped: 23355392 heap: 141262848 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 118022144 unmapped: 23240704 heap: 141262848 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1796842 data_alloc: 218103808 data_used: 6160384
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 118022144 unmapped: 23240704 heap: 141262848 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 118022144 unmapped: 23240704 heap: 141262848 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 257 heartbeat osd_stat(store_statfs(0x4f99aa000/0x0/0x4ffc00000, data 0x17477bd/0x18b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 257 handle_osd_map epochs [258,258], i have 257, src has [1,258]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 118038528 unmapped: 23224320 heap: 141262848 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 118038528 unmapped: 23224320 heap: 141262848 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 118038528 unmapped: 23224320 heap: 141262848 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1801016 data_alloc: 218103808 data_used: 6168576
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 118038528 unmapped: 23224320 heap: 141262848 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 258 heartbeat osd_stat(store_statfs(0x4f99a6000/0x0/0x4ffc00000, data 0x1749220/0x18b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 118038528 unmapped: 23224320 heap: 141262848 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 118038528 unmapped: 23224320 heap: 141262848 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 118038528 unmapped: 23224320 heap: 141262848 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 118038528 unmapped: 23224320 heap: 141262848 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1801016 data_alloc: 218103808 data_used: 6168576
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.626913071s of 17.109920502s, submitted: 156
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 258 heartbeat osd_stat(store_statfs(0x4f99a6000/0x0/0x4ffc00000, data 0x1749220/0x18b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 118054912 unmapped: 23207936 heap: 141262848 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 118054912 unmapped: 23207936 heap: 141262848 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 258 heartbeat osd_stat(store_statfs(0x4f99a6000/0x0/0x4ffc00000, data 0x1749220/0x18b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 118054912 unmapped: 23207936 heap: 141262848 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 258 heartbeat osd_stat(store_statfs(0x4f99a6000/0x0/0x4ffc00000, data 0x1749220/0x18b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 118071296 unmapped: 23191552 heap: 141262848 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 258 ms_handle_reset con 0x5607b1ff2000 session 0x5607b4b1be00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 118071296 unmapped: 23191552 heap: 141262848 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1800579 data_alloc: 218103808 data_used: 6168576
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 118079488 unmapped: 23183360 heap: 141262848 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 258 handle_osd_map epochs [259,259], i have 258, src has [1,259]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 259 ms_handle_reset con 0x5607b4aee400 session 0x5607b3fa9860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 118095872 unmapped: 23166976 heap: 141262848 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 259 ms_handle_reset con 0x5607b60b5400 session 0x5607b3fa94a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 118095872 unmapped: 23166976 heap: 141262848 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 259 handle_osd_map epochs [260,260], i have 259, src has [1,260]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 260 ms_handle_reset con 0x5607b60b5c00 session 0x5607b4a78780
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 260 ms_handle_reset con 0x5607b4aee400 session 0x5607b24bbc20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 260 ms_handle_reset con 0x5607b1ff2000 session 0x5607b48e1860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 260 heartbeat osd_stat(store_statfs(0x4f99a2000/0x0/0x4ffc00000, data 0x174bd9d/0x18bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [0,0,0,0,0,1])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 127082496 unmapped: 16605184 heap: 143687680 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 260 ms_handle_reset con 0x5607b60b5400 session 0x5607b214c3c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 260 ms_handle_reset con 0x5607b60b6c00 session 0x5607b4ac1c20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 260 handle_osd_map epochs [261,261], i have 260, src has [1,261]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 128139264 unmapped: 15548416 heap: 143687680 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 261 ms_handle_reset con 0x5607b60b7800 session 0x5607b4b2c780
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1937045 data_alloc: 234881024 data_used: 12238848
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 261 handle_osd_map epochs [262,262], i have 261, src has [1,262]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 262 ms_handle_reset con 0x5607b60b7800 session 0x5607b20bda40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 262 ms_handle_reset con 0x5607b60b8400 session 0x5607b3eec3c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 128139264 unmapped: 15548416 heap: 143687680 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 262 ms_handle_reset con 0x5607b1ff2000 session 0x5607b4ef3680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 262 ms_handle_reset con 0x5607b4aee400 session 0x5607b44ae1e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 262 ms_handle_reset con 0x5607b60b5400 session 0x5607b423c780
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 128139264 unmapped: 15548416 heap: 143687680 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.177992821s of 11.485923767s, submitted: 40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 262 ms_handle_reset con 0x5607b1ff2000 session 0x5607b4ef25a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 262 ms_handle_reset con 0x5607b60b5400 session 0x5607b4a78d20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 125583360 unmapped: 18104320 heap: 143687680 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 125583360 unmapped: 18104320 heap: 143687680 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 262 heartbeat osd_stat(store_statfs(0x4f8dc4000/0x0/0x4ffc00000, data 0x2324106/0x249a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 125583360 unmapped: 18104320 heap: 143687680 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1937265 data_alloc: 234881024 data_used: 12247040
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 262 heartbeat osd_stat(store_statfs(0x4f8dc4000/0x0/0x4ffc00000, data 0x2324106/0x249a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 262 ms_handle_reset con 0x5607b4aee400 session 0x5607b4ef30e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 262 ms_handle_reset con 0x5607b60b8400 session 0x5607b3314960
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 262 ms_handle_reset con 0x5607b60b7800 session 0x5607b3fa8d20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 262 ms_handle_reset con 0x5607b60b7800 session 0x5607b422eb40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 262 ms_handle_reset con 0x5607b1ff2000 session 0x5607b4ace000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 125583360 unmapped: 18104320 heap: 143687680 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 262 ms_handle_reset con 0x5607b4aee400 session 0x5607b423d2c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 262 ms_handle_reset con 0x5607b60b5400 session 0x5607b423cf00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 262 ms_handle_reset con 0x5607b60b8400 session 0x5607b4ef32c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 262 ms_handle_reset con 0x5607b60b8400 session 0x5607b4ef3a40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 262 ms_handle_reset con 0x5607b1ff2000 session 0x5607b422ef00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 262 ms_handle_reset con 0x5607b4aee400 session 0x5607b422fc20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 125607936 unmapped: 18079744 heap: 143687680 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 125607936 unmapped: 18079744 heap: 143687680 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 125607936 unmapped: 18079744 heap: 143687680 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 262 handle_osd_map epochs [263,263], i have 262, src has [1,263]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 125607936 unmapped: 18079744 heap: 143687680 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1970105 data_alloc: 234881024 data_used: 12279808
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 263 ms_handle_reset con 0x5607b60b5400 session 0x5607b44aa3c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 125616128 unmapped: 18071552 heap: 143687680 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 263 heartbeat osd_stat(store_statfs(0x4f8a3d000/0x0/0x4ffc00000, data 0x26a7b8c/0x2820000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 263 ms_handle_reset con 0x5607b60b8000 session 0x5607b4ace960
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 263 heartbeat osd_stat(store_statfs(0x4f8a3d000/0x0/0x4ffc00000, data 0x26a7b8c/0x2820000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 125616128 unmapped: 18071552 heap: 143687680 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 263 ms_handle_reset con 0x5607b60b8000 session 0x5607b24ca3c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 125616128 unmapped: 18071552 heap: 143687680 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 263 ms_handle_reset con 0x5607b1ff2000 session 0x5607b24cb4a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.970448494s of 11.088361740s, submitted: 26
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 263 ms_handle_reset con 0x5607b4aee400 session 0x5607b33172c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 263 ms_handle_reset con 0x5607b60b5400 session 0x5607b3314780
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 125616128 unmapped: 18071552 heap: 143687680 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 263 ms_handle_reset con 0x5607b53fe000 session 0x5607b3316780
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 263 heartbeat osd_stat(store_statfs(0x4f8a3d000/0x0/0x4ffc00000, data 0x26a7baf/0x2821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 263 ms_handle_reset con 0x5607b1ff2000 session 0x5607b4b2c1e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 125616128 unmapped: 18071552 heap: 143687680 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 263 ms_handle_reset con 0x5607b4aee400 session 0x5607b3f72f00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1976574 data_alloc: 234881024 data_used: 12296192
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 125624320 unmapped: 18063360 heap: 143687680 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 125632512 unmapped: 18055168 heap: 143687680 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 263 ms_handle_reset con 0x5607b60b7800 session 0x5607b20bd860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 263 ms_handle_reset con 0x5607b60b6c00 session 0x5607b248ba40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 127221760 unmapped: 16465920 heap: 143687680 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 263 ms_handle_reset con 0x5607b54ff800 session 0x5607b4ac1680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 127246336 unmapped: 16441344 heap: 143687680 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 127246336 unmapped: 16441344 heap: 143687680 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2025782 data_alloc: 234881024 data_used: 19316736
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 263 heartbeat osd_stat(store_statfs(0x4f8a3c000/0x0/0x4ffc00000, data 0x26a7b9c/0x2821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 263 ms_handle_reset con 0x5607b1ff2000 session 0x5607b4ac0780
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 263 ms_handle_reset con 0x5607b4aee400 session 0x5607b4ac1e00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 127246336 unmapped: 16441344 heap: 143687680 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 127246336 unmapped: 16441344 heap: 143687680 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 127246336 unmapped: 16441344 heap: 143687680 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 263 heartbeat osd_stat(store_statfs(0x4f8a3d000/0x0/0x4ffc00000, data 0x26a7b8c/0x2820000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 127246336 unmapped: 16441344 heap: 143687680 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.895448685s of 10.970251083s, submitted: 21
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 263 heartbeat osd_stat(store_statfs(0x4f8a3d000/0x0/0x4ffc00000, data 0x26a7b8c/0x2820000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 127279104 unmapped: 16408576 heap: 143687680 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2024902 data_alloc: 234881024 data_used: 19308544
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 127279104 unmapped: 16408576 heap: 143687680 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 127279104 unmapped: 16408576 heap: 143687680 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 263 heartbeat osd_stat(store_statfs(0x4f8a3d000/0x0/0x4ffc00000, data 0x26a7b8c/0x2820000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [0,0,0,0,1])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 131383296 unmapped: 12304384 heap: 143687680 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 263 ms_handle_reset con 0x5607b60b6c00 session 0x5607b3f425a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 131866624 unmapped: 21340160 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 263 heartbeat osd_stat(store_statfs(0x4f77f4000/0x0/0x4ffc00000, data 0x38f1b8c/0x3a6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [0,0,0,0,1])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 263 handle_osd_map epochs [263,264], i have 263, src has [1,264]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 264 ms_handle_reset con 0x5607b60b7800 session 0x5607b3f42d20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 130867200 unmapped: 22339584 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2194346 data_alloc: 234881024 data_used: 22712320
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 130867200 unmapped: 22339584 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 264 heartbeat osd_stat(store_statfs(0x4f77e9000/0x0/0x4ffc00000, data 0x38f976b/0x3a74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 130867200 unmapped: 22339584 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 130867200 unmapped: 22339584 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 264 heartbeat osd_stat(store_statfs(0x4f77e9000/0x0/0x4ffc00000, data 0x38f976b/0x3a74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 264 ms_handle_reset con 0x5607b54ff400 session 0x5607b3f42000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 130867200 unmapped: 22339584 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 130867200 unmapped: 22339584 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2194346 data_alloc: 234881024 data_used: 22712320
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.388216972s of 11.668557167s, submitted: 53
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 130867200 unmapped: 22339584 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 130867200 unmapped: 22339584 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 264 heartbeat osd_stat(store_statfs(0x4f77e9000/0x0/0x4ffc00000, data 0x38f976b/0x3a74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 264 ms_handle_reset con 0x5607b4aee400 session 0x5607b20a7c20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 130867200 unmapped: 22339584 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 130867200 unmapped: 22339584 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 264 heartbeat osd_stat(store_statfs(0x4f77ea000/0x0/0x4ffc00000, data 0x38f976b/0x3a74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 264 ms_handle_reset con 0x5607b1ff2000 session 0x5607b539af00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 264 ms_handle_reset con 0x5607b60b6c00 session 0x5607b5b14780
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 129802240 unmapped: 23404544 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2195701 data_alloc: 234881024 data_used: 22712320
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 264 ms_handle_reset con 0x5607b60b5400 session 0x5607b3fa83c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 264 ms_handle_reset con 0x5607b60b8000 session 0x5607b44aa3c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 128417792 unmapped: 24788992 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 264 ms_handle_reset con 0x5607b4aee400 session 0x5607b3283860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 264 heartbeat osd_stat(store_statfs(0x4f77c6000/0x0/0x4ffc00000, data 0x391d76b/0x3a98000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 128417792 unmapped: 24788992 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 264 heartbeat osd_stat(store_statfs(0x4f7d7f000/0x0/0x4ffc00000, data 0x2f5575b/0x30cf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 135192576 unmapped: 18014208 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 264 heartbeat osd_stat(store_statfs(0x4f7d7f000/0x0/0x4ffc00000, data 0x2f5575b/0x30cf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 135225344 unmapped: 17981440 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 135225344 unmapped: 17981440 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 264 ms_handle_reset con 0x5607b60b8000 session 0x5607b4ac0b40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 264 ms_handle_reset con 0x5607b1ff2000 session 0x5607b48e0f00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2172667 data_alloc: 251658240 data_used: 29437952
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 264 ms_handle_reset con 0x5607b60b5400 session 0x5607b248a1e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 135225344 unmapped: 17981440 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 264 heartbeat osd_stat(store_statfs(0x4f7da3000/0x0/0x4ffc00000, data 0x2f3175b/0x30ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.435015678s of 10.512440681s, submitted: 17
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 264 ms_handle_reset con 0x5607b60b6c00 session 0x5607b32852c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 264 ms_handle_reset con 0x5607b1ff2000 session 0x5607b4b2c000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 264 ms_handle_reset con 0x5607b4aee400 session 0x5607b4ef3c20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 135233536 unmapped: 17973248 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 135233536 unmapped: 17973248 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 264 ms_handle_reset con 0x5607b60b5400 session 0x5607b539b680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 135233536 unmapped: 17973248 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 264 ms_handle_reset con 0x5607b60b6c00 session 0x5607b539ad20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 264 heartbeat osd_stat(store_statfs(0x4f7da4000/0x0/0x4ffc00000, data 0x2f316f9/0x30aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 264 handle_osd_map epochs [265,265], i have 264, src has [1,265]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 264 handle_osd_map epochs [265,265], i have 265, src has [1,265]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 265 ms_handle_reset con 0x5607b60b8000 session 0x5607b214dc20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 128360448 unmapped: 24846336 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2023986 data_alloc: 234881024 data_used: 18845696
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 128360448 unmapped: 24846336 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 265 heartbeat osd_stat(store_statfs(0x4f87d0000/0x0/0x4ffc00000, data 0x25032ca/0x267d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 128360448 unmapped: 24846336 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 128360448 unmapped: 24846336 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 265 heartbeat osd_stat(store_statfs(0x4f87d0000/0x0/0x4ffc00000, data 0x25032ca/0x267d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 128360448 unmapped: 24846336 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 128360448 unmapped: 24846336 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 265 ms_handle_reset con 0x5607b4aee400 session 0x5607b20bda40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2032029 data_alloc: 234881024 data_used: 20262912
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 129015808 unmapped: 24190976 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.160775185s of 10.302958488s, submitted: 39
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 265 ms_handle_reset con 0x5607b1ff2000 session 0x5607b3316000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 129015808 unmapped: 24190976 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 265 handle_osd_map epochs [265,266], i have 265, src has [1,266]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 266 heartbeat osd_stat(store_statfs(0x4f87d0000/0x0/0x4ffc00000, data 0x25032da/0x267e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 266 ms_handle_reset con 0x5607b60b6c00 session 0x5607b423d4a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 266 ms_handle_reset con 0x5607b60b7800 session 0x5607b44ae1e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 130973696 unmapped: 22233088 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 266 ms_handle_reset con 0x5607b453c400 session 0x5607b4a79e00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 266 ms_handle_reset con 0x5607b1ff2000 session 0x5607b44aeb40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 266 ms_handle_reset con 0x5607b60b8400 session 0x5607b248c1e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 266 ms_handle_reset con 0x5607b53ffc00 session 0x5607b248c960
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 18030592 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 266 ms_handle_reset con 0x5607b453c400 session 0x5607b4ac05a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 132882432 unmapped: 20324352 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2111026 data_alloc: 234881024 data_used: 21340160
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 132882432 unmapped: 20324352 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 266 ms_handle_reset con 0x5607b4aee400 session 0x5607b5b14d20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 266 heartbeat osd_stat(store_statfs(0x4f7e7c000/0x0/0x4ffc00000, data 0x2e55d1a/0x2fd1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 266 ms_handle_reset con 0x5607b1ff2000 session 0x5607b4b1be00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 266 ms_handle_reset con 0x5607b4aee400 session 0x5607b48e1e00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 132939776 unmapped: 20267008 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 132939776 unmapped: 20267008 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 266 handle_osd_map epochs [267,267], i have 266, src has [1,267]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 267 ms_handle_reset con 0x5607b453c400 session 0x5607b20a74a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 128499712 unmapped: 24707072 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 267 ms_handle_reset con 0x5607b53ffc00 session 0x5607b4b1a960
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 267 heartbeat osd_stat(store_statfs(0x4f8a4c000/0x0/0x4ffc00000, data 0x22848eb/0x2401000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 267 handle_osd_map epochs [268,268], i have 267, src has [1,268]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 268 ms_handle_reset con 0x5607b60b8400 session 0x5607b3fa8960
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 128507904 unmapped: 24698880 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1981742 data_alloc: 234881024 data_used: 12304384
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 128507904 unmapped: 24698880 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 268 ms_handle_reset con 0x5607b31a9000 session 0x5607b3282f00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 268 ms_handle_reset con 0x5607b4aeec00 session 0x5607b422ed20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.608153343s of 10.195296288s, submitted: 133
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 119332864 unmapped: 33873920 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 268 ms_handle_reset con 0x5607b1ff2000 session 0x5607b4a78d20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 119332864 unmapped: 33873920 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 268 handle_osd_map epochs [269,269], i have 268, src has [1,269]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 269 ms_handle_reset con 0x5607b453c400 session 0x5607b5b15a40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 118620160 unmapped: 34586624 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 269 ms_handle_reset con 0x5607b4aee400 session 0x5607b4ace1e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 269 ms_handle_reset con 0x5607b1ff2000 session 0x5607b3f73860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 269 heartbeat osd_stat(store_statfs(0x4f95e4000/0x0/0x4ffc00000, data 0x16e9f57/0x1869000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 269 ms_handle_reset con 0x5607b31a9000 session 0x5607b48e0b40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 269 ms_handle_reset con 0x5607b453c400 session 0x5607b3eed860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 269 ms_handle_reset con 0x5607b4aeec00 session 0x5607b44ae000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 118611968 unmapped: 34594816 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 269 ms_handle_reset con 0x5607b53ffc00 session 0x5607b44aed20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1844300 data_alloc: 218103808 data_used: 835584
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 269 ms_handle_reset con 0x5607b1ff2000 session 0x5607b4151c20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 118595584 unmapped: 34611200 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 269 heartbeat osd_stat(store_statfs(0x4f95e4000/0x0/0x4ffc00000, data 0x16e9f57/0x1869000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 269 ms_handle_reset con 0x5607b4aeec00 session 0x5607b5b14b40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 118611968 unmapped: 34594816 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 269 handle_osd_map epochs [269,270], i have 269, src has [1,270]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 269 ms_handle_reset con 0x5607b60b6c00 session 0x5607b63f1680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 31113216 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 31113216 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 31113216 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1929420 data_alloc: 234881024 data_used: 12472320
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 31113216 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 270 heartbeat osd_stat(store_statfs(0x4f95e3000/0x0/0x4ffc00000, data 0x16eb948/0x186a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 31113216 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 31113216 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 31113216 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 270 heartbeat osd_stat(store_statfs(0x4f95e3000/0x0/0x4ffc00000, data 0x16eb948/0x186a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 31113216 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1929420 data_alloc: 234881024 data_used: 12472320
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 31113216 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 122093568 unmapped: 31113216 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.638618469s of 15.897966385s, submitted: 70
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 129081344 unmapped: 24125440 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 270 heartbeat osd_stat(store_statfs(0x4fa0f8000/0x0/0x4ffc00000, data 0x1bf7948/0x1d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 130580480 unmapped: 22626304 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 130883584 unmapped: 22323200 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2044416 data_alloc: 234881024 data_used: 14364672
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 131047424 unmapped: 22159360 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 270 heartbeat osd_stat(store_statfs(0x4f980b000/0x0/0x4ffc00000, data 0x24e4948/0x2663000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 131047424 unmapped: 22159360 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 270 ms_handle_reset con 0x5607b60b7800 session 0x5607b33163c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 270 heartbeat osd_stat(store_statfs(0x4f980b000/0x0/0x4ffc00000, data 0x24e4948/0x2663000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 270 handle_osd_map epochs [270,271], i have 270, src has [1,271]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 271 ms_handle_reset con 0x5607b31a8400 session 0x5607b4ace5a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 130162688 unmapped: 23044096 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 271 heartbeat osd_stat(store_statfs(0x4f9807000/0x0/0x4ffc00000, data 0x24e64c5/0x2666000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 271 handle_osd_map epochs [272,272], i have 271, src has [1,272]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 272 ms_handle_reset con 0x5607b5ea6000 session 0x5607b3f1da40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 130162688 unmapped: 23044096 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 130162688 unmapped: 23044096 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2057738 data_alloc: 234881024 data_used: 14639104
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 272 handle_osd_map epochs [273,273], i have 272, src has [1,273]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 130170880 unmapped: 23035904 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 273 heartbeat osd_stat(store_statfs(0x4f9801000/0x0/0x4ffc00000, data 0x24e9c13/0x266c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 273 ms_handle_reset con 0x5607b1ff2000 session 0x5607b4470000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 130187264 unmapped: 23019520 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 273 ms_handle_reset con 0x5607b4aeec00 session 0x5607b32830e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.154915810s of 10.041896820s, submitted: 138
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 130187264 unmapped: 23019520 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 273 handle_osd_map epochs [273,274], i have 273, src has [1,274]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 273 handle_osd_map epochs [274,274], i have 274, src has [1,274]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 130220032 unmapped: 22986752 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 274 ms_handle_reset con 0x5607b60b6c00 session 0x5607b63f0d20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 274 ms_handle_reset con 0x5607b60b7800 session 0x5607b4b2d2c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 274 ms_handle_reset con 0x5607b539fc00 session 0x5607b423d0e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 274 heartbeat osd_stat(store_statfs(0x4f97f9000/0x0/0x4ffc00000, data 0x24ee91e/0x2674000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 274 handle_osd_map epochs [274,275], i have 274, src has [1,275]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 130228224 unmapped: 22978560 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 275 ms_handle_reset con 0x5607b1ff2000 session 0x5607b539bc20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2071489 data_alloc: 234881024 data_used: 14647296
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 130236416 unmapped: 22970368 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 275 ms_handle_reset con 0x5607b422a400 session 0x5607b3f5a960
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 130236416 unmapped: 22970368 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 275 ms_handle_reset con 0x5607b4163800 session 0x5607b539a780
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 275 handle_osd_map epochs [277,277], i have 275, src has [1,277]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 275 handle_osd_map epochs [276,277], i have 275, src has [1,277]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 130236416 unmapped: 22970368 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 277 ms_handle_reset con 0x5607b4aeec00 session 0x5607b3f5a780
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 277 heartbeat osd_stat(store_statfs(0x4f97f5000/0x0/0x4ffc00000, data 0x24f04b7/0x2677000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 277 ms_handle_reset con 0x5607b1ff2000 session 0x5607b539bc20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 277 ms_handle_reset con 0x5607b4163800 session 0x5607b423d0e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 277 ms_handle_reset con 0x5607b422a400 session 0x5607b4b2d2c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 130252800 unmapped: 22953984 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 277 handle_osd_map epochs [277,278], i have 277, src has [1,278]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 278 ms_handle_reset con 0x5607b4aeec00 session 0x5607b63f0d20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 278 ms_handle_reset con 0x5607b539f800 session 0x5607b63f1680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 278 ms_handle_reset con 0x5607b1ff2000 session 0x5607b4151c20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 130605056 unmapped: 22601728 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2109132 data_alloc: 234881024 data_used: 14655488
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 278 ms_handle_reset con 0x5607b4163800 session 0x5607b3eed860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 278 ms_handle_reset con 0x5607b422a400 session 0x5607b5b15a40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 130605056 unmapped: 22601728 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 278 ms_handle_reset con 0x5607b4aee000 session 0x5607b44af0e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 278 ms_handle_reset con 0x5607b539e800 session 0x5607b3eec3c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 278 heartbeat osd_stat(store_statfs(0x4f94ab000/0x0/0x4ffc00000, data 0x2837684/0x29c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 130605056 unmapped: 22601728 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 130605056 unmapped: 22601728 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.326626778s of 10.692263603s, submitted: 56
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 278 ms_handle_reset con 0x5607b1ff2000 session 0x5607b3284d20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 278 ms_handle_reset con 0x5607b4163800 session 0x5607b4acf680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 278 ms_handle_reset con 0x5607b422a400 session 0x5607b63f01e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 130605056 unmapped: 22601728 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 278 ms_handle_reset con 0x5607b4aeec00 session 0x5607b3282f00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 278 ms_handle_reset con 0x5607b4aee000 session 0x5607b63f10e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 130605056 unmapped: 22601728 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 278 ms_handle_reset con 0x5607b1ff2000 session 0x5607b32e45a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2112609 data_alloc: 234881024 data_used: 14655488
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 278 handle_osd_map epochs [279,279], i have 278, src has [1,279]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 279 ms_handle_reset con 0x5607b4aeec00 session 0x5607b4b1b680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 130637824 unmapped: 22568960 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 130637824 unmapped: 22568960 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 279 heartbeat osd_stat(store_statfs(0x4f94a7000/0x0/0x4ffc00000, data 0x283922c/0x29c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 279 ms_handle_reset con 0x5607b60b6c00 session 0x5607b248a780
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 279 handle_osd_map epochs [280,280], i have 279, src has [1,280]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 280 ms_handle_reset con 0x5607b539fc00 session 0x5607b422e780
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 132161536 unmapped: 21045248 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 280 ms_handle_reset con 0x5607b60b7800 session 0x5607b4a79e00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 280 ms_handle_reset con 0x5607b60b7800 session 0x5607b423d4a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 132161536 unmapped: 21045248 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 280 ms_handle_reset con 0x5607b4aeec00 session 0x5607b44af860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 132194304 unmapped: 21012480 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 280 ms_handle_reset con 0x5607b1ff2000 session 0x5607b3316000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 280 handle_osd_map epochs [280,281], i have 280, src has [1,281]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2143767 data_alloc: 234881024 data_used: 16871424
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 281 ms_handle_reset con 0x5607b539fc00 session 0x5607b4ef3c20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 281 heartbeat osd_stat(store_statfs(0x4f949f000/0x0/0x4ffc00000, data 0x283c9d0/0x29ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 132202496 unmapped: 21004288 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 281 handle_osd_map epochs [281,282], i have 281, src has [1,282]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 282 ms_handle_reset con 0x5607b60b6c00 session 0x5607b44ab2c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 282 ms_handle_reset con 0x5607b1ff2000 session 0x5607b4471680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 282 heartbeat osd_stat(store_statfs(0x4f9499000/0x0/0x4ffc00000, data 0x283eabd/0x29d3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 282 handle_osd_map epochs [283,283], i have 282, src has [1,283]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 132317184 unmapped: 20889600 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 283 heartbeat osd_stat(store_statfs(0x4f9499000/0x0/0x4ffc00000, data 0x283eabd/0x29d3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 283 ms_handle_reset con 0x5607b4aeec00 session 0x5607b4470780
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 132366336 unmapped: 20840448 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 283 handle_osd_map epochs [284,284], i have 283, src has [1,284]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.153810501s of 10.051526070s, submitted: 48
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 284 ms_handle_reset con 0x5607b60b7c00 session 0x5607b32832c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 132407296 unmapped: 20799488 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 284 handle_osd_map epochs [284,285], i have 284, src has [1,285]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 284 handle_osd_map epochs [285,285], i have 285, src has [1,285]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 285 ms_handle_reset con 0x5607b60b7800 session 0x5607b4ac1680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 285 ms_handle_reset con 0x5607b539fc00 session 0x5607b3283e00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 132440064 unmapped: 20766720 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2164680 data_alloc: 234881024 data_used: 16949248
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 285 ms_handle_reset con 0x5607b1ff2000 session 0x5607b248b860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 132481024 unmapped: 20725760 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 285 handle_osd_map epochs [285,286], i have 285, src has [1,286]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 286 ms_handle_reset con 0x5607b4aeec00 session 0x5607b248b4a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 286 heartbeat osd_stat(store_statfs(0x4f948a000/0x0/0x4ffc00000, data 0x2b5085b/0x29e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 132530176 unmapped: 20676608 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 286 heartbeat osd_stat(store_statfs(0x4f948a000/0x0/0x4ffc00000, data 0x2b5085b/0x29e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 286 handle_osd_map epochs [287,287], i have 286, src has [1,287]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 286 handle_osd_map epochs [287,287], i have 287, src has [1,287]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 287 ms_handle_reset con 0x5607b60b7800 session 0x5607b4acf2c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 287 ms_handle_reset con 0x5607b60b7c00 session 0x5607b31c9e00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 132562944 unmapped: 20643840 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 135086080 unmapped: 18120704 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 287 handle_osd_map epochs [287,288], i have 287, src has [1,288]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 288 ms_handle_reset con 0x5607b60b8400 session 0x5607b32823c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 288 ms_handle_reset con 0x5607b1ff2000 session 0x5607b5b141e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 134504448 unmapped: 18702336 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2214084 data_alloc: 234881024 data_used: 17108992
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 134709248 unmapped: 18497536 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 288 handle_osd_map epochs [289,289], i have 288, src has [1,289]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 134799360 unmapped: 18407424 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 289 ms_handle_reset con 0x5607b4aeec00 session 0x5607b539b680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 289 heartbeat osd_stat(store_statfs(0x4f927e000/0x0/0x4ffc00000, data 0x2f84b18/0x2bf0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 135266304 unmapped: 17940480 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.245911598s of 10.135710716s, submitted: 107
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 289 handle_osd_map epochs [290,290], i have 289, src has [1,290]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 135929856 unmapped: 17276928 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 290 ms_handle_reset con 0x5607b60b7800 session 0x5607b4b1ad20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 135929856 unmapped: 17276928 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 290 ms_handle_reset con 0x5607b60b7c00 session 0x5607b4ac0780
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2262323 data_alloc: 234881024 data_used: 19886080
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 135979008 unmapped: 17227776 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 290 handle_osd_map epochs [291,291], i have 290, src has [1,291]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 291 heartbeat osd_stat(store_statfs(0x4f91c8000/0x0/0x4ffc00000, data 0x303c133/0x2ca6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 291 handle_osd_map epochs [291,292], i have 291, src has [1,292]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 136134656 unmapped: 17072128 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 136134656 unmapped: 17072128 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 292 ms_handle_reset con 0x5607b54ff400 session 0x5607b2d72b40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 136298496 unmapped: 16908288 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 292 handle_osd_map epochs [293,293], i have 292, src has [1,293]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 136298496 unmapped: 16908288 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2271234 data_alloc: 234881024 data_used: 19902464
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 294 heartbeat osd_stat(store_statfs(0x4f91bf000/0x0/0x4ffc00000, data 0x3041361/0x2cae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 136306688 unmapped: 16900096 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 294 handle_osd_map epochs [295,296], i have 294, src has [1,296]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 296 ms_handle_reset con 0x5607b1ff2000 session 0x5607b4471a40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 136396800 unmapped: 16809984 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 136396800 unmapped: 16809984 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.272683620s of 10.116564751s, submitted: 122
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 296 handle_osd_map epochs [296,297], i have 296, src has [1,297]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 297 ms_handle_reset con 0x5607b4aeec00 session 0x5607b3f73680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 136462336 unmapped: 16744448 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 297 ms_handle_reset con 0x5607b60b7800 session 0x5607b3fa83c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 297 heartbeat osd_stat(store_statfs(0x4f91b4000/0x0/0x4ffc00000, data 0x30482a3/0x2cb8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 136503296 unmapped: 16703488 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2282384 data_alloc: 234881024 data_used: 19898368
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 136503296 unmapped: 16703488 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 297 handle_osd_map epochs [298,298], i have 297, src has [1,298]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 136511488 unmapped: 16695296 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 298 handle_osd_map epochs [299,299], i have 298, src has [1,299]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 136519680 unmapped: 16687104 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 299 heartbeat osd_stat(store_statfs(0x4f91af000/0x0/0x4ffc00000, data 0x304b913/0x2cbe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 135716864 unmapped: 17489920 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 135725056 unmapped: 17481728 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2286764 data_alloc: 234881024 data_used: 19902464
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 135725056 unmapped: 17481728 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-mgr[74617]: log_channel(audit) log [DBG] : from='client.19251 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 135725056 unmapped: 17481728 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 299 handle_osd_map epochs [300,300], i have 299, src has [1,300]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 300 heartbeat osd_stat(store_statfs(0x4f91ac000/0x0/0x4ffc00000, data 0x304d396/0x2cc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 135749632 unmapped: 17457152 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.348552704s of 10.201882362s, submitted: 60
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 138846208 unmapped: 14360576 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 138846208 unmapped: 14360576 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2319354 data_alloc: 234881024 data_used: 21635072
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 138846208 unmapped: 14360576 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 300 ms_handle_reset con 0x5607b60b7000 session 0x5607b3f425a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 300 ms_handle_reset con 0x5607b60b7400 session 0x5607b48e1e00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 300 ms_handle_reset con 0x5607b1ff2000 session 0x5607b3f1cd20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 300 handle_osd_map epochs [301,301], i have 300, src has [1,301]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 136683520 unmapped: 16523264 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 301 ms_handle_reset con 0x5607b60b7c00 session 0x5607b20bd4a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 301 heartbeat osd_stat(store_statfs(0x4f8f7e000/0x0/0x4ffc00000, data 0x3279f26/0x2eef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 136683520 unmapped: 16523264 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 301 heartbeat osd_stat(store_statfs(0x4f8f7e000/0x0/0x4ffc00000, data 0x3279f26/0x2eef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 136683520 unmapped: 16523264 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 301 ms_handle_reset con 0x5607b4aeec00 session 0x5607b31c9c20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 301 ms_handle_reset con 0x5607b60b7000 session 0x5607b4b1ab40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 301 ms_handle_reset con 0x5607b60b7800 session 0x5607b3283680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 301 ms_handle_reset con 0x5607b1ff2000 session 0x5607b423d4a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 301 ms_handle_reset con 0x5607b60b7800 session 0x5607b422ef00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 301 ms_handle_reset con 0x5607b4aeec00 session 0x5607b4b2d4a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 136699904 unmapped: 16506880 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2294995 data_alloc: 234881024 data_used: 21639168
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 301 ms_handle_reset con 0x5607b4163800 session 0x5607b3284780
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 301 ms_handle_reset con 0x5607b422a400 session 0x5607b3fa8960
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 135888896 unmapped: 17317888 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 301 ms_handle_reset con 0x5607b1ff2000 session 0x5607b24bb2c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 301 ms_handle_reset con 0x5607b4163800 session 0x5607b4470b40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 301 ms_handle_reset con 0x5607b422a400 session 0x5607b4b1b680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 301 ms_handle_reset con 0x5607b60b7800 session 0x5607b3282f00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 301 handle_osd_map epochs [302,302], i have 301, src has [1,302]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 135741440 unmapped: 17465344 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 302 ms_handle_reset con 0x5607b4aeec00 session 0x5607b5b14960
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 302 heartbeat osd_stat(store_statfs(0x4f97a2000/0x0/0x4ffc00000, data 0x2829af7/0x26ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 302 ms_handle_reset con 0x5607b31a9000 session 0x5607b4aced20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 302 ms_handle_reset con 0x5607b453c400 session 0x5607b423c1e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 302 handle_osd_map epochs [302,303], i have 302, src has [1,303]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 135741440 unmapped: 17465344 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 303 ms_handle_reset con 0x5607b4163800 session 0x5607b248c960
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 303 heartbeat osd_stat(store_statfs(0x4f97a3000/0x0/0x4ffc00000, data 0x25206c1/0x26ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 303 ms_handle_reset con 0x5607b1ff2000 session 0x5607b423d0e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 135741440 unmapped: 17465344 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.853845596s of 11.389416695s, submitted: 134
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 303 ms_handle_reset con 0x5607b422a400 session 0x5607b3f5a960
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 135741440 unmapped: 17465344 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 303 heartbeat osd_stat(store_statfs(0x4f97a3000/0x0/0x4ffc00000, data 0x25206c1/0x26ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1895378 data_alloc: 218103808 data_used: 954368
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 303 ms_handle_reset con 0x5607b1ff2000 session 0x5607b48e1860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 303 heartbeat osd_stat(store_statfs(0x4f97a3000/0x0/0x4ffc00000, data 0x25206c1/0x26ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 124723200 unmapped: 28483584 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 124723200 unmapped: 28483584 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 124723200 unmapped: 28483584 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 303 handle_osd_map epochs [304,304], i have 303, src has [1,304]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 304 ms_handle_reset con 0x5607b4163800 session 0x5607b4b1a3c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 304 ms_handle_reset con 0x5607b453c400 session 0x5607b248cd20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 124739584 unmapped: 28467200 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 304 heartbeat osd_stat(store_statfs(0x4fb0c8000/0x0/0x4ffc00000, data 0xbfb14c/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 304 handle_osd_map epochs [304,305], i have 304, src has [1,305]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 304 handle_osd_map epochs [305,305], i have 305, src has [1,305]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 305 ms_handle_reset con 0x5607b60b7800 session 0x5607b48e0f00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 305 ms_handle_reset con 0x5607b60b7000 session 0x5607b248b0e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 305 ms_handle_reset con 0x5607b31a9000 session 0x5607b3315680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 305 ms_handle_reset con 0x5607b60b7000 session 0x5607b539a1e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 124772352 unmapped: 28434432 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 305 ms_handle_reset con 0x5607b1ff2000 session 0x5607b539a000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1906145 data_alloc: 218103808 data_used: 962560
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 305 ms_handle_reset con 0x5607b4163800 session 0x5607b4a794a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 124805120 unmapped: 28401664 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 305 heartbeat osd_stat(store_statfs(0x4f9f24000/0x0/0x4ffc00000, data 0xbfcd6f/0xda8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 305 ms_handle_reset con 0x5607b453c400 session 0x5607b24ca5a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 124805120 unmapped: 28401664 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 124805120 unmapped: 28401664 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 124805120 unmapped: 28401664 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 305 heartbeat osd_stat(store_statfs(0x4f9f27000/0x0/0x4ffc00000, data 0xbfcd0d/0xda7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 124805120 unmapped: 28401664 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1903065 data_alloc: 218103808 data_used: 966656
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.212158203s of 10.735061646s, submitted: 119
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 305 ms_handle_reset con 0x5607b1ff2000 session 0x5607b3f1da40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 305 ms_handle_reset con 0x5607b31a9000 session 0x5607b248a000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 124821504 unmapped: 28385280 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 305 handle_osd_map epochs [305,306], i have 305, src has [1,306]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 124837888 unmapped: 28368896 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 306 ms_handle_reset con 0x5607b4163800 session 0x5607b20bda40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 306 heartbeat osd_stat(store_statfs(0x4f9f23000/0x0/0x4ffc00000, data 0xbfe770/0xdaa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 124837888 unmapped: 28368896 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 124837888 unmapped: 28368896 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 306 ms_handle_reset con 0x5607b60b7000 session 0x5607b319ef00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 306 ms_handle_reset con 0x5607b60b7800 session 0x5607b319f860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 306 ms_handle_reset con 0x5607b60b7c00 session 0x5607b3316780
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 124846080 unmapped: 28360704 heap: 153206784 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 306 ms_handle_reset con 0x5607b1ff2000 session 0x5607b319e3c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1962036 data_alloc: 218103808 data_used: 974848
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 306 ms_handle_reset con 0x5607b31a9000 session 0x5607b319fa40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 124960768 unmapped: 31924224 heap: 156884992 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 124968960 unmapped: 31916032 heap: 156884992 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 306 handle_osd_map epochs [306,307], i have 306, src has [1,307]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 306 handle_osd_map epochs [307,307], i have 307, src has [1,307]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 307 ms_handle_reset con 0x5607b4163800 session 0x5607b319e1e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 307 heartbeat osd_stat(store_statfs(0x4f98cc000/0x0/0x4ffc00000, data 0x1254790/0x1402000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 124993536 unmapped: 31891456 heap: 156884992 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 307 ms_handle_reset con 0x5607b60b7000 session 0x5607b4ac1860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 124993536 unmapped: 31891456 heap: 156884992 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 124993536 unmapped: 31891456 heap: 156884992 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1966553 data_alloc: 218103808 data_used: 991232
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 124993536 unmapped: 31891456 heap: 156884992 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 307 ms_handle_reset con 0x5607b1ff2000 session 0x5607b24bb4a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 124993536 unmapped: 31891456 heap: 156884992 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 307 heartbeat osd_stat(store_statfs(0x4f98c8000/0x0/0x4ffc00000, data 0x125630d/0x1405000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 307 handle_osd_map epochs [308,308], i have 307, src has [1,308]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 307 handle_osd_map epochs [308,308], i have 308, src has [1,308]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.142145157s of 11.463109970s, submitted: 52
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 308 heartbeat osd_stat(store_statfs(0x4f98c8000/0x0/0x4ffc00000, data 0x125630d/0x1405000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 308 ms_handle_reset con 0x5607b31a9000 session 0x5607b24baf00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 124993536 unmapped: 31891456 heap: 156884992 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 124993536 unmapped: 31891456 heap: 156884992 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 308 handle_osd_map epochs [309,309], i have 308, src has [1,309]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 309 ms_handle_reset con 0x5607b4163800 session 0x5607b5b150e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 125001728 unmapped: 31883264 heap: 156884992 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 309 ms_handle_reset con 0x5607b60b7c00 session 0x5607b3314780
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 309 ms_handle_reset con 0x5607b53fe000 session 0x5607b423c000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1973623 data_alloc: 218103808 data_used: 991232
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 309 ms_handle_reset con 0x5607b1ff2000 session 0x5607b4acf680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 125337600 unmapped: 31547392 heap: 156884992 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 309 ms_handle_reset con 0x5607b60b7c00 session 0x5607b248cb40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 309 ms_handle_reset con 0x5607b60b4000 session 0x5607b3f42780
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 31539200 heap: 156884992 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 309 heartbeat osd_stat(store_statfs(0x4f989e000/0x0/0x4ffc00000, data 0x127da07/0x142f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 309 ms_handle_reset con 0x5607b60b4400 session 0x5607b422e3c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 125353984 unmapped: 31531008 heap: 156884992 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 309 ms_handle_reset con 0x5607b60b4800 session 0x5607b20a7e00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 126959616 unmapped: 29925376 heap: 156884992 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 309 ms_handle_reset con 0x5607b1ff2000 session 0x5607b3f1d0e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 309 ms_handle_reset con 0x5607b60b4400 session 0x5607b825a780
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 126967808 unmapped: 29917184 heap: 156884992 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 309 handle_osd_map epochs [309,310], i have 309, src has [1,310]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 309 handle_osd_map epochs [310,310], i have 310, src has [1,310]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 310 ms_handle_reset con 0x5607b60b7c00 session 0x5607b825af00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2029462 data_alloc: 218103808 data_used: 7651328
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 310 ms_handle_reset con 0x5607b60b4000 session 0x5607b3283a40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 127016960 unmapped: 29868032 heap: 156884992 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 310 heartbeat osd_stat(store_statfs(0x4f989c000/0x0/0x4ffc00000, data 0x127f5c8/0x1431000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 310 ms_handle_reset con 0x5607b60b5000 session 0x5607b4ac10e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 127016960 unmapped: 29868032 heap: 156884992 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.705282211s of 10.047419548s, submitted: 65
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 310 ms_handle_reset con 0x5607b1ff2000 session 0x5607b63f05a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 310 handle_osd_map epochs [311,311], i have 310, src has [1,311]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 311 ms_handle_reset con 0x5607b60b4000 session 0x5607b248a1e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 127025152 unmapped: 29859840 heap: 156884992 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 311 ms_handle_reset con 0x5607b60b4400 session 0x5607b63f10e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 29851648 heap: 156884992 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 311 ms_handle_reset con 0x5607b60b7c00 session 0x5607b4ac0960
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 311 ms_handle_reset con 0x5607b60b4c00 session 0x5607b44aa960
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 311 ms_handle_reset con 0x5607b1ff2000 session 0x5607b539bc20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 311 ms_handle_reset con 0x5607b60b4000 session 0x5607b2d72f00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 129130496 unmapped: 33054720 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2134959 data_alloc: 218103808 data_used: 7647232
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 311 ms_handle_reset con 0x5607b60b4400 session 0x5607b20a63c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 129130496 unmapped: 33054720 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 311 ms_handle_reset con 0x5607b60b7c00 session 0x5607b4b1a960
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 311 handle_osd_map epochs [312,312], i have 311, src has [1,312]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 312 ms_handle_reset con 0x5607b60b5c00 session 0x5607b825bc20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 312 handle_osd_map epochs [313,313], i have 312, src has [1,313]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 313 ms_handle_reset con 0x5607b60b4000 session 0x5607b63f14a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 128704512 unmapped: 33480704 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 313 heartbeat osd_stat(store_statfs(0x4f85de000/0x0/0x4ffc00000, data 0x2536859/0x26ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 313 heartbeat osd_stat(store_statfs(0x4f85de000/0x0/0x4ffc00000, data 0x2536859/0x26ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 313 ms_handle_reset con 0x5607b1ff2000 session 0x5607b5b150e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 313 handle_osd_map epochs [314,314], i have 313, src has [1,314]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 128704512 unmapped: 33480704 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 314 heartbeat osd_stat(store_statfs(0x4f8210000/0x0/0x4ffc00000, data 0x29023d6/0x2abb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,1])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 314 ms_handle_reset con 0x5607b60b5800 session 0x5607b4ac03c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 314 ms_handle_reset con 0x5607b60b7c00 session 0x5607b3f73860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 314 ms_handle_reset con 0x5607b60b4400 session 0x5607b319fa40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133136384 unmapped: 29048832 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133226496 unmapped: 28958720 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2277742 data_alloc: 218103808 data_used: 7675904
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 130629632 unmapped: 31555584 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 314 handle_osd_map epochs [315,315], i have 314, src has [1,315]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 315 ms_handle_reset con 0x5607b1ff2000 session 0x5607b2449c20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 315 heartbeat osd_stat(store_statfs(0x4f7a23000/0x0/0x4ffc00000, data 0x30f23d6/0x32ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 315 ms_handle_reset con 0x5607b60b5800 session 0x5607b248cd20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 315 ms_handle_reset con 0x5607b60b4000 session 0x5607b4b1be00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 130637824 unmapped: 31547392 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 6.962969780s of 10.097908020s, submitted: 186
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 315 ms_handle_reset con 0x5607b3302400 session 0x5607b3f43c20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 315 ms_handle_reset con 0x5607b5dff000 session 0x5607b825b680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 131883008 unmapped: 30302208 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 315 handle_osd_map epochs [316,316], i have 315, src has [1,316]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 316 ms_handle_reset con 0x5607b24b8400 session 0x5607b44ab2c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 316 ms_handle_reset con 0x5607b1ff2000 session 0x5607b48e1680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 316 ms_handle_reset con 0x5607b60b7c00 session 0x5607b4b1a3c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 316 ms_handle_reset con 0x5607b3302400 session 0x5607b4a781e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 316 ms_handle_reset con 0x5607b60b4000 session 0x5607b3315c20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 131899392 unmapped: 30285824 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 316 ms_handle_reset con 0x5607b1ff2000 session 0x5607b3316960
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 316 ms_handle_reset con 0x5607b24b8400 session 0x5607b4b1a3c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 131915776 unmapped: 30269440 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2329815 data_alloc: 218103808 data_used: 8437760
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 131915776 unmapped: 30269440 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 316 ms_handle_reset con 0x5607b3302400 session 0x5607b2449c20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 316 ms_handle_reset con 0x5607b60b5800 session 0x5607b32e41e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 316 handle_osd_map epochs [317,317], i have 316, src has [1,317]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 317 ms_handle_reset con 0x5607b5dff400 session 0x5607b44afa40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 317 ms_handle_reset con 0x5607b60b7c00 session 0x5607b4ac03c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 131956736 unmapped: 30228480 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 317 heartbeat osd_stat(store_statfs(0x4f72bc000/0x0/0x4ffc00000, data 0x34446a1/0x3601000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [0,0,0,0,0,0,1])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 317 ms_handle_reset con 0x5607b1ff2000 session 0x5607b3283a40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 317 ms_handle_reset con 0x5607b3302400 session 0x5607b4ac01e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 132030464 unmapped: 30154752 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 317 handle_osd_map epochs [318,318], i have 317, src has [1,318]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 318 ms_handle_reset con 0x5607b60b5800 session 0x5607b4b2c1e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 318 ms_handle_reset con 0x5607b24b8400 session 0x5607b3f1d0e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 131948544 unmapped: 30236672 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 318 heartbeat osd_stat(store_statfs(0x4f72b6000/0x0/0x4ffc00000, data 0x3449272/0x3607000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 131948544 unmapped: 30236672 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2332777 data_alloc: 218103808 data_used: 8454144
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 131948544 unmapped: 30236672 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 318 handle_osd_map epochs [318,319], i have 318, src has [1,319]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 131964928 unmapped: 30220288 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 319 ms_handle_reset con 0x5607b1ff2000 session 0x5607b2449e00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 131964928 unmapped: 30220288 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 319 handle_osd_map epochs [319,320], i have 319, src has [1,320]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.006084442s of 11.782385826s, submitted: 134
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 320 ms_handle_reset con 0x5607b3302400 session 0x5607b423da40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 131973120 unmapped: 30212096 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 320 ms_handle_reset con 0x5607b60b5800 session 0x5607b3fa9680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 320 ms_handle_reset con 0x5607b60b7c00 session 0x5607b4a79e00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 320 heartbeat osd_stat(store_statfs(0x4f72af000/0x0/0x4ffc00000, data 0x344c88a/0x360d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 131973120 unmapped: 30212096 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2339045 data_alloc: 218103808 data_used: 8454144
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 320 ms_handle_reset con 0x5607b5dff800 session 0x5607b422ed20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 132005888 unmapped: 30179328 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 320 ms_handle_reset con 0x5607b3302400 session 0x5607b3283860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 132022272 unmapped: 30162944 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 320 handle_osd_map epochs [321,321], i have 320, src has [1,321]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 321 ms_handle_reset con 0x5607b60b5800 session 0x5607b3fa94a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133095424 unmapped: 29089792 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 321 handle_osd_map epochs [322,322], i have 321, src has [1,322]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 322 ms_handle_reset con 0x5607b60b7c00 session 0x5607b4acfa40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133111808 unmapped: 29073408 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 322 heartbeat osd_stat(store_statfs(0x4f72a9000/0x0/0x4ffc00000, data 0x344febe/0x3613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133111808 unmapped: 29073408 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2362490 data_alloc: 234881024 data_used: 10633216
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133111808 unmapped: 29073408 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 322 heartbeat osd_stat(store_statfs(0x4f72a9000/0x0/0x4ffc00000, data 0x344febe/0x3613000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133111808 unmapped: 29073408 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 322 ms_handle_reset con 0x5607b31a9000 session 0x5607b825a1e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 322 ms_handle_reset con 0x5607b4aef800 session 0x5607b20a63c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 322 ms_handle_reset con 0x5607b4163800 session 0x5607b4a78960
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 322 ms_handle_reset con 0x5607b31a9000 session 0x5607b4ac05a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133160960 unmapped: 29024256 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.124749184s of 10.060990334s, submitted: 54
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133177344 unmapped: 29007872 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 322 handle_osd_map epochs [323,323], i have 322, src has [1,323]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 323 ms_handle_reset con 0x5607b3302400 session 0x5607b3fa9a40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 323 heartbeat osd_stat(store_statfs(0x4f72c8000/0x0/0x4ffc00000, data 0x342fab9/0x35f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133210112 unmapped: 28975104 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 323 ms_handle_reset con 0x5607b4aef800 session 0x5607b4a790e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2361509 data_alloc: 234881024 data_used: 10534912
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 323 heartbeat osd_stat(store_statfs(0x4f72c8000/0x0/0x4ffc00000, data 0x342fab9/0x35f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133226496 unmapped: 28958720 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 323 handle_osd_map epochs [324,324], i have 323, src has [1,324]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 324 ms_handle_reset con 0x5607b60b5800 session 0x5607b4f08780
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 324 ms_handle_reset con 0x5607b31a9000 session 0x5607b3f734a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133357568 unmapped: 28827648 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 324 heartbeat osd_stat(store_statfs(0x4f72c5000/0x0/0x4ffc00000, data 0x343151c/0x35f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133300224 unmapped: 28884992 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 324 ms_handle_reset con 0x5607b3302400 session 0x5607b248c1e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 324 ms_handle_reset con 0x5607b4163800 session 0x5607b33145a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133308416 unmapped: 28876800 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 134365184 unmapped: 27820032 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2401132 data_alloc: 234881024 data_used: 10575872
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 134365184 unmapped: 27820032 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 324 heartbeat osd_stat(store_statfs(0x4f7157000/0x0/0x4ffc00000, data 0x38c351c/0x3767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 134365184 unmapped: 27820032 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 324 ms_handle_reset con 0x5607b4aef800 session 0x5607b539b4a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 134365184 unmapped: 27820032 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 324 ms_handle_reset con 0x5607b60b7c00 session 0x5607b539be00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 324 heartbeat osd_stat(store_statfs(0x4f7157000/0x0/0x4ffc00000, data 0x38c351c/0x3767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 324 ms_handle_reset con 0x5607b60b7c00 session 0x5607b5b143c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.854081154s of 10.209539413s, submitted: 68
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 324 ms_handle_reset con 0x5607b31a9000 session 0x5607b5b15a40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 134520832 unmapped: 27664384 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 134529024 unmapped: 27656192 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2405132 data_alloc: 234881024 data_used: 10575872
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 324 handle_osd_map epochs [324,325], i have 324, src has [1,325]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 325 ms_handle_reset con 0x5607b3302400 session 0x5607b5b14f00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 134537216 unmapped: 27648000 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 325 ms_handle_reset con 0x5607b5dffc00 session 0x5607b2d72f00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 325 ms_handle_reset con 0x5607b1ff2000 session 0x5607b422f0e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 325 ms_handle_reset con 0x5607b31a9000 session 0x5607b24bba40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 134545408 unmapped: 27639808 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 325 heartbeat osd_stat(store_statfs(0x4f712e000/0x0/0x4ffc00000, data 0x38e9499/0x378f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 325 ms_handle_reset con 0x5607b5dffc00 session 0x5607b4b1be00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 325 ms_handle_reset con 0x5607b3302400 session 0x5607b4ac05a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 325 ms_handle_reset con 0x5607b60b7c00 session 0x5607b248c1e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 134545408 unmapped: 27639808 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 325 ms_handle_reset con 0x5607b3fa3000 session 0x5607b32e4f00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 134561792 unmapped: 27623424 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 325 ms_handle_reset con 0x5607b31a9000 session 0x5607b63f0d20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 134561792 unmapped: 27623424 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2409479 data_alloc: 234881024 data_used: 10608640
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 325 handle_osd_map epochs [325,326], i have 325, src has [1,326]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 326 ms_handle_reset con 0x5607b3302400 session 0x5607b3eed860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 134520832 unmapped: 27664384 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 326 handle_osd_map epochs [327,327], i have 326, src has [1,327]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 327 ms_handle_reset con 0x5607b453c400 session 0x5607b63f14a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 327 ms_handle_reset con 0x5607b5fd6000 session 0x5607b4a79e00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 327 ms_handle_reset con 0x5607b60b7c00 session 0x5607b33163c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133849088 unmapped: 28336128 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 327 ms_handle_reset con 0x5607b453c000 session 0x5607b3fa9680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 327 ms_handle_reset con 0x5607b5dffc00 session 0x5607b32e5860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 327 ms_handle_reset con 0x5607b31a9000 session 0x5607b44ae960
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 327 heartbeat osd_stat(store_statfs(0x4f8c37000/0x0/0x4ffc00000, data 0x1abaede/0x1c85000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133472256 unmapped: 28712960 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 327 ms_handle_reset con 0x5607b3302400 session 0x5607b4ac10e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 327 ms_handle_reset con 0x5607b453c400 session 0x5607b48e1c20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133472256 unmapped: 28712960 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133472256 unmapped: 28712960 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2184338 data_alloc: 218103808 data_used: 8421376
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 327 handle_osd_map epochs [327,328], i have 327, src has [1,328]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.245149612s of 11.798440933s, submitted: 101
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 328 heartbeat osd_stat(store_statfs(0x4f8c37000/0x0/0x4ffc00000, data 0x1abaede/0x1c85000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 328 ms_handle_reset con 0x5607b31a9000 session 0x5607b20a7e00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133488640 unmapped: 28696576 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 328 handle_osd_map epochs [329,329], i have 328, src has [1,329]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133521408 unmapped: 28663808 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 329 ms_handle_reset con 0x5607b3302400 session 0x5607b4ace780
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 329 ms_handle_reset con 0x5607b453c000 session 0x5607b24baf00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133537792 unmapped: 28647424 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 329 ms_handle_reset con 0x5607b5dffc00 session 0x5607b4b2c780
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133537792 unmapped: 28647424 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 329 handle_osd_map epochs [330,330], i have 329, src has [1,330]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133537792 unmapped: 28647424 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 330 ms_handle_reset con 0x5607b5fd6000 session 0x5607b3316f00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2218452 data_alloc: 234881024 data_used: 10006528
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 330 heartbeat osd_stat(store_statfs(0x4f8c2f000/0x0/0x4ffc00000, data 0x1abea27/0x1c8f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133537792 unmapped: 28647424 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133537792 unmapped: 28647424 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 330 ms_handle_reset con 0x5607b31a9000 session 0x5607b4acf0e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 330 ms_handle_reset con 0x5607b453c000 session 0x5607b3fa8000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 330 ms_handle_reset con 0x5607b3302400 session 0x5607b4470780
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 330 ms_handle_reset con 0x5607b5dffc00 session 0x5607b248d680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133545984 unmapped: 28639232 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 330 ms_handle_reset con 0x5607b5fd6000 session 0x5607b5b152c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 330 heartbeat osd_stat(store_statfs(0x4f8c2b000/0x0/0x4ffc00000, data 0x1ac05b4/0x1c93000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 330 handle_osd_map epochs [331,331], i have 330, src has [1,331]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 330 handle_osd_map epochs [331,331], i have 331, src has [1,331]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133554176 unmapped: 28631040 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 331 ms_handle_reset con 0x5607b31a9000 session 0x5607b4f083c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 331 ms_handle_reset con 0x5607b453c000 session 0x5607b4ace000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 331 ms_handle_reset con 0x5607b3302400 session 0x5607b4b1b2c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133554176 unmapped: 28631040 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2229712 data_alloc: 234881024 data_used: 10027008
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 331 ms_handle_reset con 0x5607b5dffc00 session 0x5607b32e45a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133554176 unmapped: 28631040 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 331 ms_handle_reset con 0x5607b5fd6000 session 0x5607b32e45a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.733379364s of 11.006288528s, submitted: 99
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 331 ms_handle_reset con 0x5607b3302400 session 0x5607b44aeb40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 331 heartbeat osd_stat(store_statfs(0x4f8c25000/0x0/0x4ffc00000, data 0x1ac2151/0x1c98000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133218304 unmapped: 28966912 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 331 ms_handle_reset con 0x5607b453c000 session 0x5607b44af0e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 331 handle_osd_map epochs [332,332], i have 331, src has [1,332]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 332 ms_handle_reset con 0x5607b5dffc00 session 0x5607b2d73860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 332 ms_handle_reset con 0x5607b453d400 session 0x5607b539a960
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133292032 unmapped: 28893184 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 332 handle_osd_map epochs [333,333], i have 332, src has [1,333]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 333 ms_handle_reset con 0x5607b453d000 session 0x5607b539a3c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 333 ms_handle_reset con 0x5607b31a9000 session 0x5607b4f083c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 333 ms_handle_reset con 0x5607b539e800 session 0x5607b3f73860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 333 ms_handle_reset con 0x5607b3302400 session 0x5607b248d680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133382144 unmapped: 28803072 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 333 ms_handle_reset con 0x5607b453d400 session 0x5607b539b0e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 333 handle_osd_map epochs [334,334], i have 333, src has [1,334]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 334 ms_handle_reset con 0x5607b5dffc00 session 0x5607b24bbe00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 334 ms_handle_reset con 0x5607b453c000 session 0x5607b4acf0e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 334 ms_handle_reset con 0x5607b31a9000 session 0x5607b48e1c20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133513216 unmapped: 28672000 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 334 ms_handle_reset con 0x5607b3302400 session 0x5607b4ac10e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 334 ms_handle_reset con 0x5607b453d400 session 0x5607b32e5860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2253635 data_alloc: 234881024 data_used: 10043392
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 334 ms_handle_reset con 0x5607b5dffc00 session 0x5607b20bd860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133554176 unmapped: 28631040 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 334 handle_osd_map epochs [334,335], i have 334, src has [1,335]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 335 ms_handle_reset con 0x5607b539e800 session 0x5607b3fa9680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133595136 unmapped: 28590080 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 335 heartbeat osd_stat(store_statfs(0x4f8c12000/0x0/0x4ffc00000, data 0x1ac9319/0x1caa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133611520 unmapped: 28573696 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 335 ms_handle_reset con 0x5607b31a9000 session 0x5607b32e4f00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 335 ms_handle_reset con 0x5607b3302400 session 0x5607b4b1be00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133611520 unmapped: 28573696 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 335 ms_handle_reset con 0x5607b453c000 session 0x5607b4470b40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 335 ms_handle_reset con 0x5607b453d400 session 0x5607b3fa8960
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133611520 unmapped: 28573696 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2251355 data_alloc: 234881024 data_used: 10051584
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 335 ms_handle_reset con 0x5607b31a9000 session 0x5607b422ef00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133611520 unmapped: 28573696 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 335 ms_handle_reset con 0x5607b3302400 session 0x5607b423d4a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133619712 unmapped: 28565504 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 335 handle_osd_map epochs [336,336], i have 335, src has [1,336]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.058675766s of 10.640169144s, submitted: 154
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 336 ms_handle_reset con 0x5607b453c000 session 0x5607b31c9c20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 336 heartbeat osd_stat(store_statfs(0x4f8c18000/0x0/0x4ffc00000, data 0x1ac91e3/0x1ca6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 336 ms_handle_reset con 0x5607b539e800 session 0x5607b20bd4a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 336 heartbeat osd_stat(store_statfs(0x4f8c15000/0x0/0x4ffc00000, data 0x1acac6e/0x1ca8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 336 ms_handle_reset con 0x5607b539e400 session 0x5607b3f72f00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133660672 unmapped: 28524544 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 336 handle_osd_map epochs [337,337], i have 336, src has [1,337]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 337 ms_handle_reset con 0x5607b3302400 session 0x5607b422fe00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 337 ms_handle_reset con 0x5607b31a9000 session 0x5607b3f1cf00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133693440 unmapped: 28491776 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 337 handle_osd_map epochs [337,338], i have 337, src has [1,338]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 338 ms_handle_reset con 0x5607b453c000 session 0x5607b3f73680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 338 ms_handle_reset con 0x5607b539e800 session 0x5607b4a78960
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 338 ms_handle_reset con 0x5607b539f400 session 0x5607b3f425a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133718016 unmapped: 28467200 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2262260 data_alloc: 234881024 data_used: 10063872
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 338 handle_osd_map epochs [339,339], i have 338, src has [1,339]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 339 ms_handle_reset con 0x5607b31a9000 session 0x5607b4471a40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 339 ms_handle_reset con 0x5607b453c000 session 0x5607b4ac0780
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133799936 unmapped: 28385280 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 339 handle_osd_map epochs [340,340], i have 339, src has [1,340]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 340 ms_handle_reset con 0x5607b3302400 session 0x5607b24bb2c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 340 heartbeat osd_stat(store_statfs(0x4f8c0d000/0x0/0x4ffc00000, data 0x1ad02f8/0x1cae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 340 ms_handle_reset con 0x5607b539e800 session 0x5607b4b1ad20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133873664 unmapped: 28311552 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 340 handle_osd_map epochs [340,341], i have 340, src has [1,341]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 341 ms_handle_reset con 0x5607b539e000 session 0x5607b539b680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133906432 unmapped: 28278784 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 341 handle_osd_map epochs [341,342], i have 341, src has [1,342]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 342 ms_handle_reset con 0x5607b31a9000 session 0x5607b5b141e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 342 ms_handle_reset con 0x5607b3302400 session 0x5607b32823c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133922816 unmapped: 28262400 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 342 ms_handle_reset con 0x5607b539e800 session 0x5607b4ac05a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 342 ms_handle_reset con 0x5607b453c000 session 0x5607b31c9e00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 342 heartbeat osd_stat(store_statfs(0x4f8c09000/0x0/0x4ffc00000, data 0x1ad5192/0x1cb5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133922816 unmapped: 28262400 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2275744 data_alloc: 234881024 data_used: 10702848
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 342 ms_handle_reset con 0x5607b539f000 session 0x5607b248b4a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133922816 unmapped: 28262400 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 342 handle_osd_map epochs [342,343], i have 342, src has [1,343]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 134045696 unmapped: 28139520 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 343 heartbeat osd_stat(store_statfs(0x4f8c01000/0x0/0x4ffc00000, data 0x1d0b1a2/0x1cbd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 343 ms_handle_reset con 0x5607b31a9000 session 0x5607b248b860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.857351303s of 10.155800819s, submitted: 125
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 343 ms_handle_reset con 0x5607b3302400 session 0x5607b539ad20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 343 ms_handle_reset con 0x5607b453c000 session 0x5607b4f09a40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 134062080 unmapped: 28123136 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 343 ms_handle_reset con 0x5607b539e800 session 0x5607b33150e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 343 heartbeat osd_stat(store_statfs(0x4f8bfb000/0x0/0x4ffc00000, data 0x1d0cc89/0x1cc3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 343 ms_handle_reset con 0x5607b60b5c00 session 0x5607b4a78b40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 134062080 unmapped: 28123136 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 134062080 unmapped: 28123136 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2307462 data_alloc: 234881024 data_used: 10715136
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 343 ms_handle_reset con 0x5607b31a9000 session 0x5607b44ae000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 343 ms_handle_reset con 0x5607b3302400 session 0x5607b4b1a5a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 134062080 unmapped: 28123136 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 343 ms_handle_reset con 0x5607b453c000 session 0x5607b20a7680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 343 handle_osd_map epochs [344,344], i have 343, src has [1,344]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 344 ms_handle_reset con 0x5607b60b4000 session 0x5607b3f72f00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 344 ms_handle_reset con 0x5607b539e800 session 0x5607b423cb40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 134103040 unmapped: 28082176 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 344 heartbeat osd_stat(store_statfs(0x4f8bf8000/0x0/0x4ffc00000, data 0x1d0e82e/0x1cc5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 344 handle_osd_map epochs [344,345], i have 344, src has [1,345]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 345 handle_osd_map epochs [345,345], i have 345, src has [1,345]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 345 ms_handle_reset con 0x5607b60b5c00 session 0x5607b4f09c20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 345 ms_handle_reset con 0x5607b31a9000 session 0x5607b3fa92c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 345 ms_handle_reset con 0x5607b3302400 session 0x5607b44ae5a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 27017216 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 345 ms_handle_reset con 0x5607b453c000 session 0x5607b20bc5a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 135208960 unmapped: 26976256 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 345 ms_handle_reset con 0x5607b5dff000 session 0x5607b3314b40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 345 ms_handle_reset con 0x5607b60b4000 session 0x5607b248ba40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 345 ms_handle_reset con 0x5607b31a9000 session 0x5607b24ba3c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 345 handle_osd_map epochs [345,346], i have 345, src has [1,346]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 346 ms_handle_reset con 0x5607b453c000 session 0x5607b20a6780
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 346 ms_handle_reset con 0x5607b60b7800 session 0x5607b423c3c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 135233536 unmapped: 26951680 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 346 ms_handle_reset con 0x5607b54ff400 session 0x5607b422ed20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2319594 data_alloc: 234881024 data_used: 10731520
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 346 ms_handle_reset con 0x5607b60b5c00 session 0x5607b48e1c20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 346 handle_osd_map epochs [347,347], i have 346, src has [1,347]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 347 heartbeat osd_stat(store_statfs(0x4f8bf3000/0x0/0x4ffc00000, data 0x1d11f7d/0x1cca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 135241728 unmapped: 26943488 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 347 ms_handle_reset con 0x5607b453c000 session 0x5607b4acf0e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 347 handle_osd_map epochs [348,348], i have 347, src has [1,348]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 348 ms_handle_reset con 0x5607b31a9000 session 0x5607b24ca1e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 348 heartbeat osd_stat(store_statfs(0x4f8bef000/0x0/0x4ffc00000, data 0x1d13afa/0x1ccd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 135282688 unmapped: 26902528 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 135282688 unmapped: 26902528 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.910697937s of 11.127422333s, submitted: 52
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 348 ms_handle_reset con 0x5607b60b7800 session 0x5607b539b0e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 135282688 unmapped: 26902528 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 348 handle_osd_map epochs [349,349], i have 348, src has [1,349]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 349 ms_handle_reset con 0x5607b5ea6800 session 0x5607b3f73860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 349 ms_handle_reset con 0x5607b5ea6800 session 0x5607b3314b40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 135307264 unmapped: 26877952 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2334121 data_alloc: 234881024 data_used: 10747904
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 349 handle_osd_map epochs [350,350], i have 349, src has [1,350]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 350 ms_handle_reset con 0x5607b5ea7800 session 0x5607b248cb40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 350 ms_handle_reset con 0x5607b60b4000 session 0x5607b44afa40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 350 heartbeat osd_stat(store_statfs(0x4f8be7000/0x0/0x4ffc00000, data 0x1d172e6/0x1cd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 135331840 unmapped: 26853376 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 350 handle_osd_map epochs [351,351], i have 350, src has [1,351]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 351 ms_handle_reset con 0x5607b31a9000 session 0x5607b4f085a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 351 heartbeat osd_stat(store_statfs(0x4f8be2000/0x0/0x4ffc00000, data 0x1d18ee1/0x1cda000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 135364608 unmapped: 26820608 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 351 ms_handle_reset con 0x5607b60b7800 session 0x5607b32e4f00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 351 handle_osd_map epochs [351,352], i have 351, src has [1,352]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 352 ms_handle_reset con 0x5607b60b5c00 session 0x5607b63f1680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 352 heartbeat osd_stat(store_statfs(0x4f8bdd000/0x0/0x4ffc00000, data 0x1d1aa96/0x1cdd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 352 ms_handle_reset con 0x5607b453c000 session 0x5607b44ae5a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 352 ms_handle_reset con 0x5607b31a9000 session 0x5607b4aceb40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 135389184 unmapped: 26796032 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 135389184 unmapped: 26796032 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 352 ms_handle_reset con 0x5607b5ea6800 session 0x5607b48e1680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 352 ms_handle_reset con 0x5607b5ea7800 session 0x5607b4b1bc20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 135397376 unmapped: 26787840 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2346727 data_alloc: 234881024 data_used: 10752000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 352 handle_osd_map epochs [352,353], i have 352, src has [1,353]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 353 ms_handle_reset con 0x5607b31a9000 session 0x5607b4b2c000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 135413760 unmapped: 26771456 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 353 ms_handle_reset con 0x5607b453c000 session 0x5607b4b2c1e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 353 handle_osd_map epochs [354,354], i have 353, src has [1,354]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 354 ms_handle_reset con 0x5607b60b5c00 session 0x5607b44ae000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 135585792 unmapped: 26599424 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 354 heartbeat osd_stat(store_statfs(0x4f8b85000/0x0/0x4ffc00000, data 0x1d6ff51/0x1d37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 354 handle_osd_map epochs [355,355], i have 354, src has [1,355]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 355 ms_handle_reset con 0x5607b5ea7400 session 0x5607b4acfc20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 355 ms_handle_reset con 0x5607b5ea6000 session 0x5607b32e5680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 135667712 unmapped: 26517504 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 355 handle_osd_map epochs [355,356], i have 355, src has [1,356]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.747308731s of 10.052449226s, submitted: 102
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 356 ms_handle_reset con 0x5607b31a9000 session 0x5607b422fa40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 356 ms_handle_reset con 0x5607b60b4000 session 0x5607b33150e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 356 ms_handle_reset con 0x5607b5ea6800 session 0x5607b44af2c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 356 heartbeat osd_stat(store_statfs(0x4f8b81000/0x0/0x4ffc00000, data 0x1d71a6c/0x1d39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 135716864 unmapped: 26468352 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 356 heartbeat osd_stat(store_statfs(0x4f8b81000/0x0/0x4ffc00000, data 0x1d7378f/0x1d3c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 356 ms_handle_reset con 0x5607b453c000 session 0x5607b24cbe00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 135716864 unmapped: 26468352 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2367007 data_alloc: 234881024 data_used: 10891264
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 356 ms_handle_reset con 0x5607b60b5c00 session 0x5607b31c9e00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 356 handle_osd_map epochs [357,357], i have 356, src has [1,357]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 357 ms_handle_reset con 0x5607b5ea7400 session 0x5607b3315680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 357 ms_handle_reset con 0x5607b453c000 session 0x5607b4ef2780
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 135749632 unmapped: 26435584 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 357 ms_handle_reset con 0x5607b31a9000 session 0x5607b32823c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 357 handle_osd_map epochs [358,358], i have 357, src has [1,358]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 135757824 unmapped: 26427392 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 358 ms_handle_reset con 0x5607b5ea6800 session 0x5607b4ef21e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 135757824 unmapped: 26427392 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 358 ms_handle_reset con 0x5607b60b4000 session 0x5607b4ace000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 358 ms_handle_reset con 0x5607b31a9000 session 0x5607b3316f00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 135766016 unmapped: 26419200 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 358 handle_osd_map epochs [358,359], i have 358, src has [1,359]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 359 heartbeat osd_stat(store_statfs(0x4f8b7e000/0x0/0x4ffc00000, data 0x1d76e2d/0x1d3f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 359 ms_handle_reset con 0x5607b453c000 session 0x5607b4acef00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 135798784 unmapped: 26386432 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2377284 data_alloc: 234881024 data_used: 10895360
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 359 handle_osd_map epochs [359,360], i have 359, src has [1,360]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 359 handle_osd_map epochs [360,360], i have 360, src has [1,360]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 360 ms_handle_reset con 0x5607b5ea6800 session 0x5607b5b141e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 135823360 unmapped: 26361856 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 135839744 unmapped: 26345472 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 135872512 unmapped: 26312704 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 135872512 unmapped: 26312704 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 360 handle_osd_map epochs [361,362], i have 360, src has [1,362]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.578303337s of 10.857448578s, submitted: 72
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 362 ms_handle_reset con 0x5607b5ea7400 session 0x5607b3283680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 362 heartbeat osd_stat(store_statfs(0x4f8b6e000/0x0/0x4ffc00000, data 0x1d7dc45/0x1d4f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 135913472 unmapped: 26271744 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2390955 data_alloc: 234881024 data_used: 10895360
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 362 ms_handle_reset con 0x5607b60b4000 session 0x5607b3283e00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 135913472 unmapped: 26271744 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 362 ms_handle_reset con 0x5607b5ea6800 session 0x5607b3283680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 362 ms_handle_reset con 0x5607b453c000 session 0x5607b63f1860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 362 ms_handle_reset con 0x5607b5ea7400 session 0x5607b539bc20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 362 handle_osd_map epochs [362,363], i have 362, src has [1,363]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 362 handle_osd_map epochs [363,363], i have 363, src has [1,363]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 363 ms_handle_reset con 0x5607b4af0c00 session 0x5607b422fa40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 363 ms_handle_reset con 0x5607b4af0400 session 0x5607b319e780
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 363 ms_handle_reset con 0x5607b453c000 session 0x5607b4b2c1e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 363 ms_handle_reset con 0x5607b31a9000 session 0x5607b3283860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 136044544 unmapped: 26140672 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 363 handle_osd_map epochs [364,364], i have 363, src has [1,364]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 364 ms_handle_reset con 0x5607b4af0400 session 0x5607b4b2cb40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 136085504 unmapped: 26099712 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 364 ms_handle_reset con 0x5607b5ea6800 session 0x5607b248bc20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 364 handle_osd_map epochs [365,365], i have 364, src has [1,365]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 365 ms_handle_reset con 0x5607b4af0c00 session 0x5607b1c9e1e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 136192000 unmapped: 25993216 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 365 ms_handle_reset con 0x5607b31a9000 session 0x5607b2d73c20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 136192000 unmapped: 25993216 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2403669 data_alloc: 234881024 data_used: 10924032
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 365 ms_handle_reset con 0x5607b4af0c00 session 0x5607b20bc960
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 365 handle_osd_map epochs [365,366], i have 365, src has [1,366]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 365 handle_osd_map epochs [366,366], i have 366, src has [1,366]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 366 heartbeat osd_stat(store_statfs(0x4f8b64000/0x0/0x4ffc00000, data 0x1d82fba/0x1d59000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 366 ms_handle_reset con 0x5607b5ea6800 session 0x5607b31c9e00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 136380416 unmapped: 25804800 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 366 ms_handle_reset con 0x5607b453c000 session 0x5607b4b2d4a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 366 handle_osd_map epochs [366,367], i have 366, src has [1,367]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 367 ms_handle_reset con 0x5607b5ea7400 session 0x5607b423cd20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 367 ms_handle_reset con 0x5607b31a9000 session 0x5607b3282960
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 367 ms_handle_reset con 0x5607b4af0400 session 0x5607b4b1be00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 136396800 unmapped: 25788416 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 367 handle_osd_map epochs [367,368], i have 367, src has [1,368]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 367 handle_osd_map epochs [368,368], i have 368, src has [1,368]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 368 ms_handle_reset con 0x5607b4af0c00 session 0x5607b248a000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 136445952 unmapped: 25739264 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 368 ms_handle_reset con 0x5607b453c000 session 0x5607b3f1da40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 368 ms_handle_reset con 0x5607b5ea6800 session 0x5607b5b14960
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 368 heartbeat osd_stat(store_statfs(0x4f8b5e000/0x0/0x4ffc00000, data 0x1d86d37/0x1d5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 136511488 unmapped: 25673728 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.144461632s of 10.462265015s, submitted: 207
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 368 handle_osd_map epochs [368,369], i have 368, src has [1,369]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 136519680 unmapped: 25665536 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 369 ms_handle_reset con 0x5607b453c000 session 0x5607b423d4a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2417286 data_alloc: 234881024 data_used: 10932224
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 369 ms_handle_reset con 0x5607b31a9000 session 0x5607b4ac05a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 136527872 unmapped: 25657344 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 369 handle_osd_map epochs [369,370], i have 369, src has [1,370]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 136544256 unmapped: 25640960 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 370 ms_handle_reset con 0x5607b4af0c00 session 0x5607b248cd20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 370 ms_handle_reset con 0x5607b3302400 session 0x5607b4b2c3c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 370 handle_osd_map epochs [371,371], i have 370, src has [1,371]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 371 ms_handle_reset con 0x5607b60b7400 session 0x5607b3f5a780
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 136552448 unmapped: 25632768 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 371 handle_osd_map epochs [371,372], i have 371, src has [1,372]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 371 handle_osd_map epochs [372,372], i have 372, src has [1,372]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 372 ms_handle_reset con 0x5607b453c000 session 0x5607b24bba40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 372 heartbeat osd_stat(store_statfs(0x4f8b4f000/0x0/0x4ffc00000, data 0x1d8fb1a/0x1d6d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 136601600 unmapped: 25583616 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 372 ms_handle_reset con 0x5607b31a9000 session 0x5607b214dc20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 372 handle_osd_map epochs [372,373], i have 372, src has [1,373]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 373 ms_handle_reset con 0x5607b3302400 session 0x5607b825ab40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 373 ms_handle_reset con 0x5607b4af0400 session 0x5607b20a65a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 136626176 unmapped: 25559040 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2432988 data_alloc: 234881024 data_used: 10940416
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 136626176 unmapped: 25559040 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 373 handle_osd_map epochs [374,374], i have 373, src has [1,374]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 136642560 unmapped: 25542656 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 374 ms_handle_reset con 0x5607b60b7400 session 0x5607b3315c20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 374 heartbeat osd_stat(store_statfs(0x4f8b4b000/0x0/0x4ffc00000, data 0x1d931a7/0x1d72000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 374 handle_osd_map epochs [375,375], i have 374, src has [1,375]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 375 ms_handle_reset con 0x5607b4af0c00 session 0x5607b423da40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 136675328 unmapped: 25509888 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 136716288 unmapped: 25468928 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 375 ms_handle_reset con 0x5607b31a9000 session 0x5607b422f4a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 375 ms_handle_reset con 0x5607b3302400 session 0x5607b422fc20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 137764864 unmapped: 24420352 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2428320 data_alloc: 234881024 data_used: 10825728
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 137764864 unmapped: 24420352 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 375 ms_handle_reset con 0x5607b453c000 session 0x5607b423da40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.748927593s of 11.741880417s, submitted: 202
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 375 handle_osd_map epochs [376,376], i have 375, src has [1,376]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 376 ms_handle_reset con 0x5607b31a9c00 session 0x5607b48e0f00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 24395776 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 376 heartbeat osd_stat(store_statfs(0x4f8b97000/0x0/0x4ffc00000, data 0x1d46717/0x1d26000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 376 handle_osd_map epochs [377,377], i have 376, src has [1,377]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 377 heartbeat osd_stat(store_statfs(0x4f8b97000/0x0/0x4ffc00000, data 0x1d46717/0x1d26000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 377 ms_handle_reset con 0x5607b31a8400 session 0x5607b825ab40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 137846784 unmapped: 24338432 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 377 ms_handle_reset con 0x5607b31a9000 session 0x5607b319e3c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 137846784 unmapped: 24338432 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 377 ms_handle_reset con 0x5607b4163800 session 0x5607b5b150e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 377 ms_handle_reset con 0x5607b4aef800 session 0x5607b5b15860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 377 handle_osd_map epochs [378,378], i have 377, src has [1,378]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 137846784 unmapped: 24338432 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2429338 data_alloc: 234881024 data_used: 10833920
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 378 ms_handle_reset con 0x5607b453c000 session 0x5607b319f860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 378 ms_handle_reset con 0x5607b4af0400 session 0x5607b32852c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 137854976 unmapped: 24330240 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 378 handle_osd_map epochs [378,379], i have 378, src has [1,379]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 379 handle_osd_map epochs [380,380], i have 379, src has [1,380]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 380 ms_handle_reset con 0x5607b4af0c00 session 0x5607b319ef00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 380 ms_handle_reset con 0x5607b31a8400 session 0x5607b825af00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 380 ms_handle_reset con 0x5607b3302400 session 0x5607b24ca000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 380 ms_handle_reset con 0x5607b31a9c00 session 0x5607b214dc20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 137895936 unmapped: 24289280 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 380 heartbeat osd_stat(store_statfs(0x4f8bb0000/0x0/0x4ffc00000, data 0x1af8b88/0x1d0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 380 heartbeat osd_stat(store_statfs(0x4f8bb0000/0x0/0x4ffc00000, data 0x1af8b88/0x1d0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 137895936 unmapped: 24289280 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 380 handle_osd_map epochs [380,381], i have 380, src has [1,381]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 137895936 unmapped: 24289280 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 381 handle_osd_map epochs [381,382], i have 381, src has [1,382]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 24256512 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2435035 data_alloc: 234881024 data_used: 10723328
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 24256512 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 382 ms_handle_reset con 0x5607b3302400 session 0x5607b5b14960
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 24256512 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 382 ms_handle_reset con 0x5607b31a9000 session 0x5607b44af860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.342626572s of 10.773828506s, submitted: 112
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 382 ms_handle_reset con 0x5607b4af0400 session 0x5607b44ae960
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 137936896 unmapped: 24248320 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 382 heartbeat osd_stat(store_statfs(0x4f8bad000/0x0/0x4ffc00000, data 0x1afbe35/0x1d0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [1])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 137936896 unmapped: 24248320 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 382 ms_handle_reset con 0x5607b31a8400 session 0x5607b4ac1680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 382 ms_handle_reset con 0x5607b4aef800 session 0x5607b4a79860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 382 handle_osd_map epochs [383,383], i have 382, src has [1,383]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 137953280 unmapped: 24231936 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2411145 data_alloc: 234881024 data_used: 9998336
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 132751360 unmapped: 29433856 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 383 handle_osd_map epochs [383,384], i have 383, src has [1,384]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 384 ms_handle_reset con 0x5607b4163800 session 0x5607b248a000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 132759552 unmapped: 29425664 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 384 heartbeat osd_stat(store_statfs(0x4f9a23000/0x0/0x4ffc00000, data 0xc85fc0/0xe9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 384 ms_handle_reset con 0x5607b4af0c00 session 0x5607b32823c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 384 handle_osd_map epochs [385,385], i have 384, src has [1,385]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 385 ms_handle_reset con 0x5607b31a8400 session 0x5607b4470b40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 132767744 unmapped: 29417472 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 385 ms_handle_reset con 0x5607b31a9000 session 0x5607b4a79680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 132759552 unmapped: 29425664 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 132759552 unmapped: 29425664 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 385 heartbeat osd_stat(store_statfs(0x4f9a1f000/0x0/0x4ffc00000, data 0xc87bad/0xe9d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2255733 data_alloc: 218103808 data_used: 1298432
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 385 heartbeat osd_stat(store_statfs(0x4f9a1f000/0x0/0x4ffc00000, data 0xc87bad/0xe9d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 385 handle_osd_map epochs [386,386], i have 385, src has [1,386]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 385 handle_osd_map epochs [386,386], i have 386, src has [1,386]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 132759552 unmapped: 29425664 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 386 heartbeat osd_stat(store_statfs(0x4f9a1b000/0x0/0x4ffc00000, data 0xc897aa/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 386 handle_osd_map epochs [387,387], i have 386, src has [1,387]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133808128 unmapped: 28377088 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133808128 unmapped: 28377088 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 387 ms_handle_reset con 0x5607b3302400 session 0x5607b3315860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133808128 unmapped: 28377088 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133808128 unmapped: 28377088 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2261841 data_alloc: 218103808 data_used: 1302528
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 387 heartbeat osd_stat(store_statfs(0x4f9609000/0x0/0x4ffc00000, data 0xc8b261/0xea3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133808128 unmapped: 28377088 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 387 handle_osd_map epochs [388,388], i have 387, src has [1,388]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.441797256s of 14.969786644s, submitted: 101
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133808128 unmapped: 28377088 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133775360 unmapped: 28409856 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133775360 unmapped: 28409856 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 388 ms_handle_reset con 0x5607b31a8400 session 0x5607b3315680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133775360 unmapped: 28409856 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 388 ms_handle_reset con 0x5607b31a9000 session 0x5607b3f42b40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2266759 data_alloc: 218103808 data_used: 1302528
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 388 heartbeat osd_stat(store_statfs(0x4f9607000/0x0/0x4ffc00000, data 0xc8ccc4/0xea6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133775360 unmapped: 28409856 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 388 handle_osd_map epochs [388,389], i have 388, src has [1,389]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 389 ms_handle_reset con 0x5607b4163800 session 0x5607b31c8b40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133791744 unmapped: 28393472 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 389 handle_osd_map epochs [390,390], i have 389, src has [1,390]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 390 ms_handle_reset con 0x5607b4af0c00 session 0x5607b825a960
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133799936 unmapped: 28385280 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 390 ms_handle_reset con 0x5607b4af0400 session 0x5607b44ab2c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133799936 unmapped: 28385280 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 390 ms_handle_reset con 0x5607b31a9000 session 0x5607b4ef32c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 390 handle_osd_map epochs [391,391], i have 390, src has [1,391]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 391 ms_handle_reset con 0x5607b31a8400 session 0x5607b44aad20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 391 ms_handle_reset con 0x5607b4163800 session 0x5607b4ac03c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 391 ms_handle_reset con 0x5607b4af0c00 session 0x5607b3eeda40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 132759552 unmapped: 29425664 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2283981 data_alloc: 218103808 data_used: 1314816
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 391 ms_handle_reset con 0x5607b1cbc400 session 0x5607b3fa85a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 391 ms_handle_reset con 0x5607b1cbc400 session 0x5607b3fa90e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 132759552 unmapped: 29425664 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 391 heartbeat osd_stat(store_statfs(0x4f95f9000/0x0/0x4ffc00000, data 0xc920c5/0xeb3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 391 ms_handle_reset con 0x5607b31a8400 session 0x5607b3fa9e00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 391 ms_handle_reset con 0x5607b31a9000 session 0x5607b44ae780
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 391 heartbeat osd_stat(store_statfs(0x4f95f9000/0x0/0x4ffc00000, data 0xc92063/0xeb2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.415328979s of 10.004321098s, submitted: 57
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 132759552 unmapped: 29425664 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 391 ms_handle_reset con 0x5607b4163800 session 0x5607b44aed20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 391 ms_handle_reset con 0x5607b4af0c00 session 0x5607b63f0000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 132759552 unmapped: 29425664 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 391 heartbeat osd_stat(store_statfs(0x4f95fe000/0x0/0x4ffc00000, data 0xc91ff1/0xeb0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 132767744 unmapped: 29417472 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 132775936 unmapped: 29409280 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2278020 data_alloc: 218103808 data_used: 1318912
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 391 handle_osd_map epochs [392,392], i have 391, src has [1,392]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 392 heartbeat osd_stat(store_statfs(0x4f95fa000/0x0/0x4ffc00000, data 0xc93b8a/0xeb3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 132800512 unmapped: 29384704 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 392 ms_handle_reset con 0x5607b1cbc400 session 0x5607b63f0d20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 392 handle_osd_map epochs [392,393], i have 392, src has [1,393]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 132808704 unmapped: 29376512 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 393 handle_osd_map epochs [394,394], i have 393, src has [1,394]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 394 ms_handle_reset con 0x5607b31a8400 session 0x5607b63f14a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 132800512 unmapped: 29384704 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 394 ms_handle_reset con 0x5607b4163800 session 0x5607b4ac10e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 394 handle_osd_map epochs [394,395], i have 394, src has [1,395]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 395 handle_osd_map epochs [395,395], i have 395, src has [1,395]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 395 ms_handle_reset con 0x5607b31a9000 session 0x5607b5b14f00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 395 ms_handle_reset con 0x5607b1cbc000 session 0x5607b3eeda40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 132833280 unmapped: 29351936 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 132833280 unmapped: 29351936 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 395 ms_handle_reset con 0x5607b1cbc400 session 0x5607b44aad20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2292453 data_alloc: 218103808 data_used: 1327104
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 395 ms_handle_reset con 0x5607b31a8400 session 0x5607b44ab2c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 395 handle_osd_map epochs [396,396], i have 395, src has [1,396]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 396 ms_handle_reset con 0x5607b31a9000 session 0x5607b63f03c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 132857856 unmapped: 29327360 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 396 heartbeat osd_stat(store_statfs(0x4f95ec000/0x0/0x4ffc00000, data 0xc9a983/0xec0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 396 handle_osd_map epochs [397,397], i have 396, src has [1,397]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 397 ms_handle_reset con 0x5607b4163800 session 0x5607b4ac1680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 132874240 unmapped: 29310976 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.902542114s of 10.187039375s, submitted: 78
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 397 handle_osd_map epochs [397,398], i have 397, src has [1,398]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 398 ms_handle_reset con 0x5607b422bc00 session 0x5607b44af860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 132882432 unmapped: 29302784 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 398 ms_handle_reset con 0x5607b1cbc400 session 0x5607b214dc20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 398 ms_handle_reset con 0x5607b31a8400 session 0x5607b24ca000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 132882432 unmapped: 29302784 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 132882432 unmapped: 29302784 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2301581 data_alloc: 218103808 data_used: 1335296
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 132882432 unmapped: 29302784 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 398 handle_osd_map epochs [399,399], i have 398, src has [1,399]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 399 ms_handle_reset con 0x5607b31a9000 session 0x5607b825af00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 132890624 unmapped: 29294592 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 399 heartbeat osd_stat(store_statfs(0x4f95e5000/0x0/0x4ffc00000, data 0xc9fba9/0xec8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 132890624 unmapped: 29294592 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 399 heartbeat osd_stat(store_statfs(0x4f95e5000/0x0/0x4ffc00000, data 0xc9fba9/0xec8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 399 ms_handle_reset con 0x5607b4163800 session 0x5607b319ef00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 399 handle_osd_map epochs [400,400], i have 399, src has [1,400]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 400 ms_handle_reset con 0x5607b579d400 session 0x5607b32852c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 132898816 unmapped: 29286400 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 132907008 unmapped: 29278208 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2307017 data_alloc: 218103808 data_used: 1339392
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 132907008 unmapped: 29278208 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 400 ms_handle_reset con 0x5607b1cbc400 session 0x5607b5b15860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 132907008 unmapped: 29278208 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 400 heartbeat osd_stat(store_statfs(0x4f95e3000/0x0/0x4ffc00000, data 0xca1718/0xeca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 400 ms_handle_reset con 0x5607b31a8400 session 0x5607b5b150e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 132907008 unmapped: 29278208 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.120571136s of 11.411588669s, submitted: 38
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 132898816 unmapped: 29286400 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 400 ms_handle_reset con 0x5607b4163800 session 0x5607b248a1e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 400 ms_handle_reset con 0x5607b422cc00 session 0x5607b4b2d4a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 400 ms_handle_reset con 0x5607b6206800 session 0x5607b423da40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 132898816 unmapped: 29286400 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2313335 data_alloc: 218103808 data_used: 1339392
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 400 handle_osd_map epochs [401,401], i have 400, src has [1,401]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 401 ms_handle_reset con 0x5607b31a8400 session 0x5607b31c9e00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 401 ms_handle_reset con 0x5607b422cc00 session 0x5607b4b1b2c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 132923392 unmapped: 29261824 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 401 handle_osd_map epochs [402,402], i have 401, src has [1,402]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 402 ms_handle_reset con 0x5607b4163800 session 0x5607b20a6f00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 402 ms_handle_reset con 0x5607b1cbc400 session 0x5607b248b4a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 402 ms_handle_reset con 0x5607b31a9000 session 0x5607b825ab40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 402 handle_osd_map epochs [403,403], i have 402, src has [1,403]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 403 ms_handle_reset con 0x5607b1cbc400 session 0x5607b3316d20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 132947968 unmapped: 29237248 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 403 heartbeat osd_stat(store_statfs(0x4f95d9000/0x0/0x4ffc00000, data 0xca50d2/0xed4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 403 ms_handle_reset con 0x5607b31a8400 session 0x5607b4ace5a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 403 ms_handle_reset con 0x5607b6206800 session 0x5607b4ac1e00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 403 ms_handle_reset con 0x5607b422cc00 session 0x5607b4b1ab40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 132972544 unmapped: 29212672 heap: 162185216 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 403 ms_handle_reset con 0x5607b53ff400 session 0x5607b3282f00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 403 handle_osd_map epochs [403,404], i have 403, src has [1,404]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133103616 unmapped: 45891584 heap: 178995200 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 404 ms_handle_reset con 0x5607b31a9000 session 0x5607b24cb4a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 404 ms_handle_reset con 0x5607b1cbc400 session 0x5607b44aa1e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 141574144 unmapped: 37421056 heap: 178995200 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2844858 data_alloc: 218103808 data_used: 1363968
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 145833984 unmapped: 33161216 heap: 178995200 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 404 ms_handle_reset con 0x5607b31a8400 session 0x5607b4ac0000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 141664256 unmapped: 37330944 heap: 178995200 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 137494528 unmapped: 41500672 heap: 178995200 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 404 heartbeat osd_stat(store_statfs(0x4efdd5000/0x0/0x4ffc00000, data 0xa4a86a4/0xa6d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 6.474607468s of 10.022521019s, submitted: 118
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133341184 unmapped: 45654016 heap: 178995200 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 137568256 unmapped: 41426944 heap: 178995200 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4021726 data_alloc: 218103808 data_used: 1363968
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 404 heartbeat osd_stat(store_statfs(0x4eb1d5000/0x0/0x4ffc00000, data 0xf0a86a4/0xf2d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133390336 unmapped: 45604864 heap: 178995200 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 404 heartbeat osd_stat(store_statfs(0x4e9dd5000/0x0/0x4ffc00000, data 0x104a86a4/0x106d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,1] op hist [0,0,0,0,0,1])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 404 handle_osd_map epochs [404,405], i have 404, src has [1,405]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 405 ms_handle_reset con 0x5607b4f28c00 session 0x5607b4b1a960
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 405 ms_handle_reset con 0x5607b4163800 session 0x5607b3f42d20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133406720 unmapped: 45588480 heap: 178995200 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 405 handle_osd_map epochs [406,406], i have 405, src has [1,406]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 406 ms_handle_reset con 0x5607b1cbc400 session 0x5607b24bbe00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133455872 unmapped: 45539328 heap: 178995200 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133455872 unmapped: 45539328 heap: 178995200 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133455872 unmapped: 45539328 heap: 178995200 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 406 heartbeat osd_stat(store_statfs(0x4e89d0000/0x0/0x4ffc00000, data 0x118abc04/0x11adc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4212895 data_alloc: 218103808 data_used: 1363968
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133455872 unmapped: 45539328 heap: 178995200 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 406 ms_handle_reset con 0x5607b31a8400 session 0x5607b248ba40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133455872 unmapped: 45539328 heap: 178995200 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 406 handle_osd_map epochs [408,408], i have 406, src has [1,408]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 406 handle_osd_map epochs [407,408], i have 406, src has [1,408]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 408 ms_handle_reset con 0x5607b4f28c00 session 0x5607b5b154a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133505024 unmapped: 45490176 heap: 178995200 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 408 handle_osd_map epochs [409,409], i have 408, src has [1,409]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.809759140s of 10.022680283s, submitted: 70
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 409 ms_handle_reset con 0x5607b31a9000 session 0x5607b319e3c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 409 ms_handle_reset con 0x5607b422cc00 session 0x5607b248bc20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133521408 unmapped: 45473792 heap: 178995200 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133521408 unmapped: 45473792 heap: 178995200 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4226146 data_alloc: 218103808 data_used: 1376256
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 409 ms_handle_reset con 0x5607b1cbc400 session 0x5607b3eed860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 409 heartbeat osd_stat(store_statfs(0x4e89c6000/0x0/0x4ffc00000, data 0x118b1252/0x11ae6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 409 heartbeat osd_stat(store_statfs(0x4e89c6000/0x0/0x4ffc00000, data 0x118b1252/0x11ae6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133521408 unmapped: 45473792 heap: 178995200 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 409 handle_osd_map epochs [410,410], i have 409, src has [1,410]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 410 ms_handle_reset con 0x5607b31a9000 session 0x5607b423cd20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133578752 unmapped: 45416448 heap: 178995200 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 410 heartbeat osd_stat(store_statfs(0x4e89c4000/0x0/0x4ffc00000, data 0x118b2dcf/0x11ae9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 410 handle_osd_map epochs [411,411], i have 410, src has [1,411]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 411 ms_handle_reset con 0x5607b31a8400 session 0x5607b825a1e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133586944 unmapped: 45408256 heap: 178995200 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 411 ms_handle_reset con 0x5607b53ff400 session 0x5607b4f09860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 411 ms_handle_reset con 0x5607b4f28c00 session 0x5607b3fa8f00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 133578752 unmapped: 45416448 heap: 178995200 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 171614208 unmapped: 28385280 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4596662 data_alloc: 218103808 data_used: 1384448
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 146694144 unmapped: 53305344 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 142647296 unmapped: 57352192 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 411 heartbeat osd_stat(store_statfs(0x4e21c1000/0x0/0x4ffc00000, data 0x180b49ae/0x182ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,2])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 142983168 unmapped: 57016320 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 411 heartbeat osd_stat(store_statfs(0x4e01c1000/0x0/0x4ffc00000, data 0x1a0b49ae/0x1a2ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,1] op hist [0,0,0,0,0,1,1,1,1])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 6.945171356s of 10.020330429s, submitted: 81
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 139132928 unmapped: 60866560 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 411 heartbeat osd_stat(store_statfs(0x4dcdc1000/0x0/0x4ffc00000, data 0x1d4b49ae/0x1d6ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,1] op hist [0,0,0,0,0,0,1])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 143638528 unmapped: 56360960 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5915318 data_alloc: 218103808 data_used: 1388544
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 411 heartbeat osd_stat(store_statfs(0x4da1c1000/0x0/0x4ffc00000, data 0x200b49ae/0x202ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,1] op hist [0,0,0,0,0,0,1,1])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 56221696 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 139853824 unmapped: 60145664 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 140066816 unmapped: 59932672 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 140140544 unmapped: 59858944 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 140386304 unmapped: 59613184 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6700662 data_alloc: 218103808 data_used: 1388544
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 411 ms_handle_reset con 0x5607b1cbc400 session 0x5607b4b1be00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 411 ms_handle_reset con 0x5607b31a8400 session 0x5607b4b1a000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 411 heartbeat osd_stat(store_statfs(0x4d25c1000/0x0/0x4ffc00000, data 0x27cb49ae/0x27eed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 136314880 unmapped: 63684608 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 411 handle_osd_map epochs [412,412], i have 411, src has [1,412]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 412 ms_handle_reset con 0x5607b31a9000 session 0x5607b423d4a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 136355840 unmapped: 63643648 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 412 handle_osd_map epochs [413,413], i have 412, src has [1,413]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 413 ms_handle_reset con 0x5607b53ff400 session 0x5607b33154a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 413 ms_handle_reset con 0x5607b6206800 session 0x5607b44af860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 136413184 unmapped: 63586304 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 413 heartbeat osd_stat(store_statfs(0x4d25bb000/0x0/0x4ffc00000, data 0x27cb80ee/0x27ef2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 136413184 unmapped: 63586304 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 136413184 unmapped: 63586304 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6734154 data_alloc: 218103808 data_used: 1404928
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 136413184 unmapped: 63586304 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 413 heartbeat osd_stat(store_statfs(0x4d25bc000/0x0/0x4ffc00000, data 0x27cb7bfd/0x27ef1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 413 handle_osd_map epochs [414,414], i have 413, src has [1,414]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.813751221s of 13.549509048s, submitted: 98
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 414 ms_handle_reset con 0x5607b1cbc400 session 0x5607b4ac1680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 136437760 unmapped: 63561728 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 136437760 unmapped: 63561728 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 136437760 unmapped: 63561728 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 414 ms_handle_reset con 0x5607b31a8400 session 0x5607b63f03c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 414 ms_handle_reset con 0x5607b31a9000 session 0x5607b44ab2c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 414 ms_handle_reset con 0x5607b53ff400 session 0x5607b44aad20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 414 ms_handle_reset con 0x5607b6206800 session 0x5607b44ae780
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 137404416 unmapped: 62595072 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 414 heartbeat osd_stat(store_statfs(0x4d21ec000/0x0/0x4ffc00000, data 0x28086670/0x282c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6783161 data_alloc: 218103808 data_used: 1404928
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 137404416 unmapped: 62595072 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 414 heartbeat osd_stat(store_statfs(0x4d21ec000/0x0/0x4ffc00000, data 0x28086670/0x282c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 137404416 unmapped: 62595072 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 414 ms_handle_reset con 0x5607b31a8400 session 0x5607b5b14d20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 414 handle_osd_map epochs [415,415], i have 414, src has [1,415]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 137502720 unmapped: 62496768 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 415 ms_handle_reset con 0x5607b31a9000 session 0x5607b3f5a960
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 415 handle_osd_map epochs [416,416], i have 415, src has [1,416]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 416 ms_handle_reset con 0x5607b2dfe800 session 0x5607b319ef00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 416 ms_handle_reset con 0x5607b53ff400 session 0x5607b4b1b860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 416 ms_handle_reset con 0x5607b5d24800 session 0x5607b422f2c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 416 ms_handle_reset con 0x5607b5d24800 session 0x5607b2d73c20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 416 ms_handle_reset con 0x5607b31a8400 session 0x5607b4a78d20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 416 ms_handle_reset con 0x5607b2dfe800 session 0x5607b422f680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 416 ms_handle_reset con 0x5607b31a9000 session 0x5607b20bd860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 416 ms_handle_reset con 0x5607b1cbc400 session 0x5607b3315860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 139059200 unmapped: 60940288 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 139067392 unmapped: 60932096 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6941640 data_alloc: 218103808 data_used: 1417216
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 139067392 unmapped: 60932096 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 139067392 unmapped: 60932096 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 416 heartbeat osd_stat(store_statfs(0x4d106a000/0x0/0x4ffc00000, data 0x2920146e/0x29442000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 139075584 unmapped: 60923904 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.999394417s of 11.456064224s, submitted: 112
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 416 ms_handle_reset con 0x5607b2dfe800 session 0x5607b248b860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 139345920 unmapped: 60653568 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 416 ms_handle_reset con 0x5607b1cbc400 session 0x5607b422eb40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 139362304 unmapped: 60637184 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6942565 data_alloc: 218103808 data_used: 1421312
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 141852672 unmapped: 58146816 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 416 heartbeat osd_stat(store_statfs(0x4d102c000/0x0/0x4ffc00000, data 0x2924146e/0x29482000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 145252352 unmapped: 54747136 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 145260544 unmapped: 54738944 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 145260544 unmapped: 54738944 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 416 ms_handle_reset con 0x5607b5d24800 session 0x5607b4ac05a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 416 ms_handle_reset con 0x5607b53ff400 session 0x5607b44afc20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 416 ms_handle_reset con 0x5607b6208000 session 0x5607b422fc20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 416 ms_handle_reset con 0x5607b1cbc400 session 0x5607b31c8b40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 416 ms_handle_reset con 0x5607b2dfe800 session 0x5607b825a5a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 416 heartbeat osd_stat(store_statfs(0x4d102c000/0x0/0x4ffc00000, data 0x2924146e/0x29482000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 145653760 unmapped: 54345728 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7047864 data_alloc: 234881024 data_used: 13389824
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 416 heartbeat osd_stat(store_statfs(0x4d0f0f000/0x0/0x4ffc00000, data 0x2935e46e/0x2959f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 145653760 unmapped: 54345728 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 416 ms_handle_reset con 0x5607b53ff400 session 0x5607b423de00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 416 ms_handle_reset con 0x5607b5d24800 session 0x5607b422f4a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 145653760 unmapped: 54345728 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 416 ms_handle_reset con 0x5607b60b7800 session 0x5607b2449c20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 416 ms_handle_reset con 0x5607b1cbc400 session 0x5607b423c780
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 145670144 unmapped: 54329344 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.551311493s of 10.682231903s, submitted: 37
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 145670144 unmapped: 54329344 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 145678336 unmapped: 54321152 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7053020 data_alloc: 234881024 data_used: 13389824
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 146382848 unmapped: 53616640 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 416 heartbeat osd_stat(store_statfs(0x4d0f0d000/0x0/0x4ffc00000, data 0x2935e4a0/0x295a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 416 ms_handle_reset con 0x5607b2dcd800 session 0x5607b4470d20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 156123136 unmapped: 43876352 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 416 ms_handle_reset con 0x5607b74d9c00 session 0x5607b44710e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 416 ms_handle_reset con 0x5607b74d9000 session 0x5607b4ef2d20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 156327936 unmapped: 43671552 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 416 heartbeat osd_stat(store_statfs(0x4cd207000/0x0/0x4ffc00000, data 0x2beaa4a0/0x2c0ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 156450816 unmapped: 43548672 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 156450816 unmapped: 43548672 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7405175 data_alloc: 234881024 data_used: 16470016
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 156459008 unmapped: 43540480 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 416 heartbeat osd_stat(store_statfs(0x4cd207000/0x0/0x4ffc00000, data 0x2beaa4a0/0x2c0ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 156459008 unmapped: 43540480 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 156467200 unmapped: 43532288 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 416 heartbeat osd_stat(store_statfs(0x4cd207000/0x0/0x4ffc00000, data 0x2beaa4a0/0x2c0ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.160986900s of 10.006623268s, submitted: 167
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 156491776 unmapped: 43507712 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 156508160 unmapped: 43491328 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7405191 data_alloc: 234881024 data_used: 16470016
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 156508160 unmapped: 43491328 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 416 heartbeat osd_stat(store_statfs(0x4cd207000/0x0/0x4ffc00000, data 0x2beaa4a0/0x2c0ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 157663232 unmapped: 42336256 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 160260096 unmapped: 39739392 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 416 ms_handle_reset con 0x5607b4f29400 session 0x5607b1c9e1e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 161136640 unmapped: 38862848 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 416 heartbeat osd_stat(store_statfs(0x4cc55a000/0x0/0x4ffc00000, data 0x2cf114c3/0x2cda7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 161161216 unmapped: 38838272 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7563821 data_alloc: 234881024 data_used: 18591744
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 161161216 unmapped: 38838272 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 161185792 unmapped: 38813696 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 161243136 unmapped: 38756352 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 165371904 unmapped: 34627584 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 416 heartbeat osd_stat(store_statfs(0x4cc544000/0x0/0x4ffc00000, data 0x2cf334c3/0x2cdc9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 165404672 unmapped: 34594816 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7600885 data_alloc: 234881024 data_used: 24629248
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 165404672 unmapped: 34594816 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 165404672 unmapped: 34594816 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 165404672 unmapped: 34594816 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 416 heartbeat osd_stat(store_statfs(0x4cc544000/0x0/0x4ffc00000, data 0x2cf334c3/0x2cdc9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 165404672 unmapped: 34594816 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 165404672 unmapped: 34594816 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7601205 data_alloc: 234881024 data_used: 24637440
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.926595688s of 16.421915054s, submitted: 159
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 416 ms_handle_reset con 0x5607b74d9000 session 0x5607b422fe00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 416 heartbeat osd_stat(store_statfs(0x4cc544000/0x0/0x4ffc00000, data 0x2cf334c3/0x2cdc9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 165437440 unmapped: 34562048 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 165437440 unmapped: 34562048 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 165445632 unmapped: 34553856 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 165462016 unmapped: 34537472 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 416 heartbeat osd_stat(store_statfs(0x4cc544000/0x0/0x4ffc00000, data 0x2cf334c3/0x2cdc9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 165494784 unmapped: 34504704 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7602459 data_alloc: 234881024 data_used: 24637440
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 165494784 unmapped: 34504704 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 175734784 unmapped: 24264704 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 416 handle_osd_map epochs [417,417], i have 416, src has [1,417]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 417 ms_handle_reset con 0x5607b3300c00 session 0x5607b4acfc20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 176095232 unmapped: 23904256 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 176201728 unmapped: 23797760 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 417 ms_handle_reset con 0x5607b2dfe800 session 0x5607b20a7c20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 417 ms_handle_reset con 0x5607b53ff400 session 0x5607b63f0d20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 417 ms_handle_reset con 0x5607b5d24800 session 0x5607b41510e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 417 heartbeat osd_stat(store_statfs(0x4cc42f000/0x0/0x4ffc00000, data 0x2ce1e040/0x2ccb5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [0,0,1])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 417 handle_osd_map epochs [417,418], i have 417, src has [1,418]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 417 handle_osd_map epochs [418,418], i have 418, src has [1,418]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 418 ms_handle_reset con 0x5607b2dfe800 session 0x5607b44af0e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 418 ms_handle_reset con 0x5607b5d24800 session 0x5607b33154a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 178806784 unmapped: 21192704 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7645701 data_alloc: 234881024 data_used: 23621632
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.321242332s of 10.425465584s, submitted: 173
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 174546944 unmapped: 25452544 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 418 handle_osd_map epochs [419,419], i have 418, src has [1,419]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 419 ms_handle_reset con 0x5607b3300c00 session 0x5607b4ef34a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 174587904 unmapped: 25411584 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 419 heartbeat osd_stat(store_statfs(0x4cbe58000/0x0/0x4ffc00000, data 0x2d61fb8b/0x2d4b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 174587904 unmapped: 25411584 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 419 ms_handle_reset con 0x5607b74d9c00 session 0x5607b248be00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 419 heartbeat osd_stat(store_statfs(0x4cbe54000/0x0/0x4ffc00000, data 0x2d62176a/0x2d4ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 174604288 unmapped: 25395200 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 419 ms_handle_reset con 0x5607b74d9000 session 0x5607b44ab4a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 419 ms_handle_reset con 0x5607b1cbc400 session 0x5607b3282f00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 419 ms_handle_reset con 0x5607b2dcd800 session 0x5607b3316960
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 419 handle_osd_map epochs [420,420], i have 419, src has [1,420]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 174809088 unmapped: 25190400 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 420 ms_handle_reset con 0x5607b2dfe800 session 0x5607b319e780
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7674923 data_alloc: 234881024 data_used: 23654400
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 420 ms_handle_reset con 0x5607b74d9000 session 0x5607b4ace000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 420 ms_handle_reset con 0x5607b3300c00 session 0x5607b3f5b860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 420 handle_osd_map epochs [421,421], i have 420, src has [1,421]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 421 ms_handle_reset con 0x5607b5d24800 session 0x5607b4a79680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 421 heartbeat osd_stat(store_statfs(0x4cb474000/0x0/0x4ffc00000, data 0x2e046326/0x2de98000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 421 ms_handle_reset con 0x5607b1cbc400 session 0x5607b24ba3c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 175308800 unmapped: 24690688 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 421 ms_handle_reset con 0x5607b53ff400 session 0x5607b32823c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 175316992 unmapped: 24682496 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 421 heartbeat osd_stat(store_statfs(0x4cb472000/0x0/0x4ffc00000, data 0x2e0484e3/0x2de9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 175325184 unmapped: 24674304 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 421 handle_osd_map epochs [421,422], i have 421, src has [1,422]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 421 handle_osd_map epochs [422,422], i have 422, src has [1,422]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 422 ms_handle_reset con 0x5607b2dcd800 session 0x5607b4ef25a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 422 ms_handle_reset con 0x5607b3300c00 session 0x5607b248bc20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 175325184 unmapped: 24674304 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 175341568 unmapped: 24657920 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7748619 data_alloc: 234881024 data_used: 23650304
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 422 heartbeat osd_stat(store_statfs(0x4cb46f000/0x0/0x4ffc00000, data 0x2e049a74/0x2de9e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.697992325s of 10.326239586s, submitted: 178
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 19K writes, 76K keys, 19K commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s#012Cumulative WAL: 19K writes, 6710 syncs, 2.94 writes per sync, written: 0.05 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 9085 writes, 31K keys, 9085 commit groups, 1.0 writes per commit group, ingest: 26.52 MB, 0.04 MB/s#012Interval WAL: 9085 writes, 3704 syncs, 2.45 writes per sync, written: 0.03 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 422 ms_handle_reset con 0x5607b2dfe800 session 0x5607b248b0e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 175341568 unmapped: 24657920 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 422 handle_osd_map epochs [423,423], i have 422, src has [1,423]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 423 ms_handle_reset con 0x5607b1cbc400 session 0x5607b539ba40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 423 ms_handle_reset con 0x5607b2dcd800 session 0x5607b539a3c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 175423488 unmapped: 24576000 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 423 ms_handle_reset con 0x5607b3300c00 session 0x5607b539a960
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 423 ms_handle_reset con 0x5607b5d24800 session 0x5607b4150780
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 423 ms_handle_reset con 0x5607b53ff400 session 0x5607b248ab40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 175431680 unmapped: 24567808 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 175439872 unmapped: 24559616 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 423 heartbeat osd_stat(store_statfs(0x4cc297000/0x0/0x4ffc00000, data 0x2d1d9fb3/0x2d077000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 423 handle_osd_map epochs [424,424], i have 423, src has [1,424]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 424 ms_handle_reset con 0x5607b1cbc400 session 0x5607b4471a40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 175439872 unmapped: 24559616 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 424 ms_handle_reset con 0x5607b2dfe800 session 0x5607b422e000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 424 ms_handle_reset con 0x5607b2dcd800 session 0x5607b24bbe00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7652182 data_alloc: 234881024 data_used: 23666688
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 175456256 unmapped: 24543232 heap: 199999488 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 424 handle_osd_map epochs [425,425], i have 424, src has [1,425]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 176726016 unmapped: 31678464 heap: 208404480 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 190824448 unmapped: 21782528 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 425 heartbeat osd_stat(store_statfs(0x4c6a81000/0x0/0x4ffc00000, data 0x329e9693/0x3288d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [0,0,0,0,0,0,0,3,1])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 186859520 unmapped: 25747456 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 425 ms_handle_reset con 0x5607b3300c00 session 0x5607b2448960
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 178569216 unmapped: 34037760 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8502055 data_alloc: 234881024 data_used: 23687168
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 6.406890392s of 10.008652687s, submitted: 138
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 183058432 unmapped: 29548544 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 188530688 unmapped: 24076288 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 425 ms_handle_reset con 0x5607b1cbc400 session 0x5607b44af680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 188760064 unmapped: 23846912 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 425 heartbeat osd_stat(store_statfs(0x4bee81000/0x0/0x4ffc00000, data 0x3a5e9693/0x3a48d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 180527104 unmapped: 32079872 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 193404928 unmapped: 19202048 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 9420279 data_alloc: 234881024 data_used: 23691264
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 185139200 unmapped: 27467776 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 425 ms_handle_reset con 0x5607b2dfe800 session 0x5607b4ef2d20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 425 heartbeat osd_stat(store_statfs(0x4ba5c4000/0x0/0x4ffc00000, data 0x3ea96693/0x3e93a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 186343424 unmapped: 26263552 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 425 heartbeat osd_stat(store_statfs(0x4ba5c4000/0x0/0x4ffc00000, data 0x3ea96693/0x3e93a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 186032128 unmapped: 26574848 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 425 ms_handle_reset con 0x5607b74d9c00 session 0x5607b423c960
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 425 ms_handle_reset con 0x5607b74d9000 session 0x5607b4b1b4a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 425 ms_handle_reset con 0x5607b2dff400 session 0x5607b33172c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 425 ms_handle_reset con 0x5607b2dcd800 session 0x5607b48e12c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 425 ms_handle_reset con 0x5607b1cbc400 session 0x5607b20a65a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 181911552 unmapped: 30695424 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 425 ms_handle_reset con 0x5607b74d9000 session 0x5607b4ac0d20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 179757056 unmapped: 32849920 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 425 ms_handle_reset con 0x5607b2dfe800 session 0x5607b4ef3c20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 425 ms_handle_reset con 0x5607b74d9c00 session 0x5607b63f0f00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7752234 data_alloc: 234881024 data_used: 23691264
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 425 ms_handle_reset con 0x5607b1cbc400 session 0x5607b44af2c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 5.314919472s of 10.024044991s, submitted: 182
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 179798016 unmapped: 32808960 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 425 ms_handle_reset con 0x5607b2dfe800 session 0x5607b3316d20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 425 handle_osd_map epochs [426,426], i have 425, src has [1,426]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 426 ms_handle_reset con 0x5607b2dcd800 session 0x5607b4f090e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 426 ms_handle_reset con 0x5607b74d9000 session 0x5607b539b860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 179798016 unmapped: 32808960 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 426 ms_handle_reset con 0x5607b2dffc00 session 0x5607b423d680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 426 ms_handle_reset con 0x5607b1cbc400 session 0x5607b4ef3a40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 426 heartbeat osd_stat(store_statfs(0x4cbe4e000/0x0/0x4ffc00000, data 0x2d20310e/0x2d0a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 426 ms_handle_reset con 0x5607b2dcd800 session 0x5607b63f0f00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 179806208 unmapped: 32800768 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 426 ms_handle_reset con 0x5607b2dfe800 session 0x5607b20a65a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 179806208 unmapped: 32800768 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 426 ms_handle_reset con 0x5607b6207000 session 0x5607b4b1a000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 426 handle_osd_map epochs [426,427], i have 426, src has [1,427]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 427 handle_osd_map epochs [427,427], i have 427, src has [1,427]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 427 ms_handle_reset con 0x5607b3fa6c00 session 0x5607b4a78960
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 179830784 unmapped: 32776192 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7729317 data_alloc: 234881024 data_used: 23842816
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 427 handle_osd_map epochs [428,428], i have 427, src has [1,428]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 428 ms_handle_reset con 0x5607b1cbc400 session 0x5607b3315680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 428 ms_handle_reset con 0x5607b74d9000 session 0x5607b48e12c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 428 ms_handle_reset con 0x5607b2dfe800 session 0x5607b4471a40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 428 handle_osd_map epochs [428,429], i have 428, src has [1,429]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 177307648 unmapped: 35299328 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 429 ms_handle_reset con 0x5607b2dcd800 session 0x5607b248a780
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 429 heartbeat osd_stat(store_statfs(0x4cbe74000/0x0/0x4ffc00000, data 0x2ce3983c/0x2d087000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 429 ms_handle_reset con 0x5607b6207000 session 0x5607b3f5b860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 177324032 unmapped: 35282944 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 429 handle_osd_map epochs [430,430], i have 429, src has [1,430]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 430 ms_handle_reset con 0x5607b1cbc400 session 0x5607b3282f00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 177324032 unmapped: 35282944 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 430 ms_handle_reset con 0x5607b2dcd800 session 0x5607b539a960
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 430 handle_osd_map epochs [431,431], i have 430, src has [1,431]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 431 ms_handle_reset con 0x5607b2dfe800 session 0x5607b44aad20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 431 ms_handle_reset con 0x5607b74d9000 session 0x5607b63f0d20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 174915584 unmapped: 37691392 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 431 ms_handle_reset con 0x5607b31a8400 session 0x5607b4a78b40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 431 ms_handle_reset con 0x5607b31a9000 session 0x5607b44714a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 167682048 unmapped: 44924928 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 431 heartbeat osd_stat(store_statfs(0x4e32fb000/0x0/0x4ffc00000, data 0x159b0c17/0x15c01000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 431 ms_handle_reset con 0x5607b31a8400 session 0x5607b3f42b40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4888549 data_alloc: 218103808 data_used: 7573504
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 431 handle_osd_map epochs [431,432], i have 431, src has [1,432]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.737686157s of 10.186094284s, submitted: 359
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 431 handle_osd_map epochs [432,432], i have 432, src has [1,432]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 167698432 unmapped: 44908544 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 432 ms_handle_reset con 0x5607b1cbc400 session 0x5607b24bb4a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 432 heartbeat osd_stat(store_statfs(0x4e4baf000/0x0/0x4ffc00000, data 0x140fc870/0x1434e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 167878656 unmapped: 44728320 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 432 ms_handle_reset con 0x5607b2dcd800 session 0x5607b3282960
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 167903232 unmapped: 44703744 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 167903232 unmapped: 44703744 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 167903232 unmapped: 44703744 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3108507 data_alloc: 234881024 data_used: 9662464
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 432 heartbeat osd_stat(store_statfs(0x4f57b0000/0x0/0x4ffc00000, data 0x34fc870/0x374e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 432 handle_osd_map epochs [432,433], i have 432, src has [1,433]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 167911424 unmapped: 44695552 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 167911424 unmapped: 44695552 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 167968768 unmapped: 44638208 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 167968768 unmapped: 44638208 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 167968768 unmapped: 44638208 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3113737 data_alloc: 234881024 data_used: 9670656
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 167968768 unmapped: 44638208 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.602460861s of 10.245586395s, submitted: 125
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 433 ms_handle_reset con 0x5607b2dfe800 session 0x5607b825bc20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 433 heartbeat osd_stat(store_statfs(0x4f57ac000/0x0/0x4ffc00000, data 0x34fe333/0x3751000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 167968768 unmapped: 44638208 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 433 ms_handle_reset con 0x5607b2dfe800 session 0x5607b32850e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 167968768 unmapped: 44638208 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 167968768 unmapped: 44638208 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 433 handle_osd_map epochs [434,434], i have 433, src has [1,434]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 434 ms_handle_reset con 0x5607b1cbc400 session 0x5607b2449c20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168034304 unmapped: 44572672 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3120762 data_alloc: 234881024 data_used: 9678848
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 434 heartbeat osd_stat(store_statfs(0x4f57a8000/0x0/0x4ffc00000, data 0x34fff12/0x3755000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 434 ms_handle_reset con 0x5607b2dcd800 session 0x5607b3284d20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168042496 unmapped: 44564480 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 434 handle_osd_map epochs [435,435], i have 434, src has [1,435]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 435 ms_handle_reset con 0x5607b31a8400 session 0x5607b44af860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 435 ms_handle_reset con 0x5607b31a9000 session 0x5607b3316d20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168083456 unmapped: 44523520 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 435 ms_handle_reset con 0x5607b1cbc400 session 0x5607b825b680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168017920 unmapped: 44589056 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 435 ms_handle_reset con 0x5607b2dcd800 session 0x5607b539af00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 435 ms_handle_reset con 0x5607b2dfe800 session 0x5607b4470d20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168017920 unmapped: 44589056 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168017920 unmapped: 44589056 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3119559 data_alloc: 218103808 data_used: 9662464
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168017920 unmapped: 44589056 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 435 heartbeat osd_stat(store_statfs(0x4f57a7000/0x0/0x4ffc00000, data 0x3501a81/0x3757000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 435 handle_osd_map epochs [436,436], i have 435, src has [1,436]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.679579735s of 10.065548897s, submitted: 70
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 436 heartbeat osd_stat(store_statfs(0x4f57a7000/0x0/0x4ffc00000, data 0x3501a81/0x3757000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168017920 unmapped: 44589056 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168017920 unmapped: 44589056 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168017920 unmapped: 44589056 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168017920 unmapped: 44589056 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3123733 data_alloc: 218103808 data_used: 9670656
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 436 ms_handle_reset con 0x5607b3300c00 session 0x5607b422f4a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 436 ms_handle_reset con 0x5607b53ff400 session 0x5607b63f03c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168075264 unmapped: 44531712 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 436 ms_handle_reset con 0x5607b1cbc400 session 0x5607b825a960
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 436 heartbeat osd_stat(store_statfs(0x4f57a4000/0x0/0x4ffc00000, data 0x35034e4/0x375a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 436 ms_handle_reset con 0x5607b2dcd800 session 0x5607b44ae000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168075264 unmapped: 44531712 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168075264 unmapped: 44531712 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 436 ms_handle_reset con 0x5607b3300c00 session 0x5607b4ac0f00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168083456 unmapped: 44523520 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 436 handle_osd_map epochs [437,437], i have 436, src has [1,437]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 437 ms_handle_reset con 0x5607b31a8400 session 0x5607b248be00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168091648 unmapped: 44515328 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3126748 data_alloc: 234881024 data_used: 9707520
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 437 handle_osd_map epochs [437,438], i have 437, src has [1,438]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 438 ms_handle_reset con 0x5607b74d9000 session 0x5607b31c9c20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 438 ms_handle_reset con 0x5607b4163400 session 0x5607b4ac0000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 438 ms_handle_reset con 0x5607b2dfe800 session 0x5607b3fa85a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 438 heartbeat osd_stat(store_statfs(0x4f57c2000/0x0/0x4ffc00000, data 0x34e10d3/0x373b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [1])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 438 ms_handle_reset con 0x5607b1cbc400 session 0x5607b4471860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 162693120 unmapped: 49913856 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.016571045s of 10.559726715s, submitted: 71
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 438 ms_handle_reset con 0x5607b31a8400 session 0x5607b44714a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 162693120 unmapped: 49913856 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 438 handle_osd_map epochs [438,439], i have 438, src has [1,439]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 162701312 unmapped: 49905664 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 439 ms_handle_reset con 0x5607b3300c00 session 0x5607b63f0d20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 439 ms_handle_reset con 0x5607b74d9000 session 0x5607b3316000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 162717696 unmapped: 49889280 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 439 handle_osd_map epochs [440,440], i have 439, src has [1,440]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 440 ms_handle_reset con 0x5607b2dcd800 session 0x5607b63f1680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 440 handle_osd_map epochs [441,441], i have 440, src has [1,441]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 441 ms_handle_reset con 0x5607b1cbc400 session 0x5607b4f090e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 162717696 unmapped: 49889280 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2810083 data_alloc: 218103808 data_used: 1572864
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 441 ms_handle_reset con 0x5607b3300c00 session 0x5607b3315860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 441 heartbeat osd_stat(store_statfs(0x4f7fb1000/0x0/0x4ffc00000, data 0xce8041/0xf4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 162717696 unmapped: 49889280 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 441 heartbeat osd_stat(store_statfs(0x4f7fb1000/0x0/0x4ffc00000, data 0xce8041/0xf4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 162717696 unmapped: 49889280 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 441 ms_handle_reset con 0x5607b2dfe800 session 0x5607b4acef00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 441 handle_osd_map epochs [442,442], i have 441, src has [1,442]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 162725888 unmapped: 49881088 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 442 ms_handle_reset con 0x5607b2dcd800 session 0x5607b31c92c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 442 handle_osd_map epochs [442,443], i have 442, src has [1,443]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 442 handle_osd_map epochs [443,443], i have 443, src has [1,443]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 443 ms_handle_reset con 0x5607b2dfe800 session 0x5607b33172c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 162750464 unmapped: 49856512 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 443 ms_handle_reset con 0x5607b1cbc400 session 0x5607b4ac1680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 162750464 unmapped: 49856512 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2816942 data_alloc: 218103808 data_used: 1593344
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 443 heartbeat osd_stat(store_statfs(0x4f7fad000/0x0/0x4ffc00000, data 0xceb773/0xf50000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 443 handle_osd_map epochs [444,444], i have 443, src has [1,444]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 444 ms_handle_reset con 0x5607b74d9000 session 0x5607b423d2c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 162750464 unmapped: 49856512 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 444 ms_handle_reset con 0x5607b3300c00 session 0x5607b4151c20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 444 ms_handle_reset con 0x5607b31a8400 session 0x5607b3284780
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 162750464 unmapped: 49856512 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.990588188s of 10.706424713s, submitted: 100
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 444 ms_handle_reset con 0x5607b1cbc400 session 0x5607b2449c20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 162775040 unmapped: 49831936 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 162775040 unmapped: 49831936 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 444 ms_handle_reset con 0x5607b2dcd800 session 0x5607b4b1a000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 162775040 unmapped: 49831936 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2821084 data_alloc: 218103808 data_used: 1605632
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 444 handle_osd_map epochs [445,445], i have 444, src has [1,445]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 445 ms_handle_reset con 0x5607b4163400 session 0x5607b3283860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 445 heartbeat osd_stat(store_statfs(0x4f7fab000/0x0/0x4ffc00000, data 0xced522/0xf53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 445 handle_osd_map epochs [445,446], i have 445, src has [1,446]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 446 ms_handle_reset con 0x5607b74d9000 session 0x5607b3fa8000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 162816000 unmapped: 49790976 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 446 heartbeat osd_stat(store_statfs(0x4f7fa1000/0x0/0x4ffc00000, data 0xcf0d89/0xf5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 446 ms_handle_reset con 0x5607b2dfe800 session 0x5607b248a780
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 162816000 unmapped: 49790976 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 446 ms_handle_reset con 0x5607b24b9400 session 0x5607b24492c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 446 ms_handle_reset con 0x5607b1cbc400 session 0x5607b825ab40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 162816000 unmapped: 49790976 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 446 heartbeat osd_stat(store_statfs(0x4f7fa3000/0x0/0x4ffc00000, data 0xcf0d89/0xf5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 162816000 unmapped: 49790976 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 446 handle_osd_map epochs [447,447], i have 446, src has [1,447]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 447 ms_handle_reset con 0x5607b2dcd800 session 0x5607b4f083c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 447 ms_handle_reset con 0x5607b31a8400 session 0x5607b3314780
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 162816000 unmapped: 49790976 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2834026 data_alloc: 218103808 data_used: 1622016
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 447 ms_handle_reset con 0x5607b1cbc400 session 0x5607b44afa40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 162816000 unmapped: 49790976 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 163700736 unmapped: 48906240 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.757455826s of 10.303757668s, submitted: 151
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 447 ms_handle_reset con 0x5607b2dcd800 session 0x5607b48e1680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 163725312 unmapped: 48881664 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 447 handle_osd_map epochs [448,448], i have 447, src has [1,448]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 448 ms_handle_reset con 0x5607b2dfe800 session 0x5607b423c000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 163733504 unmapped: 48873472 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 448 handle_osd_map epochs [449,449], i have 448, src has [1,449]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 449 heartbeat osd_stat(store_statfs(0x4f7f9c000/0x0/0x4ffc00000, data 0xcf44f3/0xf61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [0,1])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 449 ms_handle_reset con 0x5607b4163400 session 0x5607b4f090e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 449 ms_handle_reset con 0x5607b2dfe000 session 0x5607b4ac0b40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 449 ms_handle_reset con 0x5607b24b9400 session 0x5607b4150780
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 163749888 unmapped: 48857088 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2847884 data_alloc: 218103808 data_used: 1634304
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 449 handle_osd_map epochs [450,450], i have 449, src has [1,450]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 450 ms_handle_reset con 0x5607b1cbc400 session 0x5607b4f09860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 163766272 unmapped: 48840704 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 450 ms_handle_reset con 0x5607b2dcd800 session 0x5607b2d734a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 450 heartbeat osd_stat(store_statfs(0x4f7f92000/0x0/0x4ffc00000, data 0xcf81ec/0xf69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 163766272 unmapped: 48840704 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 163766272 unmapped: 48840704 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 450 heartbeat osd_stat(store_statfs(0x4f7f95000/0x0/0x4ffc00000, data 0xcf81ec/0xf69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 450 ms_handle_reset con 0x5607b4163400 session 0x5607b24ca000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 450 ms_handle_reset con 0x5607b2dfe800 session 0x5607b539a960
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 450 ms_handle_reset con 0x5607b1cbc400 session 0x5607b44aad20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 450 ms_handle_reset con 0x5607b24b9400 session 0x5607b3316000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 163807232 unmapped: 48799744 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 450 handle_osd_map epochs [450,451], i have 450, src has [1,451]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 451 ms_handle_reset con 0x5607b4163400 session 0x5607b4471860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 163807232 unmapped: 48799744 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 451 ms_handle_reset con 0x5607b4198c00 session 0x5607b3f5a960
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 451 ms_handle_reset con 0x5607b2dcd800 session 0x5607b3282780
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3104416 data_alloc: 218103808 data_used: 1646592
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 451 handle_osd_map epochs [452,452], i have 451, src has [1,452]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 163815424 unmapped: 48791552 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 452 heartbeat osd_stat(store_statfs(0x4f5b92000/0x0/0x4ffc00000, data 0x30f9d22/0x336b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 163815424 unmapped: 48791552 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 452 handle_osd_map epochs [453,453], i have 452, src has [1,453]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.361485481s of 10.042098999s, submitted: 109
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 453 handle_osd_map epochs [453,454], i have 453, src has [1,454]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 163823616 unmapped: 48783360 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 454 ms_handle_reset con 0x5607b1cbc400 session 0x5607b4f08000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 454 ms_handle_reset con 0x5607b24b9400 session 0x5607b3fa8000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 163840000 unmapped: 48766976 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 454 handle_osd_map epochs [454,455], i have 454, src has [1,455]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 454 handle_osd_map epochs [455,455], i have 455, src has [1,455]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 163840000 unmapped: 48766976 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3111822 data_alloc: 218103808 data_used: 1642496
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 455 ms_handle_reset con 0x5607b4163400 session 0x5607b4ef30e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 455 heartbeat osd_stat(store_statfs(0x4f5b8b000/0x0/0x4ffc00000, data 0x3100515/0x3373000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 163840000 unmapped: 48766976 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 455 handle_osd_map epochs [456,456], i have 455, src has [1,456]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 163840000 unmapped: 48766976 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 456 ms_handle_reset con 0x5607b4198c00 session 0x5607b44ab4a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 456 handle_osd_map epochs [456,457], i have 456, src has [1,457]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 457 handle_osd_map epochs [457,457], i have 457, src has [1,457]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 163856384 unmapped: 48750592 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 457 heartbeat osd_stat(store_statfs(0x4f5b88000/0x0/0x4ffc00000, data 0x31020b0/0x3375000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 163880960 unmapped: 48726016 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 457 handle_osd_map epochs [457,458], i have 457, src has [1,458]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 458 ms_handle_reset con 0x5607b3270c00 session 0x5607b44aef00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 458 ms_handle_reset con 0x5607b1cbc400 session 0x5607b825b2c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 163905536 unmapped: 48701440 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 458 ms_handle_reset con 0x5607b24b9400 session 0x5607b422f680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3120641 data_alloc: 218103808 data_used: 1658880
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 163930112 unmapped: 48676864 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 458 handle_osd_map epochs [459,459], i have 458, src has [1,459]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 459 heartbeat osd_stat(store_statfs(0x4f5b84000/0x0/0x4ffc00000, data 0x3105819/0x3379000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 163938304 unmapped: 48668672 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 163946496 unmapped: 48660480 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 459 heartbeat osd_stat(store_statfs(0x4f5b81000/0x0/0x4ffc00000, data 0x31072ec/0x337c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [1])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 163987456 unmapped: 48619520 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 164036608 unmapped: 48570368 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3168207 data_alloc: 218103808 data_used: 7942144
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 459 handle_osd_map epochs [459,460], i have 459, src has [1,460]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.607280731s of 13.098195076s, submitted: 168
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 164036608 unmapped: 48570368 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 460 ms_handle_reset con 0x5607b4198c00 session 0x5607b4ac0000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 164036608 unmapped: 48570368 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 164036608 unmapped: 48570368 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 460 heartbeat osd_stat(store_statfs(0x4f5b7d000/0x0/0x4ffc00000, data 0x3108de9/0x3380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 460 ms_handle_reset con 0x5607b4166400 session 0x5607b248be00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 164044800 unmapped: 48562176 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 460 handle_osd_map epochs [460,461], i have 460, src has [1,461]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 461 ms_handle_reset con 0x5607b3296000 session 0x5607b44afa40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 164052992 unmapped: 48553984 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3178528 data_alloc: 218103808 data_used: 7966720
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 461 handle_osd_map epochs [462,462], i have 461, src has [1,462]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 461 heartbeat osd_stat(store_statfs(0x4f5b7a000/0x0/0x4ffc00000, data 0x310a966/0x3383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [0,0,0,0,0,0,1])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 461 handle_osd_map epochs [462,462], i have 462, src has [1,462]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 164061184 unmapped: 48545792 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 462 ms_handle_reset con 0x5607b4199400 session 0x5607b3eed680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 462 ms_handle_reset con 0x5607b60b4800 session 0x5607b4ac0f00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 164085760 unmapped: 48521216 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 462 handle_osd_map epochs [463,463], i have 462, src has [1,463]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 165158912 unmapped: 47448064 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 463 ms_handle_reset con 0x5607b3296000 session 0x5607b63f03c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 165167104 unmapped: 47439872 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 463 heartbeat osd_stat(store_statfs(0x4f5765000/0x0/0x4ffc00000, data 0x310e0b4/0x3389000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 463 handle_osd_map epochs [464,464], i have 463, src has [1,464]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 464 ms_handle_reset con 0x5607b1cbc400 session 0x5607b4a78b40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 165814272 unmapped: 46792704 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3193363 data_alloc: 218103808 data_used: 7958528
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 464 ms_handle_reset con 0x5607b24b9400 session 0x5607b63f0f00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.396866798s of 10.172111511s, submitted: 138
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 173588480 unmapped: 39018496 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 175013888 unmapped: 37593088 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 175013888 unmapped: 37593088 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 175013888 unmapped: 37593088 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 175013888 unmapped: 37593088 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219059 data_alloc: 218103808 data_used: 8093696
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 464 heartbeat osd_stat(store_statfs(0x4f5318000/0x0/0x4ffc00000, data 0x338ac3f/0x3606000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 464 handle_osd_map epochs [464,465], i have 464, src has [1,465]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 171696128 unmapped: 40910848 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 171696128 unmapped: 40910848 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 465 heartbeat osd_stat(store_statfs(0x4f54e4000/0x0/0x4ffc00000, data 0x338c6da/0x3609000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 465 handle_osd_map epochs [465,466], i have 465, src has [1,466]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 171753472 unmapped: 40853504 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 466 ms_handle_reset con 0x5607b1cbc400 session 0x5607b20a65a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 466 heartbeat osd_stat(store_statfs(0x4f54e1000/0x0/0x4ffc00000, data 0x338e257/0x360c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 466 ms_handle_reset con 0x5607b3270c00 session 0x5607b32825a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 466 ms_handle_reset con 0x5607b4163400 session 0x5607b3283a40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 171835392 unmapped: 40771584 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 466 ms_handle_reset con 0x5607b3296000 session 0x5607b32825a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 171835392 unmapped: 40771584 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 466 handle_osd_map epochs [467,467], i have 466, src has [1,467]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3217397 data_alloc: 218103808 data_used: 8097792
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 171835392 unmapped: 40771584 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 171835392 unmapped: 40771584 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 467 heartbeat osd_stat(store_statfs(0x4f54df000/0x0/0x4ffc00000, data 0x338fe05/0x360e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 171835392 unmapped: 40771584 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 171835392 unmapped: 40771584 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 467 handle_osd_map epochs [468,468], i have 467, src has [1,468]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.030660629s of 14.218343735s, submitted: 60
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 468 heartbeat osd_stat(store_statfs(0x4f54df000/0x0/0x4ffc00000, data 0x338fe05/0x360e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,1] op hist [0,0,0,0,1])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 171835392 unmapped: 40771584 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 468 ms_handle_reset con 0x5607b4199400 session 0x5607b4a78b40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3222347 data_alloc: 218103808 data_used: 8101888
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 468 heartbeat osd_stat(store_statfs(0x4f54df000/0x0/0x4ffc00000, data 0x338fe05/0x360e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 468 handle_osd_map epochs [468,469], i have 468, src has [1,469]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 171835392 unmapped: 40771584 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 469 ms_handle_reset con 0x5607b1cbc400 session 0x5607b63f03c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 171835392 unmapped: 40771584 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 469 handle_osd_map epochs [469,470], i have 469, src has [1,470]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 470 ms_handle_reset con 0x5607b3270c00 session 0x5607b4ac0f00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 171884544 unmapped: 40722432 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 171884544 unmapped: 40722432 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 470 ms_handle_reset con 0x5607b3296000 session 0x5607b44afa40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 470 heartbeat osd_stat(store_statfs(0x4f54d6000/0x0/0x4ffc00000, data 0x3394fd2/0x3617000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 171884544 unmapped: 40722432 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3226479 data_alloc: 218103808 data_used: 8101888
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 470 ms_handle_reset con 0x5607b4163400 session 0x5607b248be00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 171892736 unmapped: 40714240 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 171892736 unmapped: 40714240 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 470 handle_osd_map epochs [470,471], i have 470, src has [1,471]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 471 ms_handle_reset con 0x5607b4199400 session 0x5607b4ac0000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 471 ms_handle_reset con 0x5607b1cbc400 session 0x5607b825b2c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 171900928 unmapped: 40706048 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 471 handle_osd_map epochs [472,472], i have 471, src has [1,472]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 472 ms_handle_reset con 0x5607b3270c00 session 0x5607b44aef00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 171909120 unmapped: 40697856 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 472 ms_handle_reset con 0x5607b3296000 session 0x5607b4ef30e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 472 ms_handle_reset con 0x5607b4163400 session 0x5607b4f08000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 472 heartbeat osd_stat(store_statfs(0x4f54cd000/0x0/0x4ffc00000, data 0x339874c/0x361d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 171909120 unmapped: 40697856 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 472 ms_handle_reset con 0x5607b4199400 session 0x5607b3f5a960
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3235382 data_alloc: 218103808 data_used: 8110080
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.304541588s of 10.502944946s, submitted: 61
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 472 ms_handle_reset con 0x5607b1cbc400 session 0x5607b3316000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 472 handle_osd_map epochs [473,473], i have 472, src has [1,473]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 473 ms_handle_reset con 0x5607b3270c00 session 0x5607b44aad20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 473 ms_handle_reset con 0x5607b3296000 session 0x5607b539a960
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 171933696 unmapped: 40673280 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 171941888 unmapped: 40665088 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 473 handle_osd_map epochs [473,474], i have 473, src has [1,474]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 474 ms_handle_reset con 0x5607b60b4800 session 0x5607b4ac0d20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 171941888 unmapped: 40665088 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 474 ms_handle_reset con 0x5607b4166400 session 0x5607b44705a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 474 handle_osd_map epochs [474,475], i have 474, src has [1,475]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 475 ms_handle_reset con 0x5607b1cbc400 session 0x5607b422e780
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 475 heartbeat osd_stat(store_statfs(0x4f54c8000/0x0/0x4ffc00000, data 0x339bdf9/0x3626000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 171941888 unmapped: 40665088 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 475 ms_handle_reset con 0x5607b3270c00 session 0x5607b3fa81e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 171941888 unmapped: 40665088 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3249679 data_alloc: 218103808 data_used: 8167424
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 475 handle_osd_map epochs [475,476], i have 475, src has [1,476]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 476 ms_handle_reset con 0x5607b60b4800 session 0x5607b3283860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 171966464 unmapped: 40640512 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 171966464 unmapped: 40640512 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 171966464 unmapped: 40640512 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 476 handle_osd_map epochs [477,477], i have 476, src has [1,477]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 477 ms_handle_reset con 0x5607b4198c00 session 0x5607b423cd20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 171974656 unmapped: 40632320 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 477 heartbeat osd_stat(store_statfs(0x4f6500000/0x0/0x4ffc00000, data 0x339f459/0x362c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 171974656 unmapped: 40632320 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3257568 data_alloc: 218103808 data_used: 8175616
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 477 ms_handle_reset con 0x5607b3296800 session 0x5607b4f092c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.836385727s of 10.009651184s, submitted: 67
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 477 handle_osd_map epochs [477,478], i have 477, src has [1,478]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 478 ms_handle_reset con 0x5607b1cbc400 session 0x5607b4f09680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 171991040 unmapped: 40615936 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 171991040 unmapped: 40615936 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 478 ms_handle_reset con 0x5607b3270c00 session 0x5607b3284d20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 171991040 unmapped: 40615936 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 171991040 unmapped: 40615936 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 478 heartbeat osd_stat(store_statfs(0x4f64fd000/0x0/0x4ffc00000, data 0x33a2b99/0x3631000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 173039616 unmapped: 39567360 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3263142 data_alloc: 218103808 data_used: 8445952
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 478 handle_osd_map epochs [479,479], i have 478, src has [1,479]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 479 ms_handle_reset con 0x5607b3296800 session 0x5607b3f734a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 173039616 unmapped: 39567360 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 479 heartbeat osd_stat(store_statfs(0x4f64fd000/0x0/0x4ffc00000, data 0x33a2b99/0x3631000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 479 ms_handle_reset con 0x5607b4198c00 session 0x5607b44714a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 173039616 unmapped: 39567360 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 479 ms_handle_reset con 0x5607b60b4800 session 0x5607b422f680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 173039616 unmapped: 39567360 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 479 handle_osd_map epochs [480,480], i have 479, src has [1,480]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 173039616 unmapped: 39567360 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 480 ms_handle_reset con 0x5607b1cbc400 session 0x5607b3f432c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 173039616 unmapped: 39567360 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3278920 data_alloc: 218103808 data_used: 9322496
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 480 heartbeat osd_stat(store_statfs(0x4f64f5000/0x0/0x4ffc00000, data 0x33a6375/0x3638000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 480 handle_osd_map epochs [481,481], i have 480, src has [1,481]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 480 handle_osd_map epochs [481,481], i have 481, src has [1,481]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.223190308s of 10.500252724s, submitted: 56
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 173211648 unmapped: 39395328 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 481 handle_osd_map epochs [481,482], i have 481, src has [1,482]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 482 ms_handle_reset con 0x5607b3270c00 session 0x5607b63f01e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 173219840 unmapped: 39387136 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 173219840 unmapped: 39387136 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 482 ms_handle_reset con 0x5607b3296800 session 0x5607b31c9e00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 482 handle_osd_map epochs [483,483], i have 482, src has [1,483]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 483 ms_handle_reset con 0x5607b4198c00 session 0x5607b3fa9c20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 173219840 unmapped: 39387136 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 173219840 unmapped: 39387136 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3286961 data_alloc: 218103808 data_used: 9326592
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 483 ms_handle_reset con 0x5607b74dac00 session 0x5607b3282960
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 173219840 unmapped: 39387136 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 483 heartbeat osd_stat(store_statfs(0x4f64ee000/0x0/0x4ffc00000, data 0x33ab50c/0x3640000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 483 handle_osd_map epochs [484,484], i have 483, src has [1,484]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 484 ms_handle_reset con 0x5607b1cbc400 session 0x5607b214dc20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 173301760 unmapped: 39305216 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 173301760 unmapped: 39305216 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 484 handle_osd_map epochs [484,485], i have 484, src has [1,485]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 172089344 unmapped: 40517632 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 172089344 unmapped: 40517632 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3293413 data_alloc: 218103808 data_used: 9330688
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 485 ms_handle_reset con 0x5607b3270c00 session 0x5607b3282f00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 485 handle_osd_map epochs [486,486], i have 485, src has [1,486]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.858673096s of 10.016912460s, submitted: 52
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 172113920 unmapped: 40493056 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 486 handle_osd_map epochs [486,487], i have 486, src has [1,487]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 487 heartbeat osd_stat(store_statfs(0x4f64e4000/0x0/0x4ffc00000, data 0x33b0587/0x3649000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 172122112 unmapped: 40484864 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 172122112 unmapped: 40484864 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 172122112 unmapped: 40484864 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 487 ms_handle_reset con 0x5607b3296800 session 0x5607b4a792c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 172130304 unmapped: 40476672 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3301859 data_alloc: 218103808 data_used: 9330688
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 172130304 unmapped: 40476672 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 487 heartbeat osd_stat(store_statfs(0x4f64e2000/0x0/0x4ffc00000, data 0x33b2158/0x364c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 487 handle_osd_map epochs [488,488], i have 487, src has [1,488]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 488 ms_handle_reset con 0x5607b4198c00 session 0x5607b3f42d20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 172130304 unmapped: 40476672 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 172130304 unmapped: 40476672 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 488 ms_handle_reset con 0x5607b54fec00 session 0x5607b422eb40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 488 handle_osd_map epochs [489,489], i have 488, src has [1,489]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 489 ms_handle_reset con 0x5607b1cbc400 session 0x5607b3f5ba40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 172130304 unmapped: 40476672 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 489 ms_handle_reset con 0x5607b3270c00 session 0x5607b41510e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 489 ms_handle_reset con 0x5607b3296800 session 0x5607b4b1b860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 172130304 unmapped: 40476672 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3305478 data_alloc: 218103808 data_used: 9338880
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 172130304 unmapped: 40476672 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 489 handle_osd_map epochs [490,490], i have 489, src has [1,490]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.280503273s of 10.555062294s, submitted: 78
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 490 ms_handle_reset con 0x5607b4163400 session 0x5607b4f090e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 490 ms_handle_reset con 0x5607b3296000 session 0x5607b3f5b860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 172220416 unmapped: 40386560 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f64d8000/0x0/0x4ffc00000, data 0x33b7351/0x3655000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 490 ms_handle_reset con 0x5607b3296000 session 0x5607b31c9e00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 172244992 unmapped: 40361984 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 172244992 unmapped: 40361984 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 172244992 unmapped: 40361984 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3312547 data_alloc: 234881024 data_used: 10182656
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f64da000/0x0/0x4ffc00000, data 0x33b731e/0x3653000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 490 handle_osd_map epochs [491,491], i have 490, src has [1,491]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 172244992 unmapped: 40361984 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 172244992 unmapped: 40361984 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 491 ms_handle_reset con 0x5607b1cbc400 session 0x5607b4ac0d20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168525824 unmapped: 44081152 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 491 heartbeat osd_stat(store_statfs(0x4f8b52000/0x0/0x4ffc00000, data 0xd3dd81/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168525824 unmapped: 44081152 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168525824 unmapped: 44081152 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2993369 data_alloc: 218103808 data_used: 1814528
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168525824 unmapped: 44081152 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 491 heartbeat osd_stat(store_statfs(0x4f8b52000/0x0/0x4ffc00000, data 0xd3dd81/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168525824 unmapped: 44081152 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168525824 unmapped: 44081152 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168525824 unmapped: 44081152 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 491 heartbeat osd_stat(store_statfs(0x4f8b52000/0x0/0x4ffc00000, data 0xd3dd81/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 491 heartbeat osd_stat(store_statfs(0x4f8b52000/0x0/0x4ffc00000, data 0xd3dd81/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168525824 unmapped: 44081152 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2993369 data_alloc: 218103808 data_used: 1814528
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168525824 unmapped: 44081152 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.532420158s of 14.955132484s, submitted: 57
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 491 ms_handle_reset con 0x5607b3270c00 session 0x5607b3316000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168525824 unmapped: 44081152 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 491 ms_handle_reset con 0x5607b3296800 session 0x5607b3f5a960
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 491 ms_handle_reset con 0x5607b4163400 session 0x5607b44aef00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168542208 unmapped: 44064768 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 491 ms_handle_reset con 0x5607b1cbc400 session 0x5607b44714a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168542208 unmapped: 44064768 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 491 heartbeat osd_stat(store_statfs(0x4f8b53000/0x0/0x4ffc00000, data 0xd3dd81/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168542208 unmapped: 44064768 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2995176 data_alloc: 218103808 data_used: 1814528
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 491 ms_handle_reset con 0x5607b3270c00 session 0x5607b248d680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168550400 unmapped: 44056576 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168550400 unmapped: 44056576 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168550400 unmapped: 44056576 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 491 heartbeat osd_stat(store_statfs(0x4f8b53000/0x0/0x4ffc00000, data 0xd3dd81/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168550400 unmapped: 44056576 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168550400 unmapped: 44056576 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2994303 data_alloc: 218103808 data_used: 1814528
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168550400 unmapped: 44056576 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168550400 unmapped: 44056576 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168550400 unmapped: 44056576 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168550400 unmapped: 44056576 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 491 heartbeat osd_stat(store_statfs(0x4f8b53000/0x0/0x4ffc00000, data 0xd3dd81/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168550400 unmapped: 44056576 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2994303 data_alloc: 218103808 data_used: 1814528
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168550400 unmapped: 44056576 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168550400 unmapped: 44056576 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168550400 unmapped: 44056576 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 491 heartbeat osd_stat(store_statfs(0x4f8b53000/0x0/0x4ffc00000, data 0xd3dd81/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168550400 unmapped: 44056576 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168550400 unmapped: 44056576 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2994303 data_alloc: 218103808 data_used: 1814528
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168550400 unmapped: 44056576 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168550400 unmapped: 44056576 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168550400 unmapped: 44056576 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168550400 unmapped: 44056576 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 491 heartbeat osd_stat(store_statfs(0x4f8b53000/0x0/0x4ffc00000, data 0xd3dd81/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168550400 unmapped: 44056576 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2994303 data_alloc: 218103808 data_used: 1814528
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168550400 unmapped: 44056576 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 491 heartbeat osd_stat(store_statfs(0x4f8b53000/0x0/0x4ffc00000, data 0xd3dd81/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168550400 unmapped: 44056576 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 491 heartbeat osd_stat(store_statfs(0x4f8b53000/0x0/0x4ffc00000, data 0xd3dd81/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168550400 unmapped: 44056576 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 491 heartbeat osd_stat(store_statfs(0x4f8b53000/0x0/0x4ffc00000, data 0xd3dd81/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168550400 unmapped: 44056576 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168550400 unmapped: 44056576 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2994303 data_alloc: 218103808 data_used: 1814528
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168550400 unmapped: 44056576 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168550400 unmapped: 44056576 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 491 heartbeat osd_stat(store_statfs(0x4f8b53000/0x0/0x4ffc00000, data 0xd3dd81/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 31.078527451s of 31.167495728s, submitted: 25
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 491 ms_handle_reset con 0x5607b3296000 session 0x5607b19bb2c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168550400 unmapped: 44056576 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168550400 unmapped: 44056576 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 491 handle_osd_map epochs [491,492], i have 491, src has [1,492]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 492 ms_handle_reset con 0x5607b3296800 session 0x5607b24bb4a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168550400 unmapped: 44056576 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 492 heartbeat osd_stat(store_statfs(0x4f8b52000/0x0/0x4ffc00000, data 0xd3dd90/0xfdc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000305 data_alloc: 218103808 data_used: 1822720
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 492 heartbeat osd_stat(store_statfs(0x4f8b4e000/0x0/0x4ffc00000, data 0xd3f90d/0xfdf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168550400 unmapped: 44056576 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 492 ms_handle_reset con 0x5607b579d400 session 0x5607b422fc20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 492 ms_handle_reset con 0x5607b4198c00 session 0x5607b423dc20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168558592 unmapped: 44048384 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 492 heartbeat osd_stat(store_statfs(0x4f8b4f000/0x0/0x4ffc00000, data 0xd3f90d/0xfdf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 492 ms_handle_reset con 0x5607b1cbc400 session 0x5607b4ac1e00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 492 ms_handle_reset con 0x5607b3270c00 session 0x5607b2d72b40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168558592 unmapped: 44048384 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168558592 unmapped: 44048384 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 492 ms_handle_reset con 0x5607b3296000 session 0x5607b20bd860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 492 ms_handle_reset con 0x5607b3296800 session 0x5607b4ef3a40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 492 handle_osd_map epochs [492,493], i have 492, src has [1,493]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 493 ms_handle_reset con 0x5607b1cbc400 session 0x5607b539be00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168566784 unmapped: 44040192 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 493 ms_handle_reset con 0x5607b3270c00 session 0x5607b3eec780
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3004200 data_alloc: 218103808 data_used: 1830912
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168566784 unmapped: 44040192 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 493 ms_handle_reset con 0x5607b3296000 session 0x5607b20a74a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f8b4c000/0x0/0x4ffc00000, data 0xd414cf/0xfe1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [0,1])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 493 ms_handle_reset con 0x5607b4198c00 session 0x5607b3284960
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f8b4c000/0x0/0x4ffc00000, data 0xd414cf/0xfe1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168566784 unmapped: 44040192 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168566784 unmapped: 44040192 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168566784 unmapped: 44040192 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168566784 unmapped: 44040192 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3002728 data_alloc: 218103808 data_used: 1830912
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 493 handle_osd_map epochs [494,494], i have 493, src has [1,494]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.353340149s of 13.498845100s, submitted: 66
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168566784 unmapped: 44040192 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168566784 unmapped: 44040192 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 494 heartbeat osd_stat(store_statfs(0x4f8b49000/0x0/0x4ffc00000, data 0xd42f32/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168566784 unmapped: 44040192 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168566784 unmapped: 44040192 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168566784 unmapped: 44040192 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3006902 data_alloc: 218103808 data_used: 1839104
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168566784 unmapped: 44040192 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168566784 unmapped: 44040192 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168566784 unmapped: 44040192 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 494 heartbeat osd_stat(store_statfs(0x4f8b49000/0x0/0x4ffc00000, data 0xd42f32/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168566784 unmapped: 44040192 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168566784 unmapped: 44040192 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3006902 data_alloc: 218103808 data_used: 1839104
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 494 heartbeat osd_stat(store_statfs(0x4f8b49000/0x0/0x4ffc00000, data 0xd42f32/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168566784 unmapped: 44040192 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168566784 unmapped: 44040192 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168566784 unmapped: 44040192 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168566784 unmapped: 44040192 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168566784 unmapped: 44040192 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 494 heartbeat osd_stat(store_statfs(0x4f8b49000/0x0/0x4ffc00000, data 0xd42f32/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3006902 data_alloc: 218103808 data_used: 1839104
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168566784 unmapped: 44040192 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168566784 unmapped: 44040192 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168566784 unmapped: 44040192 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168566784 unmapped: 44040192 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 494 heartbeat osd_stat(store_statfs(0x4f8b49000/0x0/0x4ffc00000, data 0xd42f32/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168566784 unmapped: 44040192 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3006902 data_alloc: 218103808 data_used: 1839104
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168566784 unmapped: 44040192 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168566784 unmapped: 44040192 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168566784 unmapped: 44040192 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168566784 unmapped: 44040192 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168566784 unmapped: 44040192 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3006902 data_alloc: 218103808 data_used: 1839104
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 494 heartbeat osd_stat(store_statfs(0x4f8b49000/0x0/0x4ffc00000, data 0xd42f32/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168566784 unmapped: 44040192 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 494 heartbeat osd_stat(store_statfs(0x4f8b49000/0x0/0x4ffc00000, data 0xd42f32/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168566784 unmapped: 44040192 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168566784 unmapped: 44040192 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168566784 unmapped: 44040192 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168566784 unmapped: 44040192 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 494 heartbeat osd_stat(store_statfs(0x4f8b49000/0x0/0x4ffc00000, data 0xd42f32/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3006902 data_alloc: 218103808 data_used: 1839104
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168566784 unmapped: 44040192 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 494 heartbeat osd_stat(store_statfs(0x4f8b49000/0x0/0x4ffc00000, data 0xd42f32/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168566784 unmapped: 44040192 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168566784 unmapped: 44040192 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168566784 unmapped: 44040192 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168566784 unmapped: 44040192 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3006902 data_alloc: 218103808 data_used: 1839104
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168566784 unmapped: 44040192 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 494 heartbeat osd_stat(store_statfs(0x4f8b49000/0x0/0x4ffc00000, data 0xd42f32/0xfe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168566784 unmapped: 44040192 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168566784 unmapped: 44040192 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 38.139213562s of 38.185298920s, submitted: 10
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168566784 unmapped: 44040192 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168566784 unmapped: 44040192 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 494 ms_handle_reset con 0x5607b5fd6000 session 0x5607b3f43c20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3008730 data_alloc: 218103808 data_used: 1839104
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168566784 unmapped: 44040192 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168566784 unmapped: 44040192 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 494 heartbeat osd_stat(store_statfs(0x4f8b48000/0x0/0x4ffc00000, data 0xd42f42/0xfe5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 494 handle_osd_map epochs [494,495], i have 494, src has [1,495]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 495 ms_handle_reset con 0x5607b1cbc400 session 0x5607b63f1860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168574976 unmapped: 44032000 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168574976 unmapped: 44032000 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 495 ms_handle_reset con 0x5607b3270c00 session 0x5607b32854a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168574976 unmapped: 44032000 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014081 data_alloc: 218103808 data_used: 1843200
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 495 heartbeat osd_stat(store_statfs(0x4f8b44000/0x0/0x4ffc00000, data 0xd44b32/0xfea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168574976 unmapped: 44032000 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168574976 unmapped: 44032000 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168574976 unmapped: 44032000 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168574976 unmapped: 44032000 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168574976 unmapped: 44032000 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014081 data_alloc: 218103808 data_used: 1843200
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 495 heartbeat osd_stat(store_statfs(0x4f8b44000/0x0/0x4ffc00000, data 0xd44b32/0xfea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 495 ms_handle_reset con 0x5607b3296000 session 0x5607b319e3c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 495 ms_handle_reset con 0x5607b4198c00 session 0x5607b4f08780
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168574976 unmapped: 44032000 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 495 ms_handle_reset con 0x5607b5fd6000 session 0x5607b4470780
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 495 ms_handle_reset con 0x5607b1cbc400 session 0x5607b423d0e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.485933304s of 12.507762909s, submitted: 6
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 495 ms_handle_reset con 0x5607b3270c00 session 0x5607b2448000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 495 ms_handle_reset con 0x5607b3296000 session 0x5607b4acf4a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 495 ms_handle_reset con 0x5607b4198c00 session 0x5607b44ae3c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 495 ms_handle_reset con 0x5607b453dc00 session 0x5607b4ef23c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 495 ms_handle_reset con 0x5607b1cbc400 session 0x5607b4ac03c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 169050112 unmapped: 43556864 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 169050112 unmapped: 43556864 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 169050112 unmapped: 43556864 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 169050112 unmapped: 43556864 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3042387 data_alloc: 218103808 data_used: 1843200
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 495 heartbeat osd_stat(store_statfs(0x4f8946000/0x0/0x4ffc00000, data 0xf42b32/0x11e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 169050112 unmapped: 43556864 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 495 ms_handle_reset con 0x5607b3270c00 session 0x5607b20a7e00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 169058304 unmapped: 43548672 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 169058304 unmapped: 43548672 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168181760 unmapped: 44425216 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168181760 unmapped: 44425216 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3057576 data_alloc: 218103808 data_used: 3743744
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168181760 unmapped: 44425216 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 495 heartbeat osd_stat(store_statfs(0x4f8945000/0x0/0x4ffc00000, data 0xf42b55/0x11e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168255488 unmapped: 44351488 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168255488 unmapped: 44351488 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168255488 unmapped: 44351488 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168255488 unmapped: 44351488 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3057576 data_alloc: 218103808 data_used: 3743744
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 495 heartbeat osd_stat(store_statfs(0x4f8945000/0x0/0x4ffc00000, data 0xf42b55/0x11e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168255488 unmapped: 44351488 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 495 heartbeat osd_stat(store_statfs(0x4f8945000/0x0/0x4ffc00000, data 0xf42b55/0x11e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 495 heartbeat osd_stat(store_statfs(0x4f8945000/0x0/0x4ffc00000, data 0xf42b55/0x11e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168255488 unmapped: 44351488 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168255488 unmapped: 44351488 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 168255488 unmapped: 44351488 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.730253220s of 17.855611801s, submitted: 17
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 173481984 unmapped: 39124992 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3140624 data_alloc: 218103808 data_used: 5107712
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 495 heartbeat osd_stat(store_statfs(0x4f6ddb000/0x0/0x4ffc00000, data 0x190cb55/0x1bb3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 173572096 unmapped: 39034880 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 495 heartbeat osd_stat(store_statfs(0x4f6ddb000/0x0/0x4ffc00000, data 0x190cb55/0x1bb3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 173572096 unmapped: 39034880 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 173572096 unmapped: 39034880 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 173572096 unmapped: 39034880 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 495 heartbeat osd_stat(store_statfs(0x4f6ddb000/0x0/0x4ffc00000, data 0x190cb55/0x1bb3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 173563904 unmapped: 39043072 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137432 data_alloc: 218103808 data_used: 5017600
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 173563904 unmapped: 39043072 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 173563904 unmapped: 39043072 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 173563904 unmapped: 39043072 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 173563904 unmapped: 39043072 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 173563904 unmapped: 39043072 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3138324 data_alloc: 218103808 data_used: 5021696
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 495 heartbeat osd_stat(store_statfs(0x4f6dd9000/0x0/0x4ffc00000, data 0x190eb55/0x1bb5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 173563904 unmapped: 39043072 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 173563904 unmapped: 39043072 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.634575844s of 13.010268211s, submitted: 118
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 173563904 unmapped: 39043072 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 495 ms_handle_reset con 0x5607b7871800 session 0x5607b4ac12c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 173563904 unmapped: 39043072 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 495 ms_handle_reset con 0x5607b3296000 session 0x5607b539b4a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 495 ms_handle_reset con 0x5607b4198c00 session 0x5607b825b860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 173563904 unmapped: 39043072 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3023878 data_alloc: 218103808 data_used: 1851392
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 495 ms_handle_reset con 0x5607b4198c00 session 0x5607b214dc20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 174243840 unmapped: 38363136 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 495 heartbeat osd_stat(store_statfs(0x4f79a3000/0x0/0x4ffc00000, data 0xd44b32/0xfea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 495 heartbeat osd_stat(store_statfs(0x4f79a3000/0x0/0x4ffc00000, data 0xd44b32/0xfea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 174243840 unmapped: 38363136 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 174243840 unmapped: 38363136 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 495 ms_handle_reset con 0x5607b1cbc400 session 0x5607b44710e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 174243840 unmapped: 38363136 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 495 ms_handle_reset con 0x5607b3270c00 session 0x5607b20bd860
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 174252032 unmapped: 38354944 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3021414 data_alloc: 218103808 data_used: 1843200
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 495 ms_handle_reset con 0x5607b3296000 session 0x5607b24cbe00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 495 ms_handle_reset con 0x5607b7871800 session 0x5607b44af680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 495 heartbeat osd_stat(store_statfs(0x4f79a6000/0x0/0x4ffc00000, data 0xd44abf/0xfe8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 174252032 unmapped: 38354944 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 174252032 unmapped: 38354944 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 174252032 unmapped: 38354944 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 174252032 unmapped: 38354944 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 174252032 unmapped: 38354944 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3020534 data_alloc: 218103808 data_used: 1839104
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 174252032 unmapped: 38354944 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 495 heartbeat osd_stat(store_statfs(0x4f79a7000/0x0/0x4ffc00000, data 0xd44aaf/0xfe7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 174252032 unmapped: 38354944 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 174252032 unmapped: 38354944 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 174252032 unmapped: 38354944 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 495 heartbeat osd_stat(store_statfs(0x4f79a7000/0x0/0x4ffc00000, data 0xd44aaf/0xfe7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 174252032 unmapped: 38354944 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3020534 data_alloc: 218103808 data_used: 1839104
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 174252032 unmapped: 38354944 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 174252032 unmapped: 38354944 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 174252032 unmapped: 38354944 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 174252032 unmapped: 38354944 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 495 heartbeat osd_stat(store_statfs(0x4f79a7000/0x0/0x4ffc00000, data 0xd44aaf/0xfe7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 22.614530563s of 22.855047226s, submitted: 27
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 174252032 unmapped: 38354944 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3024263 data_alloc: 218103808 data_used: 1839104
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 174252032 unmapped: 38354944 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 495 ms_handle_reset con 0x5607b1cbc400 session 0x5607b3f425a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 174252032 unmapped: 38354944 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 174252032 unmapped: 38354944 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 174252032 unmapped: 38354944 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 495 heartbeat osd_stat(store_statfs(0x4f79a5000/0x0/0x4ffc00000, data 0xd44b22/0xfe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 174252032 unmapped: 38354944 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3024191 data_alloc: 218103808 data_used: 1839104
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 174252032 unmapped: 38354944 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 174252032 unmapped: 38354944 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 174252032 unmapped: 38354944 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 174252032 unmapped: 38354944 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 495 heartbeat osd_stat(store_statfs(0x4f79a5000/0x0/0x4ffc00000, data 0xd44b22/0xfe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 174252032 unmapped: 38354944 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3024191 data_alloc: 218103808 data_used: 1839104
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 174252032 unmapped: 38354944 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 174252032 unmapped: 38354944 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 174252032 unmapped: 38354944 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 495 ms_handle_reset con 0x5607b3270c00 session 0x5607b2d72f00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.959521294s of 13.976827621s, submitted: 5
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 495 heartbeat osd_stat(store_statfs(0x4f79a5000/0x0/0x4ffc00000, data 0xd44b22/0xfe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 495 ms_handle_reset con 0x5607b3296000 session 0x5607b4ef2000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 174260224 unmapped: 38346752 heap: 212606976 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 186859520 unmapped: 38354944 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3236375 data_alloc: 218103808 data_used: 1839104
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 495 ms_handle_reset con 0x5607b4198c00 session 0x5607b2d734a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 495 heartbeat osd_stat(store_statfs(0x4f5b5f000/0x0/0x4ffc00000, data 0x2b8babf/0x2e2f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 495 ms_handle_reset con 0x5607b5fd6c00 session 0x5607b422f680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 174276608 unmapped: 50937856 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 174276608 unmapped: 50937856 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 174276608 unmapped: 50937856 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 174276608 unmapped: 50937856 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 174276608 unmapped: 50937856 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3293015 data_alloc: 218103808 data_used: 1839104
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 495 heartbeat osd_stat(store_statfs(0x4f535f000/0x0/0x4ffc00000, data 0x338babf/0x362f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 174276608 unmapped: 50937856 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 174276608 unmapped: 50937856 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 174276608 unmapped: 50937856 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 495 heartbeat osd_stat(store_statfs(0x4f535f000/0x0/0x4ffc00000, data 0x338babf/0x362f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 174276608 unmapped: 50937856 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 174276608 unmapped: 50937856 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3293015 data_alloc: 218103808 data_used: 1839104
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 174276608 unmapped: 50937856 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 174276608 unmapped: 50937856 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 495 ms_handle_reset con 0x5607b4af1c00 session 0x5607b4150000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 174276608 unmapped: 50937856 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 174276608 unmapped: 50937856 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 495 heartbeat osd_stat(store_statfs(0x4f535f000/0x0/0x4ffc00000, data 0x338babf/0x362f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 174276608 unmapped: 50937856 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3293015 data_alloc: 218103808 data_used: 1839104
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.964478493s of 16.542390823s, submitted: 22
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 495 ms_handle_reset con 0x5607b3270c00 session 0x5607b422e1e0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 174596096 unmapped: 50618368 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 174596096 unmapped: 50618368 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 174596096 unmapped: 50618368 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 174596096 unmapped: 50618368 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 495 heartbeat osd_stat(store_statfs(0x4f533b000/0x0/0x4ffc00000, data 0x33afabf/0x3653000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 174596096 unmapped: 50618368 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3339144 data_alloc: 218103808 data_used: 7802880
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 175742976 unmapped: 49471488 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 175742976 unmapped: 49471488 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 175742976 unmapped: 49471488 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 495 heartbeat osd_stat(store_statfs(0x4f533b000/0x0/0x4ffc00000, data 0x33afabf/0x3653000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 175742976 unmapped: 49471488 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 175742976 unmapped: 49471488 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3341384 data_alloc: 218103808 data_used: 8163328
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 175742976 unmapped: 49471488 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 175742976 unmapped: 49471488 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 495 heartbeat osd_stat(store_statfs(0x4f533b000/0x0/0x4ffc00000, data 0x33afabf/0x3653000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 175742976 unmapped: 49471488 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 175742976 unmapped: 49471488 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 495 heartbeat osd_stat(store_statfs(0x4f533b000/0x0/0x4ffc00000, data 0x33afabf/0x3653000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 175742976 unmapped: 49471488 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.843484879s of 14.869992256s, submitted: 7
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3363560 data_alloc: 218103808 data_used: 8626176
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 183410688 unmapped: 41803776 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 185925632 unmapped: 39288832 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 180649984 unmapped: 44564480 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 180649984 unmapped: 44564480 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 495 heartbeat osd_stat(store_statfs(0x4f3a86000/0x0/0x4ffc00000, data 0x3ac4abf/0x3d68000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 180649984 unmapped: 44564480 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3409304 data_alloc: 218103808 data_used: 8949760
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 180649984 unmapped: 44564480 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 180649984 unmapped: 44564480 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 180649984 unmapped: 44564480 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 180649984 unmapped: 44564480 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 495 heartbeat osd_stat(store_statfs(0x4f3a86000/0x0/0x4ffc00000, data 0x3ac4abf/0x3d68000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 180649984 unmapped: 44564480 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3409304 data_alloc: 218103808 data_used: 8949760
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.524730682s of 10.805479050s, submitted: 72
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 180649984 unmapped: 44564480 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 495 ms_handle_reset con 0x5607b3296000 session 0x5607b3316780
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 495 ms_handle_reset con 0x5607b4198c00 session 0x5607b48e0f00
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 495 ms_handle_reset con 0x5607b2703400 session 0x5607b248a000
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 181714944 unmapped: 43499520 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 181714944 unmapped: 43499520 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 181714944 unmapped: 43499520 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 495 heartbeat osd_stat(store_statfs(0x4f3aaa000/0x0/0x4ffc00000, data 0x3aa0abf/0x3d44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 181714944 unmapped: 43499520 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3398744 data_alloc: 218103808 data_used: 8839168
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 181714944 unmapped: 43499520 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 495 handle_osd_map epochs [496,496], i have 495, src has [1,496]
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 496 ms_handle_reset con 0x5607b4f26800 session 0x5607b3fa9a40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 496 ms_handle_reset con 0x5607b2703400 session 0x5607b3fa8b40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 181714944 unmapped: 43499520 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 181714944 unmapped: 43499520 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 181714944 unmapped: 43499520 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 181714944 unmapped: 43499520 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 496 heartbeat osd_stat(store_statfs(0x4f3aa5000/0x0/0x4ffc00000, data 0x3aa264c/0x3d48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3404746 data_alloc: 218103808 data_used: 8847360
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 181714944 unmapped: 43499520 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 181714944 unmapped: 43499520 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 181714944 unmapped: 43499520 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 181714944 unmapped: 43499520 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 496 heartbeat osd_stat(store_statfs(0x4f3aa5000/0x0/0x4ffc00000, data 0x3aa264c/0x3d48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 181714944 unmapped: 43499520 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3404746 data_alloc: 218103808 data_used: 8847360
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 496 heartbeat osd_stat(store_statfs(0x4f3aa5000/0x0/0x4ffc00000, data 0x3aa264c/0x3d48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 181714944 unmapped: 43499520 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 181714944 unmapped: 43499520 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 496 heartbeat osd_stat(store_statfs(0x4f3aa5000/0x0/0x4ffc00000, data 0x3aa264c/0x3d48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 181714944 unmapped: 43499520 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 181714944 unmapped: 43499520 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 496 heartbeat osd_stat(store_statfs(0x4f3aa5000/0x0/0x4ffc00000, data 0x3aa264c/0x3d48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 181714944 unmapped: 43499520 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.208564758s of 19.286630630s, submitted: 28
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 496 ms_handle_reset con 0x5607b3270c00 session 0x5607b3fa83c0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3405187 data_alloc: 218103808 data_used: 8847360
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 181714944 unmapped: 43499520 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 496 heartbeat osd_stat(store_statfs(0x4f3aa6000/0x0/0x4ffc00000, data 0x3aa264c/0x3d48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 181714944 unmapped: 43499520 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 181714944 unmapped: 43499520 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 181714944 unmapped: 43499520 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 496 heartbeat osd_stat(store_statfs(0x4f3aa6000/0x0/0x4ffc00000, data 0x3aa264c/0x3d48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 181731328 unmapped: 43483136 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3406919 data_alloc: 218103808 data_used: 9027584
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 181731328 unmapped: 43483136 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 181731328 unmapped: 43483136 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 181731328 unmapped: 43483136 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 181731328 unmapped: 43483136 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 181731328 unmapped: 43483136 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3406919 data_alloc: 218103808 data_used: 9027584
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 496 heartbeat osd_stat(store_statfs(0x4f3aa6000/0x0/0x4ffc00000, data 0x3aa264c/0x3d48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 181731328 unmapped: 43483136 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 181731328 unmapped: 43483136 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 181731328 unmapped: 43483136 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 181731328 unmapped: 43483136 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 496 heartbeat osd_stat(store_statfs(0x4f3aa6000/0x0/0x4ffc00000, data 0x3aa264c/0x3d48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 183566336 unmapped: 41648128 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3422279 data_alloc: 234881024 data_used: 13078528
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 183738368 unmapped: 41476096 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 496 heartbeat osd_stat(store_statfs(0x4f3aa6000/0x0/0x4ffc00000, data 0x3aa264c/0x3d48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 183738368 unmapped: 41476096 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 183738368 unmapped: 41476096 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 183738368 unmapped: 41476096 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 183738368 unmapped: 41476096 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3436199 data_alloc: 234881024 data_used: 17747968
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 496 heartbeat osd_stat(store_statfs(0x4f3aa6000/0x0/0x4ffc00000, data 0x3aa264c/0x3d48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 187932672 unmapped: 37281792 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 496 heartbeat osd_stat(store_statfs(0x4f3aa6000/0x0/0x4ffc00000, data 0x3aa264c/0x3d48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 187932672 unmapped: 37281792 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 187932672 unmapped: 37281792 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 187932672 unmapped: 37281792 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 187932672 unmapped: 37281792 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3436199 data_alloc: 234881024 data_used: 17747968
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 496 heartbeat osd_stat(store_statfs(0x4f3aa6000/0x0/0x4ffc00000, data 0x3aa264c/0x3d48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 187932672 unmapped: 37281792 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 187932672 unmapped: 37281792 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 496 heartbeat osd_stat(store_statfs(0x4f3aa6000/0x0/0x4ffc00000, data 0x3aa264c/0x3d48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 187932672 unmapped: 37281792 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 187932672 unmapped: 37281792 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 29.869714737s of 29.923021317s, submitted: 6
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 188997632 unmapped: 36216832 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3464247 data_alloc: 234881024 data_used: 17747968
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 188997632 unmapped: 36216832 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 188997632 unmapped: 36216832 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 496 heartbeat osd_stat(store_statfs(0x4f36a6000/0x0/0x4ffc00000, data 0x3ea264c/0x4148000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 188997632 unmapped: 36216832 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 496 heartbeat osd_stat(store_statfs(0x4f36a6000/0x0/0x4ffc00000, data 0x3ea264c/0x4148000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 188997632 unmapped: 36216832 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 188997632 unmapped: 36216832 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3464247 data_alloc: 234881024 data_used: 17747968
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 188997632 unmapped: 36216832 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 188997632 unmapped: 36216832 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 188997632 unmapped: 36216832 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 496 heartbeat osd_stat(store_statfs(0x4f36a6000/0x0/0x4ffc00000, data 0x3ea264c/0x4148000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 188997632 unmapped: 36216832 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 496 ms_handle_reset con 0x5607b3296000 session 0x5607b32e5680
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 496 ms_handle_reset con 0x5607b4198c00 session 0x5607b422e960
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.161155701s of 10.254331589s, submitted: 2
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189194240 unmapped: 36020224 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 496 ms_handle_reset con 0x5607b4f26800 session 0x5607b32e4b40
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3469923 data_alloc: 234881024 data_used: 19005440
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189194240 unmapped: 36020224 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189194240 unmapped: 36020224 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 496 heartbeat osd_stat(store_statfs(0x4f36a6000/0x0/0x4ffc00000, data 0x3ea264c/0x4148000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 496 heartbeat osd_stat(store_statfs(0x4f36a6000/0x0/0x4ffc00000, data 0x3ea264c/0x4148000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189194240 unmapped: 36020224 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189194240 unmapped: 36020224 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 496 ms_handle_reset con 0x5607b2703400 session 0x5607b825bc20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3469043 data_alloc: 234881024 data_used: 19001344
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 496 heartbeat osd_stat(store_statfs(0x4f36a7000/0x0/0x4ffc00000, data 0x3ea263c/0x4147000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 496 ms_handle_reset con 0x5607b3270c00 session 0x5607b44afc20
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 496 heartbeat osd_stat(store_statfs(0x4f3aa7000/0x0/0x4ffc00000, data 0x3aa263c/0x3d47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 496 ms_handle_reset con 0x5607b3296000 session 0x5607b33154a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 496 ms_handle_reset con 0x5607b4198c00 session 0x5607b4a785a0
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3435451 data_alloc: 234881024 data_used: 17846272
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 496 heartbeat osd_stat(store_statfs(0x4f3aa8000/0x0/0x4ffc00000, data 0x3aa262c/0x3d46000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3435451 data_alloc: 234881024 data_used: 17846272
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 496 heartbeat osd_stat(store_statfs(0x4f3aa8000/0x0/0x4ffc00000, data 0x3aa262c/0x3d46000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 496 heartbeat osd_stat(store_statfs(0x4f3aa8000/0x0/0x4ffc00000, data 0x3aa262c/0x3d46000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3435451 data_alloc: 234881024 data_used: 17846272
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 496 heartbeat osd_stat(store_statfs(0x4f3aa8000/0x0/0x4ffc00000, data 0x3aa262c/0x3d46000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3435451 data_alloc: 234881024 data_used: 17846272
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 496 heartbeat osd_stat(store_statfs(0x4f3aa8000/0x0/0x4ffc00000, data 0x3aa262c/0x3d46000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 496 heartbeat osd_stat(store_statfs(0x4f3aa8000/0x0/0x4ffc00000, data 0x3aa262c/0x3d46000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3435451 data_alloc: 234881024 data_used: 17846272
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 496 heartbeat osd_stat(store_statfs(0x4f3aa8000/0x0/0x4ffc00000, data 0x3aa262c/0x3d46000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 496 heartbeat osd_stat(store_statfs(0x4f3aa8000/0x0/0x4ffc00000, data 0x3aa262c/0x3d46000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3435451 data_alloc: 234881024 data_used: 17846272
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 496 heartbeat osd_stat(store_statfs(0x4f3aa8000/0x0/0x4ffc00000, data 0x3aa262c/0x3d46000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3435451 data_alloc: 234881024 data_used: 17846272
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 496 heartbeat osd_stat(store_statfs(0x4f3aa8000/0x0/0x4ffc00000, data 0x3aa262c/0x3d46000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 496 heartbeat osd_stat(store_statfs(0x4f3aa8000/0x0/0x4ffc00000, data 0x3aa262c/0x3d46000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 496 heartbeat osd_stat(store_statfs(0x4f3aa8000/0x0/0x4ffc00000, data 0x3aa262c/0x3d46000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3435451 data_alloc: 234881024 data_used: 17846272
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 496 heartbeat osd_stat(store_statfs(0x4f3aa8000/0x0/0x4ffc00000, data 0x3aa262c/0x3d46000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3435451 data_alloc: 234881024 data_used: 17846272
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 496 heartbeat osd_stat(store_statfs(0x4f3aa8000/0x0/0x4ffc00000, data 0x3aa262c/0x3d46000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 496 heartbeat osd_stat(store_statfs(0x4f3aa8000/0x0/0x4ffc00000, data 0x3aa262c/0x3d46000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3435451 data_alloc: 234881024 data_used: 17846272
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 496 heartbeat osd_stat(store_statfs(0x4f3aa8000/0x0/0x4ffc00000, data 0x3aa262c/0x3d46000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3435451 data_alloc: 234881024 data_used: 17846272
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 496 heartbeat osd_stat(store_statfs(0x4f3aa8000/0x0/0x4ffc00000, data 0x3aa262c/0x3d46000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 496 heartbeat osd_stat(store_statfs(0x4f3aa8000/0x0/0x4ffc00000, data 0x3aa262c/0x3d46000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 496 heartbeat osd_stat(store_statfs(0x4f3aa8000/0x0/0x4ffc00000, data 0x3aa262c/0x3d46000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3435451 data_alloc: 234881024 data_used: 17846272
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 496 heartbeat osd_stat(store_statfs(0x4f3aa8000/0x0/0x4ffc00000, data 0x3aa262c/0x3d46000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 496 heartbeat osd_stat(store_statfs(0x4f3aa8000/0x0/0x4ffc00000, data 0x3aa262c/0x3d46000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3435451 data_alloc: 234881024 data_used: 17846272
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 496 heartbeat osd_stat(store_statfs(0x4f3aa8000/0x0/0x4ffc00000, data 0x3aa262c/0x3d46000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3435451 data_alloc: 234881024 data_used: 17846272
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 496 heartbeat osd_stat(store_statfs(0x4f3aa8000/0x0/0x4ffc00000, data 0x3aa262c/0x3d46000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3435451 data_alloc: 234881024 data_used: 17846272
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 496 heartbeat osd_stat(store_statfs(0x4f3aa8000/0x0/0x4ffc00000, data 0x3aa262c/0x3d46000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3435451 data_alloc: 234881024 data_used: 17846272
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 496 heartbeat osd_stat(store_statfs(0x4f3aa8000/0x0/0x4ffc00000, data 0x3aa262c/0x3d46000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3435451 data_alloc: 234881024 data_used: 17846272
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 496 heartbeat osd_stat(store_statfs(0x4f3aa8000/0x0/0x4ffc00000, data 0x3aa262c/0x3d46000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 496 heartbeat osd_stat(store_statfs(0x4f3aa8000/0x0/0x4ffc00000, data 0x3aa262c/0x3d46000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189210624 unmapped: 36003840 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189202432 unmapped: 36012032 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189202432 unmapped: 36012032 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 496 heartbeat osd_stat(store_statfs(0x4f3aa8000/0x0/0x4ffc00000, data 0x3aa262c/0x3d46000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189202432 unmapped: 36012032 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3435451 data_alloc: 234881024 data_used: 17846272
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189202432 unmapped: 36012032 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 496 heartbeat osd_stat(store_statfs(0x4f3aa8000/0x0/0x4ffc00000, data 0x3aa262c/0x3d46000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189202432 unmapped: 36012032 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 496 heartbeat osd_stat(store_statfs(0x4f3aa8000/0x0/0x4ffc00000, data 0x3aa262c/0x3d46000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189202432 unmapped: 36012032 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189202432 unmapped: 36012032 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 496 heartbeat osd_stat(store_statfs(0x4f3aa8000/0x0/0x4ffc00000, data 0x3aa262c/0x3d46000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189202432 unmapped: 36012032 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: bluestore.MempoolThread(0x5607b0cbfb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3435451 data_alloc: 234881024 data_used: 17846272
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: do_command 'config diff' '{prefix=config diff}'
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189267968 unmapped: 35946496 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: do_command 'config show' '{prefix=config show}'
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: do_command 'counter dump' '{prefix=counter dump}'
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: do_command 'counter schema' '{prefix=counter schema}'
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: osd.2 496 heartbeat osd_stat(store_statfs(0x4f3aa8000/0x0/0x4ffc00000, data 0x3aa262c/0x3d46000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189079552 unmapped: 36134912 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: prioritycache tune_memory target: 4294967296 mapped: 189128704 unmapped: 36085760 heap: 225214464 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:05 np0005480824 ceph-osd[90443]: do_command 'log dump' '{prefix=log dump}'
Oct 11 00:10:05 np0005480824 nova_compute[260089]: 2025-10-11 04:10:05.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:10:05 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct 11 00:10:05 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3003673069' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 11 00:10:05 np0005480824 ceph-mgr[74617]: log_channel(audit) log [DBG] : from='client.19255 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 11 00:10:05 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader).osd e496 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 00:10:05 np0005480824 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 00:10:06 np0005480824 ceph-mgr[74617]: log_channel(audit) log [DBG] : from='client.19259 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 00:10:06 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 11 00:10:06 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/771841828' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 11 00:10:06 np0005480824 nova_compute[260089]: 2025-10-11 04:10:06.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 11 00:10:06 np0005480824 ceph-mgr[74617]: log_channel(audit) log [DBG] : from='client.19261 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 00:10:06 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct 11 00:10:06 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3695969036' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 11 00:10:06 np0005480824 ceph-mgr[74617]: log_channel(audit) log [DBG] : from='client.19265 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 00:10:07 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Oct 11 00:10:07 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3670387411' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 11 00:10:07 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v2006: 321 pgs: 321 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail
Oct 11 00:10:07 np0005480824 nova_compute[260089]: 2025-10-11 04:10:07.296 2 DEBUG oslo_service.periodic_task [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 11 00:10:07 np0005480824 nova_compute[260089]: 2025-10-11 04:10:07.297 2 DEBUG nova.compute.manager [None req-d79b6b22-ab5a-41f8-9631-0f44ba1473d0 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct 11 00:10:07 np0005480824 ceph-mgr[74617]: log_channel(audit) log [DBG] : from='client.19273 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 00:10:07 np0005480824 ceph-mgr[74617]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 11 00:10:07 np0005480824 ceph-92cfe4d4-4917-5be1-9d00-73758793a62b-mgr-compute-0-pdyrua[74613]: 2025-10-11T04:10:07.623+0000 7fb7c0b48640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 11 00:10:07 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0) v1
Oct 11 00:10:07 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1619936741' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 11 00:10:08 np0005480824 podman[311113]: 2025-10-11 04:10:08.030419551 +0000 UTC m=+0.091680777 container health_status dd5285a58cbe29a90687a00af14b934b599bb4de55df5857e4d7b7ac9f22feab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c4b77291aeca5591ac860bd4127cec2f, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 11 00:10:08 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Oct 11 00:10:08 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/237046074' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 11 00:10:08 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Oct 11 00:10:08 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3653437337' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 11 00:10:08 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Oct 11 00:10:08 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3259685094' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 11 00:10:08 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 00:10:08 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 00:10:08 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 00:10:08 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 00:10:08 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 00:10:08 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Oct 11 00:10:08 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/279611031' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 11 00:10:08 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 11 00:10:08 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 7f44f889-ab72-495a-9c55-5d8cc2c1362f does not exist
Oct 11 00:10:08 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev 9e88898f-8143-4b26-974a-4e65961f2e46 does not exist
Oct 11 00:10:08 np0005480824 ceph-mgr[74617]: [progress WARNING root] complete: ev aaf3de76-6677-4ed5-8486-371770ce8943 does not exist
Oct 11 00:10:08 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 00:10:08 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 00:10:08 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 00:10:08 np0005480824 ceph-mon[74326]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 00:10:08 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 00:10:08 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 00:10:08 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 00:10:08 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' 
Oct 11 00:10:08 np0005480824 ceph-mon[74326]: from='mgr.14130 192.168.122.100:0/3311184781' entity='mgr.compute-0.pdyrua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 00:10:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Oct 11 00:10:09 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3181432398' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 11 00:10:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Oct 11 00:10:09 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1427360136' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 11 00:10:09 np0005480824 ceph-mgr[74617]: log_channel(cluster) log [DBG] : pgmap v2007: 321 pgs: 321 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail
Oct 11 00:10:09 np0005480824 podman[311667]: 2025-10-11 04:10:09.315785625 +0000 UTC m=+0.045491575 container create a951a2e16f8bac3ad366f5ec9c799143300fdf5586a1936b55b572fea34f028b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_elion, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 11 00:10:09 np0005480824 systemd[1]: Started libpod-conmon-a951a2e16f8bac3ad366f5ec9c799143300fdf5586a1936b55b572fea34f028b.scope.
Oct 11 00:10:09 np0005480824 systemd[1]: Started libcrun container.
Oct 11 00:10:09 np0005480824 podman[311667]: 2025-10-11 04:10:09.296108709 +0000 UTC m=+0.025814679 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 00:10:09 np0005480824 podman[311667]: 2025-10-11 04:10:09.401279008 +0000 UTC m=+0.130984998 container init a951a2e16f8bac3ad366f5ec9c799143300fdf5586a1936b55b572fea34f028b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_elion, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 00:10:09 np0005480824 podman[311667]: 2025-10-11 04:10:09.412651083 +0000 UTC m=+0.142357033 container start a951a2e16f8bac3ad366f5ec9c799143300fdf5586a1936b55b572fea34f028b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_elion, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 00:10:09 np0005480824 podman[311667]: 2025-10-11 04:10:09.416120923 +0000 UTC m=+0.145826913 container attach a951a2e16f8bac3ad366f5ec9c799143300fdf5586a1936b55b572fea34f028b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_elion, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 11 00:10:09 np0005480824 sharp_elion[311690]: 167 167
Oct 11 00:10:09 np0005480824 systemd[1]: libpod-a951a2e16f8bac3ad366f5ec9c799143300fdf5586a1936b55b572fea34f028b.scope: Deactivated successfully.
Oct 11 00:10:09 np0005480824 conmon[311690]: conmon a951a2e16f8bac3ad366 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a951a2e16f8bac3ad366f5ec9c799143300fdf5586a1936b55b572fea34f028b.scope/container/memory.events
Oct 11 00:10:09 np0005480824 podman[311667]: 2025-10-11 04:10:09.420890273 +0000 UTC m=+0.150596233 container died a951a2e16f8bac3ad366f5ec9c799143300fdf5586a1936b55b572fea34f028b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_elion, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 11 00:10:09 np0005480824 systemd[1]: var-lib-containers-storage-overlay-b40b9c24524738abc10713486ef857121a244d22e98f8306baef2d49a2965de8-merged.mount: Deactivated successfully.
Oct 11 00:10:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Oct 11 00:10:09 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/73628516' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 11 00:10:09 np0005480824 podman[311667]: 2025-10-11 04:10:09.474269792 +0000 UTC m=+0.203975752 container remove a951a2e16f8bac3ad366f5ec9c799143300fdf5586a1936b55b572fea34f028b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_elion, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 11 00:10:09 np0005480824 systemd[1]: libpod-conmon-a951a2e16f8bac3ad366f5ec9c799143300fdf5586a1936b55b572fea34f028b.scope: Deactivated successfully.
Oct 11 00:10:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Oct 11 00:10:09 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4219308096' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 11 00:10:09 np0005480824 podman[311757]: 2025-10-11 04:10:09.696777794 +0000 UTC m=+0.068911461 container create 403ccab3c2834bc4ead6bf83bde531539fa7fd7815de723601975f1045d6a5c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 166 ms_handle_reset con 0x55dbdde18000 session 0x55dbe01083c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 166 ms_handle_reset con 0x55dbdffff800 session 0x55dbde870960
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f9858000/0x0/0x4ffc00000, data 0x1d2171e/0x1e16000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 102457344 unmapped: 14884864 heap: 117342208 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1309166 data_alloc: 234881024 data_used: 12283904
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 166 ms_handle_reset con 0x55dbe0783c00 session 0x55dbe0d58000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f9858000/0x0/0x4ffc00000, data 0x1d2171e/0x1e16000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 102457344 unmapped: 14884864 heap: 117342208 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 166 ms_handle_reset con 0x55dbddd4d000 session 0x55dbe0d590e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 102457344 unmapped: 14884864 heap: 117342208 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 166 ms_handle_reset con 0x55dbddd4d400 session 0x55dbdfe6c000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 102465536 unmapped: 14876672 heap: 117342208 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.516496658s of 10.756332397s, submitted: 47
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 166 ms_handle_reset con 0x55dbdde18000 session 0x55dbe072cf00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 166 ms_handle_reset con 0x55dbdffff800 session 0x55dbdfe5e780
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 166 ms_handle_reset con 0x55dbe0d5a400 session 0x55dbddd1b0e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 166 ms_handle_reset con 0x55dbddd4d000 session 0x55dbde876d20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f9858000/0x0/0x4ffc00000, data 0x1d2171e/0x1e16000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,1])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 166 ms_handle_reset con 0x55dbddd4d400 session 0x55dbde800000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 166 ms_handle_reset con 0x55dbe0d5a000 session 0x55dbdfe61a40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 166 ms_handle_reset con 0x55dbdde18000 session 0x55dbe01092c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 166 ms_handle_reset con 0x55dbdffff800 session 0x55dbe0485e00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 166 ms_handle_reset con 0x55dbddd4d000 session 0x55dbdfe5ef00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 166 ms_handle_reset con 0x55dbddd4d400 session 0x55dbdfe5fc20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 166 ms_handle_reset con 0x55dbdde18000 session 0x55dbdeb46000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 102531072 unmapped: 14811136 heap: 117342208 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 166 heartbeat osd_stat(store_statfs(0x4fa327000/0x0/0x4ffc00000, data 0x125271e/0x1347000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 102531072 unmapped: 14811136 heap: 117342208 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1252930 data_alloc: 234881024 data_used: 12283904
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 102531072 unmapped: 14811136 heap: 117342208 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 102531072 unmapped: 14811136 heap: 117342208 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 166 heartbeat osd_stat(store_statfs(0x4fa11d000/0x0/0x4ffc00000, data 0x145c71e/0x1551000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 102531072 unmapped: 14811136 heap: 117342208 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 102531072 unmapped: 14811136 heap: 117342208 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 166 heartbeat osd_stat(store_statfs(0x4fa11d000/0x0/0x4ffc00000, data 0x145c71e/0x1551000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 166 ms_handle_reset con 0x55dbdffff800 session 0x55dbdfe6d2c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 102539264 unmapped: 14802944 heap: 117342208 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1252930 data_alloc: 234881024 data_used: 12283904
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 166 ms_handle_reset con 0x55dbe0d5a000 session 0x55dbe0751a40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 102539264 unmapped: 14802944 heap: 117342208 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 166 ms_handle_reset con 0x55dbddd4d000 session 0x55dbdde012c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 166 ms_handle_reset con 0x55dbddd4d400 session 0x55dbde7b2b40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 102588416 unmapped: 14753792 heap: 117342208 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 102752256 unmapped: 14589952 heap: 117342208 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 102825984 unmapped: 14516224 heap: 117342208 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.362665176s of 11.514065742s, submitted: 37
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 166 ms_handle_reset con 0x55dbdde18000 session 0x55dbdec532c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 166 ms_handle_reset con 0x55dbdffff800 session 0x55dbdfe60960
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 166 heartbeat osd_stat(store_statfs(0x4fa11c000/0x0/0x4ffc00000, data 0x145c72e/0x1552000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 102825984 unmapped: 14516224 heap: 117342208 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 166 ms_handle_reset con 0x55dbe0d5a800 session 0x55dbe0d33a40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1236363 data_alloc: 234881024 data_used: 12283904
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 102400000 unmapped: 14942208 heap: 117342208 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 102400000 unmapped: 14942208 heap: 117342208 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 102400000 unmapped: 14942208 heap: 117342208 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 102400000 unmapped: 14942208 heap: 117342208 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 166 ms_handle_reset con 0x55dbddd4d000 session 0x55dbde8001e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 102400000 unmapped: 14942208 heap: 117342208 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 166 heartbeat osd_stat(store_statfs(0x4fa32b000/0x0/0x4ffc00000, data 0x124c790/0x1343000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239973 data_alloc: 234881024 data_used: 12283904
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 166 handle_osd_map epochs [166,167], i have 166, src has [1,167]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 167 ms_handle_reset con 0x55dbddd4d400 session 0x55dbde8005a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 102424576 unmapped: 14917632 heap: 117342208 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 167 ms_handle_reset con 0x55dbdde18000 session 0x55dbdeb46000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 167 ms_handle_reset con 0x55dbdffff800 session 0x55dbdfe5e1e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 167 ms_handle_reset con 0x55dbe0d5ac00 session 0x55dbdfe5e780
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 102408192 unmapped: 14934016 heap: 117342208 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 167 ms_handle_reset con 0x55dbddd4d400 session 0x55dbe06c8960
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 167 ms_handle_reset con 0x55dbdde18000 session 0x55dbde8372c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 167 ms_handle_reset con 0x55dbddd4d000 session 0x55dbdfe5ef00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 167 ms_handle_reset con 0x55dbdffff800 session 0x55dbde8765a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 102572032 unmapped: 18448384 heap: 121020416 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 167 heartbeat osd_stat(store_statfs(0x4f9792000/0x0/0x4ffc00000, data 0x1de331d/0x1edc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 167 ms_handle_reset con 0x55dbe0d5b000 session 0x55dbdfe5fc20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 167 ms_handle_reset con 0x55dbddd4d000 session 0x55dbe072cf00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 167 ms_handle_reset con 0x55dbddd4d400 session 0x55dbe0d590e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 103604224 unmapped: 17416192 heap: 121020416 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 167 ms_handle_reset con 0x55dbdde18000 session 0x55dbe0d32000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 167 ms_handle_reset con 0x55dbdffff800 session 0x55dbe06c94a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 167 heartbeat osd_stat(store_statfs(0x4f8e15000/0x0/0x4ffc00000, data 0x275f37f/0x2859000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 167 handle_osd_map epochs [168,168], i have 167, src has [1,168]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.834914207s of 10.171888351s, submitted: 99
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 17391616 heap: 121020416 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1420588 data_alloc: 234881024 data_used: 12300288
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 168 handle_osd_map epochs [169,169], i have 168, src has [1,169]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 169 ms_handle_reset con 0x55dbe0d5b800 session 0x55dbddeda960
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 111624192 unmapped: 9396224 heap: 121020416 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 169 ms_handle_reset con 0x55dbddd4d400 session 0x55dbe0d3a1e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 169 ms_handle_reset con 0x55dbdde18000 session 0x55dbe0d3a960
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 113664000 unmapped: 32571392 heap: 146235392 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 169 ms_handle_reset con 0x55dbdffff800 session 0x55dbe0d3b0e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 104808448 unmapped: 41426944 heap: 146235392 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f060d000/0x0/0x4ffc00000, data 0xaf62adc/0xb060000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 109674496 unmapped: 36560896 heap: 146235392 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 118702080 unmapped: 27533312 heap: 146235392 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3745356 data_alloc: 234881024 data_used: 19529728
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 117317632 unmapped: 28917760 heap: 146235392 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 169 ms_handle_reset con 0x55dbddd4d000 session 0x55dbde879860
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 169 ms_handle_reset con 0x55dbe27f8400 session 0x55dbe0d3be00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 169 ms_handle_reset con 0x55dbddd4d000 session 0x55dbe0d8e1e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 169 ms_handle_reset con 0x55dbe0d5b400 session 0x55dbde8374a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 169 ms_handle_reset con 0x55dbddd4d400 session 0x55dbdfe605a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 169 ms_handle_reset con 0x55dbe27f8000 session 0x55dbdfe6d2c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 169 ms_handle_reset con 0x55dbe0d5b800 session 0x55dbe0d3b860
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 169 heartbeat osd_stat(store_statfs(0x4e120d000/0x0/0x4ffc00000, data 0x1a362fcb/0x1a461000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 169 ms_handle_reset con 0x55dbdde18000 session 0x55dbe0d2a960
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 21315584 heap: 146235392 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 169 ms_handle_reset con 0x55dbddd4d400 session 0x55dbe0d8fa40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 169 ms_handle_reset con 0x55dbe0d5b400 session 0x55dbddd1a1e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 169 ms_handle_reset con 0x55dbddd4d000 session 0x55dbde6c2d20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 169 ms_handle_reset con 0x55dbe27f8000 session 0x55dbe06c8960
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 29016064 heap: 146235392 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 169 handle_osd_map epochs [169,170], i have 169, src has [1,170]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 170 ms_handle_reset con 0x55dbe27f8000 session 0x55dbe0d33a40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 170 ms_handle_reset con 0x55dbddd4d000 session 0x55dbe0d590e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 170 ms_handle_reset con 0x55dbddd4d400 session 0x55dbdfe5e780
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 116834304 unmapped: 29401088 heap: 146235392 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.320829391s of 10.190688133s, submitted: 281
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 170 ms_handle_reset con 0x55dbe0d5bc00 session 0x55dbdde485a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 116834304 unmapped: 29401088 heap: 146235392 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1375520 data_alloc: 234881024 data_used: 12320768
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 170 ms_handle_reset con 0x55dbdde18000 session 0x55dbdd930f00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 35102720 heap: 146235392 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 170 ms_handle_reset con 0x55dbddd4d400 session 0x55dbddff3c20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 170 ms_handle_reset con 0x55dbdde18000 session 0x55dbdddda960
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 170 ms_handle_reset con 0x55dbddd4d000 session 0x55dbe0d592c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 170 heartbeat osd_stat(store_statfs(0x4fa31e000/0x0/0x4ffc00000, data 0x1253664/0x134f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 111386624 unmapped: 34848768 heap: 146235392 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 170 ms_handle_reset con 0x55dbe27f8800 session 0x55dbddddb680
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 170 ms_handle_reset con 0x55dbe27f8c00 session 0x55dbe06c85a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 120299520 unmapped: 34332672 heap: 154632192 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 111910912 unmapped: 42721280 heap: 154632192 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 170 ms_handle_reset con 0x55dbddd4d000 session 0x55dbe06c9c20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 170 ms_handle_reset con 0x55dbddd4d400 session 0x55dbde870780
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 170 ms_handle_reset con 0x55dbdde18000 session 0x55dbe06c8000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 120315904 unmapped: 34316288 heap: 154632192 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 170 ms_handle_reset con 0x55dbe27f8800 session 0x55dbde6c3860
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1793239 data_alloc: 234881024 data_used: 12324864
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 170 ms_handle_reset con 0x55dbe27f9000 session 0x55dbe0d581e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 170 ms_handle_reset con 0x55dbe27f9000 session 0x55dbdfe6d2c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 170 ms_handle_reset con 0x55dbddd4d000 session 0x55dbe01090e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 112082944 unmapped: 42549248 heap: 154632192 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6b1e000/0x0/0x4ffc00000, data 0x4a5368e/0x4b50000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 112091136 unmapped: 42541056 heap: 154632192 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 170 ms_handle_reset con 0x55dbddd4d400 session 0x55dbddc28960
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 170 ms_handle_reset con 0x55dbdde18000 session 0x55dbe0cd81e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 112115712 unmapped: 42516480 heap: 154632192 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f2054000/0x0/0x4ffc00000, data 0x951d696/0x961a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 170 handle_osd_map epochs [171,171], i have 170, src has [1,171]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 171 ms_handle_reset con 0x55dbe27f8800 session 0x55dbe06c8960
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 112132096 unmapped: 42500096 heap: 154632192 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.869544029s of 10.014250755s, submitted: 197
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 112132096 unmapped: 42500096 heap: 154632192 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 171 ms_handle_reset con 0x55dbddd4d000 session 0x55dbe06c9c20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 171 ms_handle_reset con 0x55dbddd4d400 session 0x55dbdddda960
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2558222 data_alloc: 234881024 data_used: 12333056
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 112091136 unmapped: 42541056 heap: 154632192 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 171 ms_handle_reset con 0x55dbdde18000 session 0x55dbe0d590e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 171 ms_handle_reset con 0x55dbe27f9000 session 0x55dbe0d592c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 171 ms_handle_reset con 0x55dbe27f9400 session 0x55dbddff3c20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 171 ms_handle_reset con 0x55dbddd4d000 session 0x55dbdd930f00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 171 ms_handle_reset con 0x55dbddd4d400 session 0x55dbdde485a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 171 ms_handle_reset con 0x55dbdde18000 session 0x55dbe0d8fa40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 171 ms_handle_reset con 0x55dbe27f9000 session 0x55dbde876b40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 112197632 unmapped: 42434560 heap: 154632192 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 171 ms_handle_reset con 0x55dbe27f9800 session 0x55dbde8763c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 112320512 unmapped: 42311680 heap: 154632192 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 171 ms_handle_reset con 0x55dbddd4d000 session 0x55dbe0750b40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 112410624 unmapped: 42221568 heap: 154632192 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 171 heartbeat osd_stat(store_statfs(0x4e98a5000/0x0/0x4ffc00000, data 0x118ba16b/0x119b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 171 ms_handle_reset con 0x55dbddd4d400 session 0x55dbe0751680
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 120766464 unmapped: 33865728 heap: 154632192 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3312767 data_alloc: 234881024 data_used: 12333056
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 113434624 unmapped: 41197568 heap: 154632192 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 171 ms_handle_reset con 0x55dbdde18000 session 0x55dbe0750d20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 171 ms_handle_reset con 0x55dbe27f9000 session 0x55dbe07510e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 113516544 unmapped: 41115648 heap: 154632192 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 113311744 unmapped: 41320448 heap: 154632192 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 113238016 unmapped: 41394176 heap: 154632192 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 113303552 unmapped: 41328640 heap: 154632192 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 171 heartbeat osd_stat(store_statfs(0x4e40a5000/0x0/0x4ffc00000, data 0x170ba16b/0x171b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3829516 data_alloc: 234881024 data_used: 16101376
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 171 handle_osd_map epochs [172,172], i have 171, src has [1,172]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.931195259s of 11.010159492s, submitted: 162
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 113385472 unmapped: 41246720 heap: 154632192 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 172 ms_handle_reset con 0x55dbdffff800 session 0x55dbe07de5a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 113401856 unmapped: 41230336 heap: 154632192 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 113401856 unmapped: 41230336 heap: 154632192 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 113393664 unmapped: 41238528 heap: 154632192 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 172 handle_osd_map epochs [172,173], i have 172, src has [1,173]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 173 heartbeat osd_stat(store_statfs(0x4e40a2000/0x0/0x4ffc00000, data 0x170bbce8/0x171bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [0,0,0,0,1])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 113508352 unmapped: 41123840 heap: 154632192 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 173 ms_handle_reset con 0x55dbddd4d000 session 0x55dbe0dae5a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1584648 data_alloc: 234881024 data_used: 16117760
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 173 ms_handle_reset con 0x55dbddd4d400 session 0x55dbde8772c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 173 ms_handle_reset con 0x55dbdde18000 session 0x55dbdec523c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 114540544 unmapped: 40091648 heap: 154632192 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f689e000/0x0/0x4ffc00000, data 0x20bd877/0x21be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 114540544 unmapped: 40091648 heap: 154632192 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 114540544 unmapped: 40091648 heap: 154632192 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 114540544 unmapped: 40091648 heap: 154632192 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f689e000/0x0/0x4ffc00000, data 0x20bd877/0x21be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 173 handle_osd_map epochs [174,174], i have 173, src has [1,174]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 114548736 unmapped: 40083456 heap: 154632192 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1655044 data_alloc: 234881024 data_used: 16125952
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.703844070s of 10.314574242s, submitted: 207
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 118628352 unmapped: 36003840 heap: 154632192 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 174 ms_handle_reset con 0x55dbdfef7400 session 0x55dbdfe612c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 120152064 unmapped: 34480128 heap: 154632192 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 120152064 unmapped: 34480128 heap: 154632192 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 174 ms_handle_reset con 0x55dbdfef7c00 session 0x55dbdeb252c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 174 handle_osd_map epochs [174,175], i have 174, src has [1,175]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 175 ms_handle_reset con 0x55dbddd4d400 session 0x55dbdeb62f00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 120176640 unmapped: 34455552 heap: 154632192 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 175 handle_osd_map epochs [175,176], i have 175, src has [1,176]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 176 ms_handle_reset con 0x55dbddd4d000 session 0x55dbddc292c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 176 ms_handle_reset con 0x55dbdfef7800 session 0x55dbde8710e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 120233984 unmapped: 34398208 heap: 154632192 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 176 heartbeat osd_stat(store_statfs(0x4f823a000/0x0/0x4ffc00000, data 0x2f00ab8/0x3009000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1716958 data_alloc: 234881024 data_used: 16281600
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 120233984 unmapped: 34398208 heap: 154632192 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 176 ms_handle_reset con 0x55dbdde18000 session 0x55dbe010bc20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 120233984 unmapped: 34398208 heap: 154632192 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 119988224 unmapped: 34643968 heap: 154632192 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 176 ms_handle_reset con 0x55dbdfef7c00 session 0x55dbe07def00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 176 ms_handle_reset con 0x55dbddd4d000 session 0x55dbdfe62780
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 176 ms_handle_reset con 0x55dbddd4d400 session 0x55dbdefff680
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 176 heartbeat osd_stat(store_statfs(0x4f8252000/0x0/0x4ffc00000, data 0x2f03ab8/0x300c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 176 handle_osd_map epochs [177,177], i have 176, src has [1,177]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 177 ms_handle_reset con 0x55dbdde18000 session 0x55dbe07ded20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 177 ms_handle_reset con 0x55dbdfef7800 session 0x55dbe0d33c20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 177 ms_handle_reset con 0x55dbdfef7400 session 0x55dbe0dae3c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 119996416 unmapped: 34635776 heap: 154632192 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 119996416 unmapped: 34635776 heap: 154632192 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 177 ms_handle_reset con 0x55dbddd4d000 session 0x55dbe07dfc20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 177 ms_handle_reset con 0x55dbdde18000 session 0x55dbe0daf860
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1717879 data_alloc: 234881024 data_used: 16302080
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 177 handle_osd_map epochs [177,178], i have 177, src has [1,178]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 178 ms_handle_reset con 0x55dbddd4d400 session 0x55dbe0d32d20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 178 heartbeat osd_stat(store_statfs(0x4f824c000/0x0/0x4ffc00000, data 0x2f056f9/0x3011000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 119996416 unmapped: 34635776 heap: 154632192 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.827223778s of 10.049263000s, submitted: 78
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 178 ms_handle_reset con 0x55dbdfef7800 session 0x55dbe0d32f00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 178 handle_osd_map epochs [179,179], i have 178, src has [1,179]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 120004608 unmapped: 34627584 heap: 154632192 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 179 ms_handle_reset con 0x55dbdfef7c00 session 0x55dbe0d32960
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 119250944 unmapped: 35381248 heap: 154632192 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 179 handle_osd_map epochs [179,180], i have 179, src has [1,180]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 180 ms_handle_reset con 0x55dbddd4d000 session 0x55dbdec53860
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 180 ms_handle_reset con 0x55dbdfef7000 session 0x55dbe0d58000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 180 ms_handle_reset con 0x55dbe0548c00 session 0x55dbe0cd8d20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 180 ms_handle_reset con 0x55dbe0548800 session 0x55dbddc9b2c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 180 ms_handle_reset con 0x55dbe0549c00 session 0x55dbddc9b4a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 180 ms_handle_reset con 0x55dbddd4d000 session 0x55dbddc9a5a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 118784000 unmapped: 35848192 heap: 154632192 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 180 ms_handle_reset con 0x55dbdfef7000 session 0x55dbde7b34a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 180 ms_handle_reset con 0x55dbe0548800 session 0x55dbde7b30e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 180 ms_handle_reset con 0x55dbe0548c00 session 0x55dbe0c86780
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 180 ms_handle_reset con 0x55dbe0549800 session 0x55dbddc12000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 180 heartbeat osd_stat(store_statfs(0x4f8247000/0x0/0x4ffc00000, data 0x2f0a95e/0x3015000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [0,1])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 180 ms_handle_reset con 0x55dbddd4d000 session 0x55dbddc130e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 180 ms_handle_reset con 0x55dbdfef7000 session 0x55dbddddbc20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 180 ms_handle_reset con 0x55dbe0548800 session 0x55dbdddda000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 118661120 unmapped: 39649280 heap: 158310400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 180 handle_osd_map epochs [181,181], i have 180, src has [1,181]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 181 ms_handle_reset con 0x55dbe0548c00 session 0x55dbde8012c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1863759 data_alloc: 234881024 data_used: 16306176
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 118726656 unmapped: 39583744 heap: 158310400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 181 handle_osd_map epochs [182,182], i have 181, src has [1,182]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 182 ms_handle_reset con 0x55dbe06cb800 session 0x55dbe0d32000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 118644736 unmapped: 39665664 heap: 158310400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 182 ms_handle_reset con 0x55dbe0548800 session 0x55dbde878f00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 182 ms_handle_reset con 0x55dbdfef7000 session 0x55dbdeb465a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 182 handle_osd_map epochs [183,183], i have 182, src has [1,183]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 183 ms_handle_reset con 0x55dbe0d5a400 session 0x55dbe010b0e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 183 heartbeat osd_stat(store_statfs(0x4f7176000/0x0/0x4ffc00000, data 0x3fd713a/0x40e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 183 ms_handle_reset con 0x55dbe0d5b400 session 0x55dbe0cd8d20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 118669312 unmapped: 39641088 heap: 158310400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 183 handle_osd_map epochs [184,184], i have 183, src has [1,184]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 184 ms_handle_reset con 0x55dbe0548c00 session 0x55dbde87c1e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 184 ms_handle_reset con 0x55dbe04f5c00 session 0x55dbdec53860
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 184 ms_handle_reset con 0x55dbddd4d000 session 0x55dbde878b40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 184 ms_handle_reset con 0x55dbdfef7000 session 0x55dbdec52780
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 184 ms_handle_reset con 0x55dbe0548800 session 0x55dbe0d330e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 118693888 unmapped: 39616512 heap: 158310400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 184 heartbeat osd_stat(store_statfs(0x4f716e000/0x0/0x4ffc00000, data 0x3fdab02/0x40ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 184 ms_handle_reset con 0x55dbe0d5a400 session 0x55dbdeb46000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 118743040 unmapped: 39567360 heap: 158310400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 184 handle_osd_map epochs [185,185], i have 184, src has [1,185]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 185 ms_handle_reset con 0x55dbddd4d000 session 0x55dbdde00000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1876114 data_alloc: 234881024 data_used: 16330752
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 185 ms_handle_reset con 0x55dbe0d5b400 session 0x55dbe0c87860
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 185 ms_handle_reset con 0x55dbe04f5c00 session 0x55dbdde49a40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 118456320 unmapped: 39854080 heap: 158310400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 185 handle_osd_map epochs [185,186], i have 185, src has [1,186]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.652009964s of 10.314099312s, submitted: 189
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 186 ms_handle_reset con 0x55dbe0548800 session 0x55dbde8005a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 186 heartbeat osd_stat(store_statfs(0x4f716d000/0x0/0x4ffc00000, data 0x3fdc847/0x40f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 131424256 unmapped: 26886144 heap: 158310400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 186 handle_osd_map epochs [187,187], i have 186, src has [1,187]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 187 ms_handle_reset con 0x55dbe3247000 session 0x55dbe0cd8b40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 187 ms_handle_reset con 0x55dbdfef7400 session 0x55dbdddda3c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 187 ms_handle_reset con 0x55dbddd4d000 session 0x55dbdfe61680
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 187 ms_handle_reset con 0x55dbe27f9c00 session 0x55dbe010a3c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 131440640 unmapped: 26869760 heap: 158310400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 187 handle_osd_map epochs [187,188], i have 187, src has [1,188]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 188 ms_handle_reset con 0x55dbe04f5c00 session 0x55dbe07de960
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 130121728 unmapped: 28188672 heap: 158310400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 130121728 unmapped: 28188672 heap: 158310400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 188 heartbeat osd_stat(store_statfs(0x4f82fc000/0x0/0x4ffc00000, data 0x2e05a2e/0x2f19000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1834640 data_alloc: 251658240 data_used: 28655616
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 130121728 unmapped: 28188672 heap: 158310400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 130121728 unmapped: 28188672 heap: 158310400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 188 heartbeat osd_stat(store_statfs(0x4f82fc000/0x0/0x4ffc00000, data 0x2e05a2e/0x2f19000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 130121728 unmapped: 28188672 heap: 158310400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 188 handle_osd_map epochs [188,189], i have 188, src has [1,189]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 130121728 unmapped: 28188672 heap: 158310400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 189 ms_handle_reset con 0x55dbe0548800 session 0x55dbddd1a000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 189 ms_handle_reset con 0x55dbddd4d000 session 0x55dbe0cd8000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 130146304 unmapped: 28164096 heap: 158310400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1762686 data_alloc: 251658240 data_used: 28655616
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 130146304 unmapped: 28164096 heap: 158310400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 189 handle_osd_map epochs [190,190], i have 189, src has [1,190]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.741812706s of 10.083634377s, submitted: 139
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 190 ms_handle_reset con 0x55dbdfef7400 session 0x55dbde87c1e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 135831552 unmapped: 22478848 heap: 158310400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 190 ms_handle_reset con 0x55dbe04f5c00 session 0x55dbe0c87860
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 190 handle_osd_map epochs [191,191], i have 190, src has [1,191]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 191 heartbeat osd_stat(store_statfs(0x4f7f6c000/0x0/0x4ffc00000, data 0x31d5066/0x32eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 191 ms_handle_reset con 0x55dbe27f9c00 session 0x55dbe07dfa40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 23134208 heap: 158310400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 23134208 heap: 158310400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 191 ms_handle_reset con 0x55dbe3244800 session 0x55dbe06c9c20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 191 handle_osd_map epochs [191,192], i have 191, src has [1,192]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 192 ms_handle_reset con 0x55dbe0d5b400 session 0x55dbe0d8e3c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 192 ms_handle_reset con 0x55dbddd4d000 session 0x55dbe0d59680
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 136192000 unmapped: 22118400 heap: 158310400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f7ed5000/0x0/0x4ffc00000, data 0x3264832/0x337e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1906496 data_alloc: 251658240 data_used: 30179328
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f7ed5000/0x0/0x4ffc00000, data 0x3264832/0x337e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 192 ms_handle_reset con 0x55dbdfef7400 session 0x55dbe0751c20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 136241152 unmapped: 22069248 heap: 158310400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 136241152 unmapped: 22069248 heap: 158310400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 192 ms_handle_reset con 0x55dbe04f5c00 session 0x55dbe0daef00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 136249344 unmapped: 22061056 heap: 158310400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 192 handle_osd_map epochs [193,193], i have 192, src has [1,193]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 193 ms_handle_reset con 0x55dbe27f9c00 session 0x55dbe0dae3c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 193 ms_handle_reset con 0x55dbddd4d000 session 0x55dbde870780
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 193 ms_handle_reset con 0x55dbdfef7400 session 0x55dbdd9310e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 136953856 unmapped: 21356544 heap: 158310400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 193 heartbeat osd_stat(store_statfs(0x4f7607000/0x0/0x4ffc00000, data 0x3b3b295/0x3c56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 136953856 unmapped: 21356544 heap: 158310400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1975211 data_alloc: 251658240 data_used: 30187520
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 193 ms_handle_reset con 0x55dbe04f5c00 session 0x55dbe0d59c20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 193 ms_handle_reset con 0x55dbe3244c00 session 0x55dbe0108d20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 193 ms_handle_reset con 0x55dbe3245000 session 0x55dbe07deb40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 193 ms_handle_reset con 0x55dbddd4d000 session 0x55dbe0485680
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 193 ms_handle_reset con 0x55dbe0d5b400 session 0x55dbe0d33680
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 193 ms_handle_reset con 0x55dbdfef7400 session 0x55dbddc9a1e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 193 ms_handle_reset con 0x55dbe04f5c00 session 0x55dbde7b25a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 136478720 unmapped: 21831680 heap: 158310400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 193 heartbeat osd_stat(store_statfs(0x4f79dd000/0x0/0x4ffc00000, data 0x32d0295/0x33eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 136495104 unmapped: 21815296 heap: 158310400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 136495104 unmapped: 21815296 heap: 158310400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 193 heartbeat osd_stat(store_statfs(0x4f765d000/0x0/0x4ffc00000, data 0x3650295/0x376b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 136511488 unmapped: 21798912 heap: 158310400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 136511488 unmapped: 21798912 heap: 158310400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.242119789s of 14.030727386s, submitted: 205
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 193 ms_handle_reset con 0x55dbe3244c00 session 0x55dbdeb62f00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1939005 data_alloc: 251658240 data_used: 30187520
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 193 ms_handle_reset con 0x55dbddd4d000 session 0x55dbe010a000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 136044544 unmapped: 22265856 heap: 158310400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 136052736 unmapped: 22257664 heap: 158310400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 193 heartbeat osd_stat(store_statfs(0x4f7af2000/0x0/0x4ffc00000, data 0x36502b8/0x376c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 193 handle_osd_map epochs [194,194], i have 193, src has [1,194]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 194 ms_handle_reset con 0x55dbe04f5c00 session 0x55dbde806f00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 136151040 unmapped: 22159360 heap: 158310400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 194 handle_osd_map epochs [195,195], i have 194, src has [1,195]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 195 ms_handle_reset con 0x55dbe3245400 session 0x55dbdfe5f4a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 195 heartbeat osd_stat(store_statfs(0x4f7ae9000/0x0/0x4ffc00000, data 0x3653a14/0x3773000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 137740288 unmapped: 20570112 heap: 158310400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 195 ms_handle_reset con 0x55dbe3245800 session 0x55dbde6c3860
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 195 handle_osd_map epochs [195,196], i have 195, src has [1,196]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 196 ms_handle_reset con 0x55dbe3245c00 session 0x55dbde65e1e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 196 ms_handle_reset con 0x55dbddd4d000 session 0x55dbde8374a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 20520960 heap: 158310400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 196 handle_osd_map epochs [197,197], i have 196, src has [1,197]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1985867 data_alloc: 251658240 data_used: 33964032
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 197 ms_handle_reset con 0x55dbe04f5c00 session 0x55dbddeda1e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 197 ms_handle_reset con 0x55dbe3245400 session 0x55dbe0d323c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 197 ms_handle_reset con 0x55dbe3245800 session 0x55dbdde201e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 137912320 unmapped: 20398080 heap: 158310400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 197 heartbeat osd_stat(store_statfs(0x4f7ae5000/0x0/0x4ffc00000, data 0x3657154/0x3778000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 137961472 unmapped: 20348928 heap: 158310400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 197 ms_handle_reset con 0x55dbe00c0000 session 0x55dbe0d33e00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 137961472 unmapped: 20348928 heap: 158310400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 197 handle_osd_map epochs [197,198], i have 197, src has [1,198]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 198 ms_handle_reset con 0x55dbddd4d000 session 0x55dbe0d4dc20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 198 heartbeat osd_stat(store_statfs(0x4f7ae1000/0x0/0x4ffc00000, data 0x365a164/0x377c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 137994240 unmapped: 20316160 heap: 158310400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 198 ms_handle_reset con 0x55dbe3245800 session 0x55dbdde49680
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 198 ms_handle_reset con 0x55dbe04f5c00 session 0x55dbe0d183c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 138035200 unmapped: 20275200 heap: 158310400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1990434 data_alloc: 251658240 data_used: 33976320
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 138035200 unmapped: 20275200 heap: 158310400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 198 ms_handle_reset con 0x55dbe3245400 session 0x55dbe0d3b4a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 198 ms_handle_reset con 0x55dbdf0c9000 session 0x55dbdfe6da40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 198 ms_handle_reset con 0x55dbddd4d000 session 0x55dbe0d58b40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 198 ms_handle_reset con 0x55dbe3245400 session 0x55dbde65fc20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.073848724s of 11.429707527s, submitted: 102
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 138035200 unmapped: 20275200 heap: 158310400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 198 handle_osd_map epochs [199,199], i have 198, src has [1,199]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 199 ms_handle_reset con 0x55dbe3245800 session 0x55dbdde492c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 199 heartbeat osd_stat(store_statfs(0x4f7ade000/0x0/0x4ffc00000, data 0x365bd0d/0x3780000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 199 ms_handle_reset con 0x55dbe04f5c00 session 0x55dbdfe6c5a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 199 handle_osd_map epochs [199,200], i have 199, src has [1,200]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 138084352 unmapped: 20226048 heap: 158310400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 200 ms_handle_reset con 0x55dbdffff400 session 0x55dbe0d2a000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 200 ms_handle_reset con 0x55dbddd4d000 session 0x55dbddff2f00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 200 ms_handle_reset con 0x55dbe3245400 session 0x55dbdde010e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 137936896 unmapped: 20373504 heap: 158310400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 200 handle_osd_map epochs [201,201], i have 200, src has [1,201]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 201 ms_handle_reset con 0x55dbe3245800 session 0x55dbe07ded20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 138387456 unmapped: 19922944 heap: 158310400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2034427 data_alloc: 251658240 data_used: 34025472
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 201 handle_osd_map epochs [201,202], i have 201, src has [1,202]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 202 ms_handle_reset con 0x55dbe27f9000 session 0x55dbde7b2b40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 202 heartbeat osd_stat(store_statfs(0x4f76b6000/0x0/0x4ffc00000, data 0x3a7ff48/0x3ba7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 202 ms_handle_reset con 0x55dbe04f5c00 session 0x55dbe0db2000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 138477568 unmapped: 19832832 heap: 158310400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 202 heartbeat osd_stat(store_statfs(0x4f76ad000/0x0/0x4ffc00000, data 0x3a86ae1/0x3baf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 138485760 unmapped: 19824640 heap: 158310400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 202 handle_osd_map epochs [203,203], i have 202, src has [1,203]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 203 ms_handle_reset con 0x55dbe04f5c00 session 0x55dbe0d2a780
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 138493952 unmapped: 19816448 heap: 158310400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 203 ms_handle_reset con 0x55dbddd4d000 session 0x55dbe0c865a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 203 heartbeat osd_stat(store_statfs(0x4f76ab000/0x0/0x4ffc00000, data 0x3a886ce/0x3bb2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [1])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 203 ms_handle_reset con 0x55dbdffff800 session 0x55dbe0c87680
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 138518528 unmapped: 19791872 heap: 158310400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 203 ms_handle_reset con 0x55dbdfef7000 session 0x55dbe0d33c20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 138657792 unmapped: 19652608 heap: 158310400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 203 ms_handle_reset con 0x55dbe3245800 session 0x55dbe072cb40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1684296 data_alloc: 234881024 data_used: 16773120
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 203 ms_handle_reset con 0x55dbe27f9000 session 0x55dbde87d860
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 203 handle_osd_map epochs [204,204], i have 203, src has [1,204]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 204 ms_handle_reset con 0x55dbe3245800 session 0x55dbe0c87a40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 126394368 unmapped: 31916032 heap: 158310400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 204 heartbeat osd_stat(store_statfs(0x4f9698000/0x0/0x4ffc00000, data 0x1a982a2/0x1bc5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 204 handle_osd_map epochs [205,205], i have 204, src has [1,205]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 205 ms_handle_reset con 0x55dbddd4d000 session 0x55dbdddc5a40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.392159462s of 10.078720093s, submitted: 229
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 205 ms_handle_reset con 0x55dbe3245400 session 0x55dbe0d58000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 126877696 unmapped: 31432704 heap: 158310400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 205 handle_osd_map epochs [206,206], i have 205, src has [1,206]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 206 ms_handle_reset con 0x55dbdffff800 session 0x55dbdec525a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 126976000 unmapped: 31334400 heap: 158310400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 206 handle_osd_map epochs [207,207], i have 206, src has [1,207]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 207 ms_handle_reset con 0x55dbdffff800 session 0x55dbde806f00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 127016960 unmapped: 31293440 heap: 158310400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 207 heartbeat osd_stat(store_statfs(0x4f9692000/0x0/0x4ffc00000, data 0x1a9baa6/0x1bcc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 207 ms_handle_reset con 0x55dbddd4d000 session 0x55dbe0751a40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 207 ms_handle_reset con 0x55dbe27f9000 session 0x55dbdd283c20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 207 handle_osd_map epochs [208,208], i have 207, src has [1,208]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 208 ms_handle_reset con 0x55dbe3245400 session 0x55dbdec53c20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 127008768 unmapped: 31301632 heap: 158310400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1702067 data_alloc: 234881024 data_used: 16789504
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 208 ms_handle_reset con 0x55dbe3245800 session 0x55dbe06c9c20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 208 handle_osd_map epochs [209,209], i have 208, src has [1,209]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 209 ms_handle_reset con 0x55dbddd4d000 session 0x55dbe06c8000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 127041536 unmapped: 31268864 heap: 158310400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 209 ms_handle_reset con 0x55dbdffff800 session 0x55dbdfe621e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 209 heartbeat osd_stat(store_statfs(0x4f968c000/0x0/0x4ffc00000, data 0x1a9f2ff/0x1bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 209 ms_handle_reset con 0x55dbe27f9000 session 0x55dbde87cb40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 209 ms_handle_reset con 0x55dbe3245400 session 0x55dbde8363c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 209 handle_osd_map epochs [209,210], i have 209, src has [1,210]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 127049728 unmapped: 31260672 heap: 158310400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 210 ms_handle_reset con 0x55dbe04f5c00 session 0x55dbe0cd92c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 210 ms_handle_reset con 0x55dbddd4d000 session 0x55dbe0cd90e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 210 ms_handle_reset con 0x55dbdffff800 session 0x55dbe0cd8780
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 31424512 heap: 158310400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 210 ms_handle_reset con 0x55dbe27f9000 session 0x55dbddddab40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 210 handle_osd_map epochs [210,211], i have 210, src has [1,211]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 126902272 unmapped: 31408128 heap: 158310400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 211 ms_handle_reset con 0x55dbe3245400 session 0x55dbddff2f00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 211 ms_handle_reset con 0x55dbe27f8000 session 0x55dbdfe5ed20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 126902272 unmapped: 31408128 heap: 158310400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 211 ms_handle_reset con 0x55dbdfef7000 session 0x55dbde8072c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704993 data_alloc: 234881024 data_used: 16338944
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 211 heartbeat osd_stat(store_statfs(0x4f9687000/0x0/0x4ffc00000, data 0x1aa4662/0x1bd5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 211 ms_handle_reset con 0x55dbe0d5b400 session 0x55dbe0d2b0e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 211 ms_handle_reset con 0x55dbdfef7400 session 0x55dbe07514a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 126910464 unmapped: 31399936 heap: 158310400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 211 ms_handle_reset con 0x55dbddd4d000 session 0x55dbe0daf680
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 211 heartbeat osd_stat(store_statfs(0x4f9689000/0x0/0x4ffc00000, data 0x1aa4662/0x1bd5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 211 ms_handle_reset con 0x55dbdffff800 session 0x55dbde6c2d20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.510210991s of 10.167940140s, submitted: 207
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 211 ms_handle_reset con 0x55dbddd4d000 session 0x55dbddc9a1e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 125984768 unmapped: 32325632 heap: 158310400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 211 ms_handle_reset con 0x55dbdfef7000 session 0x55dbde7b34a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 211 ms_handle_reset con 0x55dbdfef7400 session 0x55dbdfe60960
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 211 ms_handle_reset con 0x55dbe0d5b400 session 0x55dbdfe612c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 211 handle_osd_map epochs [211,212], i have 211, src has [1,212]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 212 ms_handle_reset con 0x55dbe27f9000 session 0x55dbde6c2d20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 127008768 unmapped: 31301632 heap: 158310400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 127008768 unmapped: 31301632 heap: 158310400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 212 ms_handle_reset con 0x55dbdfef7000 session 0x55dbde8772c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 212 ms_handle_reset con 0x55dbdfef7400 session 0x55dbde877c20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 212 ms_handle_reset con 0x55dbe0d5b400 session 0x55dbdddda780
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 127033344 unmapped: 31277056 heap: 158310400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 212 handle_osd_map epochs [213,213], i have 212, src has [1,213]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1788992 data_alloc: 234881024 data_used: 12484608
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 213 ms_handle_reset con 0x55dbe3245400 session 0x55dbddddb680
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 126877696 unmapped: 31432704 heap: 158310400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 213 handle_osd_map epochs [214,214], i have 213, src has [1,214]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 214 ms_handle_reset con 0x55dbe04f5000 session 0x55dbdeffe780
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 214 ms_handle_reset con 0x55dbdfef7000 session 0x55dbdde20d20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 214 ms_handle_reset con 0x55dbe04f5400 session 0x55dbe0daed20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 214 ms_handle_reset con 0x55dbddd4d000 session 0x55dbde6c3860
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 214 ms_handle_reset con 0x55dbdfef7400 session 0x55dbdde201e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 214 ms_handle_reset con 0x55dbe0d5b400 session 0x55dbde6c21e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 127836160 unmapped: 34152448 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 214 ms_handle_reset con 0x55dbddd4d000 session 0x55dbe0cd83c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 214 ms_handle_reset con 0x55dbdfef7000 session 0x55dbde879e00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 214 heartbeat osd_stat(store_statfs(0x4f77c1000/0x0/0x4ffc00000, data 0x3552afe/0x368b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [1])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 214 ms_handle_reset con 0x55dbe04f5400 session 0x55dbde65f4a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 214 ms_handle_reset con 0x55dbdfef7400 session 0x55dbde806f00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 214 ms_handle_reset con 0x55dbe3245400 session 0x55dbddddab40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 127844352 unmapped: 34144256 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 13K writes, 53K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s#012Cumulative WAL: 13K writes, 3844 syncs, 3.39 writes per sync, written: 0.03 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 6135 writes, 24K keys, 6135 commit groups, 1.0 writes per commit group, ingest: 13.38 MB, 0.02 MB/s#012Interval WAL: 6135 writes, 2623 syncs, 2.34 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 127844352 unmapped: 34144256 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 214 handle_osd_map epochs [215,215], i have 214, src has [1,215]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 215 heartbeat osd_stat(store_statfs(0x4f73bd000/0x0/0x4ffc00000, data 0x3955ba0/0x3a8f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 215 ms_handle_reset con 0x55dbddd4d000 session 0x55dbde87cb40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 215 ms_handle_reset con 0x55dbdfef7400 session 0x55dbdfe5e000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 132571136 unmapped: 29417472 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 215 ms_handle_reset con 0x55dbe04f4000 session 0x55dbdddc5a40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 215 ms_handle_reset con 0x55dbe04f5400 session 0x55dbe06c8000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1990760 data_alloc: 234881024 data_used: 12500992
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 215 handle_osd_map epochs [215,216], i have 215, src has [1,216]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 216 handle_osd_map epochs [216,216], i have 216, src has [1,216]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 216 ms_handle_reset con 0x55dbe3245400 session 0x55dbdfe5fc20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 216 ms_handle_reset con 0x55dbdfef7000 session 0x55dbde65e1e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 216 ms_handle_reset con 0x55dbddd4d000 session 0x55dbe07de960
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 127565824 unmapped: 34422784 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 216 ms_handle_reset con 0x55dbe04f4000 session 0x55dbe01083c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 216 ms_handle_reset con 0x55dbdfef7400 session 0x55dbe07dfa40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 216 handle_osd_map epochs [216,217], i have 216, src has [1,217]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 217 ms_handle_reset con 0x55dbe04f5400 session 0x55dbdfe605a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.354578018s of 10.046285629s, submitted: 182
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 127582208 unmapped: 34406400 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 217 ms_handle_reset con 0x55dbddd4d000 session 0x55dbdfe5e3c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 217 ms_handle_reset con 0x55dbdfef7000 session 0x55dbde87cf00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 217 ms_handle_reset con 0x55dbdfef7400 session 0x55dbddff3c20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 217 handle_osd_map epochs [217,218], i have 217, src has [1,218]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 218 handle_osd_map epochs [218,218], i have 218, src has [1,218]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 218 ms_handle_reset con 0x55dbe04f5400 session 0x55dbe07df680
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 218 ms_handle_reset con 0x55dbe04f4000 session 0x55dbdde44b40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 218 ms_handle_reset con 0x55dbe04f4c00 session 0x55dbddd1a000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 127623168 unmapped: 34365440 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 218 ms_handle_reset con 0x55dbddd4d000 session 0x55dbdddda960
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 218 ms_handle_reset con 0x55dbdfef7000 session 0x55dbe0d594a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 218 heartbeat osd_stat(store_statfs(0x4f73b4000/0x0/0x4ffc00000, data 0x395c8e6/0x3a98000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 127631360 unmapped: 34357248 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 218 ms_handle_reset con 0x55dbde18f400 session 0x55dbdd282f00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 218 handle_osd_map epochs [219,219], i have 218, src has [1,219]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 219 ms_handle_reset con 0x55dbdfef7400 session 0x55dbdd931c20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 127705088 unmapped: 34283520 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: mgrc ms_handle_reset ms_handle_reset con 0x55dbdf2f4400
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3841581780
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3841581780,v1:192.168.122.100:6801/3841581780]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: mgrc handle_mgr_configure stats_period=5
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1885725 data_alloc: 234881024 data_used: 12496896
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 219 handle_osd_map epochs [220,220], i have 219, src has [1,220]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 220 ms_handle_reset con 0x55dbdfef7400 session 0x55dbe010ba40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 220 ms_handle_reset con 0x55dbe04f5c00 session 0x55dbde87de00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 220 ms_handle_reset con 0x55dbde5f5000 session 0x55dbddc12960
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 220 ms_handle_reset con 0x55dbde5f4800 session 0x55dbdfe6cb40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 127860736 unmapped: 34127872 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 220 ms_handle_reset con 0x55dbde5f4800 session 0x55dbdfe60b40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 220 ms_handle_reset con 0x55dbdc4e3800 session 0x55dbdeb25a40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 220 handle_osd_map epochs [221,221], i have 220, src has [1,221]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 221 ms_handle_reset con 0x55dbdfef7000 session 0x55dbdfe61680
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 221 ms_handle_reset con 0x55dbddd4d000 session 0x55dbe0daf0e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 221 ms_handle_reset con 0x55dbdc4e3800 session 0x55dbddc9b2c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 127893504 unmapped: 34095104 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 221 heartbeat osd_stat(store_statfs(0x4f7e12000/0x0/0x4ffc00000, data 0x2ef7cb5/0x3038000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 221 ms_handle_reset con 0x55dbde5f4800 session 0x55dbe07de960
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 221 ms_handle_reset con 0x55dbe04f5c00 session 0x55dbdfe5e000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 221 ms_handle_reset con 0x55dbdfef7400 session 0x55dbde65e1e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 221 ms_handle_reset con 0x55dbdfef7000 session 0x55dbe0d58000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 221 ms_handle_reset con 0x55dbdfef6c00 session 0x55dbde65f860
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 127344640 unmapped: 34643968 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 221 ms_handle_reset con 0x55dbdc4e3800 session 0x55dbdddda780
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 221 heartbeat osd_stat(store_statfs(0x4f9a2d000/0x0/0x4ffc00000, data 0x22ffcb5/0x2440000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 127344640 unmapped: 34643968 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 221 ms_handle_reset con 0x55dbddd4d000 session 0x55dbde878b40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 221 heartbeat osd_stat(store_statfs(0x4f9a2d000/0x0/0x4ffc00000, data 0x22ffcb5/0x2440000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 221 ms_handle_reset con 0x55dbdfef7400 session 0x55dbe0daf860
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 221 heartbeat osd_stat(store_statfs(0x4f9a2d000/0x0/0x4ffc00000, data 0x22ffcb5/0x2440000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 127344640 unmapped: 34643968 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1813118 data_alloc: 234881024 data_used: 12513280
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 221 ms_handle_reset con 0x55dbdfef7400 session 0x55dbddc13e00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 221 handle_osd_map epochs [222,222], i have 221, src has [1,222]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 222 ms_handle_reset con 0x55dbddd4d000 session 0x55dbe0cd8b40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 127352832 unmapped: 34635776 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 222 handle_osd_map epochs [223,223], i have 222, src has [1,223]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 223 ms_handle_reset con 0x55dbdc4e3800 session 0x55dbdde481e0
Oct 11 00:10:09 np0005480824 systemd[1]: Started libpod-conmon-403ccab3c2834bc4ead6bf83bde531539fa7fd7815de723601975f1045d6a5c7.scope.
Oct 11 00:10:09 np0005480824 podman[311757]: 2025-10-11 04:10:09.664253789 +0000 UTC m=+0.036387476 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 223 ms_handle_reset con 0x55dbdfef6c00 session 0x55dbe0d583c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 223 ms_handle_reset con 0x55dbde5f4800 session 0x55dbe0cd9a40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.336128235s of 10.005481720s, submitted: 170
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 127352832 unmapped: 34635776 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 223 ms_handle_reset con 0x55dbdc4e3800 session 0x55dbdfe610e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 223 heartbeat osd_stat(store_statfs(0x4f9a25000/0x0/0x4ffc00000, data 0x230363b/0x2447000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 223 handle_osd_map epochs [224,224], i have 223, src has [1,224]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 224 ms_handle_reset con 0x55dbddd4d000 session 0x55dbdeffef00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 127352832 unmapped: 34635776 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 224 ms_handle_reset con 0x55dbdfef6c00 session 0x55dbe010ab40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 224 ms_handle_reset con 0x55dbde5f4800 session 0x55dbe010b860
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 224 ms_handle_reset con 0x55dbdfef7400 session 0x55dbde8785a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 224 ms_handle_reset con 0x55dbdfef7400 session 0x55dbde87d860
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 127401984 unmapped: 34586624 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 224 ms_handle_reset con 0x55dbdc4e3800 session 0x55dbe0d18d20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 224 ms_handle_reset con 0x55dbddd4d000 session 0x55dbe0daf4a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 224 ms_handle_reset con 0x55dbdfef6c00 session 0x55dbe0cd9e00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 127410176 unmapped: 34578432 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1822267 data_alloc: 234881024 data_used: 12521472
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 224 heartbeat osd_stat(store_statfs(0x4f9a27000/0x0/0x4ffc00000, data 0x2305090/0x2447000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 224 handle_osd_map epochs [225,225], i have 224, src has [1,225]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 225 ms_handle_reset con 0x55dbdfef7000 session 0x55dbe0485e00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 127418368 unmapped: 34570240 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 225 ms_handle_reset con 0x55dbde5f4800 session 0x55dbddc13a40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 225 handle_osd_map epochs [226,226], i have 225, src has [1,226]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 226 ms_handle_reset con 0x55dbdc4e3800 session 0x55dbe0cd94a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 127418368 unmapped: 34570240 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 226 ms_handle_reset con 0x55dbddd4d000 session 0x55dbe0cd8d20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 226 handle_osd_map epochs [227,227], i have 226, src has [1,227]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 127426560 unmapped: 34562048 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 227 handle_osd_map epochs [228,228], i have 227, src has [1,228]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 127434752 unmapped: 34553856 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 228 ms_handle_reset con 0x55dbdfef6c00 session 0x55dbddc294a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 228 heartbeat osd_stat(store_statfs(0x4f9a1e000/0x0/0x4ffc00000, data 0x230a1c5/0x244e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 228 heartbeat osd_stat(store_statfs(0x4f9a1e000/0x0/0x4ffc00000, data 0x230bd50/0x2450000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 127483904 unmapped: 34504704 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1832577 data_alloc: 234881024 data_used: 12541952
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 228 ms_handle_reset con 0x55dbdfef7400 session 0x55dbdddc5a40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 228 handle_osd_map epochs [229,229], i have 228, src has [1,229]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 229 ms_handle_reset con 0x55dbdfef7000 session 0x55dbdddda780
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 127500288 unmapped: 34488320 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 229 ms_handle_reset con 0x55dbdc4e3800 session 0x55dbe0109860
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 229 ms_handle_reset con 0x55dbddd4d000 session 0x55dbde65e1e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 229 ms_handle_reset con 0x55dbde5f4800 session 0x55dbe0d58000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.534396172s of 10.156115532s, submitted: 168
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 229 ms_handle_reset con 0x55dbdfef6c00 session 0x55dbdfe5e3c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 128565248 unmapped: 33423360 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 229 ms_handle_reset con 0x55dbdc4e3800 session 0x55dbe07df680
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 229 handle_osd_map epochs [230,230], i have 229, src has [1,230]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 230 ms_handle_reset con 0x55dbddd4d000 session 0x55dbddc9b2c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 128565248 unmapped: 33423360 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 128565248 unmapped: 33423360 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 230 ms_handle_reset con 0x55dbde5f4800 session 0x55dbdfe605a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 230 ms_handle_reset con 0x55dbe04f5c00 session 0x55dbdfe614a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 230 handle_osd_map epochs [230,231], i have 230, src has [1,231]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 231 ms_handle_reset con 0x55dbdfef7000 session 0x55dbdfe61680
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 231 ms_handle_reset con 0x55dbdfef7800 session 0x55dbde8783c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 128589824 unmapped: 33398784 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1909299 data_alloc: 234881024 data_used: 12562432
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 231 ms_handle_reset con 0x55dbdc4e3800 session 0x55dbdfe6cb40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 231 ms_handle_reset con 0x55dbddd4d000 session 0x55dbdd282780
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 231 heartbeat osd_stat(store_statfs(0x4f9275000/0x0/0x4ffc00000, data 0x2aae1fa/0x2bf9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 231 ms_handle_reset con 0x55dbde5f4800 session 0x55dbdfe5e1e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 231 ms_handle_reset con 0x55dbe04f5c00 session 0x55dbddd1a000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 128647168 unmapped: 33341440 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 231 ms_handle_reset con 0x55dbdc4e3800 session 0x55dbddc12960
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 128647168 unmapped: 33341440 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 128647168 unmapped: 33341440 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 231 handle_osd_map epochs [232,232], i have 231, src has [1,232]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 232 ms_handle_reset con 0x55dbddd4d000 session 0x55dbdfe5fa40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 232 ms_handle_reset con 0x55dbde5f4800 session 0x55dbe0d33e00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 134971392 unmapped: 27017216 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 232 ms_handle_reset con 0x55dbdfef7800 session 0x55dbde878780
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 232 ms_handle_reset con 0x55dbdfef6800 session 0x55dbde7b25a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 232 ms_handle_reset con 0x55dbdc4e3800 session 0x55dbe0d32d20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 232 ms_handle_reset con 0x55dbddd4d000 session 0x55dbe0485c20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 232 ms_handle_reset con 0x55dbde5f4800 session 0x55dbe0d581e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 232 heartbeat osd_stat(store_statfs(0x4f940f000/0x0/0x4ffc00000, data 0x2912c06/0x2a5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 232 ms_handle_reset con 0x55dbdfef7800 session 0x55dbe0d332c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 128704512 unmapped: 33284096 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1901728 data_alloc: 234881024 data_used: 12566528
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 128704512 unmapped: 33284096 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 232 ms_handle_reset con 0x55dbdfef6000 session 0x55dbde7b30e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 232 handle_osd_map epochs [232,233], i have 232, src has [1,233]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 233 ms_handle_reset con 0x55dbdfef6400 session 0x55dbe0dae3c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 128704512 unmapped: 33284096 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 233 ms_handle_reset con 0x55dbdc4e3800 session 0x55dbddff3a40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.964589119s of 10.672684669s, submitted: 130
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 233 ms_handle_reset con 0x55dbddd4d000 session 0x55dbe07de960
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 233 ms_handle_reset con 0x55dbde5f4800 session 0x55dbddc9a5a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 128704512 unmapped: 33284096 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 233 ms_handle_reset con 0x55dbdfef7800 session 0x55dbe0daeb40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 233 ms_handle_reset con 0x55dbdc4e3800 session 0x55dbe0cd83c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 128704512 unmapped: 33284096 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 233 handle_osd_map epochs [234,234], i have 233, src has [1,234]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 234 ms_handle_reset con 0x55dbddd4d000 session 0x55dbe0750780
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 128729088 unmapped: 33259520 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 234 ms_handle_reset con 0x55dbddd4d400 session 0x55dbde801c20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1910699 data_alloc: 234881024 data_used: 12582912
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 234 ms_handle_reset con 0x55dbdde18000 session 0x55dbdeb632c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 234 heartbeat osd_stat(store_statfs(0x4f9408000/0x0/0x4ffc00000, data 0x2916432/0x2a65000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 128729088 unmapped: 33259520 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 128901120 unmapped: 33087488 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 234 heartbeat osd_stat(store_statfs(0x4f9409000/0x0/0x4ffc00000, data 0x2916390/0x2a64000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 234 handle_osd_map epochs [235,235], i have 234, src has [1,235]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 128901120 unmapped: 33087488 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 128901120 unmapped: 33087488 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 128901120 unmapped: 33087488 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 235 ms_handle_reset con 0x55dbe00b1c00 session 0x55dbe01083c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1957281 data_alloc: 234881024 data_used: 18874368
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 128901120 unmapped: 33087488 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 128901120 unmapped: 33087488 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 235 heartbeat osd_stat(store_statfs(0x4f9406000/0x0/0x4ffc00000, data 0x2917e0f/0x2a67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 128901120 unmapped: 33087488 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 235 handle_osd_map epochs [236,236], i have 235, src has [1,236]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.736453056s of 10.902111053s, submitted: 43
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 128819200 unmapped: 33169408 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 236 ms_handle_reset con 0x55dbdc4e3800 session 0x55dbddddab40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 128819200 unmapped: 33169408 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1967171 data_alloc: 234881024 data_used: 18874368
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 236 ms_handle_reset con 0x55dbddd4d000 session 0x55dbde87cb40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 128819200 unmapped: 33169408 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 236 ms_handle_reset con 0x55dbddd4d400 session 0x55dbdfe61860
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 236 ms_handle_reset con 0x55dbdde18000 session 0x55dbdeffe5a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 236 ms_handle_reset con 0x55dbe00b0400 session 0x55dbe010a3c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 131940352 unmapped: 30048256 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 236 heartbeat osd_stat(store_statfs(0x4f8fd0000/0x0/0x4ffc00000, data 0x2d4c882/0x2e9e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 131956736 unmapped: 30031872 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 236 ms_handle_reset con 0x55dbddd4d000 session 0x55dbdfe6cf00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 236 heartbeat osd_stat(store_statfs(0x4f8fa2000/0x0/0x4ffc00000, data 0x2d79892/0x2ecc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 132235264 unmapped: 29753344 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 132235264 unmapped: 29753344 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 236 ms_handle_reset con 0x55dbddd4d400 session 0x55dbde871680
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 236 ms_handle_reset con 0x55dbdc4e3800 session 0x55dbe07ded20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2017620 data_alloc: 234881024 data_used: 19398656
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 236 ms_handle_reset con 0x55dbdde18000 session 0x55dbde871680
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 132251648 unmapped: 29736960 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 132251648 unmapped: 29736960 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 236 heartbeat osd_stat(store_statfs(0x4f9026000/0x0/0x4ffc00000, data 0x2cf6882/0x2e48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 236 ms_handle_reset con 0x55dbe0d05400 session 0x55dbde87c1e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 236 ms_handle_reset con 0x55dbdc4e3800 session 0x55dbe0109680
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 236 handle_osd_map epochs [237,237], i have 236, src has [1,237]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 237 ms_handle_reset con 0x55dbddd4d000 session 0x55dbde806780
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 237 ms_handle_reset con 0x55dbe00b0400 session 0x55dbe0dae3c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 237 ms_handle_reset con 0x55dbddd4d400 session 0x55dbdd282780
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 132259840 unmapped: 29728768 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 237 ms_handle_reset con 0x55dbdde18000 session 0x55dbdfe6cb40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.641540527s of 10.126061440s, submitted: 138
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 237 ms_handle_reset con 0x55dbdc4e3800 session 0x55dbde878f00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 131981312 unmapped: 30007296 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 131989504 unmapped: 29999104 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2056140 data_alloc: 234881024 data_used: 18886656
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 237 handle_osd_map epochs [237,238], i have 237, src has [1,238]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 237 handle_osd_map epochs [238,238], i have 238, src has [1,238]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 238 ms_handle_reset con 0x55dbe00b0400 session 0x55dbdde494a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 132005888 unmapped: 29982720 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 238 ms_handle_reset con 0x55dbddd4d400 session 0x55dbdde45860
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 238 ms_handle_reset con 0x55dbe0d05400 session 0x55dbdddda960
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 238 ms_handle_reset con 0x55dbe0549c00 session 0x55dbdeb245a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 238 handle_osd_map epochs [239,239], i have 238, src has [1,239]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 239 ms_handle_reset con 0x55dbe0d04400 session 0x55dbdddda780
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 132014080 unmapped: 29974528 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 239 ms_handle_reset con 0x55dbdc4e3800 session 0x55dbe0daf860
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 239 ms_handle_reset con 0x55dbddd4d400 session 0x55dbe0daf4a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 239 handle_osd_map epochs [240,240], i have 239, src has [1,240]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 132071424 unmapped: 29917184 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 240 ms_handle_reset con 0x55dbe0549800 session 0x55dbe0cd8780
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 240 heartbeat osd_stat(store_statfs(0x4f8c4b000/0x0/0x4ffc00000, data 0x30c6c31/0x3221000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 240 handle_osd_map epochs [241,241], i have 240, src has [1,241]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 241 ms_handle_reset con 0x55dbe0549400 session 0x55dbe06c8000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 133144576 unmapped: 28844032 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 241 ms_handle_reset con 0x55dbe0d04c00 session 0x55dbdde44780
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 241 ms_handle_reset con 0x55dbe0d05400 session 0x55dbde7b2780
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 133627904 unmapped: 28360704 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2098450 data_alloc: 234881024 data_used: 22474752
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 241 ms_handle_reset con 0x55dbe0549800 session 0x55dbde879e00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 134676480 unmapped: 27312128 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 241 heartbeat osd_stat(store_statfs(0x4f8c46000/0x0/0x4ffc00000, data 0x30ca53c/0x3227000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 241 handle_osd_map epochs [242,242], i have 241, src has [1,242]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 242 ms_handle_reset con 0x55dbe0d04400 session 0x55dbddddb2c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 134684672 unmapped: 27303936 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 242 ms_handle_reset con 0x55dbdc4e5800 session 0x55dbdec53680
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 242 ms_handle_reset con 0x55dbe0549000 session 0x55dbe0d19e00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 242 handle_osd_map epochs [242,243], i have 242, src has [1,243]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 243 ms_handle_reset con 0x55dbe0546000 session 0x55dbdde205a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 243 ms_handle_reset con 0x55dbe0549800 session 0x55dbdde44b40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 243 ms_handle_reset con 0x55dbddd4d400 session 0x55dbe0d4d2c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 134701056 unmapped: 27287552 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 243 heartbeat osd_stat(store_statfs(0x4f8c3d000/0x0/0x4ffc00000, data 0x30ce20a/0x322f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 134701056 unmapped: 27287552 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.233895302s of 10.825933456s, submitted: 124
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 243 ms_handle_reset con 0x55dbe0d04400 session 0x55dbdfe61860
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 134701056 unmapped: 27287552 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 243 handle_osd_map epochs [244,244], i have 243, src has [1,244]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 244 ms_handle_reset con 0x55dbddd4d400 session 0x55dbdfe610e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2108590 data_alloc: 234881024 data_used: 22478848
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 27271168 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 244 ms_handle_reset con 0x55dbe0549000 session 0x55dbde7b30e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 244 heartbeat osd_stat(store_statfs(0x4f8c3d000/0x0/0x4ffc00000, data 0x30cf88b/0x3230000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [0,0,1])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 244 ms_handle_reset con 0x55dbe0549800 session 0x55dbde801c20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 244 heartbeat osd_stat(store_statfs(0x4f8c3d000/0x0/0x4ffc00000, data 0x30cf88b/0x3230000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 244 handle_osd_map epochs [245,245], i have 244, src has [1,245]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 134766592 unmapped: 27222016 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 245 ms_handle_reset con 0x55dbe0d04c00 session 0x55dbdde01680
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 245 heartbeat osd_stat(store_statfs(0x4f8c39000/0x0/0x4ffc00000, data 0x30d1424/0x3233000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 245 handle_osd_map epochs [245,246], i have 245, src has [1,246]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 134815744 unmapped: 27172864 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 246 heartbeat osd_stat(store_statfs(0x4f8c37000/0x0/0x4ffc00000, data 0x30d310f/0x3236000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 246 ms_handle_reset con 0x55dbe0d05400 session 0x55dbdde20b40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 246 ms_handle_reset con 0x55dbe0529000 session 0x55dbe07df860
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 246 ms_handle_reset con 0x55dbddd4d400 session 0x55dbe0108b40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 246 ms_handle_reset con 0x55dbe0546000 session 0x55dbdfe605a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 134848512 unmapped: 27140096 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 138428416 unmapped: 23560192 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 246 handle_osd_map epochs [247,247], i have 246, src has [1,247]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 247 ms_handle_reset con 0x55dbe0549800 session 0x55dbe0daf680
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2214332 data_alloc: 234881024 data_used: 23855104
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 247 ms_handle_reset con 0x55dbe0549000 session 0x55dbe0cd9860
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 247 ms_handle_reset con 0x55dbe0549000 session 0x55dbdeb632c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 247 heartbeat osd_stat(store_statfs(0x4f8124000/0x0/0x4ffc00000, data 0x3bde5fd/0x3d42000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 140460032 unmapped: 21528576 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 247 heartbeat osd_stat(store_statfs(0x4f8124000/0x0/0x4ffc00000, data 0x3bde5fd/0x3d42000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 247 handle_osd_map epochs [248,248], i have 247, src has [1,248]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 248 ms_handle_reset con 0x55dbddd4d400 session 0x55dbdfe5f4a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 248 ms_handle_reset con 0x55dbe0529000 session 0x55dbde807a40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 248 heartbeat osd_stat(store_statfs(0x4f80d6000/0x0/0x4ffc00000, data 0x3c29ea4/0x3d8f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 139616256 unmapped: 22372352 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 248 heartbeat osd_stat(store_statfs(0x4f80d6000/0x0/0x4ffc00000, data 0x3c29ea4/0x3d8f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 248 handle_osd_map epochs [249,249], i have 248, src has [1,249]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 139649024 unmapped: 22339584 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 249 ms_handle_reset con 0x55dbddd4d000 session 0x55dbe010b0e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 249 ms_handle_reset con 0x55dbe0546000 session 0x55dbde6c3e00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 140705792 unmapped: 21282816 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.259102821s of 10.219342232s, submitted: 268
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 249 ms_handle_reset con 0x55dbde5f4800 session 0x55dbe0c87c20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 249 ms_handle_reset con 0x55dbdfef6400 session 0x55dbde877e00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 249 handle_osd_map epochs [250,250], i have 249, src has [1,250]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 250 ms_handle_reset con 0x55dbddd4d000 session 0x55dbde806960
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 250 ms_handle_reset con 0x55dbe0529000 session 0x55dbddc9b2c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 139411456 unmapped: 22577152 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 250 ms_handle_reset con 0x55dbddd4d400 session 0x55dbdfe632c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2110885 data_alloc: 234881024 data_used: 18161664
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 250 handle_osd_map epochs [251,251], i have 250, src has [1,251]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 251 ms_handle_reset con 0x55dbe0546000 session 0x55dbddd1a000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 251 heartbeat osd_stat(store_statfs(0x4f8a79000/0x0/0x4ffc00000, data 0x328f1e1/0x33f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 139411456 unmapped: 22577152 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 251 ms_handle_reset con 0x55dbddd4d000 session 0x55dbe07de5a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 251 ms_handle_reset con 0x55dbddd4c800 session 0x55dbdd2834a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 251 handle_osd_map epochs [252,252], i have 251, src has [1,252]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 252 ms_handle_reset con 0x55dbddd4d400 session 0x55dbe0485860
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 139247616 unmapped: 22740992 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 252 handle_osd_map epochs [252,253], i have 252, src has [1,253]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 253 ms_handle_reset con 0x55dbddd4c800 session 0x55dbdfe60960
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 253 ms_handle_reset con 0x55dbdfef6400 session 0x55dbe04852c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 139272192 unmapped: 22716416 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 253 heartbeat osd_stat(store_statfs(0x4f8a4e000/0x0/0x4ffc00000, data 0x32b347a/0x341d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 139149312 unmapped: 22839296 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 253 heartbeat osd_stat(store_statfs(0x4f8a52000/0x0/0x4ffc00000, data 0x32b3418/0x341c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 253 handle_osd_map epochs [254,254], i have 253, src has [1,254]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 253 handle_osd_map epochs [254,254], i have 254, src has [1,254]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 139157504 unmapped: 22831104 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2124159 data_alloc: 234881024 data_used: 18186240
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 254 handle_osd_map epochs [254,255], i have 254, src has [1,255]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 139157504 unmapped: 22831104 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 255 handle_osd_map epochs [255,256], i have 255, src has [1,256]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 256 ms_handle_reset con 0x55dbddd4c800 session 0x55dbe010a3c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 256 ms_handle_reset con 0x55dbddd4d000 session 0x55dbe010af00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 139190272 unmapped: 22798336 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 256 handle_osd_map epochs [257,257], i have 256, src has [1,257]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 139403264 unmapped: 22585344 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 139419648 unmapped: 22568960 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.145614624s of 10.001029968s, submitted: 252
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 257 heartbeat osd_stat(store_statfs(0x4f862d000/0x0/0x4ffc00000, data 0x32c421a/0x3431000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 139509760 unmapped: 22478848 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2130898 data_alloc: 234881024 data_used: 18190336
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 139509760 unmapped: 22478848 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 139509760 unmapped: 22478848 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 257 handle_osd_map epochs [257,258], i have 257, src has [1,258]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 139526144 unmapped: 22462464 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 258 heartbeat osd_stat(store_statfs(0x4f8629000/0x0/0x4ffc00000, data 0x32c5c7d/0x3434000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 139526144 unmapped: 22462464 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 139526144 unmapped: 22462464 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2135500 data_alloc: 234881024 data_used: 18198528
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 139526144 unmapped: 22462464 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 258 heartbeat osd_stat(store_statfs(0x4f8626000/0x0/0x4ffc00000, data 0x32c8c7d/0x3437000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 139526144 unmapped: 22462464 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 258 heartbeat osd_stat(store_statfs(0x4f8626000/0x0/0x4ffc00000, data 0x32c8c7d/0x3437000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 139526144 unmapped: 22462464 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 139526144 unmapped: 22462464 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 139534336 unmapped: 22454272 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2135820 data_alloc: 234881024 data_used: 18206720
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 139534336 unmapped: 22454272 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 139534336 unmapped: 22454272 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 139534336 unmapped: 22454272 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 258 heartbeat osd_stat(store_statfs(0x4f8626000/0x0/0x4ffc00000, data 0x32c8c7d/0x3437000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.235208511s of 14.398735046s, submitted: 67
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 139616256 unmapped: 22372352 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 258 ms_handle_reset con 0x55dbddd4d400 session 0x55dbde7b25a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 139624448 unmapped: 22364160 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2138500 data_alloc: 234881024 data_used: 18239488
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 258 heartbeat osd_stat(store_statfs(0x4f860d000/0x0/0x4ffc00000, data 0x32e1c7d/0x3450000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 139624448 unmapped: 22364160 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 258 handle_osd_map epochs [259,259], i have 258, src has [1,259]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 259 ms_handle_reset con 0x55dbdfef6400 session 0x55dbe0cd90e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 140673024 unmapped: 21315584 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 140681216 unmapped: 21307392 heap: 161988608 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 259 heartbeat osd_stat(store_statfs(0x4f8609000/0x0/0x4ffc00000, data 0x32e385c/0x3454000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 259 handle_osd_map epochs [259,260], i have 259, src has [1,260]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 260 ms_handle_reset con 0x55dbe0546000 session 0x55dbdde445a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 260 ms_handle_reset con 0x55dbe0529000 session 0x55dbddc13e00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 163913728 unmapped: 17301504 heap: 181215232 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 260 ms_handle_reset con 0x55dbddd4c800 session 0x55dbe0dae000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 260 handle_osd_map epochs [260,261], i have 260, src has [1,261]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 261 ms_handle_reset con 0x55dbddd4d000 session 0x55dbdde45860
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 154861568 unmapped: 26353664 heap: 181215232 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 261 ms_handle_reset con 0x55dbe0546000 session 0x55dbe0daf0e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2394726 data_alloc: 251658240 data_used: 28442624
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 261 handle_osd_map epochs [262,262], i have 261, src has [1,262]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 262 ms_handle_reset con 0x55dbdfef6400 session 0x55dbde6c34a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 262 ms_handle_reset con 0x55dbddd4d400 session 0x55dbde8374a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 154935296 unmapped: 26279936 heap: 181215232 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 262 ms_handle_reset con 0x55dbddd4c800 session 0x55dbe0cd9e00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 262 ms_handle_reset con 0x55dbddd4d000 session 0x55dbdde49e00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 262 ms_handle_reset con 0x55dbe0529000 session 0x55dbe0daf0e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 154943488 unmapped: 26271744 heap: 181215232 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 262 ms_handle_reset con 0x55dbe0549000 session 0x55dbde87d860
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 262 ms_handle_reset con 0x55dbe0546000 session 0x55dbdde445a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 262 heartbeat osd_stat(store_statfs(0x4f6dc9000/0x0/0x4ffc00000, data 0x4b1bc17/0x4c93000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 154943488 unmapped: 26271744 heap: 181215232 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 262 heartbeat osd_stat(store_statfs(0x4f6dc9000/0x0/0x4ffc00000, data 0x4b1bc17/0x4c93000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 154943488 unmapped: 26271744 heap: 181215232 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 154943488 unmapped: 26271744 heap: 181215232 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2398029 data_alloc: 251658240 data_used: 28446720
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 262 ms_handle_reset con 0x55dbe0549000 session 0x55dbde7b25a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.816864014s of 12.334029198s, submitted: 112
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 262 ms_handle_reset con 0x55dbddd4c800 session 0x55dbe04852c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 154943488 unmapped: 26271744 heap: 181215232 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 262 ms_handle_reset con 0x55dbddd4d000 session 0x55dbdd2834a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 262 ms_handle_reset con 0x55dbddd4d400 session 0x55dbddd1a000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 262 ms_handle_reset con 0x55dbddd4c800 session 0x55dbde806960
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 262 ms_handle_reset con 0x55dbddd4d000 session 0x55dbde6c3e00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 262 ms_handle_reset con 0x55dbe0546000 session 0x55dbe010b0e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 262 ms_handle_reset con 0x55dbe0549000 session 0x55dbde807a40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 262 ms_handle_reset con 0x55dbe0529000 session 0x55dbdfe5f4a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 262 ms_handle_reset con 0x55dbddd4c800 session 0x55dbe0cd9860
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 262 ms_handle_reset con 0x55dbddd4d000 session 0x55dbe07de000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 262 ms_handle_reset con 0x55dbe0529000 session 0x55dbe010ab40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 150511616 unmapped: 30703616 heap: 181215232 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 262 ms_handle_reset con 0x55dbe0546000 session 0x55dbe0d583c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 262 ms_handle_reset con 0x55dbe0549000 session 0x55dbde65ed20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 262 heartbeat osd_stat(store_statfs(0x4f65bc000/0x0/0x4ffc00000, data 0x532ac17/0x54a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 262 handle_osd_map epochs [262,263], i have 262, src has [1,263]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 150577152 unmapped: 30638080 heap: 181215232 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 263 ms_handle_reset con 0x55dbddd4c800 session 0x55dbdddc52c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 150511616 unmapped: 30703616 heap: 181215232 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 263 heartbeat osd_stat(store_statfs(0x4f65b8000/0x0/0x4ffc00000, data 0x532c67a/0x54a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 263 ms_handle_reset con 0x55dbddd4d000 session 0x55dbdfe61680
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 150511616 unmapped: 30703616 heap: 181215232 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2455606 data_alloc: 251658240 data_used: 28454912
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 263 ms_handle_reset con 0x55dbe0529000 session 0x55dbdfe610e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 263 ms_handle_reset con 0x55dbe0546000 session 0x55dbdfe60960
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 150822912 unmapped: 30392320 heap: 181215232 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 263 ms_handle_reset con 0x55dbe053e800 session 0x55dbdde44780
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 150831104 unmapped: 30384128 heap: 181215232 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 263 ms_handle_reset con 0x55dbddd4c800 session 0x55dbe07df680
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 150831104 unmapped: 30384128 heap: 181215232 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 263 ms_handle_reset con 0x55dbddd4d000 session 0x55dbe0108b40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 263 ms_handle_reset con 0x55dbe0529000 session 0x55dbde836000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 263 ms_handle_reset con 0x55dbe0546000 session 0x55dbdfe60b40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 151134208 unmapped: 30081024 heap: 181215232 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 263 heartbeat osd_stat(store_statfs(0x4f656f000/0x0/0x4ffc00000, data 0x537469a/0x54ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 263 ms_handle_reset con 0x55dbe57a7000 session 0x55dbdfe61e00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 263 ms_handle_reset con 0x55dbddd4c800 session 0x55dbe0d59680
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 151142400 unmapped: 30072832 heap: 181215232 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 263 ms_handle_reset con 0x55dbddd4d000 session 0x55dbdeb252c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 263 heartbeat osd_stat(store_statfs(0x4f656f000/0x0/0x4ffc00000, data 0x537469a/0x54ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2468101 data_alloc: 251658240 data_used: 28459008
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 152043520 unmapped: 29171712 heap: 181215232 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160677888 unmapped: 20537344 heap: 181215232 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.749150276s of 11.918151855s, submitted: 50
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 263 ms_handle_reset con 0x55dbe0549800 session 0x55dbde6c3a40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 263 ms_handle_reset con 0x55dbe0d04c00 session 0x55dbe06c8b40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 166133760 unmapped: 15081472 heap: 181215232 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 263 ms_handle_reset con 0x55dbe0d5ac00 session 0x55dbdfe6cd20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 263 heartbeat osd_stat(store_statfs(0x4f6545000/0x0/0x4ffc00000, data 0x539e69a/0x5519000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 166248448 unmapped: 14966784 heap: 181215232 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 166248448 unmapped: 14966784 heap: 181215232 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2627376 data_alloc: 268435456 data_used: 48259072
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 263 ms_handle_reset con 0x55dbddd4c800 session 0x55dbdde20b40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 166305792 unmapped: 14909440 heap: 181215232 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 263 ms_handle_reset con 0x55dbddd4d000 session 0x55dbddc9b4a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 166305792 unmapped: 14909440 heap: 181215232 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 166305792 unmapped: 14909440 heap: 181215232 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 263 heartbeat osd_stat(store_statfs(0x4f656a000/0x0/0x4ffc00000, data 0x537a68a/0x54f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 263 ms_handle_reset con 0x55dbe0549800 session 0x55dbdfe5e1e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 166346752 unmapped: 14868480 heap: 181215232 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 263 ms_handle_reset con 0x55dbe0d04c00 session 0x55dbddddb4a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 166363136 unmapped: 14852096 heap: 181215232 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2628544 data_alloc: 268435456 data_used: 49307648
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 263 ms_handle_reset con 0x55dbe0d5bc00 session 0x55dbe0dafc20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 166346752 unmapped: 14868480 heap: 181215232 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 263 heartbeat osd_stat(store_statfs(0x4f6568000/0x0/0x4ffc00000, data 0x537a6fc/0x54f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 166346752 unmapped: 14868480 heap: 181215232 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 263 heartbeat osd_stat(store_statfs(0x4f6568000/0x0/0x4ffc00000, data 0x537a6fc/0x54f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 263 ms_handle_reset con 0x55dbddd4c800 session 0x55dbde87d680
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.809894562s of 10.000350952s, submitted: 58
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 263 heartbeat osd_stat(store_statfs(0x4f64b3000/0x0/0x4ffc00000, data 0x54306ec/0x55ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [0,0,0,1])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 263 ms_handle_reset con 0x55dbddd4d000 session 0x55dbde8001e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 174014464 unmapped: 7200768 heap: 181215232 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 263 ms_handle_reset con 0x55dbe0549800 session 0x55dbde87de00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 263 ms_handle_reset con 0x55dbe0d04c00 session 0x55dbdde48000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 175374336 unmapped: 9519104 heap: 184893440 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 263 handle_osd_map epochs [264,264], i have 263, src has [1,264]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 264 ms_handle_reset con 0x55dbdff24800 session 0x55dbe07510e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 175538176 unmapped: 9355264 heap: 184893440 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2778149 data_alloc: 268435456 data_used: 53366784
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 175554560 unmapped: 9338880 heap: 184893440 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 175554560 unmapped: 9338880 heap: 184893440 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 174448640 unmapped: 10444800 heap: 184893440 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 264 heartbeat osd_stat(store_statfs(0x4f4686000/0x0/0x4ffc00000, data 0x60b1278/0x622e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 264 ms_handle_reset con 0x55dbddd4d000 session 0x55dbe07514a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 264 ms_handle_reset con 0x55dbddd4c800 session 0x55dbdec52f00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 175054848 unmapped: 9838592 heap: 184893440 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 264 heartbeat osd_stat(store_statfs(0x4f4622000/0x0/0x4ffc00000, data 0x611f278/0x629c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 175054848 unmapped: 9838592 heap: 184893440 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2783447 data_alloc: 268435456 data_used: 53370880
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 175054848 unmapped: 9838592 heap: 184893440 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 175054848 unmapped: 9838592 heap: 184893440 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 264 ms_handle_reset con 0x55dbe0d04c00 session 0x55dbde87cf00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 174645248 unmapped: 10248192 heap: 184893440 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.030538559s of 10.544813156s, submitted: 134
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 264 ms_handle_reset con 0x55dbde5f5c00 session 0x55dbdde492c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 264 ms_handle_reset con 0x55dbdfffe400 session 0x55dbe0d321e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 174661632 unmapped: 10231808 heap: 184893440 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 264 ms_handle_reset con 0x55dbddd4c800 session 0x55dbde876d20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 264 heartbeat osd_stat(store_statfs(0x4f461f000/0x0/0x4ffc00000, data 0x61222b8/0x629f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 264 ms_handle_reset con 0x55dbe0549800 session 0x55dbe0cd83c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 264 ms_handle_reset con 0x55dbddd4d000 session 0x55dbe010b4a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 174702592 unmapped: 10190848 heap: 184893440 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2781651 data_alloc: 268435456 data_used: 53370880
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 264 ms_handle_reset con 0x55dbe0529000 session 0x55dbde8785a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 264 ms_handle_reset con 0x55dbe0546000 session 0x55dbddd1b2c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 174751744 unmapped: 10141696 heap: 184893440 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 264 ms_handle_reset con 0x55dbddd4c800 session 0x55dbe04843c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 264 heartbeat osd_stat(store_statfs(0x4f5502000/0x0/0x4ffc00000, data 0x523f2b8/0x53bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 170426368 unmapped: 14467072 heap: 184893440 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 264 heartbeat osd_stat(store_statfs(0x4f5502000/0x0/0x4ffc00000, data 0x523f2b8/0x53bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [1])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 171327488 unmapped: 13565952 heap: 184893440 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 171745280 unmapped: 13148160 heap: 184893440 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 171745280 unmapped: 13148160 heap: 184893440 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 264 ms_handle_reset con 0x55dbde5f5c00 session 0x55dbe0485c20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 264 ms_handle_reset con 0x55dbe0546000 session 0x55dbe010a1e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2651043 data_alloc: 268435456 data_used: 51957760
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 264 ms_handle_reset con 0x55dbddd4d000 session 0x55dbdec52b40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 171778048 unmapped: 13115392 heap: 184893440 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 264 ms_handle_reset con 0x55dbe0529000 session 0x55dbe0484d20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 264 ms_handle_reset con 0x55dbddd4c800 session 0x55dbddc292c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 264 ms_handle_reset con 0x55dbddd4d000 session 0x55dbdfe601e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 264 heartbeat osd_stat(store_statfs(0x4f5503000/0x0/0x4ffc00000, data 0x523f216/0x53bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 264 ms_handle_reset con 0x55dbde5f5c00 session 0x55dbddff3a40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 171802624 unmapped: 13090816 heap: 184893440 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 171802624 unmapped: 13090816 heap: 184893440 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.826850891s of 10.190883636s, submitted: 97
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 264 ms_handle_reset con 0x55dbe0529000 session 0x55dbe072d860
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 171802624 unmapped: 13090816 heap: 184893440 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 264 handle_osd_map epochs [265,265], i have 264, src has [1,265]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 265 ms_handle_reset con 0x55dbe0546000 session 0x55dbe0d8f680
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 265 ms_handle_reset con 0x55dbddd4c800 session 0x55dbe0d8e000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 265 heartbeat osd_stat(store_statfs(0x4f54ff000/0x0/0x4ffc00000, data 0x5240de7/0x53be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 169181184 unmapped: 15712256 heap: 184893440 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 265 ms_handle_reset con 0x55dbddd4d000 session 0x55dbe0d8ed20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2565955 data_alloc: 268435456 data_used: 45850624
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 169181184 unmapped: 15712256 heap: 184893440 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 265 heartbeat osd_stat(store_statfs(0x4f5ad7000/0x0/0x4ffc00000, data 0x4c69d76/0x4de5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 169197568 unmapped: 15695872 heap: 184893440 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 169197568 unmapped: 15695872 heap: 184893440 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 265 ms_handle_reset con 0x55dbe0529000 session 0x55dbdec53a40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 168714240 unmapped: 16179200 heap: 184893440 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 265 ms_handle_reset con 0x55dbe0549800 session 0x55dbdec52780
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 168714240 unmapped: 16179200 heap: 184893440 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 265 ms_handle_reset con 0x55dbe0d04c00 session 0x55dbdeb47c20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2569867 data_alloc: 268435456 data_used: 46206976
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 168828928 unmapped: 16064512 heap: 184893440 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 265 ms_handle_reset con 0x55dbde5f5c00 session 0x55dbddd1b0e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 168828928 unmapped: 16064512 heap: 184893440 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 265 handle_osd_map epochs [266,266], i have 265, src has [1,266]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 266 heartbeat osd_stat(store_statfs(0x4f5ad7000/0x0/0x4ffc00000, data 0x4c6add8/0x4de7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 266 ms_handle_reset con 0x55dbddd4c800 session 0x55dbde6c2d20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 266 ms_handle_reset con 0x55dbddd4d000 session 0x55dbdde203c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 169254912 unmapped: 15638528 heap: 184893440 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 266 ms_handle_reset con 0x55dbe0529000 session 0x55dbde807860
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 266 ms_handle_reset con 0x55dbe0549800 session 0x55dbe0750960
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 266 ms_handle_reset con 0x55dbe57a6400 session 0x55dbde806d20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 266 ms_handle_reset con 0x55dbe57a7800 session 0x55dbddc28000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.010036469s of 10.489717484s, submitted: 129
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 169246720 unmapped: 15646720 heap: 184893440 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 266 ms_handle_reset con 0x55dbddd4c800 session 0x55dbe04845a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 169254912 unmapped: 15638528 heap: 184893440 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 266 heartbeat osd_stat(store_statfs(0x4f588a000/0x0/0x4ffc00000, data 0x4e4682b/0x4fc3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2579747 data_alloc: 268435456 data_used: 45850624
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 169254912 unmapped: 15638528 heap: 184893440 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 266 ms_handle_reset con 0x55dbddd4d000 session 0x55dbdfe5e780
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 266 ms_handle_reset con 0x55dbe0529000 session 0x55dbdeb47680
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 266 ms_handle_reset con 0x55dbde5f5c00 session 0x55dbe072c780
Oct 11 00:10:09 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/648b75f300e1a2924f32dd898123c5993d8d53dbc332d9ab3949d63aafd006e8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 169254912 unmapped: 15638528 heap: 184893440 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 169271296 unmapped: 15622144 heap: 184893440 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 266 ms_handle_reset con 0x55dbddd4d000 session 0x55dbdeb46000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 266 handle_osd_map epochs [266,267], i have 266, src has [1,267]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 266 handle_osd_map epochs [267,267], i have 267, src has [1,267]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 267 ms_handle_reset con 0x55dbddd4c800 session 0x55dbdddda000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 156270592 unmapped: 28622848 heap: 184893440 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 267 ms_handle_reset con 0x55dbe57a6400 session 0x55dbde87d4a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 267 ms_handle_reset con 0x55dbe57a7800 session 0x55dbde8710e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 267 handle_osd_map epochs [268,268], i have 267, src has [1,268]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 268 ms_handle_reset con 0x55dbddd4c800 session 0x55dbdec532c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 155303936 unmapped: 29589504 heap: 184893440 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2283144 data_alloc: 234881024 data_used: 26058752
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 155303936 unmapped: 29589504 heap: 184893440 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 268 heartbeat osd_stat(store_statfs(0x4f6e6a000/0x0/0x4ffc00000, data 0x3619fe9/0x3799000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 268 ms_handle_reset con 0x55dbdc4e3800 session 0x55dbde7b3a40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 268 ms_handle_reset con 0x55dbe00b0400 session 0x55dbde87dc20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 145735680 unmapped: 39157760 heap: 184893440 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 268 handle_osd_map epochs [269,269], i have 268, src has [1,269]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 269 ms_handle_reset con 0x55dbddd4d000 session 0x55dbe0c874a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 145776640 unmapped: 39116800 heap: 184893440 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 145776640 unmapped: 39116800 heap: 184893440 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 269 ms_handle_reset con 0x55dbde5f5c00 session 0x55dbdeb625a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.818698883s of 10.440370560s, submitted: 149
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 269 ms_handle_reset con 0x55dbdc4e3800 session 0x55dbe010a960
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 269 ms_handle_reset con 0x55dbddd4c800 session 0x55dbde7b30e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 269 ms_handle_reset con 0x55dbddd4d000 session 0x55dbde806780
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 145776640 unmapped: 39116800 heap: 184893440 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 269 ms_handle_reset con 0x55dbde5f5c00 session 0x55dbe07df0e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 269 heartbeat osd_stat(store_statfs(0x4f80cc000/0x0/0x4ffc00000, data 0x2676961/0x27f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2087065 data_alloc: 234881024 data_used: 13774848
Oct 11 00:10:09 np0005480824 systemd[1]: Started libcrun container.
Oct 11 00:10:09 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/648b75f300e1a2924f32dd898123c5993d8d53dbc332d9ab3949d63aafd006e8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 00:10:09 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/648b75f300e1a2924f32dd898123c5993d8d53dbc332d9ab3949d63aafd006e8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 00:10:09 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/648b75f300e1a2924f32dd898123c5993d8d53dbc332d9ab3949d63aafd006e8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 00:10:09 np0005480824 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/648b75f300e1a2924f32dd898123c5993d8d53dbc332d9ab3949d63aafd006e8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 269 ms_handle_reset con 0x55dbe00b0400 session 0x55dbdde49c20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 269 heartbeat osd_stat(store_statfs(0x4f80cc000/0x0/0x4ffc00000, data 0x2676974/0x27f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 269 ms_handle_reset con 0x55dbdc4e3800 session 0x55dbddddb680
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 145752064 unmapped: 39141376 heap: 184893440 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 269 ms_handle_reset con 0x55dbddd4d000 session 0x55dbe0c865a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 145752064 unmapped: 39141376 heap: 184893440 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 269 ms_handle_reset con 0x55dbde5f5c00 session 0x55dbe0c86f00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 269 heartbeat osd_stat(store_statfs(0x4f80cd000/0x0/0x4ffc00000, data 0x2676912/0x27f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 269 handle_osd_map epochs [270,270], i have 269, src has [1,270]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 269 handle_osd_map epochs [270,270], i have 270, src has [1,270]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 145530880 unmapped: 39362560 heap: 184893440 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 145530880 unmapped: 39362560 heap: 184893440 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 145530880 unmapped: 39362560 heap: 184893440 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2113374 data_alloc: 234881024 data_used: 16990208
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 145530880 unmapped: 39362560 heap: 184893440 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 270 heartbeat osd_stat(store_statfs(0x4f80c9000/0x0/0x4ffc00000, data 0x2678375/0x27f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 270 heartbeat osd_stat(store_statfs(0x4f80c9000/0x0/0x4ffc00000, data 0x2678375/0x27f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 145530880 unmapped: 39362560 heap: 184893440 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 145530880 unmapped: 39362560 heap: 184893440 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 270 heartbeat osd_stat(store_statfs(0x4f80c9000/0x0/0x4ffc00000, data 0x2678375/0x27f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 145530880 unmapped: 39362560 heap: 184893440 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 145530880 unmapped: 39362560 heap: 184893440 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 270 heartbeat osd_stat(store_statfs(0x4f80c9000/0x0/0x4ffc00000, data 0x2678375/0x27f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2113374 data_alloc: 234881024 data_used: 16990208
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 145530880 unmapped: 39362560 heap: 184893440 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 145530880 unmapped: 39362560 heap: 184893440 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 270 heartbeat osd_stat(store_statfs(0x4f80c9000/0x0/0x4ffc00000, data 0x2678375/0x27f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.449906349s of 13.540383339s, submitted: 36
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 148406272 unmapped: 36487168 heap: 184893440 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 149356544 unmapped: 35536896 heap: 184893440 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 270 heartbeat osd_stat(store_statfs(0x4f796c000/0x0/0x4ffc00000, data 0x2dd6375/0x2f52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 148078592 unmapped: 36814848 heap: 184893440 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 270 ms_handle_reset con 0x55dbe57a6400 session 0x55dbe0c86960
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2179802 data_alloc: 234881024 data_used: 17281024
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 148258816 unmapped: 36634624 heap: 184893440 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 148258816 unmapped: 36634624 heap: 184893440 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 270 handle_osd_map epochs [271,271], i have 270, src has [1,271]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 271 ms_handle_reset con 0x55dbdfffec00 session 0x55dbde879a40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 148193280 unmapped: 36700160 heap: 184893440 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 271 handle_osd_map epochs [271,272], i have 271, src has [1,272]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 148193280 unmapped: 36700160 heap: 184893440 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 272 ms_handle_reset con 0x55dbdfffe800 session 0x55dbe0cd8780
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 148193280 unmapped: 36700160 heap: 184893440 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 272 heartbeat osd_stat(store_statfs(0x4f7956000/0x0/0x4ffc00000, data 0x2de6ad1/0x2f66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2185676 data_alloc: 234881024 data_used: 17555456
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 148193280 unmapped: 36700160 heap: 184893440 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 272 handle_osd_map epochs [273,273], i have 272, src has [1,273]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 148193280 unmapped: 36700160 heap: 184893440 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 273 ms_handle_reset con 0x55dbdc4e3800 session 0x55dbdd930b40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 148193280 unmapped: 36700160 heap: 184893440 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 273 handle_osd_map epochs [274,274], i have 273, src has [1,274]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.533718109s of 11.241610527s, submitted: 108
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 148201472 unmapped: 36691968 heap: 184893440 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 274 ms_handle_reset con 0x55dbddd4d000 session 0x55dbdfe6c5a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 274 ms_handle_reset con 0x55dbde5f5c00 session 0x55dbde65ed20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 274 ms_handle_reset con 0x55dbe04f5800 session 0x55dbdd930960
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 274 heartbeat osd_stat(store_statfs(0x4f794f000/0x0/0x4ffc00000, data 0x2dea40d/0x2f6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 148234240 unmapped: 36659200 heap: 184893440 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 274 handle_osd_map epochs [275,275], i have 274, src has [1,275]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 275 ms_handle_reset con 0x55dbdc4e3800 session 0x55dbdfe6da40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2202029 data_alloc: 234881024 data_used: 17551360
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 148234240 unmapped: 36659200 heap: 184893440 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 275 handle_osd_map epochs [276,276], i have 275, src has [1,276]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 276 ms_handle_reset con 0x55dbe04f5800 session 0x55dbde870b40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 276 ms_handle_reset con 0x55dbddd4d000 session 0x55dbe07de000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 148242432 unmapped: 36651008 heap: 184893440 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 276 ms_handle_reset con 0x55dbe57a6400 session 0x55dbe01090e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 276 handle_osd_map epochs [277,277], i have 276, src has [1,277]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 276 handle_osd_map epochs [277,277], i have 277, src has [1,277]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 149291008 unmapped: 35602432 heap: 184893440 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 149291008 unmapped: 35602432 heap: 184893440 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 277 ms_handle_reset con 0x55dbdfffe800 session 0x55dbe0cd9e00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 277 handle_osd_map epochs [278,278], i have 277, src has [1,278]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 148422656 unmapped: 43343872 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2292722 data_alloc: 234881024 data_used: 17575936
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 278 ms_handle_reset con 0x55dbdc4e3800 session 0x55dbde800000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 278 heartbeat osd_stat(store_statfs(0x4f6fb0000/0x0/0x4ffc00000, data 0x3780737/0x390d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 148684800 unmapped: 43081728 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 278 ms_handle_reset con 0x55dbde5f5c00 session 0x55dbddeda3c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 278 ms_handle_reset con 0x55dbddd4d000 session 0x55dbe0d8eb40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 278 ms_handle_reset con 0x55dbe0783800 session 0x55dbe0d59e00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 148684800 unmapped: 43081728 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 278 ms_handle_reset con 0x55dbe04f5800 session 0x55dbe0d8e5a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 148684800 unmapped: 43081728 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 278 ms_handle_reset con 0x55dbe0782c00 session 0x55dbde8792c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 278 ms_handle_reset con 0x55dbe57a6400 session 0x55dbe0485680
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 148692992 unmapped: 43073536 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.344999790s of 10.665838242s, submitted: 92
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 278 ms_handle_reset con 0x55dbddd4d000 session 0x55dbe04850e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 278 heartbeat osd_stat(store_statfs(0x4f6f74000/0x0/0x4ffc00000, data 0x37c01d5/0x394a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 148692992 unmapped: 43073536 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 278 ms_handle_reset con 0x55dbde5f5c00 session 0x55dbe06c9c20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2296737 data_alloc: 234881024 data_used: 17580032
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 278 handle_osd_map epochs [279,279], i have 278, src has [1,279]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 279 ms_handle_reset con 0x55dbdc4dc800 session 0x55dbddff3c20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 148914176 unmapped: 42852352 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 148914176 unmapped: 42852352 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 279 ms_handle_reset con 0x55dbde5f5c00 session 0x55dbe06c85a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 279 handle_osd_map epochs [280,280], i have 279, src has [1,280]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 280 ms_handle_reset con 0x55dbdc4dc800 session 0x55dbddff32c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 280 ms_handle_reset con 0x55dbdc4e3800 session 0x55dbe072cf00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 151928832 unmapped: 39837696 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 280 ms_handle_reset con 0x55dbe0782c00 session 0x55dbddff2f00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 280 ms_handle_reset con 0x55dbe57a6400 session 0x55dbddddb2c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 154386432 unmapped: 37380096 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 280 ms_handle_reset con 0x55dbde5f5c00 session 0x55dbde6c30e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 154386432 unmapped: 37380096 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 280 ms_handle_reset con 0x55dbdc4dc800 session 0x55dbddddab40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 280 handle_osd_map epochs [281,281], i have 280, src has [1,281]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2381508 data_alloc: 251658240 data_used: 27615232
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 281 ms_handle_reset con 0x55dbe0782c00 session 0x55dbddedbc20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 281 heartbeat osd_stat(store_statfs(0x4f6f42000/0x0/0x4ffc00000, data 0x37ed9cb/0x397c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 154386432 unmapped: 37380096 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 281 handle_osd_map epochs [282,282], i have 281, src has [1,282]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 282 ms_handle_reset con 0x55dbdc4e2c00 session 0x55dbde6c34a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 282 ms_handle_reset con 0x55dbdc4e3800 session 0x55dbe010a960
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 154443776 unmapped: 37322752 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 282 handle_osd_map epochs [283,283], i have 282, src has [1,283]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 283 heartbeat osd_stat(store_statfs(0x4f6f37000/0x0/0x4ffc00000, data 0x37f2b60/0x3985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 154484736 unmapped: 37281792 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 283 handle_osd_map epochs [283,284], i have 283, src has [1,284]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 284 handle_osd_map epochs [284,284], i have 284, src has [1,284]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 284 ms_handle_reset con 0x55dbdc4dc800 session 0x55dbe010a780
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 154509312 unmapped: 37257216 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 284 handle_osd_map epochs [285,285], i have 284, src has [1,285]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 285 ms_handle_reset con 0x55dbdc4e2c00 session 0x55dbde8794a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 285 ms_handle_reset con 0x55dbde5f5c00 session 0x55dbde7b3860
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.222435951s of 10.015085220s, submitted: 51
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 285 ms_handle_reset con 0x55dbdc4e3800 session 0x55dbde6c3860
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 154574848 unmapped: 37191680 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2396280 data_alloc: 251658240 data_used: 27713536
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 285 ms_handle_reset con 0x55dbe3246400 session 0x55dbde8712c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 154583040 unmapped: 37183488 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 285 handle_osd_map epochs [286,286], i have 285, src has [1,286]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 286 ms_handle_reset con 0x55dbdc4dc800 session 0x55dbddc9a3c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 154583040 unmapped: 37183488 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 286 handle_osd_map epochs [287,287], i have 286, src has [1,287]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 287 ms_handle_reset con 0x55dbdc4e2c00 session 0x55dbe0c87860
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 287 ms_handle_reset con 0x55dbe0782c00 session 0x55dbddddb2c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 287 heartbeat osd_stat(store_statfs(0x4f6f2d000/0x0/0x4ffc00000, data 0x37f82c9/0x398f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 155648000 unmapped: 36118528 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 161357824 unmapped: 30408704 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 287 handle_osd_map epochs [288,288], i have 287, src has [1,288]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 288 ms_handle_reset con 0x55dbdc4e3800 session 0x55dbddeda3c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 161284096 unmapped: 30482432 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2521050 data_alloc: 251658240 data_used: 28631040
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160882688 unmapped: 30883840 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 288 handle_osd_map epochs [289,289], i have 288, src has [1,289]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160940032 unmapped: 30826496 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 289 ms_handle_reset con 0x55dbde5f5c00 session 0x55dbdde001e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 289 ms_handle_reset con 0x55dbde5f5c00 session 0x55dbde7b3a40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 289 heartbeat osd_stat(store_statfs(0x4f626b000/0x0/0x4ffc00000, data 0x44b75e8/0x4652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160980992 unmapped: 30785536 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 289 handle_osd_map epochs [289,290], i have 289, src has [1,290]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 290 handle_osd_map epochs [290,290], i have 290, src has [1,290]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 161005568 unmapped: 30760960 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.597262383s of 10.050133705s, submitted: 221
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 290 ms_handle_reset con 0x55dbdc4dc800 session 0x55dbddff2780
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 161005568 unmapped: 30760960 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2539561 data_alloc: 251658240 data_used: 29937664
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 161005568 unmapped: 30760960 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 290 handle_osd_map epochs [290,291], i have 290, src has [1,291]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 161021952 unmapped: 30744576 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 291 handle_osd_map epochs [292,292], i have 291, src has [1,292]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 161038336 unmapped: 30728192 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 292 heartbeat osd_stat(store_statfs(0x4f625f000/0x0/0x4ffc00000, data 0x44bf317/0x465d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 292 ms_handle_reset con 0x55dbdc4e2c00 session 0x55dbdde485a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 161562624 unmapped: 30203904 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 292 handle_osd_map epochs [292,293], i have 292, src has [1,293]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 292 handle_osd_map epochs [293,293], i have 293, src has [1,293]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160497664 unmapped: 31268864 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2556421 data_alloc: 251658240 data_used: 30347264
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160514048 unmapped: 31252480 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 294 handle_osd_map epochs [294,295], i have 294, src has [1,295]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 294 handle_osd_map epochs [295,295], i have 295, src has [1,295]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 295 handle_osd_map epochs [296,296], i have 295, src has [1,296]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 296 ms_handle_reset con 0x55dbdc4e3800 session 0x55dbdde481e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160686080 unmapped: 31080448 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 296 ms_handle_reset con 0x55dbe0782c00 session 0x55dbdfe5e000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 296 heartbeat osd_stat(store_statfs(0x4f5dd9000/0x0/0x4ffc00000, data 0x453261a/0x46d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160686080 unmapped: 31080448 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 296 handle_osd_map epochs [297,297], i have 296, src has [1,297]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 297 ms_handle_reset con 0x55dbe0782c00 session 0x55dbe0d583c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160702464 unmapped: 31064064 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 297 ms_handle_reset con 0x55dbdc4dc800 session 0x55dbdeffed20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.977331161s of 10.446913719s, submitted: 147
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160727040 unmapped: 31039488 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2568329 data_alloc: 251658240 data_used: 30355456
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 297 handle_osd_map epochs [297,298], i have 297, src has [1,298]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 298 ms_handle_reset con 0x55dbdc4e2c00 session 0x55dbdd930000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160751616 unmapped: 31014912 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 298 heartbeat osd_stat(store_statfs(0x4f5dd1000/0x0/0x4ffc00000, data 0x45389d5/0x46dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 298 heartbeat osd_stat(store_statfs(0x4f5dd1000/0x0/0x4ffc00000, data 0x45389d5/0x46dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [0,0,0,0,0,1])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160759808 unmapped: 31006720 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 298 handle_osd_map epochs [299,299], i have 298, src has [1,299]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 299 ms_handle_reset con 0x55dbdc4e3800 session 0x55dbdddc52c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160776192 unmapped: 30990336 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160776192 unmapped: 30990336 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 299 heartbeat osd_stat(store_statfs(0x4f5dce000/0x0/0x4ffc00000, data 0x453a402/0x46de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160776192 unmapped: 30990336 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2573014 data_alloc: 251658240 data_used: 30351360
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 299 heartbeat osd_stat(store_statfs(0x4f5dce000/0x0/0x4ffc00000, data 0x453a402/0x46de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160784384 unmapped: 30982144 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 299 heartbeat osd_stat(store_statfs(0x4f5dce000/0x0/0x4ffc00000, data 0x453a402/0x46de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 299 handle_osd_map epochs [300,300], i have 299, src has [1,300]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160808960 unmapped: 30957568 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160808960 unmapped: 30957568 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160808960 unmapped: 30957568 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 300 heartbeat osd_stat(store_statfs(0x4f5dcc000/0x0/0x4ffc00000, data 0x453be85/0x46e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160808960 unmapped: 30957568 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2575972 data_alloc: 251658240 data_used: 30363648
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 300 heartbeat osd_stat(store_statfs(0x4f5dcc000/0x0/0x4ffc00000, data 0x453be85/0x46e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160808960 unmapped: 30957568 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.593493462s of 11.450242996s, submitted: 89
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 300 ms_handle_reset con 0x55dbe3246000 session 0x55dbdd283c20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 300 ms_handle_reset con 0x55dbdc4dc800 session 0x55dbe0dae3c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 300 handle_osd_map epochs [300,301], i have 300, src has [1,301]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 301 handle_osd_map epochs [301,301], i have 301, src has [1,301]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 161062912 unmapped: 30703616 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 301 ms_handle_reset con 0x55dbe3246800 session 0x55dbde877e00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 301 heartbeat osd_stat(store_statfs(0x4f5db2000/0x0/0x4ffc00000, data 0x4582ea8/0x46fc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 161079296 unmapped: 30687232 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 161087488 unmapped: 30679040 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 301 ms_handle_reset con 0x55dbdc4e2c00 session 0x55dbe0cd8000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 301 ms_handle_reset con 0x55dbde5f5c00 session 0x55dbe010b680
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 301 ms_handle_reset con 0x55dbdc4e3800 session 0x55dbe010b0e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 301 ms_handle_reset con 0x55dbdc4dc800 session 0x55dbdde45a40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 301 ms_handle_reset con 0x55dbdc4e3800 session 0x55dbde877a40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 301 ms_handle_reset con 0x55dbdc4e2c00 session 0x55dbe06c90e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 161112064 unmapped: 30654464 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 301 ms_handle_reset con 0x55dbde5f5c00 session 0x55dbdddda3c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2592976 data_alloc: 251658240 data_used: 30633984
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 301 ms_handle_reset con 0x55dbddd4d000 session 0x55dbdeb632c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 301 ms_handle_reset con 0x55dbe0783800 session 0x55dbe06c8960
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 301 heartbeat osd_stat(store_statfs(0x4f5e1e000/0x0/0x4ffc00000, data 0x4515a25/0x4690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 155795456 unmapped: 35971072 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 301 ms_handle_reset con 0x55dbddd4d000 session 0x55dbe0750b40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 301 ms_handle_reset con 0x55dbdc4dc800 session 0x55dbe0cd81e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 301 ms_handle_reset con 0x55dbdc4e2c00 session 0x55dbdddda3c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 301 ms_handle_reset con 0x55dbde5f5c00 session 0x55dbdde45a40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 155811840 unmapped: 35954688 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 301 ms_handle_reset con 0x55dbdc4e3800 session 0x55dbddc29a40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 301 heartbeat osd_stat(store_statfs(0x4f7212000/0x0/0x4ffc00000, data 0x2e9f971/0x3019000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 301 handle_osd_map epochs [302,302], i have 301, src has [1,302]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 301 handle_osd_map epochs [302,302], i have 302, src has [1,302]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 302 ms_handle_reset con 0x55dbddd4c800 session 0x55dbdde49c20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 302 heartbeat osd_stat(store_statfs(0x4f7212000/0x0/0x4ffc00000, data 0x2e9f971/0x3019000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 155828224 unmapped: 35938304 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 302 handle_osd_map epochs [303,303], i have 302, src has [1,303]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 303 ms_handle_reset con 0x55dbdc4e2c00 session 0x55dbde877e00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 303 ms_handle_reset con 0x55dbdc4dc800 session 0x55dbde65f680
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 155901952 unmapped: 35864576 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 155901952 unmapped: 35864576 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 303 ms_handle_reset con 0x55dbddd4d000 session 0x55dbdeffed20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 303 ms_handle_reset con 0x55dbdc4e2c00 session 0x55dbdeb63e00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2209836 data_alloc: 234881024 data_used: 14159872
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 303 ms_handle_reset con 0x55dbdc4dc800 session 0x55dbdfe5e000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 303 ms_handle_reset con 0x55dbdc4e3800 session 0x55dbe0d59a40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 150708224 unmapped: 41058304 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 303 heartbeat osd_stat(store_statfs(0x4f7f79000/0x0/0x4ffc00000, data 0x238d077/0x2532000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 303 handle_osd_map epochs [303,304], i have 303, src has [1,304]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.932383537s of 10.666641235s, submitted: 195
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 150708224 unmapped: 41058304 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 150708224 unmapped: 41058304 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 304 ms_handle_reset con 0x55dbddd4d000 session 0x55dbde878780
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 304 ms_handle_reset con 0x55dbde5f5c00 session 0x55dbde65e1e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 150708224 unmapped: 41058304 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 304 handle_osd_map epochs [305,305], i have 304, src has [1,305]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 305 ms_handle_reset con 0x55dbdc4dc800 session 0x55dbde7b23c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 305 ms_handle_reset con 0x55dbdc4e2c00 session 0x55dbe0dae960
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 305 ms_handle_reset con 0x55dbddd4c800 session 0x55dbddeda3c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 305 ms_handle_reset con 0x55dbdc4e3800 session 0x55dbddc9a3c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 150749184 unmapped: 41017344 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 305 ms_handle_reset con 0x55dbddd4d000 session 0x55dbde6c3860
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 305 heartbeat osd_stat(store_statfs(0x4f7f72000/0x0/0x4ffc00000, data 0x23907a9/0x253b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2220351 data_alloc: 234881024 data_used: 13897728
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 305 ms_handle_reset con 0x55dbdc4dc800 session 0x55dbde7b3860
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 150757376 unmapped: 41009152 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 305 ms_handle_reset con 0x55dbdc4e2c00 session 0x55dbde6c34a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 305 ms_handle_reset con 0x55dbdc4e3800 session 0x55dbe0cd8f00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 150773760 unmapped: 40992768 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 305 ms_handle_reset con 0x55dbddd4c800 session 0x55dbdfe63a40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 150659072 unmapped: 41107456 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 305 ms_handle_reset con 0x55dbe0783800 session 0x55dbde87c1e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 150659072 unmapped: 41107456 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 305 heartbeat osd_stat(store_statfs(0x4f7f75000/0x0/0x4ffc00000, data 0x2390737/0x2539000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 305 ms_handle_reset con 0x55dbdc4dc800 session 0x55dbe0d590e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 305 ms_handle_reset con 0x55dbdc4e2c00 session 0x55dbde8765a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 150659072 unmapped: 41107456 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2213451 data_alloc: 234881024 data_used: 13901824
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 305 ms_handle_reset con 0x55dbdc4e3800 session 0x55dbdde490e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 305 ms_handle_reset con 0x55dbddd4c800 session 0x55dbdde49e00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 305 ms_handle_reset con 0x55dbe0783800 session 0x55dbdefff0e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 305 ms_handle_reset con 0x55dbe3246800 session 0x55dbe0750000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 149856256 unmapped: 41910272 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 149856256 unmapped: 41910272 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 305 handle_osd_map epochs [306,306], i have 305, src has [1,306]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.957964897s of 10.434654236s, submitted: 108
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 306 ms_handle_reset con 0x55dbdc4dc800 session 0x55dbe0751680
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 306 heartbeat osd_stat(store_statfs(0x4f7f72000/0x0/0x4ffc00000, data 0x239218a/0x253b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 150904832 unmapped: 40861696 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 150904832 unmapped: 40861696 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 306 ms_handle_reset con 0x55dbdc4e2c00 session 0x55dbe07514a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 306 ms_handle_reset con 0x55dbdc4e3800 session 0x55dbdeffe5a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 306 ms_handle_reset con 0x55dbddd4c800 session 0x55dbdec52780
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 150904832 unmapped: 40861696 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 306 ms_handle_reset con 0x55dbdc4dc800 session 0x55dbde8001e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2263050 data_alloc: 234881024 data_used: 12861440
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 306 ms_handle_reset con 0x55dbdc4e2c00 session 0x55dbe0751680
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 150880256 unmapped: 40886272 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 306 heartbeat osd_stat(store_statfs(0x4f7a15000/0x0/0x4ffc00000, data 0x28f018a/0x2a99000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 150880256 unmapped: 40886272 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 306 handle_osd_map epochs [307,307], i have 306, src has [1,307]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 307 ms_handle_reset con 0x55dbdc4e3800 session 0x55dbdefff0e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 150880256 unmapped: 40886272 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 307 heartbeat osd_stat(store_statfs(0x4f7a11000/0x0/0x4ffc00000, data 0x28f1d07/0x2a9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 307 ms_handle_reset con 0x55dbdfef7800 session 0x55dbde6c2d20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 307 ms_handle_reset con 0x55dbe3246800 session 0x55dbdde490e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 150749184 unmapped: 41017344 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 150749184 unmapped: 41017344 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2269417 data_alloc: 234881024 data_used: 12869632
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 150749184 unmapped: 41017344 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 307 handle_osd_map epochs [308,308], i have 307, src has [1,308]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 308 ms_handle_reset con 0x55dbdc4dc800 session 0x55dbe0d590e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 150765568 unmapped: 41000960 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.179352760s of 10.565466881s, submitted: 81
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 308 ms_handle_reset con 0x55dbdc4e3800 session 0x55dbde6c23c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 308 ms_handle_reset con 0x55dbdc4e2c00 session 0x55dbe0cd8f00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 150765568 unmapped: 41000960 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 308 heartbeat osd_stat(store_statfs(0x4f7a0b000/0x0/0x4ffc00000, data 0x28f3958/0x2aa2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 150765568 unmapped: 41000960 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 308 handle_osd_map epochs [308,309], i have 308, src has [1,309]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 309 ms_handle_reset con 0x55dbe3246800 session 0x55dbdfe5f2c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 309 ms_handle_reset con 0x55dbdfef7800 session 0x55dbde6c3860
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 150773760 unmapped: 40992768 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 309 ms_handle_reset con 0x55dbdc4dc800 session 0x55dbe0d59a40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 309 ms_handle_reset con 0x55dbdc4e2c00 session 0x55dbdfe5e000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2282487 data_alloc: 234881024 data_used: 12881920
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 309 ms_handle_reset con 0x55dbdc4e3800 session 0x55dbdeffed20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 309 ms_handle_reset con 0x55dbe3246800 session 0x55dbde65f680
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 309 ms_handle_reset con 0x55dbe4be9c00 session 0x55dbdde49c20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 150773760 unmapped: 40992768 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 309 ms_handle_reset con 0x55dbdc4e3800 session 0x55dbdddda3c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 309 ms_handle_reset con 0x55dbe3246800 session 0x55dbe06c8960
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 150773760 unmapped: 40992768 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 309 heartbeat osd_stat(store_statfs(0x4f7a05000/0x0/0x4ffc00000, data 0x28f55b9/0x2aa9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 309 ms_handle_reset con 0x55dbe00c3400 session 0x55dbde87c1e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 150921216 unmapped: 40845312 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 309 ms_handle_reset con 0x55dbe00c2000 session 0x55dbdde49e00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 151805952 unmapped: 39960576 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 309 ms_handle_reset con 0x55dbe00c2800 session 0x55dbde87dc20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 309 ms_handle_reset con 0x55dbe00c2000 session 0x55dbdde5ef00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 151822336 unmapped: 39944192 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 309 heartbeat osd_stat(store_statfs(0x4f7a05000/0x0/0x4ffc00000, data 0x28f55b9/0x2aa9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 309 handle_osd_map epochs [310,310], i have 309, src has [1,310]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 309 handle_osd_map epochs [310,310], i have 310, src has [1,310]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 310 ms_handle_reset con 0x55dbe00c3400 session 0x55dbdeb47680
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2326243 data_alloc: 234881024 data_used: 18325504
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 310 ms_handle_reset con 0x55dbe3246800 session 0x55dbdd930960
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 310 ms_handle_reset con 0x55dbdc4e3800 session 0x55dbe07deb40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 310 ms_handle_reset con 0x55dbdffa2800 session 0x55dbddff25a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 151289856 unmapped: 40476672 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 310 ms_handle_reset con 0x55dbdc4e3800 session 0x55dbdde48f00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 310 ms_handle_reset con 0x55dbe00c3400 session 0x55dbdfe5fa40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 151289856 unmapped: 40476672 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 310 ms_handle_reset con 0x55dbdffa2c00 session 0x55dbdde452c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 310 handle_osd_map epochs [311,311], i have 310, src has [1,311]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 311 ms_handle_reset con 0x55dbe3246800 session 0x55dbe010a000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 311 heartbeat osd_stat(store_statfs(0x4f7a02000/0x0/0x4ffc00000, data 0x28f718a/0x2aac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 311 ms_handle_reset con 0x55dbe00c2000 session 0x55dbde879c20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 151298048 unmapped: 40468480 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 311 ms_handle_reset con 0x55dbdc4e3800 session 0x55dbde7b30e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.900808334s of 11.167237282s, submitted: 62
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 151396352 unmapped: 40370176 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 311 ms_handle_reset con 0x55dbdffa2c00 session 0x55dbdddda000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 311 ms_handle_reset con 0x55dbe00c3400 session 0x55dbdfe5e1e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 311 heartbeat osd_stat(store_statfs(0x4f7a01000/0x0/0x4ffc00000, data 0x28f8c87/0x2aac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 311 ms_handle_reset con 0x55dbe3246800 session 0x55dbde65e960
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 311 ms_handle_reset con 0x55dbe3522000 session 0x55dbde878b40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 311 ms_handle_reset con 0x55dbdc4e3800 session 0x55dbddc294a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 151519232 unmapped: 40247296 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2358709 data_alloc: 234881024 data_used: 18321408
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 311 ms_handle_reset con 0x55dbdffa2c00 session 0x55dbe0485860
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 151519232 unmapped: 40247296 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 311 ms_handle_reset con 0x55dbe3246800 session 0x55dbde806780
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 311 handle_osd_map epochs [312,312], i have 311, src has [1,312]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 312 ms_handle_reset con 0x55dbe3522000 session 0x55dbe010a960
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 312 handle_osd_map epochs [313,313], i have 312, src has [1,313]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 313 ms_handle_reset con 0x55dbe0541400 session 0x55dbdfe61e00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 151822336 unmapped: 39944192 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 313 handle_osd_map epochs [313,314], i have 313, src has [1,314]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 314 ms_handle_reset con 0x55dbe0540000 session 0x55dbde65e1e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 151887872 unmapped: 39878656 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 314 heartbeat osd_stat(store_statfs(0x4f6fef000/0x0/0x4ffc00000, data 0x33052f5/0x34bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [0,0,0,0,0,0,1,0,1])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 314 ms_handle_reset con 0x55dbe00c3400 session 0x55dbe0c86f00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 314 ms_handle_reset con 0x55dbdffa2c00 session 0x55dbe07deb40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 314 ms_handle_reset con 0x55dbdc4e3800 session 0x55dbde877c20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 156106752 unmapped: 35659776 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 158408704 unmapped: 33357824 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2494492 data_alloc: 234881024 data_used: 18325504
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 314 ms_handle_reset con 0x55dbe3522000 session 0x55dbdeb46f00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 156893184 unmapped: 34873344 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 314 handle_osd_map epochs [315,315], i have 314, src has [1,315]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 315 ms_handle_reset con 0x55dbe3246800 session 0x55dbde7b2b40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 315 ms_handle_reset con 0x55dbdffa2c00 session 0x55dbddc13a40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 156893184 unmapped: 34873344 heap: 191766528 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2649248627' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 315 ms_handle_reset con 0x55dbe00c3400 session 0x55dbde876000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 315 ms_handle_reset con 0x55dbe3247c00 session 0x55dbdde203c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 315 ms_handle_reset con 0x55dbe4f9c800 session 0x55dbe06c90e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 156581888 unmapped: 42541056 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 315 handle_osd_map epochs [316,316], i have 315, src has [1,316]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 316 ms_handle_reset con 0x55dbe0540000 session 0x55dbdec52780
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 316 ms_handle_reset con 0x55dbdffa2c00 session 0x55dbddff3a40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 316 ms_handle_reset con 0x55dbdc4e3800 session 0x55dbddd1b0e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 316 ms_handle_reset con 0x55dbe00c3400 session 0x55dbdde49860
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 316 ms_handle_reset con 0x55dbe3246800 session 0x55dbe010b680
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 156532736 unmapped: 42590208 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 316 heartbeat osd_stat(store_statfs(0x4f4da0000/0x0/0x4ffc00000, data 0x43af5de/0x456d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 316 ms_handle_reset con 0x55dbdc4e3800 session 0x55dbde870f00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 156532736 unmapped: 42590208 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 316 ms_handle_reset con 0x55dbdffa2c00 session 0x55dbde6c34a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2567447 data_alloc: 234881024 data_used: 18898944
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 156532736 unmapped: 42590208 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 316 ms_handle_reset con 0x55dbe00c3400 session 0x55dbdfe60000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 316 ms_handle_reset con 0x55dbe3247c00 session 0x55dbdfe60d20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 316 heartbeat osd_stat(store_statfs(0x4f4da0000/0x0/0x4ffc00000, data 0x43af5de/0x456d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 316 handle_osd_map epochs [316,317], i have 316, src has [1,317]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.934004784s of 12.688928604s, submitted: 216
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 317 ms_handle_reset con 0x55dbe3247800 session 0x55dbdde205a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 317 ms_handle_reset con 0x55dbe0540000 session 0x55dbe0750780
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 156532736 unmapped: 42590208 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 317 ms_handle_reset con 0x55dbdc4e3800 session 0x55dbdeffe3c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 317 ms_handle_reset con 0x55dbe00c3400 session 0x55dbdeb632c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 156532736 unmapped: 42590208 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 317 ms_handle_reset con 0x55dbe3247c00 session 0x55dbde8372c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 317 handle_osd_map epochs [318,318], i have 317, src has [1,318]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 318 ms_handle_reset con 0x55dbdffa2c00 session 0x55dbe0cd85a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 156524544 unmapped: 42598400 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 318 ms_handle_reset con 0x55dbdc4e3800 session 0x55dbde807860
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 318 heartbeat osd_stat(store_statfs(0x4f4d7a000/0x0/0x4ffc00000, data 0x43d3d0e/0x4592000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 156524544 unmapped: 42598400 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 318 ms_handle_reset con 0x55dbe00c3400 session 0x55dbde8012c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2571053 data_alloc: 234881024 data_used: 18894848
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 156524544 unmapped: 42598400 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 318 ms_handle_reset con 0x55dbe0540000 session 0x55dbe0108d20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 156524544 unmapped: 42598400 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 318 handle_osd_map epochs [319,319], i have 318, src has [1,319]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 319 ms_handle_reset con 0x55dbe3247400 session 0x55dbdefffa40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 156532736 unmapped: 42590208 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 319 handle_osd_map epochs [320,320], i have 319, src has [1,320]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 320 ms_handle_reset con 0x55dbe3525400 session 0x55dbe0d8ed20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 320 ms_handle_reset con 0x55dbe3247c00 session 0x55dbe0751a40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 156540928 unmapped: 42582016 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 320 ms_handle_reset con 0x55dbdc4e3800 session 0x55dbdde001e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 320 ms_handle_reset con 0x55dbe00c3400 session 0x55dbe0dae780
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 156540928 unmapped: 42582016 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2587824 data_alloc: 234881024 data_used: 18911232
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 320 heartbeat osd_stat(store_statfs(0x4f4d72000/0x0/0x4ffc00000, data 0x43d740a/0x459c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 320 ms_handle_reset con 0x55dbe3247400 session 0x55dbdec53680
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 320 ms_handle_reset con 0x55dbe0540000 session 0x55dbe06c85a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 157597696 unmapped: 41525248 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 320 ms_handle_reset con 0x55dbe00c3400 session 0x55dbe0cd9e00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 320 ms_handle_reset con 0x55dbe3247400 session 0x55dbde6c34a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 320 handle_osd_map epochs [320,321], i have 320, src has [1,321]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.408063889s of 10.165842056s, submitted: 71
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 157614080 unmapped: 41508864 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 321 ms_handle_reset con 0x55dbe3247c00 session 0x55dbddff3a40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 321 ms_handle_reset con 0x55dbe3525800 session 0x55dbdde445a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160169984 unmapped: 38952960 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 321 handle_osd_map epochs [321,322], i have 321, src has [1,322]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 321 handle_osd_map epochs [322,322], i have 322, src has [1,322]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 322 ms_handle_reset con 0x55dbe3524000 session 0x55dbe0d59a40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160169984 unmapped: 38952960 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 322 heartbeat osd_stat(store_statfs(0x4f4d6c000/0x0/0x4ffc00000, data 0x43da9dc/0x45a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160169984 unmapped: 38952960 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2650459 data_alloc: 234881024 data_used: 26791936
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 322 heartbeat osd_stat(store_statfs(0x4f4d69000/0x0/0x4ffc00000, data 0x43dd9dc/0x45a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [0,0,0,0,0,0,2])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 322 heartbeat osd_stat(store_statfs(0x4f4d69000/0x0/0x4ffc00000, data 0x43dd9dc/0x45a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160169984 unmapped: 38952960 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 322 ms_handle_reset con 0x55dbe3524400 session 0x55dbdec52780
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 322 heartbeat osd_stat(store_statfs(0x4f4d69000/0x0/0x4ffc00000, data 0x43dd9dc/0x45a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 322 heartbeat osd_stat(store_statfs(0x4f4d6b000/0x0/0x4ffc00000, data 0x43dd9cc/0x45a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160178176 unmapped: 38944768 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 322 ms_handle_reset con 0x55dbdc4dc800 session 0x55dbddc29a40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 322 ms_handle_reset con 0x55dbe00c3400 session 0x55dbdfe5e1e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 322 ms_handle_reset con 0x55dbdc4e2c00 session 0x55dbdeb46b40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 322 ms_handle_reset con 0x55dbe3247400 session 0x55dbe0d59680
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160202752 unmapped: 38920192 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160202752 unmapped: 38920192 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 322 handle_osd_map epochs [323,323], i have 322, src has [1,323]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 323 ms_handle_reset con 0x55dbdc4dc800 session 0x55dbddedbe00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 323 heartbeat osd_stat(store_statfs(0x4f4d56000/0x0/0x4ffc00000, data 0x43f05b7/0x45b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160194560 unmapped: 38928384 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 323 ms_handle_reset con 0x55dbdc4e2c00 session 0x55dbdefff0e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 323 ms_handle_reset con 0x55dbe00c3400 session 0x55dbdd930f00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2656717 data_alloc: 234881024 data_used: 26791936
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160194560 unmapped: 38928384 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 323 handle_osd_map epochs [324,324], i have 323, src has [1,324]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.831022739s of 10.033343315s, submitted: 110
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 324 ms_handle_reset con 0x55dbe3247400 session 0x55dbdeffed20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 324 ms_handle_reset con 0x55dbe3524400 session 0x55dbe0751680
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 161267712 unmapped: 37855232 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 324 heartbeat osd_stat(store_statfs(0x4f4d51000/0x0/0x4ffc00000, data 0x43f1fc7/0x45ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 161718272 unmapped: 37404672 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 324 ms_handle_reset con 0x55dbdc4dc800 session 0x55dbe0cd9680
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 324 ms_handle_reset con 0x55dbdc4e2c00 session 0x55dbde8792c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 162914304 unmapped: 36208640 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 162996224 unmapped: 36126720 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2770915 data_alloc: 234881024 data_used: 26935296
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 162996224 unmapped: 36126720 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 162996224 unmapped: 36126720 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 324 heartbeat osd_stat(store_statfs(0x4f424b000/0x0/0x4ffc00000, data 0x527bf55/0x50c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 163004416 unmapped: 36118528 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 324 ms_handle_reset con 0x55dbe00c3400 session 0x55dbe0485e00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160915456 unmapped: 38207488 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160940032 unmapped: 38182912 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2774142 data_alloc: 234881024 data_used: 26935296
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 324 handle_osd_map epochs [325,325], i have 324, src has [1,325]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 325 ms_handle_reset con 0x55dbe3247400 session 0x55dbe04854a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160956416 unmapped: 38166528 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 325 ms_handle_reset con 0x55dbe3525400 session 0x55dbe0c87860
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 325 ms_handle_reset con 0x55dbdc4e3800 session 0x55dbdde201e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.486744881s of 10.058712959s, submitted: 114
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 325 ms_handle_reset con 0x55dbe3525400 session 0x55dbe06c90e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160956416 unmapped: 38166528 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 325 ms_handle_reset con 0x55dbdc4dc800 session 0x55dbde8005a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 325 ms_handle_reset con 0x55dbe00c3400 session 0x55dbdec53c20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 325 ms_handle_reset con 0x55dbdc4e2c00 session 0x55dbde65f680
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 325 ms_handle_reset con 0x55dbdc4dc800 session 0x55dbdde20b40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160964608 unmapped: 38158336 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 325 heartbeat osd_stat(store_statfs(0x4f4244000/0x0/0x4ffc00000, data 0x5280b2b/0x50ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 325 ms_handle_reset con 0x55dbdc4e3800 session 0x55dbde87d860
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160964608 unmapped: 38158336 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 325 ms_handle_reset con 0x55dbe00c3400 session 0x55dbe0d59e00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160964608 unmapped: 38158336 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2779102 data_alloc: 234881024 data_used: 27070464
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 325 handle_osd_map epochs [326,326], i have 325, src has [1,326]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 326 ms_handle_reset con 0x55dbe3525400 session 0x55dbde87c000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160669696 unmapped: 38453248 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 326 handle_osd_map epochs [327,327], i have 326, src has [1,327]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 327 ms_handle_reset con 0x55dbe04f4400 session 0x55dbde876b40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 327 ms_handle_reset con 0x55dbe3247400 session 0x55dbe0daf2c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 327 ms_handle_reset con 0x55dbe3525c00 session 0x55dbde806f00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 159547392 unmapped: 39575552 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 327 ms_handle_reset con 0x55dbdc4dc800 session 0x55dbde8010e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 327 ms_handle_reset con 0x55dbe3525800 session 0x55dbe0cd90e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 327 ms_handle_reset con 0x55dbdc4e3800 session 0x55dbe0d58f00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 156524544 unmapped: 42598400 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 327 heartbeat osd_stat(store_statfs(0x4f5ed8000/0x0/0x4ffc00000, data 0x326e51a/0x3436000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 327 ms_handle_reset con 0x55dbdc4dc800 session 0x55dbde801e00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 327 ms_handle_reset con 0x55dbe3247400 session 0x55dbdeb46b40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 327 ms_handle_reset con 0x55dbdc4e3800 session 0x55dbe0751c20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 327 ms_handle_reset con 0x55dbe3525800 session 0x55dbde879860
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 156540928 unmapped: 42582016 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 156540928 unmapped: 42582016 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2497058 data_alloc: 234881024 data_used: 19070976
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 327 handle_osd_map epochs [328,328], i have 327, src has [1,328]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 327 handle_osd_map epochs [328,328], i have 328, src has [1,328]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 328 ms_handle_reset con 0x55dbe3525c00 session 0x55dbdec53e00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 157745152 unmapped: 41377792 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 328 handle_osd_map epochs [329,329], i have 328, src has [1,329]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 157745152 unmapped: 41377792 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.523613930s of 10.776693344s, submitted: 309
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 329 ms_handle_reset con 0x55dbdc4dc800 session 0x55dbddff2f00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 329 ms_handle_reset con 0x55dbdc4e3800 session 0x55dbdde49680
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 157753344 unmapped: 41369600 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 329 heartbeat osd_stat(store_statfs(0x4f5ece000/0x0/0x4ffc00000, data 0x3271c40/0x343e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 329 ms_handle_reset con 0x55dbe3247400 session 0x55dbde879c20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 157753344 unmapped: 41369600 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 329 handle_osd_map epochs [330,330], i have 329, src has [1,330]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 329 handle_osd_map epochs [330,330], i have 330, src has [1,330]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 157761536 unmapped: 41361408 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 330 ms_handle_reset con 0x55dbe3525800 session 0x55dbdfe614a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2522103 data_alloc: 234881024 data_used: 19734528
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 157761536 unmapped: 41361408 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 157761536 unmapped: 41361408 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 330 ms_handle_reset con 0x55dbe00c3400 session 0x55dbddd1b2c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 330 ms_handle_reset con 0x55dbdc4e3800 session 0x55dbe0d59a40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 330 ms_handle_reset con 0x55dbdc4dc800 session 0x55dbdfe63a40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 330 ms_handle_reset con 0x55dbe3247400 session 0x55dbddc9a5a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 157761536 unmapped: 41361408 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 330 handle_osd_map epochs [330,331], i have 330, src has [1,331]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 331 ms_handle_reset con 0x55dbe3525800 session 0x55dbdfe612c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 157761536 unmapped: 41361408 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 331 heartbeat osd_stat(store_statfs(0x4f5ec7000/0x0/0x4ffc00000, data 0x327539f/0x3446000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 331 ms_handle_reset con 0x55dbe3525400 session 0x55dbdeb47680
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 331 ms_handle_reset con 0x55dbdc4e3800 session 0x55dbddff3a40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 331 ms_handle_reset con 0x55dbdc4dc800 session 0x55dbe0d590e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 157769728 unmapped: 41353216 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 331 heartbeat osd_stat(store_statfs(0x4f5ec7000/0x0/0x4ffc00000, data 0x327539f/0x3446000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2529538 data_alloc: 234881024 data_used: 19750912
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 331 ms_handle_reset con 0x55dbe3247400 session 0x55dbe07df0e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 157769728 unmapped: 41353216 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 331 ms_handle_reset con 0x55dbe3525800 session 0x55dbdeb47680
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 157777920 unmapped: 41345024 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 331 ms_handle_reset con 0x55dbe04f5000 session 0x55dbde6c34a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.574402809s of 10.048239708s, submitted: 29
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 331 heartbeat osd_stat(store_statfs(0x4f5ec7000/0x0/0x4ffc00000, data 0x3275401/0x3447000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 331 handle_osd_map epochs [332,332], i have 331, src has [1,332]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 332 ms_handle_reset con 0x55dbdc4dc800 session 0x55dbe06c85a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 332 ms_handle_reset con 0x55dbdc4e3800 session 0x55dbdde001e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 157827072 unmapped: 41295872 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 332 handle_osd_map epochs [332,333], i have 332, src has [1,333]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 333 ms_handle_reset con 0x55dbe04f4800 session 0x55dbddc9a5a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 333 ms_handle_reset con 0x55dbe3247400 session 0x55dbdddda3c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 333 ms_handle_reset con 0x55dbe3525800 session 0x55dbddff2f00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 157843456 unmapped: 41279488 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 333 ms_handle_reset con 0x55dbdc4e3800 session 0x55dbde806780
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 333 handle_osd_map epochs [333,334], i have 333, src has [1,334]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 334 ms_handle_reset con 0x55dbe04f4800 session 0x55dbdfe6cd20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 334 ms_handle_reset con 0x55dbdc4dc800 session 0x55dbde879860
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 334 ms_handle_reset con 0x55dbe3247400 session 0x55dbe0d58f00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 157876224 unmapped: 41246720 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 334 ms_handle_reset con 0x55dbe3525800 session 0x55dbde8010e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 334 ms_handle_reset con 0x55dbdc4dc800 session 0x55dbde806f00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2543598 data_alloc: 234881024 data_used: 19759104
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 334 ms_handle_reset con 0x55dbdc4e3800 session 0x55dbde876b40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 334 ms_handle_reset con 0x55dbe04f4800 session 0x55dbdde20b40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 157876224 unmapped: 41246720 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 334 handle_osd_map epochs [335,335], i have 334, src has [1,335]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 157876224 unmapped: 41246720 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 335 heartbeat osd_stat(store_statfs(0x4f5aab000/0x0/0x4ffc00000, data 0x327c348/0x3452000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 157876224 unmapped: 41246720 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 335 ms_handle_reset con 0x55dbe3247400 session 0x55dbe06c90e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 335 ms_handle_reset con 0x55dbe4f9b000 session 0x55dbdde201e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 157892608 unmapped: 41230336 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 335 ms_handle_reset con 0x55dbdc4dc800 session 0x55dbde7b34a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 335 ms_handle_reset con 0x55dbdc4e3800 session 0x55dbde6c3a40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 157900800 unmapped: 41222144 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2545482 data_alloc: 234881024 data_used: 19771392
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 335 ms_handle_reset con 0x55dbe04f4800 session 0x55dbde8792c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 335 ms_handle_reset con 0x55dbe3247400 session 0x55dbdfe60b40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 157908992 unmapped: 41213952 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 335 ms_handle_reset con 0x55dbe4f9ac00 session 0x55dbddddbc20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 335 handle_osd_map epochs [335,336], i have 335, src has [1,336]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 157933568 unmapped: 41189376 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.614200592s of 10.004161835s, submitted: 123
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 336 ms_handle_reset con 0x55dbdc4dc800 session 0x55dbe07510e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 336 ms_handle_reset con 0x55dbdc4e3800 session 0x55dbe0cd8f00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 336 ms_handle_reset con 0x55dbe3247400 session 0x55dbe06c8000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 157958144 unmapped: 41164800 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 336 handle_osd_map epochs [337,337], i have 336, src has [1,337]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 337 ms_handle_reset con 0x55dbe4f9a800 session 0x55dbdefff860
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 337 heartbeat osd_stat(store_statfs(0x4f5aa9000/0x0/0x4ffc00000, data 0x327ddd3/0x3454000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 337 ms_handle_reset con 0x55dbe0542800 session 0x55dbdd282f00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 337 ms_handle_reset con 0x55dbe4f9a000 session 0x55dbdde45860
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 158015488 unmapped: 41107456 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 337 handle_osd_map epochs [338,338], i have 337, src has [1,338]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 338 ms_handle_reset con 0x55dbe0543800 session 0x55dbdfe6d2c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 338 ms_handle_reset con 0x55dbe04f4800 session 0x55dbe010a960
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 158064640 unmapped: 41058304 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2564309 data_alloc: 234881024 data_used: 19787776
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 338 handle_osd_map epochs [338,339], i have 338, src has [1,339]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 338 handle_osd_map epochs [339,339], i have 339, src has [1,339]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 339 ms_handle_reset con 0x55dbdc4dc800 session 0x55dbddff32c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 339 heartbeat osd_stat(store_statfs(0x4f5aa0000/0x0/0x4ffc00000, data 0x32815f3/0x345d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 339 ms_handle_reset con 0x55dbdc4e3800 session 0x55dbde876000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 158121984 unmapped: 41000960 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 339 handle_osd_map epochs [339,340], i have 339, src has [1,340]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 339 handle_osd_map epochs [340,340], i have 340, src has [1,340]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 340 ms_handle_reset con 0x55dbe0542800 session 0x55dbe0d8e000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 158171136 unmapped: 40951808 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 340 handle_osd_map epochs [341,341], i have 340, src has [1,341]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 341 ms_handle_reset con 0x55dbdc4dc800 session 0x55dbdec53c20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 158310400 unmapped: 40812544 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 341 handle_osd_map epochs [342,342], i have 341, src has [1,342]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 342 ms_handle_reset con 0x55dbe04f4800 session 0x55dbddc9b4a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 342 ms_handle_reset con 0x55dbe0543800 session 0x55dbddff34a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 158367744 unmapped: 40755200 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 342 ms_handle_reset con 0x55dbe3247400 session 0x55dbe0cd8960
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 342 ms_handle_reset con 0x55dbe4f9a000 session 0x55dbdfe6d2c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 342 heartbeat osd_stat(store_statfs(0x4f5a96000/0x0/0x4ffc00000, data 0x328843d/0x3466000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 158375936 unmapped: 40747008 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2578942 data_alloc: 234881024 data_used: 20598784
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 342 ms_handle_reset con 0x55dbdc4dc800 session 0x55dbdd282f00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 158392320 unmapped: 40730624 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 158392320 unmapped: 40730624 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 342 handle_osd_map epochs [343,343], i have 342, src has [1,343]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 342 handle_osd_map epochs [343,343], i have 343, src has [1,343]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 343 ms_handle_reset con 0x55dbe04f4800 session 0x55dbe06c8000
Oct 11 00:10:09 np0005480824 podman[311757]: 2025-10-11 04:10:09.851956732 +0000 UTC m=+0.224090409 container init 403ccab3c2834bc4ead6bf83bde531539fa7fd7815de723601975f1045d6a5c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.347848892s of 10.129285812s, submitted: 246
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 343 ms_handle_reset con 0x55dbe0542800 session 0x55dbe07510e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 343 ms_handle_reset con 0x55dbe0543800 session 0x55dbdfe60b40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 343 ms_handle_reset con 0x55dbdc4dc800 session 0x55dbde6c3a40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 158433280 unmapped: 40689664 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 343 ms_handle_reset con 0x55dbe04f4800 session 0x55dbde7b34a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 343 heartbeat osd_stat(store_statfs(0x4f5a92000/0x0/0x4ffc00000, data 0x3289f66/0x346b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 343 ms_handle_reset con 0x55dbe0542800 session 0x55dbe06c90e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 343 ms_handle_reset con 0x55dbe4f9a800 session 0x55dbe0d4d2c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 343 ms_handle_reset con 0x55dbe4f9a000 session 0x55dbde806f00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 158498816 unmapped: 40624128 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 343 ms_handle_reset con 0x55dbdc4dc800 session 0x55dbdeb47680
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 158498816 unmapped: 40624128 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 343 ms_handle_reset con 0x55dbe04f4800 session 0x55dbe07df0e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2589042 data_alloc: 234881024 data_used: 20611072
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 343 ms_handle_reset con 0x55dbe4f9a800 session 0x55dbe0d323c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 158507008 unmapped: 40615936 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 343 heartbeat osd_stat(store_statfs(0x4f5a92000/0x0/0x4ffc00000, data 0x3289f66/0x346b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 343 ms_handle_reset con 0x55dbe3245c00 session 0x55dbe010a1e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 343 handle_osd_map epochs [344,344], i have 343, src has [1,344]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 344 ms_handle_reset con 0x55dbe3244800 session 0x55dbdfe61860
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 344 ms_handle_reset con 0x55dbe3245400 session 0x55dbdddda000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 344 heartbeat osd_stat(store_statfs(0x4f5a8d000/0x0/0x4ffc00000, data 0x328bbdf/0x3470000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 158564352 unmapped: 40558592 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 344 ms_handle_reset con 0x55dbe04f4800 session 0x55dbdde20d20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 344 handle_osd_map epochs [345,345], i have 344, src has [1,345]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 345 ms_handle_reset con 0x55dbdc4dc800 session 0x55dbdec521e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 345 ms_handle_reset con 0x55dbe3245c00 session 0x55dbdde205a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 345 ms_handle_reset con 0x55dbe0542800 session 0x55dbe0cd8f00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 345 ms_handle_reset con 0x55dbe4f9a800 session 0x55dbdde49e00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 345 ms_handle_reset con 0x55dbdc4dc800 session 0x55dbdec52960
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 158687232 unmapped: 40435712 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 345 heartbeat osd_stat(store_statfs(0x4f5a8a000/0x0/0x4ffc00000, data 0x328d6fa/0x3472000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 345 ms_handle_reset con 0x55dbe04f4800 session 0x55dbdeb46f00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 345 ms_handle_reset con 0x55dbe3245400 session 0x55dbde87d4a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 345 ms_handle_reset con 0x55dbe3245c00 session 0x55dbdde485a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 345 ms_handle_reset con 0x55dbe0542800 session 0x55dbddedbe00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 159023104 unmapped: 40099840 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 345 ms_handle_reset con 0x55dbdc4dc800 session 0x55dbddc13e00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 345 handle_osd_map epochs [346,346], i have 345, src has [1,346]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 346 ms_handle_reset con 0x55dbe3245400 session 0x55dbdeffe5a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 346 ms_handle_reset con 0x55dbe06cb800 session 0x55dbddc28000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 159105024 unmapped: 40017920 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 346 ms_handle_reset con 0x55dbdf0c8000 session 0x55dbdde44780
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 346 heartbeat osd_stat(store_statfs(0x4f5a63000/0x0/0x4ffc00000, data 0x32b3287/0x349a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2607037 data_alloc: 234881024 data_used: 20639744
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 346 handle_osd_map epochs [347,347], i have 346, src has [1,347]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 347 ms_handle_reset con 0x55dbe4f9a800 session 0x55dbe0d58f00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 347 ms_handle_reset con 0x55dbe3245c00 session 0x55dbde65f680
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 158932992 unmapped: 40189952 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 347 ms_handle_reset con 0x55dbe0542800 session 0x55dbde879e00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 347 handle_osd_map epochs [347,348], i have 347, src has [1,348]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 348 ms_handle_reset con 0x55dbdc4dc800 session 0x55dbde6c3860
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 158949376 unmapped: 40173568 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 158949376 unmapped: 40173568 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 348 heartbeat osd_stat(store_statfs(0x4f5a5a000/0x0/0x4ffc00000, data 0x32b69c5/0x349f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.515161514s of 11.029880524s, submitted: 139
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 348 ms_handle_reset con 0x55dbe3245400 session 0x55dbe010ba40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 348 heartbeat osd_stat(store_statfs(0x4f5a5a000/0x0/0x4ffc00000, data 0x32b69c5/0x349f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 158949376 unmapped: 40173568 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 348 handle_osd_map epochs [349,349], i have 348, src has [1,349]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 349 ms_handle_reset con 0x55dbe0542800 session 0x55dbe0108b40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 349 ms_handle_reset con 0x55dbe3245c00 session 0x55dbdeb243c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 158965760 unmapped: 40157184 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2624034 data_alloc: 234881024 data_used: 20692992
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 349 handle_osd_map epochs [349,350], i have 349, src has [1,350]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 349 handle_osd_map epochs [350,350], i have 350, src has [1,350]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 350 ms_handle_reset con 0x55dbdc4dc800 session 0x55dbe0751680
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 350 ms_handle_reset con 0x55dbe06cb800 session 0x55dbe0d8eb40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 158998528 unmapped: 40124416 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 350 handle_osd_map epochs [350,351], i have 350, src has [1,351]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 350 handle_osd_map epochs [351,351], i have 351, src has [1,351]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 351 ms_handle_reset con 0x55dbe4f9a800 session 0x55dbe0d325a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 351 heartbeat osd_stat(store_statfs(0x4f5a56000/0x0/0x4ffc00000, data 0x32ba169/0x34a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 351 heartbeat osd_stat(store_statfs(0x4f5a52000/0x0/0x4ffc00000, data 0x32bbd1e/0x34aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 159105024 unmapped: 40017920 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 351 ms_handle_reset con 0x55dbe06cb800 session 0x55dbdeb63e00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 351 handle_osd_map epochs [352,352], i have 351, src has [1,352]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 352 ms_handle_reset con 0x55dbe0542800 session 0x55dbe072dc20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 352 ms_handle_reset con 0x55dbdc4dc800 session 0x55dbdde452c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 352 ms_handle_reset con 0x55dbe3245c00 session 0x55dbdde45a40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 159375360 unmapped: 39747584 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 352 ms_handle_reset con 0x55dbdf0c8c00 session 0x55dbddc13680
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 159399936 unmapped: 39723008 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 352 ms_handle_reset con 0x55dbdc4dc800 session 0x55dbddc9a3c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 159399936 unmapped: 39723008 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2631941 data_alloc: 234881024 data_used: 20692992
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 352 handle_osd_map epochs [353,353], i have 352, src has [1,353]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 353 ms_handle_reset con 0x55dbe0542800 session 0x55dbdec525a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 159432704 unmapped: 39690240 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 353 handle_osd_map epochs [353,354], i have 353, src has [1,354]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 354 ms_handle_reset con 0x55dbe3245c00 session 0x55dbe0750d20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 354 heartbeat osd_stat(store_statfs(0x4f54c3000/0x0/0x4ffc00000, data 0x3849083/0x3a3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 164536320 unmapped: 34586624 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 354 handle_osd_map epochs [355,355], i have 354, src has [1,355]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 355 ms_handle_reset con 0x55dbe3221c00 session 0x55dbe010bc20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 355 ms_handle_reset con 0x55dbe3221400 session 0x55dbdddda000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 164659200 unmapped: 34463744 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 355 handle_osd_map epochs [356,356], i have 355, src has [1,356]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.725680351s of 10.187831879s, submitted: 150
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 356 ms_handle_reset con 0x55dbdc4dc800 session 0x55dbe010a780
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 356 ms_handle_reset con 0x55dbe0542800 session 0x55dbddeda1e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 356 ms_handle_reset con 0x55dbe3221c00 session 0x55dbdec52960
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 356 ms_handle_reset con 0x55dbe06cb800 session 0x55dbdfe6c960
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 164798464 unmapped: 34324480 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 356 ms_handle_reset con 0x55dbe3245c00 session 0x55dbdd930960
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 356 ms_handle_reset con 0x55dbdc4dc800 session 0x55dbe0d59680
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 356 heartbeat osd_stat(store_statfs(0x4f54bc000/0x0/0x4ffc00000, data 0x384cecc/0x3a42000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 356 heartbeat osd_stat(store_statfs(0x4f54bc000/0x0/0x4ffc00000, data 0x384cecc/0x3a42000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 161710080 unmapped: 37412864 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2705210 data_alloc: 234881024 data_used: 22421504
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 356 handle_osd_map epochs [356,357], i have 356, src has [1,357]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 357 handle_osd_map epochs [357,357], i have 357, src has [1,357]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 357 ms_handle_reset con 0x55dbe06cb800 session 0x55dbe0d8e000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 357 ms_handle_reset con 0x55dbe0542800 session 0x55dbe0cd9860
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 357 ms_handle_reset con 0x55dbe3221800 session 0x55dbdfe5e780
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 161783808 unmapped: 37339136 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 357 handle_osd_map epochs [358,358], i have 357, src has [1,358]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 358 ms_handle_reset con 0x55dbe3221c00 session 0x55dbe0cd8d20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 358 ms_handle_reset con 0x55dbe3220800 session 0x55dbdeb24b40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 358 heartbeat osd_stat(store_statfs(0x4f54b9000/0x0/0x4ffc00000, data 0x384ea3b/0x3a44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 161800192 unmapped: 37322752 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 358 ms_handle_reset con 0x55dbdc4dc800 session 0x55dbe0d59860
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 164061184 unmapped: 35061760 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 358 ms_handle_reset con 0x55dbe0542800 session 0x55dbdeb24000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 358 heartbeat osd_stat(store_statfs(0x4f53e4000/0x0/0x4ffc00000, data 0x392361c/0x3b1a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 164061184 unmapped: 35061760 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 358 handle_osd_map epochs [359,359], i have 358, src has [1,359]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 359 ms_handle_reset con 0x55dbe3221800 session 0x55dbe0d59680
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 164093952 unmapped: 35028992 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2727121 data_alloc: 234881024 data_used: 22433792
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 359 handle_osd_map epochs [360,360], i have 359, src has [1,360]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 360 ms_handle_reset con 0x55dbe3220400 session 0x55dbe0d8eb40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 360 ms_handle_reset con 0x55dbe06cb800 session 0x55dbe010af00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 162553856 unmapped: 36569088 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 360 handle_osd_map epochs [360,361], i have 360, src has [1,361]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 163635200 unmapped: 35487744 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 361 handle_osd_map epochs [362,362], i have 361, src has [1,362]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 362 ms_handle_reset con 0x55dbe3220400 session 0x55dbddc130e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 362 ms_handle_reset con 0x55dbdc4dc800 session 0x55dbe0cd81e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 163651584 unmapped: 35471360 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.515392303s of 10.152148247s, submitted: 147
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 362 ms_handle_reset con 0x55dbe0542800 session 0x55dbdeb62f00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 163667968 unmapped: 35454976 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 362 heartbeat osd_stat(store_statfs(0x4f53c8000/0x0/0x4ffc00000, data 0x3b388b7/0x3b36000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 362 ms_handle_reset con 0x55dbe3220800 session 0x55dbdde20b40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 163667968 unmapped: 35454976 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2759889 data_alloc: 234881024 data_used: 22441984
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 362 heartbeat osd_stat(store_statfs(0x4f53c7000/0x0/0x4ffc00000, data 0x3b38919/0x3b37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 163667968 unmapped: 35454976 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 podman[311757]: 2025-10-11 04:10:09.861184277 +0000 UTC m=+0.233317934 container start 403ccab3c2834bc4ead6bf83bde531539fa7fd7815de723601975f1045d6a5c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_archimedes, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 362 ms_handle_reset con 0x55dbe06cb800 session 0x55dbdeffe3c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 362 handle_osd_map epochs [363,363], i have 362, src has [1,363]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 363 ms_handle_reset con 0x55dbe3220400 session 0x55dbde806d20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 363 ms_handle_reset con 0x55dbe3221800 session 0x55dbdde481e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 363 ms_handle_reset con 0x55dbe0542800 session 0x55dbdeb243c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 163676160 unmapped: 35446784 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 363 handle_osd_map epochs [363,364], i have 363, src has [1,364]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 364 ms_handle_reset con 0x55dbdc4dc800 session 0x55dbe0daf0e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 163700736 unmapped: 35422208 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 364 ms_handle_reset con 0x55dbe0542800 session 0x55dbde879860
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 364 handle_osd_map epochs [364,365], i have 364, src has [1,365]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 365 ms_handle_reset con 0x55dbe3220000 session 0x55dbdde445a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 365 heartbeat osd_stat(store_statfs(0x4f53bf000/0x0/0x4ffc00000, data 0x3b3c04b/0x3b3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 163717120 unmapped: 35405824 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 365 ms_handle_reset con 0x55dbe06cb800 session 0x55dbdfe5e1e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 163717120 unmapped: 35405824 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2768778 data_alloc: 234881024 data_used: 22454272
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 365 ms_handle_reset con 0x55dbe3221800 session 0x55dbdfe610e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 365 handle_osd_map epochs [366,366], i have 365, src has [1,366]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 366 ms_handle_reset con 0x55dbdc4e5c00 session 0x55dbe0c86780
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 163774464 unmapped: 35348480 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 366 ms_handle_reset con 0x55dbdc4e5c00 session 0x55dbdde45e00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 366 handle_osd_map epochs [367,367], i have 366, src has [1,367]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 367 ms_handle_reset con 0x55dbe00a7400 session 0x55dbdde48b40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 367 ms_handle_reset con 0x55dbe3220400 session 0x55dbde879a40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 163774464 unmapped: 35348480 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 367 handle_osd_map epochs [368,368], i have 367, src has [1,368]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 368 ms_handle_reset con 0x55dbe06cb800 session 0x55dbdec52780
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 163790848 unmapped: 35332096 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 368 ms_handle_reset con 0x55dbe0542800 session 0x55dbddc12000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 368 ms_handle_reset con 0x55dbe3220000 session 0x55dbe07df2c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 368 ms_handle_reset con 0x55dbdc4e5c00 session 0x55dbe0c874a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 163807232 unmapped: 35315712 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.647465706s of 11.160874367s, submitted: 88
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 368 heartbeat osd_stat(store_statfs(0x4f53b7000/0x0/0x4ffc00000, data 0x3b428a3/0x3b46000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 368 handle_osd_map epochs [369,369], i have 368, src has [1,369]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 163840000 unmapped: 35282944 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 369 ms_handle_reset con 0x55dbe0542800 session 0x55dbe0d33c20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 369 ms_handle_reset con 0x55dbe00a7400 session 0x55dbdeb465a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2779117 data_alloc: 234881024 data_used: 22458368
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 369 heartbeat osd_stat(store_statfs(0x4f53b6000/0x0/0x4ffc00000, data 0x3b4444e/0x3b48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 163864576 unmapped: 35258368 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 369 handle_osd_map epochs [370,370], i have 369, src has [1,370]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 163880960 unmapped: 35241984 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 370 ms_handle_reset con 0x55dbe3221800 session 0x55dbdfe5e1e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 370 handle_osd_map epochs [371,371], i have 370, src has [1,371]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 371 ms_handle_reset con 0x55dbe04f4800 session 0x55dbe0d585a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 371 ms_handle_reset con 0x55dbe06ca800 session 0x55dbdec52f00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 371 ms_handle_reset con 0x55dbe06cb800 session 0x55dbe0d33c20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 163930112 unmapped: 35192832 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 371 handle_osd_map epochs [372,372], i have 371, src has [1,372]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 372 ms_handle_reset con 0x55dbe3220000 session 0x55dbdfe610e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 163962880 unmapped: 35160064 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 372 ms_handle_reset con 0x55dbdc4e5c00 session 0x55dbdde481e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 372 ms_handle_reset con 0x55dbe00a7400 session 0x55dbdeb62f00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 372 handle_osd_map epochs [373,373], i have 372, src has [1,373]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 373 ms_handle_reset con 0x55dbe0542800 session 0x55dbe0dae5a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 373 ms_handle_reset con 0x55dbe3220400 session 0x55dbdde490e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 163971072 unmapped: 35151872 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2790264 data_alloc: 234881024 data_used: 22515712
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 373 heartbeat osd_stat(store_statfs(0x4f53cc000/0x0/0x4ffc00000, data 0x3b27430/0x3b30000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 163971072 unmapped: 35151872 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 373 handle_osd_map epochs [374,374], i have 373, src has [1,374]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 164003840 unmapped: 35119104 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 374 ms_handle_reset con 0x55dbe06ca800 session 0x55dbe01085a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 374 handle_osd_map epochs [375,375], i have 374, src has [1,375]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 375 ms_handle_reset con 0x55dbe04f4800 session 0x55dbdfe6c960
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 164028416 unmapped: 35094528 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 375 heartbeat osd_stat(store_statfs(0x4f53c6000/0x0/0x4ffc00000, data 0x3b2aacc/0x3b35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 164061184 unmapped: 35061760 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 375 ms_handle_reset con 0x55dbe06cb800 session 0x55dbe0cd8780
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 375 ms_handle_reset con 0x55dbe00a7400 session 0x55dbde6c2d20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 164061184 unmapped: 35061760 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2742562 data_alloc: 234881024 data_used: 22302720
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.259029388s of 11.386545181s, submitted: 212
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 164061184 unmapped: 35061760 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 375 ms_handle_reset con 0x55dbe0542800 session 0x55dbe0d330e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 375 handle_osd_map epochs [375,376], i have 375, src has [1,376]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 376 ms_handle_reset con 0x55dbe3220400 session 0x55dbe06c8000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 164093952 unmapped: 35028992 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 376 handle_osd_map epochs [377,377], i have 376, src has [1,377]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 377 heartbeat osd_stat(store_statfs(0x4f5a22000/0x0/0x4ffc00000, data 0x34d04cd/0x34db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 377 ms_handle_reset con 0x55dbe3220000 session 0x55dbdde494a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 377 heartbeat osd_stat(store_statfs(0x4f5a22000/0x0/0x4ffc00000, data 0x34d04cd/0x34db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [0,0,0,0,0,0,0,2])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 164110336 unmapped: 35012608 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 377 ms_handle_reset con 0x55dbe0542800 session 0x55dbe010b860
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 164118528 unmapped: 35004416 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 377 heartbeat osd_stat(store_statfs(0x4f5a2d000/0x0/0x4ffc00000, data 0x32c5244/0x34d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [1])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 377 ms_handle_reset con 0x55dbe3247c00 session 0x55dbde7b30e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 377 handle_osd_map epochs [377,378], i have 377, src has [1,378]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 164126720 unmapped: 34996224 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2743019 data_alloc: 234881024 data_used: 22302720
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 378 ms_handle_reset con 0x55dbe3221800 session 0x55dbdd930000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 378 ms_handle_reset con 0x55dbe06ca800 session 0x55dbe06c90e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 164134912 unmapped: 34988032 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 378 handle_osd_map epochs [379,379], i have 378, src has [1,379]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 378 handle_osd_map epochs [379,380], i have 379, src has [1,380]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 380 heartbeat osd_stat(store_statfs(0x4f5a2a000/0x0/0x4ffc00000, data 0x32c6e5b/0x34d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 380 ms_handle_reset con 0x55dbe3220400 session 0x55dbddc9a1e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 380 ms_handle_reset con 0x55dbe06cb800 session 0x55dbddff2960
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 380 ms_handle_reset con 0x55dbe00a7400 session 0x55dbde7b2b40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 164167680 unmapped: 34955264 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 380 heartbeat osd_stat(store_statfs(0x4f5a22000/0x0/0x4ffc00000, data 0x32ca488/0x34d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 164175872 unmapped: 34947072 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 380 handle_osd_map epochs [381,381], i have 380, src has [1,381]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 164200448 unmapped: 34922496 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 381 handle_osd_map epochs [382,382], i have 381, src has [1,382]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 164208640 unmapped: 34914304 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2755527 data_alloc: 234881024 data_used: 22315008
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 164225024 unmapped: 34897920 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 5.077179909s of 10.141770363s, submitted: 182
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 382 ms_handle_reset con 0x55dbe3220400 session 0x55dbdec525a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 382 ms_handle_reset con 0x55dbe0542800 session 0x55dbde879e00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 164225024 unmapped: 34897920 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 382 heartbeat osd_stat(store_statfs(0x4f5a1d000/0x0/0x4ffc00000, data 0x32cdc3a/0x34df000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 382 heartbeat osd_stat(store_statfs(0x4f5a1f000/0x0/0x4ffc00000, data 0x32cdc3a/0x34df000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 164225024 unmapped: 34897920 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 164233216 unmapped: 34889728 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 382 ms_handle_reset con 0x55dbe06ca800 session 0x55dbe0cd8f00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 382 heartbeat osd_stat(store_statfs(0x4f5a20000/0x0/0x4ffc00000, data 0x32cdc2a/0x34de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 382 handle_osd_map epochs [383,383], i have 382, src has [1,383]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 164257792 unmapped: 34865152 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2648249 data_alloc: 234881024 data_used: 16490496
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160063488 unmapped: 39059456 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 383 heartbeat osd_stat(store_statfs(0x4f68d4000/0x0/0x4ffc00000, data 0x241781b/0x2629000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 383 handle_osd_map epochs [384,384], i have 383, src has [1,384]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 383 handle_osd_map epochs [384,384], i have 384, src has [1,384]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 384 ms_handle_reset con 0x55dbe3221800 session 0x55dbe0750d20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160063488 unmapped: 39059456 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 384 handle_osd_map epochs [385,385], i have 384, src has [1,385]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 385 ms_handle_reset con 0x55dbe00a7400 session 0x55dbddff34a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160063488 unmapped: 39059456 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 385 ms_handle_reset con 0x55dbe0542800 session 0x55dbdde20f00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160088064 unmapped: 39034880 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160088064 unmapped: 39034880 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 385 handle_osd_map epochs [385,386], i have 385, src has [1,386]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2580669 data_alloc: 234881024 data_used: 13172736
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 159547392 unmapped: 39575552 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.559657097s of 10.181400299s, submitted: 90
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 386 heartbeat osd_stat(store_statfs(0x4f68ce000/0x0/0x4ffc00000, data 0x241c97e/0x262f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 386 handle_osd_map epochs [386,387], i have 386, src has [1,387]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 386 ms_handle_reset con 0x55dbe06cb800 session 0x55dbddeda1e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 387 heartbeat osd_stat(store_statfs(0x4f68ce000/0x0/0x4ffc00000, data 0x241c97e/0x262f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 159547392 unmapped: 39575552 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 159547392 unmapped: 39575552 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 387 ms_handle_reset con 0x55dbe3220400 session 0x55dbddc13e00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 159547392 unmapped: 39575552 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 387 heartbeat osd_stat(store_statfs(0x4f68cd000/0x0/0x4ffc00000, data 0x241e3d3/0x2631000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 159547392 unmapped: 39575552 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2581147 data_alloc: 234881024 data_used: 13168640
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 159547392 unmapped: 39575552 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 387 handle_osd_map epochs [387,388], i have 387, src has [1,388]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 388 heartbeat osd_stat(store_statfs(0x4f68cd000/0x0/0x4ffc00000, data 0x241e3d3/0x2631000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 159547392 unmapped: 39575552 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 159547392 unmapped: 39575552 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 159547392 unmapped: 39575552 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 388 ms_handle_reset con 0x55dbe00a7400 session 0x55dbde87dc20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 159547392 unmapped: 39575552 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 388 ms_handle_reset con 0x55dbe0542800 session 0x55dbe0484960
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2587983 data_alloc: 234881024 data_used: 13176832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 159547392 unmapped: 39575552 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 388 heartbeat osd_stat(store_statfs(0x4f68c9000/0x0/0x4ffc00000, data 0x241fe46/0x2635000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 388 handle_osd_map epochs [389,389], i have 388, src has [1,389]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.566102982s of 10.153310776s, submitted: 51
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 389 ms_handle_reset con 0x55dbe06cb800 session 0x55dbe0d8f860
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160612352 unmapped: 38510592 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 389 handle_osd_map epochs [389,390], i have 389, src has [1,390]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 390 ms_handle_reset con 0x55dbe3221800 session 0x55dbe07df0e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160620544 unmapped: 38502400 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 390 ms_handle_reset con 0x55dbe3247c00 session 0x55dbe0daf0e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 390 ms_handle_reset con 0x55dbe0542800 session 0x55dbde800780
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 390 ms_handle_reset con 0x55dbe06cb800 session 0x55dbde876b40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160620544 unmapped: 38502400 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 390 ms_handle_reset con 0x55dbe3221800 session 0x55dbdde492c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 390 handle_osd_map epochs [390,391], i have 390, src has [1,391]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 391 ms_handle_reset con 0x55dbe00a7400 session 0x55dbde806d20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 391 ms_handle_reset con 0x55dbdc4e3000 session 0x55dbddd1a000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 391 ms_handle_reset con 0x55dbe00a7400 session 0x55dbde879a40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160628736 unmapped: 38494208 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2602086 data_alloc: 234881024 data_used: 13189120
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 391 ms_handle_reset con 0x55dbe0542800 session 0x55dbddeda1e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 391 ms_handle_reset con 0x55dbe06cb800 session 0x55dbdde20f00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160636928 unmapped: 38486016 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 391 ms_handle_reset con 0x55dbe3221800 session 0x55dbe0750d20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 391 ms_handle_reset con 0x55dbdff22c00 session 0x55dbdec525a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 391 ms_handle_reset con 0x55dbdff22c00 session 0x55dbdfe6c960
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 391 heartbeat osd_stat(store_statfs(0x4f68c0000/0x0/0x4ffc00000, data 0x2425111/0x263e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160645120 unmapped: 38477824 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 391 ms_handle_reset con 0x55dbe00a7400 session 0x55dbe01085a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 391 ms_handle_reset con 0x55dbe0542800 session 0x55dbe0dae5a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160645120 unmapped: 38477824 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 391 heartbeat osd_stat(store_statfs(0x4f68c0000/0x0/0x4ffc00000, data 0x2425111/0x263e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 391 ms_handle_reset con 0x55dbe06cb800 session 0x55dbdeb62f00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 391 ms_handle_reset con 0x55dbe3221800 session 0x55dbdde481e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160653312 unmapped: 38469632 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 391 handle_osd_map epochs [391,392], i have 391, src has [1,392]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 392 ms_handle_reset con 0x55dbe00a7400 session 0x55dbdfe6c000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 392 ms_handle_reset con 0x55dbdff22c00 session 0x55dbdec52f00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 392 ms_handle_reset con 0x55dbe0542800 session 0x55dbe0cd85a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160669696 unmapped: 38453248 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2606408 data_alloc: 234881024 data_used: 13205504
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 392 heartbeat osd_stat(store_statfs(0x4f68ba000/0x0/0x4ffc00000, data 0x2426d6e/0x2643000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160669696 unmapped: 38453248 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 392 ms_handle_reset con 0x55dbe06cb800 session 0x55dbdfe5e1e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 392 ms_handle_reset con 0x55dbdeb51000 session 0x55dbe010be00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.604132652s of 10.004495621s, submitted: 83
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 392 handle_osd_map epochs [393,393], i have 392, src has [1,393]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 393 handle_osd_map epochs [393,394], i have 393, src has [1,394]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 394 ms_handle_reset con 0x55dbddd45800 session 0x55dbe0cd9c20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 394 ms_handle_reset con 0x55dbdff22c00 session 0x55dbdde494a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160694272 unmapped: 38428672 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 394 ms_handle_reset con 0x55dbe00a7400 session 0x55dbe0d325a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 394 ms_handle_reset con 0x55dbe0542800 session 0x55dbe06c90e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 394 heartbeat osd_stat(store_statfs(0x4f64a5000/0x0/0x4ffc00000, data 0x242a35c/0x2647000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160702464 unmapped: 38420480 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 394 ms_handle_reset con 0x55dbe06cb800 session 0x55dbddff3a40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 394 ms_handle_reset con 0x55dbe0546400 session 0x55dbddff34a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 394 handle_osd_map epochs [395,395], i have 394, src has [1,395]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 395 ms_handle_reset con 0x55dbddd45800 session 0x55dbdde452c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160702464 unmapped: 38420480 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 395 ms_handle_reset con 0x55dbdff22c00 session 0x55dbe04854a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160702464 unmapped: 38420480 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 395 ms_handle_reset con 0x55dbe00a7400 session 0x55dbde87d860
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2614139 data_alloc: 234881024 data_used: 13213696
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 395 handle_osd_map epochs [395,396], i have 395, src has [1,396]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 396 ms_handle_reset con 0x55dbe0542800 session 0x55dbde879e00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160710656 unmapped: 38412288 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 396 handle_osd_map epochs [396,397], i have 396, src has [1,397]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 397 ms_handle_reset con 0x55dbe0542800 session 0x55dbdde483c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160727040 unmapped: 38395904 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 397 handle_osd_map epochs [398,398], i have 397, src has [1,398]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 398 ms_handle_reset con 0x55dbddd45800 session 0x55dbde8005a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 398 heartbeat osd_stat(store_statfs(0x4f649e000/0x0/0x4ffc00000, data 0x242f565/0x2650000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160743424 unmapped: 38379520 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 398 ms_handle_reset con 0x55dbdff22c00 session 0x55dbe0cd9e00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 398 ms_handle_reset con 0x55dbe00a7400 session 0x55dbddd1b0e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160751616 unmapped: 38371328 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 398 heartbeat osd_stat(store_statfs(0x4f649b000/0x0/0x4ffc00000, data 0x24310e4/0x2652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 398 ms_handle_reset con 0x55dbe0546400 session 0x55dbe010ba40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160759808 unmapped: 38363136 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2625432 data_alloc: 234881024 data_used: 13234176
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160759808 unmapped: 38363136 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 398 heartbeat osd_stat(store_statfs(0x4f649b000/0x0/0x4ffc00000, data 0x24310e4/0x2652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 398 handle_osd_map epochs [399,399], i have 398, src has [1,399]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.449024200s of 10.176411629s, submitted: 95
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 399 ms_handle_reset con 0x55dbddd45800 session 0x55dbddddb680
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160784384 unmapped: 38338560 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 399 ms_handle_reset con 0x55dbdff22c00 session 0x55dbe0d18000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160800768 unmapped: 38322176 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 399 handle_osd_map epochs [400,400], i have 399, src has [1,400]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 400 ms_handle_reset con 0x55dbe00a7400 session 0x55dbe0109680
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160817152 unmapped: 38305792 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160825344 unmapped: 38297600 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2632203 data_alloc: 234881024 data_used: 13254656
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 400 ms_handle_reset con 0x55dbe0542800 session 0x55dbe072c1e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160833536 unmapped: 38289408 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 400 heartbeat osd_stat(store_statfs(0x4f6495000/0x0/0x4ffc00000, data 0x243488a/0x2658000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,2] op hist [0,0,0,0,0,1])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 400 ms_handle_reset con 0x55dbe27f9800 session 0x55dbe0109860
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160833536 unmapped: 38289408 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 400 ms_handle_reset con 0x55dbdc4e2000 session 0x55dbe0751680
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160841728 unmapped: 38281216 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160841728 unmapped: 38281216 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 400 ms_handle_reset con 0x55dbdff22c00 session 0x55dbdeb245a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 400 ms_handle_reset con 0x55dbe00a7400 session 0x55dbddedbc20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 400 heartbeat osd_stat(store_statfs(0x4f6493000/0x0/0x4ffc00000, data 0x243490b/0x265b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160841728 unmapped: 38281216 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 400 handle_osd_map epochs [400,401], i have 400, src has [1,401]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2645564 data_alloc: 234881024 data_used: 13275136
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 401 ms_handle_reset con 0x55dbdc4e1800 session 0x55dbe07503c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 401 ms_handle_reset con 0x55dbe00b6000 session 0x55dbde801c20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 401 ms_handle_reset con 0x55dbdc4e1800 session 0x55dbdde481e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160866304 unmapped: 38256640 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 401 handle_osd_map epochs [401,402], i have 401, src has [1,402]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 402 ms_handle_reset con 0x55dbe0542800 session 0x55dbde8063c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 402 ms_handle_reset con 0x55dbddd45800 session 0x55dbdec53c20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 402 handle_osd_map epochs [402,403], i have 402, src has [1,403]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.626771927s of 10.001182556s, submitted: 95
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 403 ms_handle_reset con 0x55dbdc4e2000 session 0x55dbde876b40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160890880 unmapped: 38232064 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 403 ms_handle_reset con 0x55dbdff22c00 session 0x55dbe0daf0e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160890880 unmapped: 38232064 heap: 199122944 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 403 ms_handle_reset con 0x55dbddd45800 session 0x55dbe0d594a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 403 handle_osd_map epochs [404,404], i have 403, src has [1,404]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 404 ms_handle_reset con 0x55dbdc4e1800 session 0x55dbdfe60b40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 160980992 unmapped: 84353024 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 404 ms_handle_reset con 0x55dbe0542800 session 0x55dbe072cb40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 404 ms_handle_reset con 0x55dbe00a7400 session 0x55dbe0daeb40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 161062912 unmapped: 84271104 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3518030 data_alloc: 234881024 data_used: 13279232
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 404 heartbeat osd_stat(store_statfs(0x4f04c6000/0x0/0x4ffc00000, data 0x943b815/0x9667000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [0,0,1])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 167411712 unmapped: 77922304 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 404 ms_handle_reset con 0x55dbe3360000 session 0x55dbde877c20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 404 heartbeat osd_stat(store_statfs(0x4ee8c7000/0x0/0x4ffc00000, data 0xb03b815/0xb267000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [0,0,0,0,0,2])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 404 heartbeat osd_stat(store_statfs(0x4ee8c7000/0x0/0x4ffc00000, data 0xb03b815/0xb267000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [0,0,0,1])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 172761088 unmapped: 72572928 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 172990464 unmapped: 72343552 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 169033728 unmapped: 76300288 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 404 heartbeat osd_stat(store_statfs(0x4e70c7000/0x0/0x4ffc00000, data 0x1283b815/0x12a67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [0,0,0,0,0,1,1])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 174628864 unmapped: 70705152 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 404 heartbeat osd_stat(store_statfs(0x4e3cc7000/0x0/0x4ffc00000, data 0x15c3b815/0x15e67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [0,0,0,0,0,1])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4880397 data_alloc: 234881024 data_used: 13279232
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 170893312 unmapped: 74440704 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 3.700782299s of 10.210826874s, submitted: 407
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 404 handle_osd_map epochs [405,405], i have 404, src has [1,405]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 405 ms_handle_reset con 0x55dbdc4e2000 session 0x55dbe0cd9860
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 166871040 unmapped: 78462976 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 405 handle_osd_map epochs [405,406], i have 405, src has [1,406]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 166887424 unmapped: 78446592 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 406 ms_handle_reset con 0x55dbdc4e1800 session 0x55dbdfe5e5a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 166887424 unmapped: 78446592 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 166887424 unmapped: 78446592 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5243353 data_alloc: 234881024 data_used: 13299712
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 406 heartbeat osd_stat(store_statfs(0x4e04c0000/0x0/0x4ffc00000, data 0x1943ee49/0x1966d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 166887424 unmapped: 78446592 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 406 ms_handle_reset con 0x55dbe00a7400 session 0x55dbe0d58f00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 406 handle_osd_map epochs [407,407], i have 406, src has [1,407]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 166912000 unmapped: 78422016 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 407 handle_osd_map epochs [408,408], i have 407, src has [1,408]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 408 ms_handle_reset con 0x55dbe0542800 session 0x55dbe010a780
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 408 heartbeat osd_stat(store_statfs(0x4e04bc000/0x0/0x4ffc00000, data 0x1944090e/0x19671000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 166969344 unmapped: 78364672 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 408 handle_osd_map epochs [409,409], i have 408, src has [1,409]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 409 ms_handle_reset con 0x55dbe3360000 session 0x55dbdec530e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 409 ms_handle_reset con 0x55dbddd45800 session 0x55dbde879c20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 166977536 unmapped: 78356480 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 166977536 unmapped: 78356480 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5256139 data_alloc: 234881024 data_used: 13316096
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 167002112 unmapped: 78331904 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 409 heartbeat osd_stat(store_statfs(0x4e04b5000/0x0/0x4ffc00000, data 0x1944406a/0x19678000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-mon[74326]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Oct 11 00:10:09 np0005480824 ceph-mon[74326]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2824933420' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 409 handle_osd_map epochs [409,410], i have 409, src has [1,410]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 410 ms_handle_reset con 0x55dbdc4e2000 session 0x55dbdeb25a40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.800662994s of 10.092562675s, submitted: 99
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 167034880 unmapped: 78299136 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 410 handle_osd_map epochs [410,411], i have 410, src has [1,411]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 411 ms_handle_reset con 0x55dbe00a7400 session 0x55dbe0c87c20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 411 ms_handle_reset con 0x55dbdc4e1800 session 0x55dbde877c20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 167034880 unmapped: 78299136 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 167034880 unmapped: 78299136 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 175644672 unmapped: 69689344 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5562578 data_alloc: 234881024 data_used: 13324288
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 411 heartbeat osd_stat(store_statfs(0x4dc8b0000/0x0/0x4ffc00000, data 0x1d047764/0x1d27e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 171687936 unmapped: 73646080 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 172924928 unmapped: 72409088 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 411 heartbeat osd_stat(store_statfs(0x4db4b0000/0x0/0x4ffc00000, data 0x1e447764/0x1e67e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 169058304 unmapped: 76275712 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 411 heartbeat osd_stat(store_statfs(0x4d8cb0000/0x0/0x4ffc00000, data 0x20c47764/0x20e7e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 169279488 unmapped: 76054528 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 411 heartbeat osd_stat(store_statfs(0x4d74b0000/0x0/0x4ffc00000, data 0x22447764/0x2267e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [0,0,0,0,0,0,0,1,1])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 169697280 unmapped: 75636736 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6684738 data_alloc: 234881024 data_used: 13324288
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 411 heartbeat osd_stat(store_statfs(0x4d4cb0000/0x0/0x4ffc00000, data 0x24c47764/0x24e7e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 170000384 unmapped: 75333632 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 5.758631706s of 10.366158485s, submitted: 67
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 170098688 unmapped: 75235328 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 178642944 unmapped: 66691072 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 174546944 unmapped: 70787072 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 411 heartbeat osd_stat(store_statfs(0x4d04b0000/0x0/0x4ffc00000, data 0x29447764/0x2967e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 170663936 unmapped: 74670080 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7413986 data_alloc: 234881024 data_used: 13324288
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 411 ms_handle_reset con 0x55dbe0542800 session 0x55dbdec53c20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 170795008 unmapped: 74539008 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 411 handle_osd_map epochs [412,412], i have 411, src has [1,412]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 412 ms_handle_reset con 0x55dbdc4e1800 session 0x55dbde8005a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 412 heartbeat osd_stat(store_statfs(0x4cd4b0000/0x0/0x4ffc00000, data 0x2c447764/0x2c67e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 170819584 unmapped: 74514432 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 412 handle_osd_map epochs [413,413], i have 412, src has [1,413]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 413 ms_handle_reset con 0x55dbdc4e2000 session 0x55dbde879e00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 170868736 unmapped: 74465280 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 413 heartbeat osd_stat(store_statfs(0x4cd4a8000/0x0/0x4ffc00000, data 0x2c44af06/0x2c684000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 170868736 unmapped: 74465280 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 413 heartbeat osd_stat(store_statfs(0x4cd4a9000/0x0/0x4ffc00000, data 0x2c44aea4/0x2c683000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 170868736 unmapped: 74465280 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7400363 data_alloc: 234881024 data_used: 13328384
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 413 ms_handle_reset con 0x55dbddd45800 session 0x55dbe0750780
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 413 ms_handle_reset con 0x55dbe00a7400 session 0x55dbddc292c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 170704896 unmapped: 74629120 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 413 handle_osd_map epochs [414,414], i have 413, src has [1,414]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.698211193s of 10.010064125s, submitted: 77
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 414 ms_handle_reset con 0x55dbe00b1000 session 0x55dbdeb62000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 170737664 unmapped: 74596352 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 170737664 unmapped: 74596352 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 414 heartbeat osd_stat(store_statfs(0x4cd4a6000/0x0/0x4ffc00000, data 0x2c44c969/0x2c687000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 414 ms_handle_reset con 0x55dbdc4e1800 session 0x55dbde65fc20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 170737664 unmapped: 74596352 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 414 heartbeat osd_stat(store_statfs(0x4cd4a6000/0x0/0x4ffc00000, data 0x2c44c969/0x2c687000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [0,1])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 414 ms_handle_reset con 0x55dbdc4e2000 session 0x55dbddff2f00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 414 ms_handle_reset con 0x55dbddd45800 session 0x55dbdfe603c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 414 ms_handle_reset con 0x55dbe00a7400 session 0x55dbe072c1e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 170967040 unmapped: 74366976 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7451914 data_alloc: 234881024 data_used: 13336576
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 414 heartbeat osd_stat(store_statfs(0x4cce83000/0x0/0x4ffc00000, data 0x2ca70969/0x2ccab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 170967040 unmapped: 74366976 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 414 ms_handle_reset con 0x55dbdfc34000 session 0x55dbddc13e00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 414 ms_handle_reset con 0x55dbdc4e1800 session 0x55dbe0cd8d20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 171155456 unmapped: 74178560 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 414 ms_handle_reset con 0x55dbddd45800 session 0x55dbe0d32780
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 414 handle_osd_map epochs [415,415], i have 414, src has [1,415]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 415 ms_handle_reset con 0x55dbe00a7400 session 0x55dbdfe60960
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 171311104 unmapped: 74022912 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 415 ms_handle_reset con 0x55dbe0544400 session 0x55dbe0485e00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 415 ms_handle_reset con 0x55dbe0528800 session 0x55dbe0c87e00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 415 ms_handle_reset con 0x55dbdc4e1800 session 0x55dbdde48b40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 415 ms_handle_reset con 0x55dbddd45800 session 0x55dbdfe610e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 415 ms_handle_reset con 0x55dbe00a7400 session 0x55dbe010be00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 415 ms_handle_reset con 0x55dbdff25800 session 0x55dbdde452c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 415 handle_osd_map epochs [416,416], i have 415, src has [1,416]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 416 ms_handle_reset con 0x55dbdc4e2000 session 0x55dbe010a780
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 171458560 unmapped: 73875456 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 416 heartbeat osd_stat(store_statfs(0x4cbb09000/0x0/0x4ffc00000, data 0x2dde8548/0x2e025000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 171466752 unmapped: 73867264 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7612082 data_alloc: 234881024 data_used: 13352960
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 171466752 unmapped: 73867264 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 416 ms_handle_reset con 0x55dbdc4e1800 session 0x55dbdfe5e780
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 416 heartbeat osd_stat(store_statfs(0x4cbb05000/0x0/0x4ffc00000, data 0x2ddea0c5/0x2e028000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 416 ms_handle_reset con 0x55dbdc4e2000 session 0x55dbe0dafa40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 171270144 unmapped: 74063872 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 171270144 unmapped: 74063872 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.919175148s of 11.330858231s, submitted: 76
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 416 ms_handle_reset con 0x55dbdff25800 session 0x55dbde87dc20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 416 ms_handle_reset con 0x55dbe00a7400 session 0x55dbdddda000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 172130304 unmapped: 73203712 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 416 ms_handle_reset con 0x55dbddd45800 session 0x55dbe0cd9c20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 172138496 unmapped: 73195520 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7613589 data_alloc: 234881024 data_used: 13352960
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 173146112 unmapped: 72187904 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 416 heartbeat osd_stat(store_statfs(0x4cbb04000/0x0/0x4ffc00000, data 0x2ddea137/0x2e02a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 174604288 unmapped: 70729728 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 416 heartbeat osd_stat(store_statfs(0x4cbb04000/0x0/0x4ffc00000, data 0x2ddea137/0x2e02a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 174604288 unmapped: 70729728 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 416 heartbeat osd_stat(store_statfs(0x4cbb04000/0x0/0x4ffc00000, data 0x2ddea137/0x2e02a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [0,0,0,1,3,2])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 182190080 unmapped: 63143936 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 416 ms_handle_reset con 0x55dbdc4e2000 session 0x55dbddc9a960
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 416 ms_handle_reset con 0x55dbdff25800 session 0x55dbe0d8ed20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 174907392 unmapped: 70426624 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 416 heartbeat osd_stat(store_statfs(0x4caea3000/0x0/0x4ffc00000, data 0x2ea4a199/0x2ec8b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7767748 data_alloc: 234881024 data_used: 21618688
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 174907392 unmapped: 70426624 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 174915584 unmapped: 70418432 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 174923776 unmapped: 70410240 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 174923776 unmapped: 70410240 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 416 heartbeat osd_stat(store_statfs(0x4caea3000/0x0/0x4ffc00000, data 0x2ea4a199/0x2ec8b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 174956544 unmapped: 70377472 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7767908 data_alloc: 234881024 data_used: 21622784
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 416 ms_handle_reset con 0x55dbe3522800 session 0x55dbe0d19e00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.622943878s of 12.813071251s, submitted: 40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 180027392 unmapped: 65306624 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 416 ms_handle_reset con 0x55dbe053ec00 session 0x55dbe0cd9680
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 188284928 unmapped: 57049088 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 416 ms_handle_reset con 0x55dbe06ca000 session 0x55dbe072d4a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 416 ms_handle_reset con 0x55dbe06ca000 session 0x55dbe06c85a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 187703296 unmapped: 57630720 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 187703296 unmapped: 57630720 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 187703296 unmapped: 57630720 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 416 heartbeat osd_stat(store_statfs(0x4c72ca000/0x0/0x4ffc00000, data 0x32623198/0x32864000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 8279561 data_alloc: 251658240 data_used: 33488896
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 187703296 unmapped: 57630720 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 416 heartbeat osd_stat(store_statfs(0x4c72ca000/0x0/0x4ffc00000, data 0x32623198/0x32864000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 187703296 unmapped: 57630720 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 187711488 unmapped: 57622528 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 188030976 unmapped: 57303040 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 188030976 unmapped: 57303040 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 8278681 data_alloc: 251658240 data_used: 33488896
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 416 heartbeat osd_stat(store_statfs(0x4c72a5000/0x0/0x4ffc00000, data 0x32648198/0x32889000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 188030976 unmapped: 57303040 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.657458305s of 10.709417343s, submitted: 143
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 416 ms_handle_reset con 0x55dbdc4e2000 session 0x55dbdddc4000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 191291392 unmapped: 54042624 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 416 ms_handle_reset con 0x55dbdff25800 session 0x55dbdd283e00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 191389696 unmapped: 53944320 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 416 ms_handle_reset con 0x55dbe053ec00 session 0x55dbde800780
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 416 ms_handle_reset con 0x55dbe3522800 session 0x55dbdec532c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 191602688 unmapped: 53731328 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 191602688 unmapped: 53731328 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 8426427 data_alloc: 251658240 data_used: 33857536
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 191602688 unmapped: 53731328 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 416 heartbeat osd_stat(store_statfs(0x4c66fd000/0x0/0x4ffc00000, data 0x338121a8/0x33431000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 191610880 unmapped: 53723136 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 416 heartbeat osd_stat(store_statfs(0x4c66fd000/0x0/0x4ffc00000, data 0x338121a8/0x33431000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 191610880 unmapped: 53723136 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 416 heartbeat osd_stat(store_statfs(0x4c66fd000/0x0/0x4ffc00000, data 0x338121a8/0x33431000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 204546048 unmapped: 40787968 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 205398016 unmapped: 39936000 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 8531939 data_alloc: 268435456 data_used: 49041408
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 416 heartbeat osd_stat(store_statfs(0x4c66fa000/0x0/0x4ffc00000, data 0x338151a8/0x33434000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 205414400 unmapped: 39919616 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.902812958s of 10.286791801s, submitted: 102
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 205455360 unmapped: 39878656 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 205455360 unmapped: 39878656 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 416 heartbeat osd_stat(store_statfs(0x4c66f7000/0x0/0x4ffc00000, data 0x338181a8/0x33437000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 205488128 unmapped: 39845888 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 205512704 unmapped: 39821312 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 416 ms_handle_reset con 0x55dbdff25800 session 0x55dbe01081e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 8533413 data_alloc: 268435456 data_used: 49041408
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 205512704 unmapped: 39821312 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 205512704 unmapped: 39821312 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 205520896 unmapped: 39813120 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 416 heartbeat osd_stat(store_statfs(0x4c66f6000/0x0/0x4ffc00000, data 0x3381820a/0x33438000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 205553664 unmapped: 39780352 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 205553664 unmapped: 39780352 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 8533673 data_alloc: 268435456 data_used: 49041408
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 205717504 unmapped: 39616512 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.917717934s of 10.040273666s, submitted: 35
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 214253568 unmapped: 31080448 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 416 handle_osd_map epochs [416,417], i have 416, src has [1,417]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 416 handle_osd_map epochs [417,417], i have 417, src has [1,417]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 417 ms_handle_reset con 0x55dbe06ca000 session 0x55dbde7b2b40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 219676672 unmapped: 25657344 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 417 heartbeat osd_stat(store_statfs(0x4c4f89000/0x0/0x4ffc00000, data 0x33938d87/0x334d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 417 ms_handle_reset con 0x55dbe00a7400 session 0x55dbe0d590e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 417 ms_handle_reset con 0x55dbe0548000 session 0x55dbddc9af00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 220012544 unmapped: 25321472 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 417 handle_osd_map epochs [418,418], i have 417, src has [1,418]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 418 ms_handle_reset con 0x55dbe00ba400 session 0x55dbde7b30e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 418 ms_handle_reset con 0x55dbe00c0400 session 0x55dbe0d583c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 206266368 unmapped: 39067648 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 8459042 data_alloc: 251658240 data_used: 38944768
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 418 heartbeat osd_stat(store_statfs(0x4c5373000/0x0/0x4ffc00000, data 0x3374a8a2/0x332e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 207298560 unmapped: 38035456 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 418 handle_osd_map epochs [419,419], i have 418, src has [1,419]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 419 heartbeat osd_stat(store_statfs(0x4c52f6000/0x0/0x4ffc00000, data 0x33b798a2/0x33694000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 419 ms_handle_reset con 0x55dbdff25800 session 0x55dbe0dae960
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 207429632 unmapped: 37904384 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 419 heartbeat osd_stat(store_statfs(0x4c52f2000/0x0/0x4ffc00000, data 0x33b7b41f/0x33697000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 207429632 unmapped: 37904384 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 419 ms_handle_reset con 0x55dbe053ec00 session 0x55dbdfe603c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 207429632 unmapped: 37904384 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 419 ms_handle_reset con 0x55dbe3522800 session 0x55dbdec53680
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 419 ms_handle_reset con 0x55dbdc4e2000 session 0x55dbdde48780
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 419 handle_osd_map epochs [419,420], i have 419, src has [1,420]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 420 ms_handle_reset con 0x55dbe053ec00 session 0x55dbdeb62000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 420 ms_handle_reset con 0x55dbe00c0400 session 0x55dbde877a40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 208920576 unmapped: 36413440 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 420 ms_handle_reset con 0x55dbdff25800 session 0x55dbde879e00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 420 ms_handle_reset con 0x55dbe3522800 session 0x55dbddc9a5a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 8597771 data_alloc: 251658240 data_used: 38871040
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 420 handle_osd_map epochs [420,421], i have 420, src has [1,421]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 25K writes, 94K keys, 25K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.03 MB/s#012Cumulative WAL: 25K writes, 8865 syncs, 2.85 writes per sync, written: 0.06 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 12K writes, 40K keys, 12K commit groups, 1.0 writes per commit group, ingest: 32.76 MB, 0.05 MB/s#012Interval WAL: 12K writes, 5021 syncs, 2.44 writes per sync, written: 0.03 GB, 0.05 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 421 ms_handle_reset con 0x55dbe0548000 session 0x55dbdfe61680
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 421 ms_handle_reset con 0x55dbe00a7400 session 0x55dbde877c20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 209190912 unmapped: 36143104 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 209190912 unmapped: 36143104 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 209190912 unmapped: 36143104 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 421 heartbeat osd_stat(store_statfs(0x4c4d69000/0x0/0x4ffc00000, data 0x3448cb09/0x33814000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 421 handle_osd_map epochs [422,422], i have 421, src has [1,422]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.454102516s of 11.477991104s, submitted: 276
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 209231872 unmapped: 36102144 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 209231872 unmapped: 36102144 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 8606583 data_alloc: 251658240 data_used: 39452672
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 422 ms_handle_reset con 0x55dbdff25800 session 0x55dbe0d8e960
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 209231872 unmapped: 36102144 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 422 handle_osd_map epochs [423,423], i have 422, src has [1,423]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 423 ms_handle_reset con 0x55dbe00c0400 session 0x55dbe0daef00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 209264640 unmapped: 36069376 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 423 ms_handle_reset con 0x55dbe053ec00 session 0x55dbdfe63a40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 423 ms_handle_reset con 0x55dbe06ca000 session 0x55dbe0d183c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 423 ms_handle_reset con 0x55dbe3522800 session 0x55dbe072c1e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 209272832 unmapped: 36061184 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 423 heartbeat osd_stat(store_statfs(0x4c5a51000/0x0/0x4ffc00000, data 0x3300b2bb/0x32b2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 209281024 unmapped: 36052992 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 423 handle_osd_map epochs [424,424], i have 423, src has [1,424]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 424 ms_handle_reset con 0x55dbdff25800 session 0x55dbe0daf860
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 424 ms_handle_reset con 0x55dbe00c0400 session 0x55dbe0d8fa40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 424 ms_handle_reset con 0x55dbe00a7400 session 0x55dbe0d194a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 209338368 unmapped: 35995648 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 424 heartbeat osd_stat(store_statfs(0x4c5a51000/0x0/0x4ffc00000, data 0x3300b2bb/0x32b2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [1])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 8466193 data_alloc: 251658240 data_used: 39456768
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 424 handle_osd_map epochs [424,425], i have 424, src has [1,425]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 213549056 unmapped: 31784960 heap: 245334016 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 209633280 unmapped: 77701120 heap: 287334400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 425 heartbeat osd_stat(store_statfs(0x4c264a000/0x0/0x4ffc00000, data 0x3640e8b7/0x35f32000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 213975040 unmapped: 73359360 heap: 287334400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.446596146s of 10.016536713s, submitted: 93
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 218390528 unmapped: 68943872 heap: 287334400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 425 ms_handle_reset con 0x55dbe053ec00 session 0x55dbe010b860
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 218644480 unmapped: 68689920 heap: 287334400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 9340447 data_alloc: 251658240 data_used: 39604224
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 425 heartbeat osd_stat(store_statfs(0x4bde46000/0x0/0x4ffc00000, data 0x3ac148b7/0x3a738000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 224223232 unmapped: 63111168 heap: 287334400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 425 ms_handle_reset con 0x55dbe00a7400 session 0x55dbde87cf00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 425 ms_handle_reset con 0x55dbdff25800 session 0x55dbe06c9c20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 212074496 unmapped: 75259904 heap: 287334400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 425 ms_handle_reset con 0x55dbe00c0400 session 0x55dbdfe60000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 220676096 unmapped: 66658304 heap: 287334400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 425 heartbeat osd_stat(store_statfs(0x4b6e44000/0x0/0x4ffc00000, data 0x41c14929/0x4173a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 225337344 unmapped: 61997056 heap: 287334400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 425 ms_handle_reset con 0x55dbe052ac00 session 0x55dbddff2f00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 217006080 unmapped: 70328320 heap: 287334400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 10550925 data_alloc: 251658240 data_used: 39616512
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 425 ms_handle_reset con 0x55dbddd47c00 session 0x55dbde8794a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 221487104 unmapped: 65847296 heap: 287334400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 425 ms_handle_reset con 0x55dbddd47c00 session 0x55dbe010af00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 213450752 unmapped: 73883648 heap: 287334400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 425 heartbeat osd_stat(store_statfs(0x4af241000/0x0/0x4ffc00000, data 0x4981595c/0x4933d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 425 ms_handle_reset con 0x55dbe052a000 session 0x55dbddeda3c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 425 ms_handle_reset con 0x55dbdffa2400 session 0x55dbe0d59680
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 213688320 unmapped: 73646080 heap: 287334400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 425 ms_handle_reset con 0x55dbe00c0400 session 0x55dbe0d18000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 3.268426418s of 10.210691452s, submitted: 97
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 425 ms_handle_reset con 0x55dbe3522800 session 0x55dbe0d32780
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 425 ms_handle_reset con 0x55dbe052ac00 session 0x55dbde870d20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 213696512 unmapped: 73637888 heap: 287334400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 425 heartbeat osd_stat(store_statfs(0x4ada3a000/0x0/0x4ffc00000, data 0x4b01b95c/0x4ab43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [0,0,3,3])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 425 ms_handle_reset con 0x55dbdffa2400 session 0x55dbe0484d20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 425 ms_handle_reset con 0x55dbddd47c00 session 0x55dbde876b40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 425 ms_handle_reset con 0x55dbe00c0400 session 0x55dbe0c863c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 210894848 unmapped: 76439552 heap: 287334400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 8583168 data_alloc: 251658240 data_used: 39682048
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 425 ms_handle_reset con 0x55dbe052a000 session 0x55dbddfa81e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 425 heartbeat osd_stat(store_statfs(0x4c5a3c000/0x0/0x4ffc00000, data 0x3301b94c/0x32b42000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 210944000 unmapped: 76390400 heap: 287334400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 425 ms_handle_reset con 0x55dbddd47c00 session 0x55dbe0dae780
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 425 handle_osd_map epochs [426,426], i have 425, src has [1,426]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 426 ms_handle_reset con 0x55dbe052a000 session 0x55dbe0d8eb40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 426 ms_handle_reset con 0x55dbdffa2400 session 0x55dbde8710e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 426 ms_handle_reset con 0x55dbe00c0400 session 0x55dbdde20f00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 210944000 unmapped: 76390400 heap: 287334400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 426 ms_handle_reset con 0x55dbe052ac00 session 0x55dbe0d33c20
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 426 ms_handle_reset con 0x55dbe052ac00 session 0x55dbddddb680
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 426 ms_handle_reset con 0x55dbddd47c00 session 0x55dbde8005a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 210944000 unmapped: 76390400 heap: 287334400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 426 heartbeat osd_stat(store_statfs(0x4c5a39000/0x0/0x4ffc00000, data 0x3301d4ab/0x32b43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 426 ms_handle_reset con 0x55dbe00c0400 session 0x55dbdeb24000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 426 handle_osd_map epochs [427,427], i have 426, src has [1,427]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 427 ms_handle_reset con 0x55dbe052a000 session 0x55dbdfe6c960
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 210944000 unmapped: 76390400 heap: 287334400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 210944000 unmapped: 76390400 heap: 287334400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 427 handle_osd_map epochs [428,428], i have 427, src has [1,428]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 428 ms_handle_reset con 0x55dbe0529800 session 0x55dbe0d32f00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 428 ms_handle_reset con 0x55dbdffa2400 session 0x55dbde6c21e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 8566726 data_alloc: 251658240 data_used: 39710720
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 428 ms_handle_reset con 0x55dbe00c0400 session 0x55dbe0c865a0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 428 handle_osd_map epochs [429,429], i have 428, src has [1,429]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 429 ms_handle_reset con 0x55dbddd47c00 session 0x55dbe01090e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 211992576 unmapped: 75341824 heap: 287334400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 429 heartbeat osd_stat(store_statfs(0x4c69ab000/0x0/0x4ffc00000, data 0x318587f2/0x319a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 429 ms_handle_reset con 0x55dbe052a000 session 0x55dbdfe60b40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 429 heartbeat osd_stat(store_statfs(0x4c69ab000/0x0/0x4ffc00000, data 0x318587f2/0x319a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 429 handle_osd_map epochs [430,430], i have 429, src has [1,430]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 211943424 unmapped: 75390976 heap: 287334400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 430 ms_handle_reset con 0x55dbe052ac00 session 0x55dbe0d8e3c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 430 ms_handle_reset con 0x55dbddd47c00 session 0x55dbde801e00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 430 handle_osd_map epochs [431,431], i have 430, src has [1,431]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 212008960 unmapped: 75325440 heap: 287334400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 431 ms_handle_reset con 0x55dbe00c0c00 session 0x55dbe0daf680
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.649013519s of 10.003338814s, submitted: 284
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 431 ms_handle_reset con 0x55dbdffa2400 session 0x55dbde876000
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 431 ms_handle_reset con 0x55dbe00c0400 session 0x55dbdfe621e0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 213114880 unmapped: 74219520 heap: 287334400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 431 heartbeat osd_stat(store_statfs(0x4d7017000/0x0/0x4ffc00000, data 0x1e686f00/0x1e8d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 431 ms_handle_reset con 0x55dbdc4e1800 session 0x55dbdeb46f00
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 431 ms_handle_reset con 0x55dbe0544400 session 0x55dbdeb243c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 431 heartbeat osd_stat(store_statfs(0x4d7017000/0x0/0x4ffc00000, data 0x1e686f00/0x1e8d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 431 ms_handle_reset con 0x55dbdc4e1800 session 0x55dbdddc5a40
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 208797696 unmapped: 78536704 heap: 287334400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: bluestore.MempoolThread(0x55dbdc587b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6156438 data_alloc: 251658240 data_used: 30769152
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 431 heartbeat osd_stat(store_statfs(0x4d8267000/0x0/0x4ffc00000, data 0x1d436f00/0x1d689000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 431 handle_osd_map epochs [432,432], i have 431, src has [1,432]
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 432 ms_handle_reset con 0x55dbddd47c00 session 0x55dbe010a3c0
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 208863232 unmapped: 78471168 heap: 287334400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 432 heartbeat osd_stat(store_statfs(0x4d8265000/0x0/0x4ffc00000, data 0x1d438ae8/0x1d68a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 432 ms_handle_reset con 0x55dbdffa2400 session 0x55dbe07de960
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: prioritycache tune_memory target: 4294967296 mapped: 207847424 unmapped: 79486976 heap: 287334400 old mem: 2845415832 new mem: 2845415832
Oct 11 00:10:09 np0005480824 ceph-osd[89401]: osd.1 432 heartbeat osd_stat(store_statfs(0x4ee2f5000/0x0/0x4ffc00000, data 0x6438a77/0x6688000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
